WO2019044264A1 - Gaze analysis system, gaze analysis method, and training method - Google Patents

Gaze analysis system, gaze analysis method, and training method Download PDF

Info

Publication number
WO2019044264A1
WO2019044264A1 (PCT/JP2018/027397)
Authority
WO
WIPO (PCT)
Prior art keywords
image
analysis
sight
analysis result
line
Prior art date
Application number
PCT/JP2018/027397
Other languages
French (fr)
Japanese (ja)
Inventor
Masayuki Shinoda (篠田 正行)
Original Assignee
AGC Inc. (Agc株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AGC Inc. (Agc株式会社)
Priority to JP2019539061A priority Critical patent/JPWO2019044264A1/en
Publication of WO2019044264A1 publication Critical patent/WO2019044264A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining or recording eye movement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00: Teaching not covered by other main groups of this subclass

Definitions

  • The present invention relates to a gaze analysis system, a gaze analysis method, and a training method.
  • In a visual inspection using a conventional microscope, an eye camera cannot be worn during the inspection, so neither the inspector nor other inspectors can observe the inspector's line of sight. For this reason, it is difficult, for example, to identify the cause of variation in inspection accuracy among inspectors. Moreover, it is not possible to show an inspector what standard eye movement looks like, so inspection training cannot be performed. Furthermore, a plurality of inspectors cannot discuss an inspection while watching a given inspector's viewpoint, or teach how to move the product while observing the actual movement of the viewpoint.
  • With a digital microscope, an optical microscope that allows an object to be observed directly on a display via a digital camera, the inspector does not need to look into a lens.
  • However, the performance of a digital microscope (three-dimensionality, resolution, focus-adjustment function, how many images of a moving inspection object can be captured per second, and so on) differs from the eye function of an inspector.
  • It is therefore difficult for a digital microscope to reproduce the state in which an inspector looks into and observes an object, and building an apparatus capable of reproducing this state would have required a large investment as of the filing of the present application.
  • The disclosed technique is a line-of-sight analysis system including: a storage unit storing an analysis image generated using image data captured through an eyepiece; a display unit displaying the analysis image; a line-of-sight detection unit detecting the line of sight of a subject viewing the analysis image displayed on the display unit; and an output unit outputting image data of an analysis result image in which the analysis image and the line of sight of the subject are superimposed.
  • The disclosed technique is also a gaze analysis system including: a storage unit storing an analysis image generated using image data captured through an eyepiece lens; a display unit displaying the analysis image; a gaze detection unit, attached to a wearer who views the analysis image displayed on the display unit, that detects the wearer's gaze; and an output unit outputting image data of an analysis result image in which the analysis image and the wearer's gaze are superimposed.
  • The disclosed technique is also a training method for visual observation of an object through an eyepiece lens: a subject whose line of sight is detected visually inspects an analysis image, generated using image data captured through the eyepiece lens, displayed on a display device, and after the visual inspection the subject is shown an analysis result image in which the analysis image and the subject's line of sight detected on the analysis image are superimposed.
  • The disclosed technique is also a training method for visual observation of an object through an eyepiece lens: the wearer of a gaze detection apparatus visually inspects an analysis image, generated using image data captured through the eyepiece lens, displayed on a display device, and after the visual inspection the wearer is shown an analysis result image in which the analysis image and the wearer's gaze detected on the analysis image are superimposed.
  • The disclosed technique is also a training method for visual observation of an object through an eyepiece lens: a subject whose line of sight is detected visually inspects an analysis image, generated using image data captured through the eyepiece lens, displayed on a display device, and after the visual inspection a person other than the subject is shown an analysis result image in which the analysis image and the subject's line of sight detected on the analysis image are superimposed.
  • The disclosed technique is also a training method for visual observation of an object through an eyepiece lens: the wearer of a gaze detection apparatus visually inspects an analysis image, generated using image data captured through the eyepiece lens, displayed on a display device, and after the visual inspection a person other than the wearer is shown an analysis result image in which the analysis image and the wearer's gaze detected on the analysis image are superimposed.
  • The disclosed technique is also a line-of-sight analysis system including: a storage unit storing an analysis image generated using image data captured through a lens held by a holder that contacts the face of a subject whose line of sight is to be detected; a display unit displaying the analysis image; a gaze detection unit detecting the subject's gaze on the analysis image displayed on the display unit; and an output unit outputting image data of an analysis result image in which the analysis image and the subject's line of sight are superimposed.
  • According to the disclosed technique, the detected line of sight can be observed.
  • FIG. 1A is a diagram for explaining a gaze analysis system according to a first embodiment.
  • the eye gaze analysis system 100 includes a display device 110, a control device 120, an eye gaze detection device 130, a control device 160, and a display device 170.
  • the display device 110 displays an analysis image G used to detect the movement of the gaze of the subject P to be analyzed.
  • the control device 120 holds analysis image data that is the source of the analysis image G, and causes the display device 110 to display the analysis image G.
  • The sight line detection device 130 is, for example, an eye camera; it is attached to the subject P, captures an image of the subject P's field of view, and acquires the image data of the captured image together with information indicating the position of the subject P's line of sight in that image. The sight line detection device 130 then outputs the acquired image data and the line-of-sight position information to the control device 160.
  • the sight line detection device 130 may detect the sight line of the subject P.
  • The sight line detection device 130 need not be a device worn by the subject P, such as an eye camera; it may instead be a camera 130A installed below the display device 110 and directed toward the subject P.
  • To detect the line of sight, the camera 130A preferably has high definition and a near-infrared light source.
  • The camera 130A may be installed above or below the display device 110; for detecting the line of sight, the lower side is preferable.
  • the visual axis detection device 130 outputs, to the control device 160, image data of an image in which a mark indicating the position of the visual axis of the object person P is superimposed on the captured image.
  • image data acquired by the control device 160 from the gaze detection device 130 is referred to as analysis result image data.
  • In the following, detecting the line of sight of the wearer of the line-of-sight detection device 130 in the image captured by the line-of-sight detection device 130 and superimposing an image indicating the position of that line of sight on the captured image may likewise be referred to as producing analysis result image data.
  • The control device 160 causes the display device 170 to display the analysis result image data, which allows the subject P, and third parties other than the subject P, to observe the movement of the subject P's line of sight while the subject P views the analysis image.
  • The analysis result image data output from the sight line detection device 130 to the control device 160 may be stored, for example, in a portable storage device such as a USB (Universal Serial Bus) memory, or in another predetermined storage device.
  • the analysis result image data stored in the storage device may be read by the control device 160 and displayed on the display device 170.
  • the line-of-sight analysis system 100 will be further described below.
  • In the line-of-sight analysis system 100, the field of view of the subject P looking through an eyepiece lens provided in a microscope or the like is reproduced.
  • the image for analysis G is an image created using an image captured through an eyepiece lens by attaching an imaging device to the eyepiece lens of a microscope.
  • The region other than the circular region 112 on the screen 111 of the display device 110 is masked to simulate the view through an eyepiece lens.
  • the analysis image G is visually recognized by the subject P as a circular image.
  • The analysis image G displayed in the circular area 112 can be moved (scrolled) by the pointer 140, which communicates with the control device 120.
  • the pointer 140 may be a touch pad or the like attached to the control device 120, or may be a mouse or the like connected to the control device 120.
  • FIG. 2 is a diagram for explaining the relationship between the subject and the position of the display device in the gaze analysis system according to the first embodiment.
  • The diameter r of the circular area 112 was set to about 20 cm so that the analysis image G displayed in the circular area 112 of the screen 111 of the display device 110 falls within the viewing angle of the subject P in a state close to looking through a microscope.
  • the display device 110 of this embodiment is a monitor of 22 inches or more.
  • the distance L between the display device 110 and the eye of the subject P is about 40 cm.
  • the distance L may be adjusted in accordance with the magnification of the inspection object when the subject P performs a visual inspection with a microscope.
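The geometry above (a circular area about 20 cm in diameter viewed from about 40 cm) fixes the visual angle the analysis image subtends. A minimal sketch of that calculation, using only the diameter and distance stated in the embodiment (the function name is our own, not the patent's):

```python
import math

def visual_angle_deg(diameter_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a circular target of the given diameter
    when viewed from the given distance (both in centimetres)."""
    return math.degrees(2 * math.atan((diameter_cm / 2) / distance_cm))

# Circular area of diameter r = 20 cm at distance L = 40 cm:
angle = visual_angle_deg(20, 40)
print(round(angle, 1))  # about 28 degrees
```

Adjusting the distance L, as the embodiment suggests, directly changes this angle, which is one way to match the apparent magnification of the microscope being simulated.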
  • FIG. 3 is a diagram for explaining an analysis image of the first embodiment.
  • The analysis image G of the present embodiment may be, for example, an image obtained by joining a plurality of images G1 captured by attaching an imaging device such as a digital camera to one of the eyepieces of an optical microscope in which a product to be inspected is set.
  • the analysis image G of this embodiment may include images of a plurality of types of products.
  • the analysis image may include a plurality of images G1 and a plurality of images G2 obtained by imaging a product of a type different from the product that is the source of the image G1.
  • The analysis image G of the present embodiment is created, for example, as a high-resolution, vertically long image that maintains the resolution of the imaging device, using an image-processing application (such as Illustrator) used in the printing industry and elsewhere.
  • Because the analysis image G is a vertically long image, it can be scrolled within the circular area 112 using the pointer 140. By having the subject P scroll the analysis image G, the operation of moving the product by hand in a visual inspection with a microscope can be reproduced.
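The scrolling described above amounts to moving a viewport over a tall image while keeping it in bounds. A hedged sketch of that bookkeeping (the names and pixel sizes are illustrative, not from the patent):

```python
def clamp_scroll(offset_y: int, image_height: int, viewport_height: int) -> int:
    """Keep the vertical scroll offset of a tall analysis image within
    bounds so the circular viewport always shows image content."""
    max_offset = max(0, image_height - viewport_height)
    return min(max(offset_y, 0), max_offset)

# e.g. a 30,000 px tall analysis image shown in an 800 px viewport:
print(clamp_scroll(-50, 30000, 800))    # 0 (cannot scroll above the top)
print(clamp_scroll(50000, 30000, 800))  # 29200 (stops at the bottom)
```

Pointer movement would simply be added to the current offset and passed through this clamp each frame.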
  • By configuring the line-of-sight analysis system 100 as described above, it is possible to create a state imitating, for example, the subject P visually inspecting a semiconductor chip or the like using an optical microscope.
  • the line-of-sight analysis system 100 according to the present embodiment can reproduce a state in which the subject P wearing the line-of-sight detection device 130 performs visual inspection of a semiconductor chip or the like using an optical microscope.
  • Further, by providing the camera 130A or the like, the eye gaze analysis system 100 can reproduce the state in which the subject P visually inspects a semiconductor chip or the like using an optical microscope even when the eye gaze detection device 130 is not worn.
  • the image G1 is a base image.
  • The image G1 may be an image including eight rows in each of which nine base images are arranged in the horizontal direction. In that case, the size of the image G1 is 14,400 kB.
  • The analysis image G may be an image including 30 rows in each of which nine base images BG are arranged in the horizontal direction. In that case, the size of the analysis image G is 150,000 kB (about 150 MB). In the present embodiment, the resolution of the analysis image G is maintained by holding the analysis image G at this size as it is, without compressing it.
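The sizes quoted above follow from holding the image uncompressed: the footprint is simply pixels times bytes per pixel. A sketch of that estimate, with purely illustrative dimensions (the patent does not state the tiles' pixel sizes):

```python
def uncompressed_kb(width_px: int, height_px: int, bytes_per_pixel: int = 3) -> float:
    """Size in kB of an uncompressed image, e.g. 24-bit RGB at 3 bytes/pixel."""
    return width_px * height_px * bytes_per_pixel / 1000

# Illustrative only: a 4000 x 3000 px tile at 3 bytes/pixel occupies 36,000 kB (36 MB).
print(uncompressed_kb(4000, 3000))  # 36000.0
```

This is why a stitched, full-resolution analysis image quickly reaches the hundreds of megabytes: the cost scales linearly with the number of tiles.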
  • Although the analysis image G of this embodiment is a two-dimensional image captured by an imaging device attached to one of two eyepiece lenses, it is not limited to this.
  • the analysis image G of the present embodiment may be, for example, a three-dimensional image generated by using an image captured by each imaging device with the imaging device attached to both of two eyepieces.
  • the image of the present embodiment also includes a moving image. Therefore, the analysis image G of the present embodiment may be a three-dimensional moving image.
  • In that case, the field of view when the subject P looks at the analysis image G can be brought close to the field of view when the subject P looks through a microscope.
  • 3D glasses having a line-of-sight detection function may be attached to the subject person P instead of the line-of-sight detection device 130.
  • the image of the product to be inspected can be shown three-dimensionally to the subject P, and the visual field can be made closer to the actual visual field.
  • FIG. 4 is a diagram for explaining the function of each device of the gaze analysis system according to the first embodiment.
  • control device 120 includes an analysis image storage unit 121 and a display control unit 122.
  • the analysis image storage unit 121 holds the analysis image data Gd created in advance.
  • the display control unit 122 causes the display device 110 to display the analysis image G based on the analysis image data Gd.
  • the display control unit 122 also causes the display device 110 to display the analysis result image GS based on the analysis result image data GSd.
  • the analysis result image data GSd of the present embodiment is image data indicating the movement of the sight line on the analysis image G of the subject P.
  • the analysis result image data GSd of the present embodiment is moving image data indicating the locus of the line of sight of the subject P on the analysis image G.
  • the control device 160 of the present embodiment includes an analysis result acquisition unit 161, an analysis result image storage unit 162, and a display control unit 163.
  • the analysis result acquisition unit 161 acquires the analysis result image data GSd output from the sight line detection device 130 and causes the analysis result image storage unit 162 to store the analysis result image data GSd.
  • For example, when the analysis image G is displayed on the display device 110, the analysis result acquisition unit 161 of the present embodiment may receive identification information identifying the subject P to whom the sight line detection device 130 is attached, and store the identification information and the analysis result image data GSd in the analysis result image storage unit 162 in association with each other.
  • the analysis result image storage unit 162 holds analysis result image data GSd for displaying the analysis result image GS.
  • the display control unit 163 causes the display device 170 to display the analysis result image data GSd.
  • the gaze detection apparatus 130 includes an imaging unit 131, a gaze detection unit 132, an image generation unit 133, and an output unit 134.
  • the imaging unit 131 is an imaging device such as a camera, and captures an image of a landscape in a gaze direction of the subject P wearing the gaze detection device 130.
  • the gaze detection unit 132 detects the gaze of the wearer of the gaze detection apparatus 130 in the image captured by the imaging unit 131.
  • the image generation unit 133 generates an image in which an image indicating the position of the gaze detected by the gaze detection unit 132 is superimposed on the image captured by the imaging unit 131.
  • the output unit 134 outputs the image data of the image generated by the image generation unit 133 to the control device 160 as analysis result image data.
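The image generation unit 133 described above superimposes a mark at the detected gaze position on each captured frame. A minimal sketch of that superimposition using plain Python lists as a stand-in for frame buffers (the function name and marker shape are our own assumptions):

```python
def overlay_gaze_marker(frame, gaze_x, gaze_y, radius=2, value=255):
    """Return a copy of a grayscale frame (list of pixel rows) with a
    square marker drawn around the detected gaze position."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # do not modify the captured frame
    for y in range(max(0, gaze_y - radius), min(h, gaze_y + radius + 1)):
        for x in range(max(0, gaze_x - radius), min(w, gaze_x + radius + 1)):
            out[y][x] = value
    return out

frame = [[0] * 10 for _ in range(10)]   # dummy 10 x 10 captured frame
marked = overlay_gaze_marker(frame, 5, 5)
print(marked[5][5])  # 255
```

In a real device this would run per frame on camera images, with the marker position supplied by the gaze detection unit 132 and the result handed to the output unit 134.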
  • FIG. 5 is a diagram for explaining the procedure of analyzing the gaze of the subject by the gaze analysis system according to the first embodiment.
  • First, the control device 120 causes the display control unit 122 to read out the analysis image data held in the analysis image storage unit 121 and causes the display device 110 to display the analysis image G (step S501).
  • the line-of-sight analysis system 100 causes the imaging unit 131 of the line-of-sight detection device 130 mounted on the object person P to start imaging the image of the field of view of the object person P looking at the analysis image G of the display device 110 (Step S502).
  • Next, the line-of-sight detection unit 132 of the line-of-sight detection device 130 detects the line of sight of the subject P in the captured image, and the image generation unit 133 generates analysis result image data in which an image indicating the position of the line of sight is superimposed on the captured image (step S503). Details of the analysis result image will be described later.
  • the visual axis detection device 130 causes the output unit 134 to output the analysis result image data to the control device 160 (step S504).
  • the control device 160 causes the analysis result acquisition unit 161 to acquire the analysis result image data output from the sight line detection device 130, and stores the analysis result image data in the analysis result image storage unit 162 (step S505).
  • By displaying the analysis result image data acquired in this way on the display device 170, the movement of the subject P's line of sight can be visualized in a state that shows the subject P's field of view through the eyepiece lens. Therefore, according to the present embodiment, the subject P and other people can observe the movement of the line of sight in a state in which the subject P's field of view through the eyepiece is shown.
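The procedure of steps S501 to S505 can be sketched as a small pipeline. The stub functions below are hypothetical stand-ins for the devices in the system, not the patent's implementation:

```python
def run_gaze_analysis(analysis_image, capture_frame, detect_gaze, store):
    """Sketch of steps S501-S505: display, capture, detect, output, store."""
    displayed = analysis_image          # S501: display the analysis image
    results = []
    for frame in capture_frame(displayed):  # S502: capture field-of-view frames
        gaze = detect_gaze(frame)           # S503: detect gaze, pair with frame
        results.append({"frame": frame, "gaze": gaze})
    store(results)                      # S504/S505: output and store result data
    return results

# Minimal usage with dummy stubs:
log = []
out = run_gaze_analysis(
    "G",
    capture_frame=lambda img: [f"{img}-frame{i}" for i in range(3)],
    detect_gaze=lambda frame: (0, 0),
    store=log.append,
)
print(len(out))  # 3
```

Separating the stages this way mirrors the division of labour between the control device 120, the detection device 130, and the control device 160.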
  • With the line-of-sight analysis system 100 of the present embodiment, it is possible, for example, to observe the analysis result image of an inspector who can perform a visual inspection using an optical microscope with high accuracy in a short time.
  • FIG. 6 is a diagram for explaining an analysis result image of the first embodiment.
  • the subject person P wearing the visual axis detection device 130 is looking at the screen 111 displayed on the display device 110. Therefore, the visual axis detection device 130 captures an image of the screen 111 as shown in FIG.
  • the sight line detection device 130 detects the position of the sight line of the object person P on the captured image of the screen 111, and superimposes the image 113 indicating the position of the sight line. Further, the image 113 on the analysis image G moves in accordance with the movement of the line of sight of the object person P. In other words, the image 113 moves following the movement of the line of sight of the subject P.
  • In the present embodiment, the amount of movement of the subject P's line of sight from a predetermined position in the analysis image G may be obtained and stored in the analysis result image storage unit 162 as numerical data together with the analysis result image data GSd.
  • the predetermined position may be coordinates indicating the center of the image of the product to be inspected included in the analysis image G.
  • a numerical value indicating the movement amount may be output together with the analysis result image data GSd. Furthermore, in the present embodiment, a numerical value indicating the movement amount may be displayed on the analysis image G.
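The movement amount described above is the displacement of the gaze from a reference point such as the centre of a product image. A hedged sketch of that numeric value (coordinates are illustrative):

```python
import math

def gaze_movement_amount(gaze_xy, reference_xy):
    """Distance of the detected gaze position from a predetermined
    reference point (e.g. the centre of a product image), in pixels."""
    dx = gaze_xy[0] - reference_xy[0]
    dy = gaze_xy[1] - reference_xy[1]
    return math.hypot(dx, dy)

# Gaze detected at (130, 240) relative to a product centred at (100, 200):
print(gaze_movement_amount((130, 240), (100, 200)))  # 50.0
```

Logging this value per frame alongside the analysis result image data GSd would give the numerical record the embodiment mentions.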
  • the subject P scrolls the analysis image G vertically and horizontally using the pointer 140, and is displayed in the circular area 112 in the analysis image G. Move the area.
  • The image 113 follows both the movement of the subject P's line of sight while the analysis image G displayed in the circular area 112 is stationary and the movement of the analysis image G displayed in the circular area 112 itself.
  • FIG. 7 shows an example of the analysis result image of inspector A, who can perform a visual inspection using an optical microscope with high accuracy in a short time. FIG. 8 shows an example of the analysis result image of inspector B, whose productivity is low and who has misses compared with inspector A.
  • FIG. 7 is a view showing an example of an analysis result image of the inspector A according to the first embodiment.
  • FIGS. 7A to 7D show images of each frame cut out from the analysis result image data (moving image) of the inspector A displayed on the display device 170.
  • The image 113-1A, indicating the position of inspector A's line of sight in the analysis image G in the circular area 112, is located between the adjacently arranged image 71 and image 72.
  • the image 113-2A is positioned between the image 73 and the image 74 arranged adjacent to each other.
  • the image 113-3A is located at the center of the image 73, the image 74, the image 75, and the image 76.
  • the image 113-4A is located between the image 75 and the image 76 arranged adjacent to each other.
  • Thus, inspector A does not look at the image of each individual product but at the spaces between the images of a plurality of products. From this it is understood that inspector A does not stare at each product image with central vision, but instantaneously takes in the product images with peripheral vision. Further, in this example, the viewing angle (an angle representing the size of the field of view) is 17 degrees, which indicates peripheral vision.
  • From the positions of the images 113-1A to 113-4A in the analysis result images GS-1A to GS-4A, it can be understood that inspector A performs the visual inspection with appropriate line-of-sight movement that uses peripheral vision and instantaneous vision. In other words, an inspector who performs the visual inspection with appropriate movement of the line of sight can inspect with high accuracy in a short time.
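The contrast between inspectors A and B comes down to whether a fixation falls in the gap between product images (peripheral-vision style) or directly on one (staring). A sketch of that classification, with hypothetical bounding boxes standing in for product image positions:

```python
def gaze_location(gaze_xy, product_boxes):
    """Classify a gaze point as lying on a product image or in the
    space between images. Boxes are (x0, y0, x1, y1) rectangles."""
    gx, gy = gaze_xy
    for x0, y0, x1, y1 in product_boxes:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return "on product"
    return "between products"

boxes = [(0, 0, 40, 40), (60, 0, 100, 40)]  # two adjacent product images
print(gaze_location((50, 20), boxes))  # between products (inspector-A style)
print(gaze_location((20, 20), boxes))  # on product (inspector-B style)
```

Counting these classifications over the frames of the analysis result image would quantify the difference the figures show qualitatively.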
  • In the present embodiment, improving productivity means shortening the time required for the visual inspection.
  • FIG. 8 is a view showing an example of an analysis result image of the inspector B according to the first embodiment.
  • FIGS. 8A to 8D show images of each frame cut out from the analysis result image data (moving image) of the inspector B displayed on the display device 170.
  • the image 113-1B indicating the position of the line of sight of the inspector B in the analysis image G in the circular area 112 is at a position overlapping the product image 71.
  • the image 113-2B is at a position overlapping the image 74.
  • the image 113-3B is at a position overlapping the image 74.
  • the image 113-4B is at a position overlapping the image 75.
  • According to the present embodiment, even when the movement of an inspector's eyeballs cannot be captured during inspection because the face is in contact with an inspection instrument such as a microscope, showing the movement of the inspector's line of sight in an analysis result image makes it possible for the inspector, or another inspector, to observe that line of sight.
  • Visual inspection training can then be performed by having an inspector view his or her own analysis result image and the analysis result images of other inspectors.
  • FIG. 9 is a diagram showing the procedure of a training method of visual inspection using the gaze analysis system of the first embodiment.
  • the inspector specifies the analysis result image data GSd to be displayed on the display device 170 by the control device 160 (step S901).
  • the control device 160 receives the selection of the analysis result image data GSd to be displayed on the display device 170.
  • Subsequently, the control device 160 causes the display control unit 163 to read the designated analysis result image data GSd from the analysis result image storage unit 162 (step S902). The control device 160 then causes the display control unit 163 to display the read analysis result image data GSd on the display device 170, and the inspector who designated the analysis result image data GSd visually checks the analysis result image GS displayed on the display device 170 (step S903).
  • By having the inspector view the analysis result image data GSd in this way, the inspector can observe the movement of his or her own line of sight, and the lines of sight of other inspectors, within the field of view through the eyepiece.
  • For example, by having other inspectors observe the analysis result image GS of an inspector who performs high-accuracy inspections in a short time, the productivity of those other inspectors in visual inspection can be improved.
  • Specifically, inspector B is caused to observe his own analysis result image GS (see FIG. 8) and the analysis result image GS of inspector A (see FIG. 7).
  • FIG. 10 is a first diagram showing a modification of the gaze analysis system.
  • a gaze analysis system 100A shown in FIG. 10 includes a display device 110, a control device 120, a gaze detection device 130, and a display device 170.
  • the control device 120 doubles as the control device 160.
  • the analysis result image data GSd acquired by the line-of-sight detection device 130 is stored in the control device 120. Then, the control device 120 causes the target person P or a third party to observe the movement of the line of sight of the target person P by causing the display device 170 to display the analysis result image GS.
  • FIG. 11 is a second diagram showing a modified example of the gaze analysis system.
  • a gaze analysis system 100B shown in FIG. 11 includes a display device 110, a control device 120, and a gaze detection device 130.
  • the display device 110 doubles as the display device 170.
  • In the line-of-sight analysis system 100B, the control device 120 doubles as the control device 160.
  • the analysis result image data GSd acquired by the line-of-sight detection device 130 is stored in the control device 120. Further, in the line-of-sight analysis system 100B, the control device 120 causes the target person P or a third party to observe the movement of the line of sight of the subject P by causing the display device 110 to display the analysis result image GS.
  • As described above, the gaze analysis system can be realized as long as a display device, a control device, and a gaze detection device are provided, and the number of devices included in the gaze analysis system is arbitrary.
  • the second embodiment will be described below with reference to the drawings.
  • The second embodiment differs from the first embodiment in that a message corresponding to the result of analyzing the analysis result image data of each inspector is notified. In the following description of the second embodiment, therefore, only the differences from the first embodiment are described; components having the same functional configuration as in the first embodiment are given the same reference numerals as in the description of the first embodiment.
  • In the second embodiment, the average production number [pcs/hour], the miss rate [ppm], and the overdetection rate [ppm] can be obtained as information on each inspector.
  • the average production number is the number of products inspected by the inspector in unit time.
  • the unit time may be one hour, one minute, or any unit.
  • the miss rate indicates the ratio of missed defects to the number of products inspected by the inspector.
  • the over detection rate indicates the ratio of the number of non-defective products detected as defective products to the number of products inspected by the inspector.
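The three metrics above are simple ratios over the inspector's throughput. A hedged sketch of their computation, with illustrative counts (the function and field names are our own):

```python
def inspector_metrics(inspected, hours, missed_defects, over_detected):
    """Average production number [pcs/hour], miss rate [ppm], and
    overdetection rate [ppm] for one inspector."""
    return {
        "avg_production_pcs_per_hour": inspected / hours,
        "miss_rate_ppm": missed_defects / inspected * 1_000_000,
        "overdetection_rate_ppm": over_detected / inspected * 1_000_000,
    }

# Illustrative numbers only: 1200 products in 8 hours, 3 misses, 6 over-detections.
m = inspector_metrics(inspected=1200, hours=8, missed_defects=3, over_detected=6)
print(m["avg_production_pcs_per_hour"])  # 150.0
print(m["miss_rate_ppm"])                # 2500.0
```

Expressing the two error rates in ppm keeps them comparable across inspectors with very different throughput.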
  • the unit of the size of the viewpoint may be [number].
  • The average production number correlates with the size of the viewpoint and the moving speed of the viewpoint. The miss rate correlates with the coverage of the viewpoint. The overdetection rate correlates moderately with the moving speed of the viewpoint.
  • FIG. 12 is a diagram for explaining the size of the viewpoint, the moving speed of the viewpoint, and the coverage of the viewpoint.
  • the size of the viewpoint is, for example, the area of a region R1 of a predetermined range centered on the image 113-5 indicating the position of the line of sight displayed in the analysis result image GS-12 while that image is stagnant (not moving).
  • the size of the viewpoint may also be expressed as the number of products at least partially contained in the region R1 of the predetermined range; in the example of FIG. 12, the size of the viewpoint is then expressed as "two".
  • the moving speed of the viewpoint is a value obtained by calculating, for each product, the total of the time during which the image indicating the position of the line of sight stays still in the analysis result image and the time from when that image starts moving until it next stays still, and then averaging these totals.
  • in the example of FIG. 12, the total of the time during which the image 113-5 indicating the position of the line of sight is stagnant and the time from when the image 113-5 starts moving until it next becomes stagnant gives the moving speed of the viewpoint for each of the products 77 and 78.
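The per-product totaling and averaging described above can be sketched as follows; the function name and the sample dwell/travel times are illustrative assumptions, not values from the embodiment.

```python
def viewpoint_moving_speed(fixations):
    """Average 'moving speed' of the viewpoint in [s/pcs], following the
    definition above: for each stop of the gaze image, sum the time it
    stays still and the time it takes to move to the next stop, then
    average these totals over all stops.

    fixations -- list of (dwell_seconds, travel_seconds) pairs
    """
    totals = [dwell + travel for dwell, travel in fixations]
    return sum(totals) / len(totals)

# Two stops with hypothetical timings (e.g. products 77/78 then 79/80)
print(viewpoint_moving_speed([(2, 1), (3, 2)]))  # 4.0
```

Smaller values mean a faster-moving viewpoint, which is why the text later reports an improvement from 0.56 [s/pcs] to 0.34 [s/pcs].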
  • the coverage of the viewpoint indicates the ratio of the number of products that fall within the predetermined-range region indicating the size of the viewpoint to the number of products to be inspected by the inspector.
  • in the example of FIG. 12, part of the products 77 and 78 is included in the region R1 while the image 113-5 indicating the position of the line of sight is stagnant, and part of the products 79 and 80 is included in the region R2 while the image 113-6 indicating the position of the line of sight is stagnant. Therefore, the coverage of the viewpoint is not reduced in the regions R1 and R2.
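One way to compute this coverage ratio from bounding boxes could look like the following sketch; the rectangle representation of products and fixation regions, and all names and coordinates, are assumptions for illustration.

```python
def viewpoint_coverage(products, fixation_regions):
    """Coverage of the viewpoint: ratio of products that fall (at least
    partly) inside some fixation region to all inspected products.

    products         -- dict of product id -> bounding box (x0, y0, x1, y1)
    fixation_regions -- list of bounding boxes for the regions R1, R2, ...
    """
    def overlaps(a, b):
        # Two axis-aligned rectangles overlap iff they overlap on both axes.
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    covered = sum(
        1 for box in products.values()
        if any(overlaps(box, r) for r in fixation_regions)
    )
    return covered / len(products)

# Four products and two fixation regions that cover all of them,
# analogous to regions R1/R2 over products 77-80 in FIG. 12.
products = {77: (0, 0, 2, 2), 78: (2, 0, 4, 2), 79: (4, 0, 6, 2), 80: (6, 0, 8, 2)}
regions = [(0.5, 0.5, 3.5, 1.5), (4.5, 0.5, 7.5, 1.5)]
print(viewpoint_coverage(products, regions))  # 1.0
```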
  • FIG. 13 is a diagram for explaining the relationship between the size of the viewpoint, the moving speed of the viewpoint, and the average production number.
  • FIG. 13A is a diagram showing an example of an analysis result image GS-13A of an inspector using central vision, and FIG. 13B is a diagram showing an example of an analysis result image GS-13B of an inspector using peripheral vision.
  • in FIG. 13A, the regions R3, R4, R5, and R6 indicating the size of the viewpoint overlap the products 81, 82, 83, and 84, respectively. Therefore, in the example of FIG. 13A, the size of the viewpoint, expressed as a number of products, is "one".
  • in this case, the moving speed of the viewpoint corresponding to the product 81 is, for example, the total of the time during which the viewpoint stays in the region R3 and the time taken for the viewpoint to move from the region R3 to the region R4. Similarly, the total of the time during which the viewpoint stays in the region R4 and the time taken for the viewpoint to move from the region R4 to the region R5 corresponds to the moving speed of the viewpoint for the product 82.
  • in FIG. 13B, the region R7 indicating the size of the viewpoint overlaps parts of the products 81 and 82, and the region R8 overlaps parts of the products 83 and 84. Therefore, in the example of FIG. 13B, the size of the viewpoint, expressed as a number of products, is "two".
  • in this case, the total of the time during which the viewpoint stays in the region R7 and the time taken for the viewpoint to move from the region R7 to the region R8 is, for example, the moving speed of the viewpoint for each of the products 81 and 82.
  • when peripheral vision is used, the inspector does not stare at a single point, so the time during which the viewpoint is stagnant is shorter than when central vision is used.
  • thus, the average production number increases as the size of the viewpoint increases and the moving speed of the viewpoint (the time per product) decreases, which shows that the average production number correlates with the size of the viewpoint and the moving speed of the viewpoint.
  • FIG. 14 is a diagram for explaining the relationship between the coverage of the viewpoint and the miss rate.
  • FIG. 14(A) is a first diagram showing the relationship between the coverage of the viewpoint and the miss rate, and FIG. 14(B) is a second diagram showing the relationship between the coverage of the viewpoint and the miss rate.
  • in FIG. 14(A), the regions R3 to R7 indicating the size of the viewpoint overlap all of the products 81 to 86. When, as shown in FIG. 14(A), the region indicating the size of the viewpoint overlaps every product, for example for all ten products, the coverage of the viewpoint is 10 pcs / 10 pcs = 100 [%].
  • the missing rate of the subject P corresponding to the analysis result image GS-13A in FIG. 14 is 258 [ppm].
  • in FIG. 14(B), the region R11 indicating the size of the viewpoint overlaps the product 81, the region R12 overlaps the product 82, the region R13 overlaps the product 83, and the region R14 overlaps the product 85; for the product 84, no region indicating the size of the viewpoint overlaps.
  • the missing rate of the subject P corresponding to the analysis result image GS-13C of FIG. 14 is 1512 [ppm].
  • the gaze analysis system of the present embodiment will be described.
  • in the present embodiment, the analysis result image is analyzed to obtain, for each subject P (inspector), the size of the viewpoint, the moving speed of the viewpoint, and the coverage of the viewpoint.
  • in the present embodiment, the information obtained for each inspector from the analysis result image (the size, moving speed, and coverage of the viewpoint) is associated with the information obtained for each inspector from actual visual inspection using an optical microscope (the average production number, the overdetection rate, and the miss rate), and inspectors are trained in visual inspection based on this information for each inspector.
  • FIG. 15 is a diagram for explaining the function of each device of the gaze analysis system according to the second embodiment.
  • the gaze analysis system 100C of the present embodiment includes a display device 110, a control device 120A, and a gaze detection device 130.
  • the control device 120A includes an analysis image storage unit 121, a display control unit 122, an analysis result acquisition unit 123, a training support unit 125, an analysis result image database 210, an analysis result database 220, and a message database 230.
  • the analysis result acquisition unit 123 acquires the analysis result image data GSd for each inspector in each group of inspectors, and stores the acquired data in the analysis result image database 210. Further, the control device 120A causes the training support unit 125 to analyze the analysis result image data GSd for each inspector stored in the analysis result image database 210 and to store the analysis results in the analysis result database 220.
  • in the analysis result database 220, the information on each inspector obtained from actual visual inspection is stored in advance, and the information on each inspector obtained by analyzing the analysis result image data GSd is stored in association with the information on that inspector obtained from the visual inspection.
  • the control device 120A identifies, from the analysis results, information indicating the optimum movement of the line of sight in visual inspection with a microscope, and, when an inspector undergoes visual inspection training, refers to the message database 230 and notifies the inspector of points to note during training.
  • the training support unit 125 will be described below.
  • the training support unit 125 of the present embodiment is realized by the training processing program stored in the memory device being read and executed by the arithmetic processing unit of the control device 120A.
  • the training support unit 125 of the present embodiment includes an image analysis unit 126, an optimum value identification unit 127, a comparison unit 128, and a message selection unit 129.
  • the image analysis unit 126 analyzes the analysis result image data GSd for each inspector stored in the analysis result image database 210. Details of the analysis of the analysis result image data GSd by the image analysis unit 126 will be described later.
  • the optimum value specifying unit 127 specifies, from the analysis result by the image analysis unit 126, a value indicating an optimal eye movement in a group of inspectors who perform training. Details of the process of the optimum value identifying unit 127 will be described later.
  • the comparison unit 128 compares the analysis result of the analysis result image data GSd of the inspector who performs training, which is stored in the analysis result database 220, with the value indicating the optimum eye movement.
  • the message selection unit 129 selects a message corresponding to the comparison result by the comparison unit 128 from the message database 230.
  • each database included in the control device 120A of the present embodiment will be described with reference to FIGS. 16 to 18.
  • each database is provided in the control device 120A, but the present invention is not limited to this.
  • Each database may be stored in an external device other than the control device 120A.
  • FIG. 16 is a view showing an example of the analysis result image database of the second embodiment.
  • the analysis result image database 210 of the present embodiment has an inspector ID and analysis result image data as items of information, and they are associated with each other.
  • the value of the item “inspector ID” indicates identification information for identifying the inspector.
  • the inspector ID may be input when the inspector wears the visual axis detection device 130, visually checks the analysis image G, and acquires the analysis result image data GSd.
  • analysis result image data is the analysis result image data GSd itself.
  • FIG. 17 is a diagram showing an example of the analysis result database of the second embodiment.
  • the analysis result of the analysis result image data GSd by the image analysis unit 126 is stored.
  • the analysis result database 220 of the present embodiment has, as items of information, an inspector ID, the size of the viewpoint, the coverage of the viewpoint, the moving speed of the viewpoint, the missing rate, the average production number, and the overdetection rate.
  • the item “inspector ID” is associated with other items.
  • information including the value of the item “inspector ID” and the values of other items is referred to as analysis result information.
  • the values of the items “viewpoint size”, “viewpoint coverage”, and “viewpoint movement speed” are obtained by the image analysis unit 126 analyzing the analysis result image data GSd for each inspector.
  • the values of the items "size of the viewpoint”, “coverage of the viewpoint”, and “moving speed of the viewpoint” are information on the inspector obtained from the analysis by the gaze analysis system 100C.
  • the values of the items “missing rate”, “average number of productions”, and “overdetection rate” are information on the inspector obtained from the actual visual inspection by the inspector.
  • Each item included in the analysis result database 220 is as described with reference to FIGS. 12 to 14.
  • FIG. 18 is a diagram showing an example of a message database of the second embodiment.
  • the message database 230 of the present embodiment is provided for each of the items "viewpoint size”, “viewpoint coverage”, and “line of sight moving speed” in the analysis result database 220.
  • FIG. 18 shows an example of the message database 230 corresponding to the item “size of the viewpoint”.
  • the message database 230 has, as items of information, comparison results and messages, and both are associated with each other.
  • the value of the item “comparison result” indicates the result of comparison between the optimum value of the size of the viewpoint specified by the optimum value specification unit 127 and the size of the viewpoint for each inspector.
  • the value of the item “message” indicates a message corresponding to the comparison result.
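A minimal sketch of such a per-item message table and the selection step might look as follows; the comparison categories and message texts are invented for illustration and do not appear in the embodiment.

```python
# Hypothetical message database: one table per metric, mapping a
# comparison result to a notification message (cf. FIG. 18).
MESSAGE_DB = {
    "viewpoint_size": {
        "below_optimum": "Try widening your field of view around each product.",
        "at_optimum": "Your viewpoint size matches the group optimum.",
    },
    "viewpoint_coverage": {
        "below_optimum": "Some products were never covered by your gaze.",
        "at_optimum": "All products were covered.",
    },
    "gaze_moving_speed": {
        "below_optimum": "Your gaze dwells longer than the group optimum.",
        "at_optimum": "Your gaze speed matches the group optimum.",
    },
}

def select_message(item, comparison_result):
    """Message selection: look up the table for the item, then the entry
    for the comparison result (cf. the message selection unit 129)."""
    return MESSAGE_DB[item][comparison_result]

print(select_message("viewpoint_coverage", "below_optimum"))
```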
  • FIG. 19 is a flowchart for explaining the processing of the training support unit of the second embodiment.
  • the training support unit 125 causes the image analysis unit 126 to acquire analysis result image data GSd of an inspector belonging to a certain group from the analysis result image database 210 (step S1901).
  • the image analysis unit 126 calculates the average size of the individual viewpoints from the images 113 indicating the positions of the line of sight in the acquired analysis result image data GSd, and sets it as the size of the viewpoint of this inspector (step S1902).
  • the image analysis unit 126 calculates the coverage of the viewpoint in the acquired analysis result image data GSd (step S1903). Subsequently, the image analysis unit 126 calculates the moving speed of the sight line in the acquired analysis result image data GSd (step S1904).
  • subsequently, the image analysis unit 126 identifies the information obtained from the inspector's actual visual inspection, and sets, as analysis result information, the identified information associated with the size of the viewpoint, the coverage of the viewpoint, and the moving speed of the line of sight (step S1905).
  • specifically, the image analysis unit 126 associates the miss rate, the average production number, and the overdetection rate associated with the inspector ID of this inspector with the size of the viewpoint, the coverage of the viewpoint, and the moving speed of the line of sight to form the analysis result information.
  • the image analysis unit 126 stores the analysis result information in the analysis result database 220 (step S1906).
  • the missing rate, the average number of productions, and the overdetection rate for each inspector may be stored in advance in the analysis result database 220, or may be stored in another storage device.
  • the image analysis unit 126 determines whether or not the processing up to step S1906 has been performed on the analysis result image data GSd of all the inspectors who belong to the group (step S1907). If the process has not been performed on all the analysis result image data GSd in step S1907, the process returns to step S1901.
  • when the processing has been performed on all the analysis result image data GSd in step S1907, the training support unit 125 causes the optimum value specifying unit 127 to specify and hold the optimum values of the “size of the viewpoint”, the “coverage of the viewpoint”, and the “moving speed of the line of sight” (step S1908), and the processing ends.
  • the optimum value specifying unit 127 may select, as the inspector whose analysis result information is referred to in specifying the optimum values, for example, the inspector with the lowest miss rate, the inspector with the largest average production number, or an inspector whose miss rate and overdetection rate are relatively low and whose average production number is relatively high.
  • in this case, the optimum value specifying unit 127 may set, as the optimum values, the values of the “size of the viewpoint”, the “coverage of the viewpoint”, and the “moving speed of the line of sight” in the analysis result information containing the selected inspector's ID.
  • alternatively, the optimum value specifying unit 127 of the present embodiment may take, for example, the maximum value among the values of the items “size of the viewpoint”, “coverage of the viewpoint”, and “moving speed of the line of sight” stored in the analysis result database 220 as the optimum value.
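One possible reading of this selection rule, sketched in Python with hypothetical field names, is to take the gaze metrics from the record of the inspector with the lowest miss rate:

```python
def identify_optimum_values(analysis_results):
    """Sketch of optimum-value identification (cf. step S1908): use the
    gaze metrics of the inspector with the lowest miss rate as the group
    optimum. Field names are assumptions, not the embodiment's schema.

    analysis_results -- list of dicts with keys 'inspector_id',
        'viewpoint_size', 'viewpoint_coverage', 'gaze_moving_speed',
        'miss_rate_ppm'
    """
    best = min(analysis_results, key=lambda r: r["miss_rate_ppm"])
    return {
        "viewpoint_size": best["viewpoint_size"],
        "viewpoint_coverage": best["viewpoint_coverage"],
        "gaze_moving_speed": best["gaze_moving_speed"],
    }

# Two hypothetical inspector records (miss rates echo the 258/1512 ppm
# figures mentioned in the text, purely for illustration)
records = [
    {"inspector_id": "A", "viewpoint_size": 2, "viewpoint_coverage": 1.0,
     "gaze_moving_speed": 0.34, "miss_rate_ppm": 258},
    {"inspector_id": "B", "viewpoint_size": 1, "viewpoint_coverage": 0.8,
     "gaze_moving_speed": 0.56, "miss_rate_ppm": 1512},
]
print(identify_optimum_values(records))
```

Taking the per-item maximum across all inspectors, as the alternative above suggests, would be an equally simple variant.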
  • FIG. 20 is a flow chart for explaining a process of supporting training of an inspector in the gaze analysis system according to the second embodiment.
  • the training support unit 125 determines whether a training start request has been received (step S2001). If the start request has not been received in step S2001, the process waits until the start request is received.
  • when the start request is received in step S2001, the training support unit 125 determines whether an input of an inspector ID has been received (step S2002). If no inspector ID is received in step S2002, the training support unit 125 may end the processing, or may display the analysis result image data GSd acquired immediately before the training start request was received.
  • when the inspector ID has been input in step S2002, the training support unit 125 causes the comparison unit 128 to acquire the analysis result information corresponding to the input inspector ID from the analysis result database 220 (step S2003).
  • the comparison unit 128 compares the size of the viewpoint, the coverage of the viewpoint, and the moving speed of the line of sight included in the acquired analysis result information with the optimum values held by the optimum value specifying unit 127 (step S2004).
  • the training support unit 125 causes the message selection unit 129 to refer to the message database 230 and select a message corresponding to the comparison result of the size of the viewpoint, the coverage of the viewpoint, and the movement speed of the gaze (step S2005).
  • the display control unit 122 causes the display device 110 to display the selected message (step S2006), and ends the process.
  • as described above, in the present embodiment, the “size of the viewpoint”, the “coverage of the viewpoint”, and the “moving speed of the line of sight” calculated from the analysis result image data GSd are compared with the optimum values in the group. Then, in the present embodiment, a message corresponding to the comparison result is displayed on the display device 110 for each of the “size of the viewpoint”, the “coverage of the viewpoint”, and the “moving speed of the line of sight”.
  • therefore, by having each inspector train in visual inspection with a microscope for a certain period of time, the variation in inspection accuracy among inspectors can be reduced. In other words, the variation among inspectors in productivity in visual inspection with a microscope can be reduced.
  • the controller 120A notifies the inspector of cautions in training, but the present invention is not limited to this.
  • a training leader who browses the analysis result database 220 of the control device 120A may appropriately convey cautions according to the analysis result information for each inspector.
  • in one example, the moving speed of the viewpoint improved from 0.56 [s/pcs] to 0.34 [s/pcs], and the average production number improved by about 65%.
  • the above-described embodiments can be applied to analysis of the visual line in visual observation through an eyepiece, such as binoculars, telescopes, and opera glasses.
  • when grasping the field of view in the case of using binoculars or opera glasses, an image captured through the eyepiece of the binoculars or opera glasses may be used as the analysis image G.
  • the analysis image G may not be high resolution and may not be a vertically long image, but is preferably a three-dimensional image.
  • the line-of-sight analysis system may use an image captured through the eyepiece of the telescope as the analysis image G.
  • the analysis image G may not be high resolution and may not be a vertically long image.
  • devices worn with a holding member that holds a lens in contact with the face include, for example, goggles for sports such as swimming, skiing, and snowboarding, and safety goggles used for safety management at medical sites, construction sites, and the like.
  • when such a device is worn, the space between the face and the lens is closed off by the holding member that holds the lens, so a gaze detection device for photographing the movement of the eyes cannot be worn together with the device.
  • the term “lens” here includes, in addition to a lens in the narrow sense, a transparent body such as glass that covers the face.
  • devices worn with the holding member holding the lens in contact with the face also include, for example, VR (Virtual Reality) goggles in addition to the goggles described above.
  • 100, 100A to 100C Gaze analysis system; 110, 170 Display device; 120, 120A, 160 Control device; 121 Analysis image storage unit; 122 Display control unit; 123 Analysis result acquisition unit; 124 Analysis result image storage unit; 125 Training support unit; 130 Gaze detection device; 210 Analysis result image database; 220 Analysis result database; 230 Message database


Abstract

A gaze analysis system comprises: a storage unit for storing an image for analysis produced by using image data captured via an eyepiece; a display unit for displaying the image for analysis; a gaze detection unit for detecting the gaze of a subject viewing the image for analysis presented on the display unit; and an output unit that outputs image data resulting from the analysis where the image for analysis and the subject's gaze are superimposed.

Description

Gaze analysis system, gaze analysis method, and training method
The present invention relates to a gaze analysis system, a gaze analysis method, and a training method.
Conventionally, visual inspection using a microscope such as an optical microscope has been known as a method for quality inspection of products such as chips mounted in electronic devices. In recent years, because of the higher quality requirements for the chips to be inspected, such visual inspection is required, more than ever, to be performed in a shorter time while maintaining reliability.
In recent years, therefore, the efficiency of quality inspection has been improved by automating part of the visual inspection using, for example, an appearance inspection apparatus that inspects quality based on the appearance of a product. However, visual inspection is the process in a product production line that is most difficult to automate; in particular, in visual inspection with a microscope, the inspector moves the products by hand while looking through the microscope and inspects each product one by one, which requires skill.
For this reason, many people are still engaged in inspection work in visual inspection using a microscope or the like, and the accuracy of inspection varies among the inspectors who perform it. Accordingly, methods of improving the ability of inspectors have conventionally been sought in order to reduce this variation in inspection accuracy.
As a method for improving the ability of visual inspection inspectors, there is known, for example, a method in which an eye camera is attached to an excellent inspector, other inspectors are made to observe the movement of this inspector's line of sight during inspection, and the other inspectors are then made to move their eyes in the same way.
Japanese Patent Laid-Open No. 3-257413; JP-A-2015-500732; Japanese Patent Laid-Open No. 2006-254145
In visual inspection using a microscope, the inspection is performed with the inspector looking through the eyepiece, so an eye camera cannot be worn during inspection, and the inspector's line of sight cannot be observed by the inspector or by other inspectors. For this reason, in conventional visual inspection using a microscope, it is difficult, for example, to grasp the cause of the variation in inspection accuracy among inspectors. It is also impossible to show an inspector what a standard movement of the line of sight looks like, so inspection training cannot be carried out. Furthermore, it is impossible for a plurality of inspectors to hold a discussion while looking at a particular inspector's viewpoint, or to teach how to move the products while observing the actual movement of the viewpoint.
In inspection with a digital microscope, an optical microscope that allows the object to be observed directly on a display via a digital camera, the inspector does not need to place an eye against the lens. However, the performance of a digital microscope, such as its stereoscopic rendering, resolution, focus adjustment function, and the number of images of a moving inspection object it can capture per second, differs from the functions of the inspector's eyes. For this reason, it is difficult for a digital microscope to reproduce the state in which the inspector observes the object through an eyepiece, and an apparatus capable of reproducing this state would, as of the filing of the present application, require a large investment, making it difficult to apply.
The disclosed technology has been made in view of the above circumstances, and aims to make a detected line of sight observable.
The disclosed technology is a gaze analysis system comprising: a storage unit that stores an analysis image generated using image data captured through an eyepiece; a display unit that displays the analysis image; a gaze detection unit that detects the line of sight of a subject viewing the analysis image displayed on the display unit; and an output unit that outputs image data of an analysis result image in which the analysis image and the subject's line of sight are superimposed.
The disclosed technology is also a gaze analysis system comprising: a storage unit that stores an analysis image generated using image data captured through an eyepiece; a display unit that displays the analysis image; a gaze detection unit that is worn by a wearer viewing the analysis image displayed on the display unit and detects the wearer's line of sight; and an output unit that outputs image data of an analysis result image in which the analysis image and the wearer's line of sight are superimposed.
The disclosed technology is also a training method for visually observing an object through an eyepiece, in which a subject of gaze detection is made to view an analysis image, generated using image data captured through an eyepiece, displayed on a display device, and after this viewing, the subject is made to view an analysis result image in which the analysis image and the subject's line of sight detected on the analysis image are superimposed.
The disclosed technology is also a training method for visually observing an object through an eyepiece, in which a wearer of a gaze detection device is made to view an analysis image, generated using image data captured through an eyepiece, displayed on a display device, and after this viewing, the wearer is made to view an analysis result image in which the analysis image and the wearer's line of sight detected on the analysis image are superimposed.
The disclosed technology is also a training method for visually observing an object through an eyepiece, in which a subject of gaze detection is made to view an analysis image, generated using image data captured through an eyepiece, displayed on a display device, and after this viewing, a person other than the subject is made to view an analysis result image in which the analysis image and the subject's line of sight detected on the analysis image are superimposed.
The disclosed technology is also a training method for visually observing an object through an eyepiece, in which a wearer of a gaze detection device is made to view an analysis image, generated using image data captured through an eyepiece, displayed on a display device, and after this viewing, a person other than the wearer is made to view an analysis result image in which the analysis image and the wearer's line of sight detected on the analysis image are superimposed.
The disclosed technology is also a gaze analysis system comprising: a storage unit that stores an analysis image generated using image data captured through a lens held by a holder that contacts the face of a subject whose line of sight is to be detected; a display unit that displays the analysis image; a gaze detection unit that detects the subject's line of sight on the analysis image displayed on the display unit; and an output unit that outputs image data of an analysis result image in which the analysis image and the subject's line of sight are superimposed.
The detected line of sight can thus be observed.
BRIEF DESCRIPTION OF THE DRAWINGS
A diagram explaining the gaze analysis system of the first embodiment.
A diagram explaining another gaze analysis system of the first embodiment.
A diagram explaining the positional relationship between the subject and the display device in the gaze analysis system of the first embodiment.
A diagram explaining the analysis image of the first embodiment.
A diagram explaining the function of each device of the gaze analysis system of the first embodiment.
A diagram explaining the procedure for analyzing the subject's line of sight with the gaze analysis system of the first embodiment.
A diagram explaining the analysis result image of the first embodiment.
A diagram showing an example of the analysis result image of inspector A of the first embodiment.
A diagram showing an example of the analysis result image of inspector B of the first embodiment.
A diagram showing the procedure of a visual inspection training method using the gaze analysis system of the first embodiment.
A first diagram showing a modification of the gaze analysis system.
A second diagram showing a modification of the gaze analysis system.
A diagram explaining the size of the viewpoint, the moving speed of the viewpoint, and the coverage of the viewpoint.
A diagram explaining the relationship between the size of the viewpoint, the moving speed of the viewpoint, and the average productivity.
視点の網羅性と見逃し率の関係を説明する図である。It is a figure explaining the relationship between the completeness of a viewpoint, and the missing rate. 第二の実施形態の視線分析システムの有する各装置の機能を説明する図である。It is a figure explaining the function of each apparatus which the gaze analysis system of 2nd embodiment has. 第二の実施形態の分析結果画像データベースの一例を示す図である。It is a figure which shows an example of the analysis result image database of 2nd embodiment. 第二の実施形態の集計結果データベースの一例を示す図である。It is a figure which shows an example of the count result database of 2nd embodiment. 第二の実施形態のメッセージデータベースの一例を示す図である。It is a figure which shows an example of the message database of 2nd embodiment. 第二の実施形態のトレーニング支援部の処理を説明するフローチャートである。It is a flowchart explaining the process of the training assistance part of 2nd embodiment. 第二の実施形態の視線分析システムにおける検査員のトレーニングを支援する処理を説明するフローチャートである。It is a flowchart explaining the process which supports the training of the inspector in the gaze analysis system of 2nd embodiment.
(First embodiment)
The first embodiment will be described below with reference to the drawings. FIG. 1A is a diagram illustrating the gaze analysis system of the first embodiment.
The gaze analysis system 100 of the present embodiment includes a display device 110, a control device 120, a gaze detection device 130, a control device 160, and a display device 170.
The display device 110 displays an analysis image G used to detect the gaze movement of a subject P under analysis. The control device 120 holds the analysis image data from which the analysis image G is generated, and causes the display device 110 to display the analysis image G.
The gaze detection device 130 is, for example, an eye camera or the like; it is worn by the subject P, captures an image of the subject P's field of view, and acquires the image data of the captured image together with information indicating the position of the subject P's gaze in that image. The gaze detection device 130 then outputs the acquired image data and the gaze position information to the control device 160.
Note that although FIG. 1A shows the gaze detection device 130 worn by the subject P, the configuration is not limited to this; any device capable of detecting the gaze of the subject P may be used. For example, as shown in FIG. 1B, the gaze detection device 130 need not be a device worn by the subject P such as an eye camera, but may instead be a camera 130A or the like installed below the display device 110 and directed toward the subject P. To detect the gaze, the camera 130A preferably has high resolution and a near-infrared light source. The camera 130A may be installed above or below the display device 110; installation below is preferable, and installation at two positions, lower left and lower right, is more preferable.
Specifically, the gaze detection device 130 outputs to the control device 160 image data of an image in which a mark or the like indicating the position of the subject P's gaze is superimposed on the captured image. In the following description, the image data that the control device 160 acquires from the gaze detection device 130 is referred to as analysis result image data.
In the present embodiment, detecting the gaze of the wearer of the gaze detection device 130 in the image captured by the gaze detection device 130 and superimposing an image indicating the gaze position on the captured image may be referred to as "analysis".
In the present embodiment, the control device 160 causes the display device 170 to display the analysis result image data, so that the subject P, or a third party other than the subject P, can observe the movement of the subject P's gaze while the subject P views the analysis image.
The analysis result image data output from the gaze detection device 130 to the control device 160 may be stored in, for example, a portable storage device such as a USB (Universal Serial Bus) memory, or in a storage device provided in the control device 160. The analysis result image data stored in the storage device may be read by the control device 160 and displayed on the display device 170.
The gaze analysis system 100 will be described further below. The gaze analysis system 100 of the present embodiment is configured so as to reproduce, for example, the field of view of the subject P looking through an eyepiece provided on a microscope or the like.
Specifically, in the present embodiment, for example, the analysis image G is an image created using images captured through the eyepiece of a microscope with an imaging device attached to the eyepiece. Furthermore, in the present embodiment, when the analysis image G is displayed on the display device 110, the area of the screen 111 of the display device 110 other than a circular area 112 is masked so as to represent the view through an eyepiece, so that the subject P perceives the analysis image G as a circular image. In addition, in the present embodiment, the analysis image G displayed within the circular area 112 can be moved (scrolled) with a pointer 140 that communicates with the control device 120. The pointer 140 may be a touch pad or the like attached to the control device 120, or a mouse or the like connected to the control device 120.
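The masking described above amounts to a simple point-in-circle test on screen pixels. The following is a minimal sketch of that idea; the function name, screen size, and circle parameters are illustrative assumptions, not part of the disclosure:

```python
import math

def in_eyepiece_view(x, y, cx, cy, diameter):
    """Return True if screen pixel (x, y) lies inside the circular area
    (center (cx, cy), given diameter in pixels); pixels outside would be masked."""
    return math.hypot(x - cx, y - cy) <= diameter / 2.0

# Example: a circular area centered on a 1920x1080 screen.
cx, cy, d = 960, 540, 800
print(in_eyepiece_view(960, 540, cx, cy, d))  # center of the circle -> True
print(in_eyepiece_view(0, 0, cx, cy, d))      # screen corner -> False
```

In an actual renderer the same test would decide, per pixel, whether to show the analysis image G or the mask color.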
The gaze analysis system 100 of the present embodiment will be described further below with reference to FIG. 2. FIG. 2 is a diagram illustrating the positional relationship between the subject and the display device in the gaze analysis system of the first embodiment.
In the present embodiment, the diameter r of the circular area 112 is set to about 20 cm so that the analysis image G displayed in the circular area 112 of the screen 111 of the display device 110 falls within the viewing angle of the subject P in a state close to looking through a microscope. To secure a circular area 112 with a diameter r of about 20 cm, the display device 110 of the present embodiment is a monitor of 22 inches or larger.
Furthermore, in the present embodiment, the distance L between the display device 110 and the eyes of the subject P is set to about 40 cm. The distance L may be adjusted according to the magnification of the inspection object when the subject P performs visual inspection with a microscope.
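With the dimensions given above (circular area of diameter r ≈ 20 cm viewed from L ≈ 40 cm), the visual angle subtended by the circular area can be estimated with the standard geometric formula 2·atan((r/2)/L). This computation is ordinary geometry used to check the stated dimensions, not part of the disclosure:

```python
import math

def visual_angle_deg(diameter_cm, distance_cm):
    """Visual angle (degrees) subtended by a circle of the given
    diameter seen from the given viewing distance."""
    return 2.0 * math.degrees(math.atan((diameter_cm / 2.0) / distance_cm))

# Circular area 112: diameter ~20 cm viewed from ~40 cm.
print(round(visual_angle_deg(20, 40), 1))  # 28.1 (degrees)
```

So the circular area spans roughly 28 degrees of visual angle, a field wide enough to engage peripheral vision rather than central vision alone.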
Next, the analysis image G of the present embodiment will be described with reference to FIG. 3. FIG. 3 is a diagram illustrating the analysis image of the first embodiment.
The analysis image G of the present embodiment may be, for example, an image obtained by joining a plurality of images G1, each captured by attaching an imaging device such as a digital camera to one of the eyepieces of an optical microscope while a product to be inspected is set on the optical microscope.
The analysis image G of the present embodiment may include images of a plurality of types of products. Specifically, the analysis image may include a plurality of images G1 and a plurality of images G2, the images G2 being obtained by imaging a product of a different type from the product shown in the images G1.
The analysis image G of the present embodiment is created, for example, using an image-processing application used in the printing industry and elsewhere (such as Illustrator), as a high-resolution, vertically long image that preserves the resolution of the imaging device. In the present embodiment, making the analysis image G a vertically long image allows it to be scrolled within the circular area 112 using the pointer 140. By having the subject P scroll the analysis image G, the operation of moving the product by hand during visual inspection with a microscope can be reproduced.
In the present embodiment, configuring the gaze analysis system 100 as described above makes it possible to create a state that imitates, for example, the subject P visually inspecting a semiconductor chip or the like using an optical microscope. In other words, the gaze analysis system 100 of the present embodiment can reproduce a state in which the subject P wearing the gaze detection device 130 is visually inspecting a semiconductor chip or the like using an optical microscope.
As another example, by providing the camera 130A or the like, the gaze analysis system 100 of the present embodiment can reproduce the state in which the subject P is visually inspecting a semiconductor chip or the like using an optical microscope even without the subject wearing the gaze detection device 130.
Therefore, in the present embodiment, information (an analysis result image) indicating the gaze movement of the subject P while the subject P performs visual inspection using an optical microscope can be obtained.
In the present embodiment, for example, if the size of an image obtained by capturing one product to be inspected through the eyepiece is 200 kB and this image is used as a base image BG, the image G1 may be an image containing 8 rows of 9 base images arranged horizontally. In that case, the size of the image G1 is 14,400 kB.
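The size estimate above follows directly from the tile count: per-tile size multiplied by tiles per row and by the number of rows. A minimal check of that arithmetic (the function name is illustrative):

```python
def mosaic_size_kb(tile_kb, tiles_per_row, rows):
    """Uncompressed size of a mosaic image built from identical tiles."""
    return tile_kb * tiles_per_row * rows

# Image G1: 9 base images BG per row, 8 rows, 200 kB per base image.
print(mosaic_size_kb(200, 9, 8))  # 14400 (kB)
```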
The analysis image G of the present embodiment may also be an image containing 30 rows of 9 base images BG arranged horizontally. In that case, the size of the analysis image G is 150,000 kB (150 MB).
In the present embodiment, the resolution of the analysis image G is maintained by holding the analysis image G of this size as it is, without compression.
Although the analysis image G of the present embodiment is a two-dimensional image captured by an imaging device attached to one of the two eyepieces, it is not limited to this. The analysis image G of the present embodiment may be, for example, a three-dimensional image generated from images captured by imaging devices attached to both of the two eyepieces. Note that the term "image" in the present embodiment also includes moving images; therefore, the analysis image G may be a three-dimensional moving image.
If the analysis image G is a three-dimensional image, the field of view of the subject P viewing the analysis image G in the gaze analysis system 100 can be brought closer to the field of view perceived when the subject P looks through a microscope.
In the present embodiment, when the analysis image G is a three-dimensional image, 3D glasses having a gaze detection function may be worn by the subject P instead of the gaze detection device 130. Using 3D glasses in this way allows the image of the product under inspection to be shown to the subject P stereoscopically, bringing the view even closer to the actual field of view.
Each device of the gaze analysis system 100 of the present embodiment will be described below with reference to FIG. 4. FIG. 4 is a diagram illustrating the functions of each device of the gaze analysis system of the first embodiment.
In the gaze analysis system 100 of the present embodiment, the control device 120 includes an analysis image storage unit 121 and a display control unit 122.
The analysis image storage unit 121 holds analysis image data Gd created in advance.
The display control unit 122 causes the display device 110 to display the analysis image G based on the analysis image data Gd. The display control unit 122 also causes the display device 110 to display an analysis result image GS based on analysis result image data GSd. The analysis result image data GSd of the present embodiment is image data indicating the movement of the subject P's gaze over the analysis image G; in other words, it is moving image data showing the trajectory of the subject P's gaze on the analysis image G.
The control device 160 of the present embodiment includes an analysis result acquisition unit 161, an analysis result image storage unit 162, and a display control unit 163.
The analysis result acquisition unit 161 acquires the analysis result image data GSd output from the gaze detection device 130 and stores it in the analysis result image storage unit 162. When the analysis image G is displayed on the display device 110, the analysis result acquisition unit 161 of the present embodiment may, for example, have the subject P wearing the gaze detection device 130 enter identification information identifying the subject P, and store the identification information in the analysis result image storage unit 162 in association with the analysis result image data GSd.
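The association between a subject's identification information and the analysis result image data GSd can be sketched as a simple keyed store. All names below are illustrative assumptions for explanation, not the actual implementation of the storage unit 162:

```python
class AnalysisResultStore:
    """Minimal stand-in for the analysis result image storage unit 162:
    analysis result image data keyed by subject identification information."""

    def __init__(self):
        self._results = {}

    def store(self, subject_id, result_image_data):
        # One subject may record several sessions; keep all of them.
        self._results.setdefault(subject_id, []).append(result_image_data)

    def results_for(self, subject_id):
        return self._results.get(subject_id, [])

store = AnalysisResultStore()
store.store("inspector_A", b"...GSd frames...")
print(len(store.results_for("inspector_A")))  # 1
```

Keying by identification information is what later allows a specific inspector's analysis result image to be selected for display.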
The analysis result image storage unit 162 holds the analysis result image data GSd for displaying the analysis result image GS.
The display control unit 163 causes the display device 170 to display the analysis result image data GSd.
The gaze detection device 130 of the present embodiment includes an imaging unit 131, a gaze detection unit 132, an image generation unit 133, and an output unit 134.
The imaging unit 131 is an imaging device such as a camera, and captures an image of the scene in the gaze direction of the subject P wearing the gaze detection device 130.
The gaze detection unit 132 detects the gaze of the wearer of the gaze detection device 130 in the image captured by the imaging unit 131. The image generation unit 133 generates an image in which an image indicating the gaze position detected by the gaze detection unit 132 is superimposed on the image captured by the imaging unit 131. The output unit 134 outputs the image data of the image generated by the image generation unit 133 to the control device 160 as analysis result image data.
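The superimposition performed by the image generation unit 133 can be pictured as drawing a marker at the detected gaze coordinates on each captured frame. The following toy sketch uses a 2D list of pixel values as a stand-in frame and a square marker; the frame representation, marker shape, and names are assumptions made for illustration:

```python
def overlay_gaze_marker(frame, gx, gy, radius=1, value=255):
    """Return a copy of a 2D grayscale frame (list of lists of ints)
    with a square marker drawn around the detected gaze position (gx, gy)."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # leave the captured frame untouched
    for y in range(max(0, gy - radius), min(h, gy + radius + 1)):
        for x in range(max(0, gx - radius), min(w, gx + radius + 1)):
            out[y][x] = value
    return out

frame = [[0] * 5 for _ in range(5)]
marked = overlay_gaze_marker(frame, 2, 2)
print(marked[2][2], frame[2][2])  # 255 0  (marker drawn; original unchanged)
```

Applying this per frame of the captured video yields a moving marker that tracks the gaze, i.e. the analysis result image data.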
Next, the procedure for analyzing the gaze of the subject P with the gaze analysis system 100 of the present embodiment will be described with reference to FIG. 5. FIG. 5 is a diagram illustrating the procedure of analyzing the subject's gaze with the gaze analysis system of the first embodiment.
In the gaze analysis system 100 of the present embodiment, the display control unit 122 of the control device 120 reads the analysis image data held in the analysis image storage unit 121 and causes the display device 110 to display the analysis image G (step S501).
Next, in the gaze analysis system 100, the imaging unit 131 of the gaze detection device 130 worn by the subject P starts capturing images of the field of view of the subject P viewing the analysis image G on the display device 110 (step S502).
Next, in the gaze detection device 130, the gaze detection unit 132 detects the gaze of the subject P in the captured image, and the image generation unit 133 generates analysis result image data representing an analysis result image in which an image indicating the gaze position is superimposed on the captured image (step S503). Details of the analysis result image will be described later.
Next, the output unit 134 of the gaze detection device 130 outputs the analysis result image data to the control device 160 (step S504). The analysis result acquisition unit 161 of the control device 160 acquires the analysis result image data output from the gaze detection device 130 and stores it in the analysis result image storage unit 162 (step S505).
In the present embodiment, displaying the analysis result image data acquired as described above on the display device 170 visualizes the movement of the subject P's gaze in a view representing the subject P's field of vision through the eyepiece. Therefore, according to the present embodiment, the subject P and other people can observe the gaze movement within the field of view of the subject P as seen through the eyepiece.
Thus, by using the gaze analysis system 100 of the present embodiment, for example in visual inspection using an optical microscope, the analysis result image of an inspector capable of inspecting quickly and with high accuracy can be shown to other inspectors.
The analysis result image of the present embodiment will be described below with reference to FIG. 6. FIG. 6 is a diagram illustrating the analysis result image of the first embodiment.
The subject P wearing the gaze detection device 130 is viewing the screen 111 displayed on the display device 110. Therefore, the gaze detection device 130 captures the screen 111, as shown in FIG. 6.
The gaze detection device 130 also detects the position of the subject P's gaze on the captured image of the screen 111 and superimposes an image 113 indicating the gaze position. The image 113 on the analysis image G moves in accordance with the movement of the subject P's gaze; in other words, the image 113 follows the movement of the subject P's gaze.
In the present embodiment, the amount of movement of the subject P's gaze from a predetermined position in the analysis image G may be determined and held in the analysis result image storage unit 162 as numerical data, together with the analysis result image data GSd. The predetermined position may be coordinates indicating the center of the image of the product to be inspected included in the analysis image G.
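One straightforward way to express such a movement amount numerically is the Euclidean distance between the detected gaze position and the predetermined position. This is a minimal sketch under that assumption; the distance metric and names are illustrative, not specified by the disclosure:

```python
import math

def gaze_movement_amount(gaze_xy, reference_xy):
    """Distance of the detected gaze position from a predetermined
    reference position, e.g. the center of a product image."""
    (gx, gy), (rx, ry) = gaze_xy, reference_xy
    return math.hypot(gx - rx, gy - ry)

# Gaze at (130, 90), product-image center at (100, 50), in image pixels.
print(gaze_movement_amount((130, 90), (100, 50)))  # 50.0
```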
In the present embodiment, a numerical value indicating the movement amount may be output together with the analysis result image data GSd. Furthermore, the numerical value indicating the movement amount may be displayed on the analysis image G.
Furthermore, while gaze analysis is being performed by the gaze analysis system 100, the subject P scrolls the analysis image G up, down, left, and right using the pointer 140, thereby moving the region of the analysis image G displayed within the circular area 112.
Therefore, the movement of the image 113 follows both the movement of the subject P's gaze while the analysis image G displayed in the circular area 112 is stationary, and the movement of the analysis image G displayed in the circular area 112.
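Because the marker reflects both the gaze on the screen and the scrolling of the analysis image G, a gaze position expressed in analysis-image coordinates can be recovered by combining the on-screen gaze position with the current scroll offset. The coordinate convention below (image position = screen position + scroll offset, origin at top left) is an assumption for illustration:

```python
def gaze_in_image_coords(screen_gaze, scroll_offset):
    """Map a gaze position on the screen to a position on the (scrolled)
    analysis image G: image position = screen position + scroll offset."""
    (sx, sy), (ox, oy) = screen_gaze, scroll_offset
    return (sx + ox, sy + oy)

# Gaze at screen point (400, 300) while the vertically long image
# has been scrolled down by 1200 pixels.
print(gaze_in_image_coords((400, 300), (0, 1200)))  # (400, 1500)
```

Tracking this combined coordinate is what lets the same gaze trajectory be compared across frames even as the subject scrolls.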
Examples of the analysis result image of the present embodiment will be described below with reference to FIGS. 7 and 8. FIG. 7 shows an example of the analysis result image of an inspector A who, in visual inspection using an optical microscope for example, can perform highly accurate inspection in a short time, while FIG. 8 shows an example of the analysis result image of an inspector B who, compared with inspector A, misses defects and has lower productivity.
FIG. 7 is a diagram showing an example of the analysis result image of inspector A of the first embodiment. FIGS. 7(A) to 7(D) show frame-by-frame images cut out from the analysis result image data (moving image) of inspector A displayed on the display device 170.
In the analysis image G within the circular area 112 shown in FIG. 7, images 71 to 76 of the products to be inspected are displayed.
In FIG. 7(A), in the analysis result image GS-1A, the image 113-1A indicating the gaze position of inspector A on the analysis image G within the circular area 112 is located between the adjacently arranged images 71 and 72. In FIG. 7(B), in the analysis result image GS-2A, the image 113-2A is located between the adjacently arranged images 73 and 74. In FIG. 7(C), in the analysis result image GS-3A, the image 113-3A is located at the center of the images 73, 74, 75, and 76. Furthermore, in FIG. 7(D), in the analysis result image GS-4A, the image 113-4A is located between the adjacently arranged images 75 and 76.
From the analysis result images of FIG. 7, it can be seen that inspector A's gaze is directed not at each individual product image but between the images of multiple products. This shows that inspector A does not stare at the product images with central vision, but views them instantaneously with peripheral vision. In this example, the viewing angle (the angle representing the extent of the visual field) is 17 degrees, which also indicates peripheral vision.
In general, it is known that an important factor for improving productivity in visual inspection is not to stare at the inspection object with central vision, but to view it instantaneously with peripheral vision.
From the example of FIG. 7, the positions of the images 113-1A to 113-4A in the analysis result images GS-1A to GS-4A show that inspector A performs visual inspection with appropriate gaze movement using peripheral and instantaneous vision. In other words, it can be seen that an inspector who performs visual inspection with appropriate gaze movement can perform highly accurate visual inspection in a short time. Note that improving productivity in the present embodiment means shortening the time required for visual inspection.
FIG. 8 is a diagram showing an example of the analysis result image of inspector B of the first embodiment. FIGS. 8(A) to 8(D) show frame-by-frame images cut out from the analysis result image data (moving image) of inspector B displayed on the display device 170.
In FIG. 8(A), in the analysis result image GS-1B, the image 113-1B indicating the gaze position of inspector B on the analysis image G within the circular area 112 is at a position overlapping the product image 71. In FIG. 8(B), in the analysis result image GS-2B, the image 113-2B is at a position overlapping the image 74. In FIG. 8(C), in the analysis result image GS-3B, the image 113-3B is at a position overlapping the image 74. Furthermore, in FIG. 8(D), in the analysis result image GS-4B, the image 113-4B is at a position overlapping the image 75.
From the analysis result images shown in FIG. 8, it can be seen that inspector B stares at each product image one by one with central vision. It can also be seen that inspector B's gaze moves to the image 75, which lies below the images 73 and 74, without ever being directed at the image 73; that is, inspector B misses the image 73.
Thus, according to the present embodiment, even in cases where the eye movement of an inspector during inspection cannot be captured because the face is in contact with an inspection instrument such as a microscope, showing the movement of the inspector's gaze in an analysis result image allows the inspector and other inspectors to observe the gaze.
Therefore, in the present embodiment, having an inspector view his or her own analysis result image and the analysis result images of other inspectors can serve as visual inspection training for the inspector.
A training method using the gaze analysis system 100 of the present embodiment will be described below with reference to FIG. 9.
 図9は、第一の実施形態の視線分析システムを用いた目視検査のトレーニング方法の手順を示す図である。 FIG. 9 is a diagram showing the procedure of a training method of visual inspection using the gaze analysis system of the first embodiment.
 本実施形態において、検査員は、制御装置160により、表示装置170に表示させる分析結果画像データGSdを指定する(ステップS901)。言い換えれば、制御装置160は、表示装置170に表示させる分析結果画像データGSdの選択を受け付ける。 In the present embodiment, the inspector specifies the analysis result image data GSd to be displayed on the display device 170 by the control device 160 (step S901). In other words, the control device 160 receives the selection of the analysis result image data GSd to be displayed on the display device 170.
 次に、制御装置160は、表示制御部163により、指定された分析結果画像データGSdを分析結果画像記憶部162から読み出す(ステップS902)。そして、制御装置160は、表示制御部163により、読み出した分析結果画像データGSdを表示装置170に表示させ、分析結果画像データGSdを指定した検査員に、表示装置170に表示された分析結果画像GSを目視させる(ステップS903)。 Next, the control device 160 causes the display control unit 163 to read the designated analysis result image data GSd from the analysis result image storage unit 162 (step S902). Then, the control device 160 causes the display control unit 163 to display the read analysis result image data GSd on the display device 170, and the inspector who has specified the analysis result image data GSd displays the analysis result image displayed on the display device 170. The user visually checks the GS (step S903).
 In this way, in the present embodiment, having the inspector view the analysis result image data GSd allows the inspector to observe the movement of his or her own line of sight, and the lines of sight of other inspectors, within the field of view as seen through the eyepiece. In addition, in the present embodiment, the movement of the line of sight can be observed and discussed together with many other inspectors.
 Therefore, according to the present embodiment, for example, the analysis result image GS of an inspector who performs highly accurate inspections in a short time can be shown to other inspectors, thereby improving the productivity of the other inspectors' visual inspections. Specifically, for example, by having inspector B observe both his or her own analysis result image GS (see FIG. 8) and the analysis result image GS of inspector A (see FIG. 7), inspector B can learn what constitutes appropriate line-of-sight movement in a visual inspection.
 The gaze analysis system 100 of the present embodiment has been described as including the display device 110, the control device 120, the gaze detection device 130, the control device 160, and the display device 170, but the configuration is not limited to this. Modifications of the gaze analysis system 100 are described below.
 FIG. 10 is a first diagram showing a modification of the gaze analysis system. The gaze analysis system 100A shown in FIG. 10 includes the display device 110, the control device 120, the gaze detection device 130, and the display device 170, and the control device 120 also serves as the control device 160.
 In the gaze analysis system 100A, the analysis result image data GSd acquired by the gaze detection device 130 is stored in the control device 120. The control device 120 then causes the display device 170 to display the analysis result image GS, allowing the subject P or a third party to observe the movement of the subject P's line of sight.
 FIG. 11 is a second diagram showing a modification of the gaze analysis system. The gaze analysis system 100B shown in FIG. 11 includes the display device 110, the control device 120, and the gaze detection device 130; the display device 110 also serves as the display device 170, and the control device 120 also serves as the control device 160.
 In the gaze analysis system 100B, the analysis result image data GSd acquired by the gaze detection device 130 is stored in the control device 120. In the gaze analysis system 100B, the control device 120 causes the display device 110 to display the analysis result image GS, allowing the subject P or a third party to observe the movement of the subject P's line of sight.
 As described above, the gaze analysis system of the present embodiment can be realized as long as it includes a display device, a control device, and a gaze detection device, and the number of devices included in the gaze analysis system may be arbitrary.
 (Second Embodiment)
 The second embodiment will be described below with reference to the drawings. The second embodiment differs from the first embodiment in that a message corresponding to the result of analyzing the analysis result image data for each inspector is provided as notification. Therefore, the following description of the second embodiment covers only the differences from the first embodiment; components having the same functional configuration as in the first embodiment are given the same reference numerals as in the description of the first embodiment, and their description is omitted.
 Prior to the description of the gaze analysis system of the present embodiment, the information about an inspector obtained from an actual visual inspection and the information about an inspector obtained from the analysis results of the gaze analysis system are described below.
 In an actual visual inspection using an optical microscope, the average production count [pcs/hour], the miss rate [ppm], and the overdetection rate [ppm] are obtained as information about the inspector.
 The average production count is the number of products inspected by the inspector per unit time. The unit time may be one hour, one minute, or any other unit. The miss rate indicates the ratio of the number of defective products missed to the number of products inspected by the inspector. The overdetection rate indicates the ratio of the number of non-defective products detected as defective to the number of products inspected by the inspector.
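 As an illustration only (the embodiment defines these three quantities in prose, not in code), they can be sketched as follows; the function names are hypothetical and are not part of the embodiment.

```python
def miss_rate_ppm(missed_defects: int, inspected: int) -> float:
    """Miss rate: missed defective products per million products inspected."""
    return missed_defects / inspected * 1_000_000

def overdetection_rate_ppm(false_rejects: int, inspected: int) -> float:
    """Overdetection rate: non-defective products detected as defective,
    per million products inspected."""
    return false_rejects / inspected * 1_000_000

def average_production(inspected: int, hours: float) -> float:
    """Average production count in pcs/hour."""
    return inspected / hours
```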
 In contrast, the analysis results of the gaze analysis system of the present embodiment provide, as information about the inspector, the size of the inspector's viewpoint [mm²], the movement speed of the viewpoint [s/pcs], and the coverage of the viewpoint [%]. The size of the viewpoint may also be expressed in units of [number of products].
 The average production count correlates with the size of the viewpoint and the movement speed of the viewpoint. The miss rate correlates with the coverage of the viewpoint. The overdetection rate has a moderate correlation with the movement speed of the viewpoint.
 The size of the viewpoint, the movement speed of the viewpoint, and the coverage of the viewpoint are described below with reference to FIG. 12. FIG. 12 is a diagram explaining the size of the viewpoint, the movement speed of the viewpoint, and the coverage of the viewpoint.
 The size of the viewpoint is, for example, the area of the region R1, a predetermined range centered on the image 113-5 indicating the position of the line of sight displayed in the analysis result image GS-12, while the image 113-5 is dwelling (not moving).
 The size of the viewpoint may also be expressed as the number of products at least partially contained within the region R1 of the predetermined range. In the case of FIG. 12, the region R1 contains part of the product 77 and part of the product 78, so the size of the viewpoint may be expressed as "two".
 The movement speed of the viewpoint is the value obtained by calculating, for each product, the total of the time during which the image indicating the position of the line of sight dwells in the analysis result image and the time from when that image starts moving until it next dwells, and averaging these totals.
 In the example of FIG. 12, the total of the time during which the image 113-5 indicating the position of the line of sight dwells and the time from when the image 113-5 starts moving until it next dwells gives the movement speed of the viewpoint for each of the products 77 and 78.
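 A minimal sketch of this per-product averaging, assuming the dwell time and the subsequent travel time for each product are already available as lists (names hypothetical):

```python
def viewpoint_speed(dwell_times, travel_times):
    """Average movement speed of the viewpoint in s/pcs: for each product,
    the dwell time plus the travel time until the next dwell, averaged
    over all products."""
    totals = [d + t for d, t in zip(dwell_times, travel_times)]
    return sum(totals) / len(totals)
```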
 The coverage of the viewpoint indicates the ratio of the number of products that fell within a region of the predetermined range indicating the size of the viewpoint during a dwell, to the number of products the inspector should inspect. In the example of FIG. 12, part of the products 77 and 78 is contained within the region R1 while the viewpoint dwells at the image 113-5 indicating the position of the line of sight. Likewise, in the example of FIG. 12, part of the products 79 and 80 is contained within the region R2 while the viewpoint dwells at the image 113-6 indicating the position of the line of sight. It can therefore be seen that the coverage of the viewpoint is not reduced within the regions R1 and R2.
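 The coverage ratio defined above reduces to a simple proportion; a sketch with hypothetical names:

```python
def viewpoint_coverage(covered_products: int, total_products: int) -> float:
    """Coverage of the viewpoint in %: products contained in at least one
    dwell region, out of all products to be inspected."""
    return covered_products / total_products * 100
```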
 Next, the relationship between the size of the viewpoint, the movement speed of the viewpoint, and the average production count will be described with reference to FIG. 13.
 FIG. 13 is a diagram explaining the relationship between the size of the viewpoint, the movement speed of the viewpoint, and the average production count. FIG. 13(A) shows an example of an analysis result image GS-13A for an inspector using central vision, and FIG. 13(B) shows an example of an analysis result image GS-13B for an inspector using peripheral vision.
 As shown in FIG. 13(A), when central vision is used, in the analysis result image GS-13A the regions R3, R4, R5, and R6 indicating the size of the viewpoint overlap the products 81, 82, 83, and 84, respectively. Therefore, in the example of FIG. 13(A), the size of the viewpoint expressed as a number of products is "one".
 In the example of FIG. 13(A), the movement speed of the viewpoint for the product 81 is, for example, the total of the time during which the viewpoint dwelled in the region R3 and the time taken for the viewpoint to move from the region R3 to the region R4. Similarly, in the example of FIG. 13(A), the total of the time during which the viewpoint dwelled in the region R4 and the time taken for the viewpoint to move from the region R4 to the region R5 is the movement speed of the viewpoint for the product 82. In the example of FIG. 13(A), because central vision is used, a single point is gazed at, and the time during which the viewpoint dwells is longer than when peripheral vision is used.
 When the average of the per-product movement speeds of the viewpoint described above was calculated for FIG. 13(A), it was 0.56 [s/pcs]. The average production count of the subject P corresponding to the analysis result image GS-13A in FIG. 13(A) is 2394 [pcs/h].
 In contrast, as shown in FIG. 13(B), when peripheral vision is used, as shown in the analysis result image GS-13B, the region R7 indicating the size of the viewpoint overlaps parts of the products 81 and 82, and the region R8 overlaps parts of the products 83 and 84. Therefore, in the example of FIG. 13(B), the size of the viewpoint expressed as a number of products is "two".
 In the example of FIG. 13(B), the movement speed of the viewpoint for each of the products 81 and 82 is, for example, the total of the time during which the viewpoint dwelled in the region R7 and the time taken for the viewpoint to move from the region R7 to the region R8. In the example of FIG. 13(B), because peripheral vision is used and no single point is gazed at, the time during which the viewpoint dwells is shorter than when central vision is used.
 When the average of the per-product movement speeds of the viewpoint described above was calculated for FIG. 13(B), it was 0.34 [s/pcs]. The average production count of the subject P corresponding to the analysis result image GS-13B in FIG. 13(B) is 3950 [pcs/h].
 Thus, the average production count is larger when the value of the size of the viewpoint is larger and the value of the movement speed of the viewpoint is smaller, which shows that the average production count correlates with the size of the viewpoint and the movement speed of the viewpoint.
 Next, the relationship between the coverage of the viewpoint and the miss rate in the present embodiment will be described with reference to FIG. 14. FIG. 14 is a diagram explaining the relationship between the coverage of the viewpoint and the miss rate. FIG. 14(A) is a first diagram showing the relationship between the coverage of the viewpoint and the miss rate, and FIG. 14(B) is a second diagram showing the relationship between the coverage of the viewpoint and the miss rate.
 In the example of the analysis result image GS-13A shown in FIG. 14(A), the regions R3 to R7 indicating the size of the viewpoint overlap all of the products 81 to 86. Therefore, if, as shown in FIG. 14(A), a region indicating the size of the viewpoint overlapped each of, for example, ten products, the coverage of the viewpoint is 10 pcs/10 pcs, that is, 100 [%]. The miss rate of the subject P corresponding to the analysis result image GS-13A in FIG. 14 is 258 [ppm].
 In contrast, in the example of the analysis result image GS-13C shown in FIG. 14(B), the region R11 indicating the size of the viewpoint overlaps the product 81, the region R12 overlaps the product 82, the region R13 overlaps the product 83, and the region R14 overlaps the product 85. However, no region indicating the size of the viewpoint overlaps the product 84.
 Therefore, if, as shown in FIG. 14(B), regions indicating the size of the viewpoint overlapped, for example, nine out of ten products, the coverage of the viewpoint is 9 pcs/10 pcs, that is, 90 [%]. The miss rate of the subject P corresponding to the analysis result image GS-13C in FIG. 14 is 1512 [ppm].
 Thus, the miss rate is smaller when the value of the coverage of the viewpoint is higher, which shows that the miss rate correlates with the coverage of the viewpoint.
 Based on the above, the gaze analysis system of the present embodiment is described. In the gaze analysis system of the present embodiment, the analysis result image is analyzed to obtain, for each subject P (inspector), the size of the viewpoint, the movement speed of the viewpoint, and the coverage of the viewpoint.
 In the present embodiment, the inspector is then made to perform visual inspection training based on the per-inspector information obtained from the analysis result images (the size of the viewpoint, the movement speed of the viewpoint, and the coverage of the viewpoint) and the per-inspector information obtained from actual visual inspections using an optical microscope (the average production count, the overdetection rate, and the miss rate).
 FIG. 15 is a diagram explaining the functions of the devices included in the gaze analysis system of the second embodiment.
 The gaze analysis system 100C of the present embodiment includes the display device 110, a control device 120A, and the gaze detection device 130.
 The control device 120A includes the analysis image storage unit 121, the display control unit 122, the analysis result acquisition unit 123, a training support unit 125, an analysis result image database 210, an analysis result database 220, and a message database 230.
 In the control device 120A of the present embodiment, for example, for each group of inspectors, the analysis result acquisition unit 123 acquires the analysis result image data GSd for each inspector and stores it in the analysis result image database 210. The control device 120A also causes the training support unit 125 to analyze the per-inspector analysis result image data GSd stored in the analysis result image database 210, and stores the results of that analysis in the analysis result database 220. Information about inspectors obtained from actual visual inspections is stored in the analysis result database 220 in advance, and the information about each inspector obtained by analyzing the analysis result image data GSd is stored in association with the information about that inspector obtained from the visual inspections.
 The control device 120A of the present embodiment also identifies, according to the analysis results, information indicating the optimal movement of the line of sight in the visual inspection of a microscope examination, and, when an inspector performs visual inspection training, refers to the message database 230 to provide notification of points to note during training.
 The training support unit 125 is described below. The training support unit 125 of the present embodiment is realized by the arithmetic processing device of the control device 120A reading and executing a training support program stored in a memory device.
 The training support unit 125 of the present embodiment includes an image analysis unit 126, an optimum value identification unit 127, a comparison unit 128, and a message selection unit 129.
 The image analysis unit 126 analyzes the per-inspector analysis result image data GSd stored in the analysis result image database 210. Details of the analysis of the analysis result image data GSd by the image analysis unit 126 will be described later.
 The optimum value identification unit 127 identifies, from the analysis results of the image analysis unit 126, values indicating the optimal movement of the line of sight within the group of inspectors performing training. Details of the processing of the optimum value identification unit 127 will be described later.
 The comparison unit 128 compares the analysis results of the analysis result image data GSd of an inspector performing training, stored in the analysis result database 220, with the values indicating the optimal movement of the line of sight.
 The message selection unit 129 selects, from the message database 230, a message corresponding to the result of the comparison by the comparison unit 128.
 The databases included in the control device 120A of the present embodiment are described below with reference to FIGS. 16 to 18. In the present embodiment, each database is provided in the control device 120A, but the configuration is not limited to this. Each database may be stored in an external device other than the control device 120A.
 FIG. 16 is a diagram showing an example of the analysis result image database of the second embodiment. The analysis result image database 210 of the present embodiment has, as items of information, an inspector ID and analysis result image data, which are associated with each other.
 The value of the item "inspector ID" indicates identification information for identifying an inspector. The inspector ID may be entered when the inspector wears the gaze detection device 130, views the analysis image G, and acquires the analysis result image data GSd.
 The value of the item "analysis result image data" is the analysis result image data GSd itself.
 FIG. 17 is a diagram showing an example of the analysis result database of the second embodiment. The analysis result database 220 of the present embodiment stores the results of the analysis of the analysis result image data GSd by the image analysis unit 126.
 The analysis result database 220 of the present embodiment has, as items of information, the inspector ID, the size of the viewpoint, the coverage of the viewpoint, the movement speed of the viewpoint, the miss rate, the average production count, and the overdetection rate. In the analysis result database 220, the item "inspector ID" is associated with the other items. In the following description, information in the analysis result database 220 that includes the value of the item "inspector ID" and the values of the other items is referred to as analysis result information.
 In the analysis result database 220 of the present embodiment, the values of the items "size of the viewpoint", "coverage of the viewpoint", and "movement speed of the viewpoint" are obtained through the analysis of the per-inspector analysis result image data GSd by the image analysis unit 126. In other words, the values of these items are information about the inspector obtained from analysis by the gaze analysis system 100C.
 In the analysis result database 220 of the present embodiment, the values of the items "miss rate", "average production count", and "overdetection rate" are information about the inspector obtained from actual visual inspections by the inspector.
 The items included in the analysis result database 220 are as described with reference to FIGS. 12 to 14.
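 One row of analysis result information in FIG. 17 can be sketched as a plain record type; the field names and units below are assumptions for illustration, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class AnalysisResultRecord:
    # Items obtained by analyzing the analysis result image data GSd
    inspector_id: str
    viewpoint_size_mm2: float          # size of the viewpoint [mm^2]
    viewpoint_coverage_pct: float      # coverage of the viewpoint [%]
    viewpoint_speed_s_per_pcs: float   # movement speed of the viewpoint [s/pcs]
    # Items obtained from the actual visual inspection
    miss_rate_ppm: float
    avg_production_pcs_per_h: float
    overdetection_rate_ppm: float
```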
 FIG. 18 is a diagram showing an example of the message database of the second embodiment. The message database 230 of the present embodiment is provided for each of the items "size of the viewpoint", "coverage of the viewpoint", and "movement speed of the viewpoint" in the analysis result database 220.
 FIG. 18 shows an example of the message database 230 corresponding to the item "size of the viewpoint".
 The message database 230 has, as items of information, a comparison result and a message, which are associated with each other.
 The value of the item "comparison result" indicates the result of comparing the optimum value of the size of the viewpoint identified by the optimum value identification unit 127 with the size of the viewpoint for each inspector. The value of the item "message" indicates a message corresponding to the comparison result.
 In the example of FIG. 18, for example, when the comparison result is "smaller than the optimum value", the message "Avoid staring at a single point" is selected.
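 The comparison-to-message lookup can be sketched as follows. Only the "smaller than the optimum value" entry is taken from FIG. 18; the other message text and the tolerance parameter are hypothetical additions for illustration.

```python
from typing import Optional

# Hypothetical message table for the item "size of the viewpoint";
# only the first entry reflects the example given for FIG. 18.
MESSAGES = {
    "smaller than the optimum value": "Avoid staring at a single point",
    "larger than the optimum value": "Narrow the area viewed at one time",
}

def select_message(value: float, optimum: float,
                   tolerance: float = 0.0) -> Optional[str]:
    """Compare an inspector's value with the optimum value and return the
    corresponding message; None means no guidance is needed."""
    if value < optimum - tolerance:
        return MESSAGES["smaller than the optimum value"]
    if value > optimum + tolerance:
        return MESSAGES["larger than the optimum value"]
    return None
```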
 Next, the processing of the training support unit 125 of the control device 120A of the present embodiment is described with reference to FIG. 19.
 FIG. 19 is a flowchart explaining the processing of the training support unit of the second embodiment.
 In the control device 120A of the present embodiment, the training support unit 125 causes the image analysis unit 126 to acquire, from the analysis result image database 210, the analysis result image data GSd of an inspector belonging to a given group (step S1901).
 Next, the image analysis unit 126 calculates, from the images 113 indicating the positions of the line of sight in the acquired analysis result image data GSd, the average of the sizes of the individual viewpoints, and takes this as the size of the viewpoint for this inspector (step S1902).
 Next, the image analysis unit 126 calculates the coverage of the viewpoint in the acquired analysis result image data GSd (step S1903). The image analysis unit 126 then calculates the movement speed of the viewpoint in the acquired analysis result image data GSd (step S1904).
 Next, the image analysis unit 126 identifies the information obtained from this inspector's visual inspections, and associates the identified information with the size of the viewpoint, the coverage of the viewpoint, and the movement speed of the viewpoint to form analysis result information (step S1905).
 More specifically, the image analysis unit 126 associates the miss rate, average production count, and overdetection rate linked to this inspector's inspector ID with the size of the viewpoint, the coverage of the viewpoint, and the movement speed of the viewpoint to form the analysis result information.
 Next, the image analysis unit 126 stores the analysis result information in the analysis result database 220 (step S1906).
 In the present embodiment, the miss rate, average production count, and overdetection rate for each inspector may be stored in advance in the analysis result database 220, or may be stored in another storage device.
 Next, the image analysis unit 126 determines whether the processing up to step S1906 has been performed for the analysis result image data GSd of all inspectors belonging to the group (step S1907). If not all of the analysis result image data GSd has been processed in step S1907, the processing returns to step S1901.
 If all of the analysis result image data GSd has been processed in step S1907, the training support unit 125 causes the optimum value identification unit 127 to identify and hold the optimum values of the "size of the viewpoint", "coverage of the viewpoint", and "movement speed of the viewpoint" (step S1908), and the processing ends.
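 Steps S1901 to S1908 can be sketched as a loop over the group, under assumed data shapes (the dictionary keys, the per-dwell inputs, and the choice of the lowest-miss-rate inspector as the source of the optimum values are all illustrative assumptions; the embodiment leaves these details open):

```python
def analyze_group(group_gsd, inspection_stats, analysis_result_db):
    """group_gsd maps inspector ID to per-dwell viewpoint sizes,
    (covered, total) product counts, and (dwell, travel) time pairs;
    inspection_stats maps inspector ID to visual-inspection figures."""
    for inspector_id, data in group_gsd.items():            # S1901, loop via S1907
        sizes = data["viewpoint_sizes"]
        size = sum(sizes) / len(sizes)                      # S1902: average size
        covered, total = data["coverage_counts"]
        coverage = covered / total * 100                    # S1903: coverage [%]
        totals = [d + t for d, t in data["dwell_travel"]]
        speed = sum(totals) / len(totals)                   # S1904: speed [s/pcs]
        record = dict(inspection_stats[inspector_id])       # S1905: associate
        record.update(inspector_id=inspector_id,
                      viewpoint_size=size,
                      viewpoint_coverage=coverage,
                      viewpoint_speed=speed)
        analysis_result_db.append(record)                   # S1906: store
    # S1908: identify optimum values (one possible policy: the values of
    # the inspector with the lowest miss rate)
    best = min(analysis_result_db, key=lambda r: r["miss_rate_ppm"])
    return {k: best[k] for k in
            ("viewpoint_size", "viewpoint_coverage", "viewpoint_speed")}
```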
 The determination of the optimum values by the optimum value specifying unit 127 is described below. The optimum value specifying unit 127 of the present embodiment may, for example, select from the analysis result database 220 the inspector with the lowest miss rate, the inspector with the highest average production count, or an inspector whose miss rate and overdetection rate are relatively low and whose average production count is relatively high, as the inspector whose analysis result information is referenced in specifying the optimum values.
 The optimum value specifying unit 127 may then take, as the optimum values, the values of the viewpoint size, the viewpoint coverage, and the line-of-sight movement speed in the analysis result information containing the selected inspector ID.
 Alternatively, the optimum value specifying unit 127 of the present embodiment may take, as each optimum value, the maximum of the values stored in the analysis result database 220 for the items "viewpoint size", "viewpoint coverage", and "line-of-sight movement speed".
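The two selection rules described above can be sketched as follows. This is an illustrative sketch only: the record layout, function names, and sample values are assumptions, not taken from the embodiment.

```python
# Sample records standing in for the analysis result database 220;
# field names and values are illustrative assumptions.
records = [
    {"id": "A001", "miss_rate": 258, "avg_production": 1400,
     "viewpoint_size": 3.0, "viewpoint_coverage": 100.0, "gaze_speed": 0.34},
    {"id": "B002", "miss_rate": 1512, "avg_production": 900,
     "viewpoint_size": 2.5, "viewpoint_coverage": 90.0, "gaze_speed": 0.56},
]

ITEMS = ("viewpoint_size", "viewpoint_coverage", "gaze_speed")

def optimum_from_best_inspector(records):
    """Rule 1: adopt the values of the inspector with the lowest miss rate."""
    best = min(records, key=lambda r: r["miss_rate"])
    return {item: best[item] for item in ITEMS}

def optimum_from_maxima(records):
    """Rule 2: adopt the maximum stored value of each item independently."""
    return {item: max(r[item] for r in records) for item in ITEMS}

print(optimum_from_best_inspector(records))
print(optimum_from_maxima(records))
```

Rule 1 keeps the three values mutually consistent (they come from one inspector), while Rule 2 combines the per-item maxima across inspectors.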
 Next, training support by the control device 120A of the present embodiment is described with reference to FIG. 20. FIG. 20 is a flowchart illustrating the processing that supports inspector training in the gaze analysis system of the second embodiment.
 In the control device 120A of the present embodiment, the training support unit 125 determines whether a training start request has been received (step S2001). If no start request has been received in step S2001, the unit waits until one is received.
 When a start request is received in step S2001, the training support unit 125 determines whether an inspector ID has been input (step S2002). If no inspector ID is received in step S2002, the training support unit 125 may end the processing and display the analysis result image data GSd acquired immediately before the training start request was received.
 When an inspector ID is input in step S2002, the training support unit 125 causes the comparison unit 128 to acquire, from the analysis result database 220, the analysis result information corresponding to the input inspector ID (step S2003).
 The comparison unit 128 then compares the viewpoint size, the viewpoint coverage, and the line-of-sight movement speed included in the acquired analysis result information with the optimum values held by the optimum value specifying unit 127 (step S2004).
 Next, the training support unit 125 causes the message selection unit 129 to refer to the message database 230 and select the messages corresponding to the comparison results for the viewpoint size, the viewpoint coverage, and the line-of-sight movement speed (step S2005).
 Finally, the display control unit 122 causes the display device 110 to display the selected messages (step S2006), and the process ends.
 As described above, according to the present embodiment, when inspectors in a group of multiple inspectors are trained, the viewpoint size, the viewpoint coverage, and the line-of-sight movement speed are compared with the optimum values within the group on the basis of the analysis result image data GSd. A message corresponding to the comparison result is then displayed on the display device 110 for each of the viewpoint size, the viewpoint coverage, and the line-of-sight movement speed.
 Therefore, according to the present embodiment, specific guidance can be given so that the inspector's line of sight moves appropriately during visual inspection with an optical microscope.
 According to the present embodiment, by having inspectors train in microscope-based visual inspection for a certain period on the basis of the guidance described above, variation in miss rates among inspectors can be reduced. In other words, variation in productivity in microscope-based visual inspection among inspectors can be reduced.
 Note that in the present embodiment the control device 120A notifies the inspector of points of caution during training, but the present invention is not limited to this. For example, a training instructor who views the analysis result database 220 of the control device 120A may verbally convey appropriate points of caution in accordance with each inspector's analysis result information.
 In practice, when other inspectors were shown the analysis result image of an inspector who performs highly accurate inspection in a short time, and the differences and points for improvement were taught, the viewpoint movement speed improved from 0.56 [s/pcs] to 0.34 [s/pcs], and the average production count increased by about 65%.
 In addition, when the guidance described above was given to an inspector whose viewpoint coverage was 90 [%], the coverage reached 100 [%] and the miss rate fell from 1512 [ppm] to 258 [ppm]. This inspector was thus able to reduce the miss rate to one sixth of its pre-guidance level.
 Although the embodiments described above deal with visual inspection using a microscope, the present invention is not limited to microscopes.
 The embodiments described above can be applied to the analysis of the line of sight in visual observation through an eyepiece, such as with binoculars, telescopes, and opera glasses.
 For example, when the gaze analysis system is used to grasp the visual field in observation with binoculars or opera glasses, an image captured through the eyepiece of the binoculars or opera glasses may be used as the analysis image G. In this case, the analysis image G need not be high resolution, nor need it be a vertically long image, but it is preferably a three-dimensional image.
 Similarly, when the gaze analysis system is used to grasp the visual field in observation with a telescope, an image captured through the telescope's eyepiece may be used as the analysis image G. In this case, the analysis image G need not be high resolution, nor need it be a vertically long image.
 Furthermore, according to the present embodiment, it is possible to grasp the wearer's line of sight when the wearer is using a device that is worn with a lens-holding holder in contact with the face.
 Devices worn with a lens-holding member in contact with the face include, for example, goggles for sports such as swimming, skiing, and snowboarding, and goggles used for safety management at medical sites, construction sites, and the like. When such a device is worn, the space between the face and the lens is closed off by the member that holds the lens, so a gaze detection device for capturing eye movement cannot be worn at the same time. In this respect, "lens" here includes, in addition to lenses in the narrow sense, transparent bodies such as glass covering the face.
 Therefore, according to the present embodiment, by using an image showing the field of view with the device worn as the analysis image G, the movement of the line of sight while wearing a device such as goggles can be grasped.
 Devices worn with a lens-holding member in contact with the face also include, besides the general goggles described above, VR (Virtual Reality) goggles and the like.
 The present invention has been described above on the basis of the embodiments, but the present invention is not limited to the requirements shown in the embodiments. These points can be modified without departing from the gist of the present invention, and can be determined appropriately according to the form of application.
 The applicant has also named this system "覗く化" (Looking-through visualization).
 The present application claims priority based on Japanese Patent Application No. 2017-165249 filed on August 30, 2017, the entire contents of which are incorporated herein by reference.
 100, 100A to 100C  gaze analysis system
 110, 170  display device
 120, 120A, 160  control device
 121  analysis image storage unit
 122  display control unit
 123  analysis result acquisition unit
 124  analysis result image storage unit
 125  training support unit
 130  gaze detection device
 210  analysis result image database
 220  analysis result database
 230  message database

Claims (16)

  1.  A gaze analysis system comprising:
      a storage unit storing an analysis image generated using image data captured through an eyepiece;
      a display unit that displays the analysis image;
      a line-of-sight detection unit that detects the line of sight of a subject viewing the analysis image displayed on the display unit; and
      an output unit that outputs image data of an analysis result image in which the analysis image and the subject's line of sight are superimposed.
  2.  A gaze analysis system comprising:
      a storage unit storing an analysis image generated using image data captured through an eyepiece;
      a display unit that displays the analysis image;
      a line-of-sight detection unit that is worn by a wearer viewing the analysis image displayed on the display unit and detects the wearer's line of sight; and
      an output unit that outputs image data of an analysis result image in which the analysis image and the wearer's line of sight are superimposed.
  3.  The gaze analysis system according to claim 1 or 2, wherein the analysis image includes a two-dimensional image and a three-dimensional image.
  4.  The gaze analysis system according to any one of claims 1 to 3, wherein the analysis image is an image whose region displayed on the display unit moves in response to a scroll operation.
  5.  The gaze analysis system according to any one of claims 1 to 4, wherein, when the analysis image is displayed on the display unit, the area other than a predetermined area in the central portion corresponding to the field of view through the eyepiece is masked.
  6.  The gaze analysis system according to any one of claims 1 to 5, wherein the eyepiece is an eyepiece provided on a microscope.
  7.  The gaze analysis system according to any one of claims 1 to 6, further outputting, together with the analysis result image, the amount of movement of the detected line of sight from a predetermined position.
  8.  The gaze analysis system according to claim 1 or any one of claims 3 to 7, further comprising:
      an image storage unit in which the image data of the analysis result image output by the output unit is stored in association with identification information identifying the subject;
      an image analysis unit that analyzes the image data of the analysis result image stored in the image storage unit, generates, for each subject, analysis result information including the viewpoint size, the viewpoint coverage, and the viewpoint movement speed in the analysis result image, and stores the analysis result information in an analysis result storage unit;
      an optimum value specifying unit that specifies optimum values of the viewpoint size, the viewpoint coverage, and the viewpoint movement speed on the basis of the analysis result information stored in the analysis result storage unit;
      a comparison unit that, upon designation of the identification information, compares the analysis result information associated with the identification information with the optimum values; and
      a message selection unit that refers to a message storage unit storing messages corresponding to comparison results, selects, for each subject, the message corresponding to the comparison result, and causes the display unit to display the selected message.
  9.  The gaze analysis system according to claim 2, further comprising:
      an image storage unit in which the image data of the analysis result image output by the output unit is stored in association with identification information identifying the wearer;
      an image analysis unit that analyzes the image data of the analysis result image stored in the image storage unit, generates, for each wearer, analysis result information including the viewpoint size, the viewpoint coverage, and the viewpoint movement speed in the analysis result image, and stores the analysis result information in an analysis result storage unit;
      an optimum value specifying unit that specifies optimum values of the viewpoint size, the viewpoint coverage, and the viewpoint movement speed on the basis of the analysis result information stored in the analysis result storage unit;
      a comparison unit that, upon designation of the identification information, compares the analysis result information associated with the identification information with the optimum values; and
      a message selection unit that refers to a message storage unit storing messages corresponding to comparison results, selects, for each wearer, the message corresponding to the comparison result, and causes the display unit to display the selected message.
  10.  A gaze analysis method comprising:
      displaying, on a display unit, an analysis image generated using image data captured through an eyepiece;
      detecting, with a line-of-sight detection unit, the line of sight of a subject viewing the analysis image displayed on the display unit; and
      outputting, with a control unit, image data of an analysis result image in which the detected line of sight is superimposed on the analysis image.
  11.  A gaze analysis method comprising:
      displaying, on a display unit, an analysis image generated using image data captured through an eyepiece;
      detecting, with a line-of-sight detection unit worn by a wearer viewing the analysis image displayed on the display unit, the wearer's line of sight; and
      outputting, with a control unit, image data of an analysis result image in which the detected line of sight is superimposed on the analysis image.
  12.  A training method for visual observation of an object through an eyepiece, comprising:
      having a subject of line-of-sight detection view an analysis image that is displayed on a display device and generated using image data captured through an eyepiece; and
      after the viewing, having the subject view an analysis result image in which the analysis image and the subject's line of sight detected on the analysis image are superimposed.
  13.  A training method for visual observation of an object through an eyepiece, comprising:
      having a wearer of a line-of-sight detection device view an analysis image that is displayed on a display device and generated using image data captured through an eyepiece; and
      after the viewing, having the wearer view an analysis result image in which the analysis image and the wearer's line of sight detected on the analysis image are superimposed.
  14.  A training method for visual observation of an object through an eyepiece, comprising:
      having a subject of line-of-sight detection view an analysis image that is displayed on a display device and generated using image data captured through an eyepiece; and
      after the viewing, having a person other than the subject view an analysis result image in which the analysis image and the subject's line of sight detected on the analysis image are superimposed.
  15.  A training method for visual observation of an object through an eyepiece, comprising:
      having a wearer of a line-of-sight detection device view an analysis image that is displayed on a display device and generated using image data captured through an eyepiece; and
      after the viewing, having a person other than the wearer view an analysis result image in which the analysis image and the wearer's line of sight detected on the analysis image are superimposed.
  16.  A gaze analysis system comprising:
      a storage unit storing an analysis image generated using image data captured through a lens held by a holder in contact with the face of a subject whose line of sight is to be detected;
      a display unit that displays the analysis image;
      a line-of-sight detection unit that detects the subject's line of sight on the analysis image displayed on the display unit; and
      an output unit that outputs image data of an analysis result image in which the analysis image and the subject's line of sight are superimposed.
PCT/JP2018/027397 2017-08-30 2018-07-20 Gaze analysis system, gaze analysis method, and training method WO2019044264A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019539061A JPWO2019044264A1 (en) 2017-08-30 2018-07-20 Eye gaze analysis system, eye gaze analysis method and training method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-165249 2017-08-30
JP2017165249 2017-08-30

Publications (1)

Publication Number Publication Date
WO2019044264A1 2019-03-07

Family

ID=65526346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/027397 WO2019044264A1 (en) 2017-08-30 2018-07-20 Gaze analysis system, gaze analysis method, and training method

Country Status (2)

Country Link
JP (1) JPWO2019044264A1 (en)
WO (1) WO2019044264A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021220429A1 (en) * 2020-04-28 2021-11-04 株式会社日立製作所 Learning support system
WO2023187973A1 (en) * 2022-03-29 2023-10-05 株式会社ソニー・インタラクティブエンタテインメント Information processing device, method for controlling information processing device, and program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000042023A (en) * 1998-07-30 2000-02-15 Nidek Co Ltd Protective goggles with monitor
JP2001117046A (en) * 1999-10-22 2001-04-27 Shimadzu Corp Head mounted type display system provided with line-of- sight detecting function
JP2006277396A (en) * 2005-03-29 2006-10-12 Kyocera Mita Corp Personal identification device
US7401920B1 (en) * 2003-05-20 2008-07-22 Elbit Systems Ltd. Head mounted eye tracking and display system
JP2013088291A (en) * 2011-10-18 2013-05-13 Fuji Electric Co Ltd Visual inspection support device and method for controlling visual inspection support device
JP3211551U (en) * 2017-05-10 2017-07-20 国立大学法人 熊本大学 Microscope attachment and microscope image observation system



Also Published As

Publication number Publication date
JPWO2019044264A1 (en) 2020-08-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18850833

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019539061

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18850833

Country of ref document: EP

Kind code of ref document: A1