TW202222245A - Examination method, machine learning execution method, examination device, and machine learning execution device


Info

Publication number
TW202222245A
Authority
TW
Taiwan
Prior art keywords
learning
inference
data
subject
machine learning
Application number
TW110139818A
Other languages
Chinese (zh)
Inventor
吉田雅貴
宮崎汐理
長谷川誠
Original Assignee
日商獅子股份有限公司
Priority claimed from JP2020180593A (published as JP2022071558A)
Priority claimed from JP2020217683A (published as JP2022102757A)
Priority claimed from JP2020217684A (published as JP2022102758A)
Priority claimed from JP2020217685A (published as JP2022102759A)
Application filed by 日商獅子股份有限公司
Publication of TW202222245A

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016 Operational features thereof
    • A61B 3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Computing Systems (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

This examination method acquires inference image data representing an inference image in which an eye of an inference subject is depicted, and inputs the inference image data into a machine learning program trained with teacher data in which learning image data, representing a learning image depicting an eye of a learning subject, is given as the question and learning state data, indicating the eye state of the learning subject, is given as the answer. The method causes the machine learning program to infer the eye state of the inference subject and to output inference data indicating that eye state.

Description

Examination method, machine learning execution method, examination device, and machine learning execution device

The present invention relates to an examination method, a machine learning execution method, an examination device, and a machine learning execution device. This application claims priority based on Japanese Patent Application No. 2020-180593 filed in Japan on October 28, 2020, Japanese Patent Application No. 2020-217683 filed in Japan on December 25, 2020, Japanese Patent Application No. 2020-217684 filed in Japan on December 25, 2020, and Japanese Patent Application No. 2020-217685 filed in Japan on December 25, 2020, the contents of which are incorporated herein by reference.

Currently, particularly in developed countries, electronic devices equipped with displays, such as personal computers, smartphones, and tablets, have become widespread, and the number of people who experience eye discomfort has increased sharply. Moreover, with the growth of online services delivered over the Internet and the increasing number of companies promoting telework, this number is expected to rise further.

Examples of such relatively common eye discomfort caused by lifestyle habits include eye strain, and dry eye accompanied by at least one of corneal and conjunctival epithelial damage, mucin damage, tear film instability, and meibomian gland dysfunction. Determining the presence and degree of these conditions requires examinations using reagents, medical equipment, and the like. One example of such an examination is an examination performed with the ophthalmic diagnosis support apparatus disclosed in Patent Document 1.

The ophthalmic diagnosis support apparatus includes a case data storage unit, a machine learning unit, a patient data acquisition unit, a comparison determination unit, and a result display unit. The case data storage unit stores multiple sets of case base image data, each combining ophthalmic diagnostic image data with a diagnostic result. The machine learning unit extracts characteristic image elements and classifies the case base image data. The patient data acquisition unit acquires a patient's ophthalmic diagnostic image data. The comparison determination unit compares the patient's ophthalmic diagnostic image data with the case base image data and determines the similarity between them. The result display unit displays the result of the similarity determination.
[Prior Art Documents]
[Patent Documents]

[Patent Document 1] Japanese Patent Laid-Open No. 2020-36835

[Problems to Be Solved by the Invention]

However, the ophthalmic diagnosis support apparatus described above requires ophthalmic diagnostic data to be acquired with examination equipment used in medical institutions. It therefore cannot easily check the presence and degree of eye strain, or the presence and degree of dry eye in people who experience eye discomfort.

Therefore, an object of the present invention is to provide an examination method, a machine learning execution method, an examination device, and a machine learning execution device that can predict eye-related examination results without actually performing the examination.
[Means for Solving the Problems]

One aspect of the present invention is an examination method that acquires inference image data representing an inference image in which an eye of an inference subject is depicted, and inputs the inference image data into a machine learning program trained with teacher data in which learning image data, representing a learning image depicting an eye of a learning subject, is given as the question and learning state data, indicating the eye state of the learning subject, is given as the answer. The examination method causes the machine learning program to infer the eye state of the inference subject and to output inference data indicating the eye state of the inference subject.
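The patent does not disclose an implementation, but the inference flow of this aspect can be sketched as follows. `EyeStateModel`, its `predict` method, and the grayscale-mean heuristic are hypothetical stand-ins for the trained machine learning program, used only to make the data flow concrete.

```python
# Hypothetical sketch of the claimed inference flow: acquire inference
# image data, input it to a trained machine learning program, and output
# inference data indicating the subject's eye state.

class EyeStateModel:
    """Stand-in for the trained machine learning program."""

    def __init__(self, threshold=0.5):
        # Placeholder for learned parameters.
        self.threshold = threshold

    def predict(self, image):
        # Toy heuristic instead of a real model: the mean pixel
        # intensity of the inference image decides the inferred state.
        flat = [px for row in image for px in row]
        score = sum(flat) / len(flat)
        state = "dry_eye_suspected" if score > self.threshold else "normal"
        return {"eye_state": state, "score": score}


def run_examination(model, inference_image):
    """Acquire inference image data, input it to the machine learning
    program, and return inference data indicating the eye state."""
    return model.predict(inference_image)


model = EyeStateModel()
inference_image = [[0.8, 0.9], [0.7, 0.6]]  # placeholder inference image data
print(run_examination(model, inference_image)["eye_state"])  # dry_eye_suspected
```

In a real system the stand-in model would be replaced by the trained machine learning program described later, and the inference image would come from a camera rather than a literal list.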

One aspect of the present invention is the examination method described above, which outputs the inference data representing an image in which regions of the inference subject's eye, as depicted in the inference image, that satisfy a predetermined condition are displayed differently from regions that do not satisfy the predetermined condition.

One aspect of the present invention is the examination method described above, wherein the machine learning program has been trained with the teacher data further including, as part of the answer, learning measure data indicating measures for improving the eye state of the learning subject, and the examination method causes the machine learning program to output the inference data indicating measures for improving the eye state of the inference subject.

One aspect of the present invention is the examination method described above, which acquires the inference image data representing the inference image in which at least part of the cornea of the inference subject's eye is depicted, and causes the machine learning program to output the inference data, the machine learning program having been trained with the teacher data in which the learning image data, representing the learning image depicting at least part of the cornea of the learning subject's eye, is given as the question.

One aspect of the present invention is the examination method described above, wherein the machine learning program has been trained with the teacher data further including, as the answer, learning examination result data indicating the result of an examination that measures the thickness of the tear lipid layer of the learning subject's eye, and the examination method causes the machine learning program to output the inference data.

One aspect of the present invention is the examination method described above, which acquires the inference image data representing the inference image in which at least part of the conjunctiva of the inference subject's eye is depicted, and causes the machine learning program to output the inference data, the machine learning program having been trained with the teacher data in which the learning image data, representing the learning image depicting at least part of the conjunctiva of the learning subject's eye, is given as the question.

One aspect of the present invention is the examination method described above, which further acquires inference hyperemia data indicating the degree of hyperemia of the inference subject's eye. The machine learning program has been trained with the teacher data further including, as part of the question, learning hyperemia data indicating the degree of hyperemia of the learning subject's eye, and including, as the answer, at least one of learning examination image data, representing an image of the learning subject's eye taken while an examination related to dry eye symptoms was performed on that eye, and learning examination result data indicating the result of that examination. The examination method causes the machine learning program to infer the dry eye symptoms appearing in the inference subject's eye and to output, as the inference data, symptom data indicating those symptoms.

One aspect of the present invention is the examination method described above, which further acquires inference answer data indicating the results of answers to inquiries about subjective eye symptoms experienced by the inference subject. The machine learning program has been trained with the teacher data further including, as part of the question, learning answer data indicating the results of answers to inquiries about subjective eye symptoms experienced by the learning subject, and including, as the answer, at least one of learning examination image data, representing an image of the learning subject's eye taken while an examination related to dry eye symptoms was performed on that eye, and learning examination result data indicating the result of that examination. The examination method causes the machine learning program to infer the dry eye symptoms appearing in the inference subject's eye and to output, as the inference data, symptom data indicating those symptoms.

One aspect of the present invention is the examination method described above, which further acquires inference eyelid-opening data indicating the maximum eye-opening time, that is, the time for which the inference subject can continuously keep open the eye photographed in the inference image. The machine learning program has been trained with the teacher data further including, as part of the question, learning eyelid-opening data indicating the maximum eye-opening time for which the learning subject can continuously keep open the eye photographed in the learning image, and including, as the answer, at least one of learning examination image data, representing an image of the learning subject's eye taken while an examination related to dry eye symptoms was performed on that eye, and learning examination result data indicating examination results related to dry eye symptoms. The examination method causes the machine learning program to infer the dry eye symptoms appearing in the inference subject's eye and to output, as the inference data, symptom data indicating those symptoms.

One aspect of the present invention is the examination method described above, which acquires, as the inference image data, inference tear meniscus image data representing an inference tear meniscus image, obtained by cropping the region depicting the tear meniscus of the inference subject from the inference image depicting the tear meniscus of the inference subject's eye. The machine learning program has been trained with the teacher data in which learning tear meniscus image data, representing a learning tear meniscus image obtained by cropping the region depicting the tear meniscus of the learning subject from the learning image depicting the tear meniscus of the learning subject's eye, is given as the learning image data of the question, and at least one of learning examination image data, representing an image of the learning subject's eye taken while an examination related to dry eye symptoms was performed on that eye, and learning examination result data indicating examination results related to dry eye symptoms is given as the answer. The examination method causes the machine learning program to infer the dry eye symptoms appearing in the inference subject's eye and to output, as the inference data, symptom data indicating those symptoms.

One aspect of the present invention is the examination method described above, which acquires, as the inference image data, inference illumination image data representing an inference illumination image, obtained by cropping the region depicting the illumination reflected on the cornea of the inference subject from the inference image depicting that reflected illumination. The machine learning program has been trained with the teacher data in which learning illumination image data, representing a learning illumination image obtained by cropping the region depicting the illumination reflected on the cornea of the learning subject from the learning image depicting that reflected illumination, is given as the learning image data of the question, and at least one of learning examination image data, representing an image of the learning subject's eye taken while an examination related to dry eye symptoms was performed on that eye, and learning examination result data indicating examination results related to dry eye symptoms is given as the answer. The examination method causes the machine learning program to infer the dry eye symptoms appearing in the inference subject's eye and to output, as the inference data, symptom data indicating those symptoms.

One aspect of the present invention is the examination method described above, which acquires the inference image data representing the inference image depicting the inference subject's eye as it would appear when illuminated by light of a lower color temperature than the light actually illuminating the inference subject's eye. The machine learning program has been trained with the teacher data in which the learning image data, representing the learning image depicting the learning subject's eye as it would appear when illuminated by light of a lower color temperature than the light actually illuminating the learning subject's eye, is given as the question, and learning examination result data indicating the result of an examination measuring the thickness of the tear lipid layer of the learning subject's eye is given as the answer. The examination method causes the machine learning program to infer the dry eye symptoms appearing in the inference subject's eye and to output, as the inference data, symptom data indicating those symptoms.

One aspect of the present invention is a machine learning execution method that acquires teacher data in which learning image data representing a learning image depicting an eye of a learning subject is given as the question and learning state data indicating the eye state of the learning subject is given as the answer, inputs the teacher data into a machine learning program, and causes the machine learning program to learn.
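As a hedged illustration of this machine learning execution method, the sketch below pairs a question (learning image data, reduced here to a single mean-intensity feature) with an answer (learning state data) as teacher data and inputs it to a trivial learner. The logistic-regression learner and the 0/1 state encoding are assumed stand-ins, not the patent's machine learning program.

```python
import math

# Teacher data: (question, answer) pairs. The question is learning image
# data; the answer is learning state data (1 = dry eye suspected, 0 = normal).

def mean_intensity(image):
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def train(teacher_data, epochs=200, lr=1.0):
    """Input the teacher data to a minimal learner and make it learn by
    gradient descent on the logistic loss (stand-in for the machine
    learning program of this aspect)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for image, answer in teacher_data:
            x = mean_intensity(image)
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            grad = p - answer                         # logistic-loss gradient
            w -= lr * grad * x
            b -= lr * grad
    return w, b

teacher_data = [
    ([[0.9, 0.8], [0.9, 0.7]], 1),  # question: learning image, answer: dry eye
    ([[0.1, 0.2], [0.1, 0.1]], 0),  # question: learning image, answer: normal
]
w, b = train(teacher_data)
p = 1.0 / (1.0 + math.exp(-(w * 0.85 + b)))
print(p > 0.5)  # a bright (high mean intensity) image lands on the dry-eye side
```

In practice the learner would be a neural network operating on the image itself; the single-feature reduction here only keeps the teacher-data flow visible.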

One aspect of the present invention is an examination device including: a data acquisition unit that acquires inference image data representing an inference image in which an eye of an inference subject is depicted; and an inference unit that inputs the inference image data into a machine learning program trained with teacher data in which learning image data representing a learning image depicting an eye of a learning subject is given as the question and learning state data indicating the eye state of the learning subject is given as the answer. The inference unit causes the machine learning program to infer the eye state of the inference subject and to output inference data indicating the eye state of the inference subject.

One aspect of the present invention is a machine learning execution device including: a teacher data acquisition unit that acquires teacher data in which learning image data representing a learning image depicting an eye of a learning subject is given as the question and learning state data indicating the eye state of the learning subject is given as the answer; and a machine learning execution unit that inputs the teacher data into a machine learning program and causes the machine learning program to learn.
[Effects of the Invention]

According to the present invention, eye-related examination results can be predicted without actually performing an examination.

[Embodiments]
Specific examples of the examination method, machine learning execution method, examination device, and machine learning execution device according to the embodiments will be described with reference to FIGS. 1 to 72.

First, specific examples of the machine learning execution method and the machine learning execution device according to the embodiment will be described with reference to FIGS. 1 to 3. FIG. 1 shows an example of the hardware configuration of the machine learning execution device according to the embodiment. The machine learning execution device 10 shown in FIG. 1 causes the machine learning device 700, described later, to perform machine learning in the learning phase of the machine learning device 700. As shown in FIG. 1, the machine learning execution device 10 includes a processor 11, a main storage device 12, a communication interface 13, an auxiliary storage device 14, an input/output device 15, and a bus 16.

The processor 11 is, for example, a central processing unit (CPU); it reads out and executes the machine learning execution program 100, described later, to realize the functions of that program. The processor 11 may also read out and execute programs other than the machine learning execution program 100 to realize functions needed in addition to those of the machine learning execution program 100.

The main storage device 12 is, for example, random access memory (RAM), and stores in advance the machine learning execution program 100 and other programs read out and executed by the processor 11.

The communication interface 13 is an interface circuit for communicating with the machine learning device 700 and other equipment via the network NW shown in FIG. 1. The network NW is, for example, a local area network (LAN) or an intranet.

The auxiliary storage device 14 is, for example, a hard disk drive (HDD), a solid state drive (SSD), flash memory, or read-only memory (ROM).

The input/output device 15 is, for example, an input/output port. The keyboard 811, mouse 812, and display 910 shown in FIG. 1, for example, are connected to the input/output device 15. The keyboard 811 and mouse 812 are used, for example, to enter the data needed to operate the machine learning execution device 10. The display 910 displays, for example, the graphical user interface (GUI) of the machine learning execution device 10.

The bus 16 connects the processor 11, main storage device 12, communication interface 13, auxiliary storage device 14, and input/output device 15 so that they can exchange data with one another.

FIG. 2 shows an example of the software configuration of the machine learning execution device according to the embodiment. The machine learning execution device 10 uses the processor 11 to read out and execute the machine learning execution program 100, realizing the teacher data acquisition function 101 and the machine learning execution function 102 shown in FIG. 2.

The teacher data acquisition function 101 acquires teacher data in which learning image data representing a learning image depicting the eye of a learning subject is given as the question and learning state data indicating the eye state of the learning subject is given as the answer.

The eye state of the learning subject is reflected, for example, in the results of examinations performed on the learning subject's eye: examinations related to dry eye symptoms, corneal epithelial damage, or mucin damage, examinations measuring the tear film break-up time of the learning subject's eye, or examinations measuring the thickness of the tear lipid layer.

The learning image data constitutes at least part of the question in the teacher data and represents a learning image depicting the eye of the learning subject. The learning image is captured using, for example, a camera connected to an examination apparatus described later, or a camera mounted on a smartphone.

In addition, the teacher data may further include, as part of the answer, learning measure data representing measures for improving the eye state of the learning subject.

The machine learning execution function 102 inputs the teacher data into the machine learning program 750 installed in the machine learning device 700 and causes the machine learning program 750 to learn. For example, the machine learning execution function 102 causes the machine learning program 750, which includes a convolutional neural network (CNN), to learn by backpropagation.

Next, an example of the processing executed by the machine learning execution program 100 of the embodiment will be described with reference to FIG. 3. FIG. 3 is a flowchart showing an example of the processing executed by the machine learning execution program of the embodiment. The machine learning execution program 100 executes the processing shown in FIG. 3 at least once.

In step S11, the teacher data acquisition function 101 acquires teacher data in which the learning image data representing a learning image depicting the eye of the learning subject serves as the question and the learning state data representing the eye state of the learning subject serves as the answer.

In step S12, the machine learning execution function 102 inputs the teacher data into the machine learning program 750 and causes the machine learning program 750 to learn.
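The question-and-answer training of steps S11 and S12 can be sketched in a simplified form. The sketch below assumes synthetic feature vectors in place of real learning image data and substitutes a single logistic layer trained by gradient descent for the CNN trained by backpropagation; the names make_teacher_data and EyeStateModel are illustrative and do not appear in the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_teacher_data(n=200, n_features=16):
    """Synthetic stand-in for teacher data: feature vectors (questions)
    and binary eye-state labels (answers)."""
    x = rng.normal(size=(n, n_features))
    true_w = rng.normal(size=n_features)
    y = (x @ true_w > 0).astype(float)
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class EyeStateModel:
    """One linear layer with a sigmoid; a toy stand-in for the CNN."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def learn(self, x, y, epochs=300):
        """Gradient-descent training (the one-layer analogue of backpropagation)."""
        for _ in range(epochs):
            p = sigmoid(x @ self.w + self.b)
            grad_w = x.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
            grad_b = np.mean(p - y)
            self.w -= self.lr * grad_w
            self.b -= self.lr * grad_b

    def infer(self, x):
        return sigmoid(x @ self.w + self.b)

x, y = make_teacher_data()            # step S11: acquire (question, answer) pairs
model = EyeStateModel(x.shape[1])
model.learn(x, y)                     # step S12: make the model learn
accuracy = np.mean((model.infer(x) > 0.5) == y)
```

In the embodiment the same loop is carried out by the machine learning device 700 on image tensors rather than on flat feature vectors.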

Next, specific examples of the examination program, the examination apparatus, and the examination method of the embodiment will be described with reference to FIGS. 4 to 17.

FIG. 4 is a diagram showing an example of the hardware configuration of the examination apparatus according to the embodiment. The examination apparatus 20 shown in FIG. 4 is an apparatus that, in the inference phase of the machine learning device 700 that has finished learning with the machine learning execution program 100, uses the machine learning device 700 to infer the eye state of an inference subject. As shown in FIG. 4, the examination apparatus 20 includes a processor 21, a main storage device 22, a communication interface 23, an auxiliary storage device 24, a touch panel display 25, and a bus 26.

The processor 21 is, for example, a CPU, and reads out and executes the examination program 200 described later to realize the functions of the examination program 200. The processor 21 may also read out and execute programs other than the examination program 200 to realize functions required in addition to those of the examination program 200.

The main storage device 22 is, for example, a RAM, and stores in advance the examination program 200 and other programs read out and executed by the processor 21.

The communication interface 23 is an interface circuit for communicating with the machine learning device 700 and other equipment via the network NW shown in FIG. 4. The network NW is, for example, a LAN or an intranet.

The auxiliary storage device 24 is, for example, a hard disk drive, a solid state drive, a flash memory, or a ROM.

The touch panel display 25 is, for example, an input/output port. The touch panel display 25 displays, for example, the graphical user interface of the examination apparatus 20 and the content represented by the inference data described later.

The bus 26 connects the processor 21, the main storage device 22, the communication interface 23, the auxiliary storage device 24, and the touch panel display 25 so that they can exchange data with one another.

FIG. 5 is a diagram showing an example of the appearance of the examination apparatus according to the embodiment. As shown in FIG. 5, the examination apparatus 20 is supported by a support P attached to the upper part of a base B. The examination apparatus 20, the base B, and the support P are installed, for example, in a drugstore or the like.

FIG. 6 is a diagram showing an example of the software configuration of the examination apparatus according to the embodiment. The examination apparatus 20 uses the processor 21 to read out and execute the examination program 200, thereby realizing the data acquisition function 201 and the inference function 202 shown in FIG. 6.

The data acquisition function 201 acquires inference image data. The inference image data is data representing an inference image depicting the eye of the inference subject. The inference image is captured using, for example, a camera connected to the examination apparatus described later, or a camera mounted on a smartphone.

The inference function 202 inputs the inference image data into the machine learning program 750 that has finished learning with the machine learning execution function 102, and causes the machine learning program 750 to infer the eye state of the inference subject.

The inference function 202 then causes the machine learning program 750 to output inference data representing the eye state of the inference subject. In this case, the inference data is used, for example, to display on the touch panel display 25 a numerical value expressing the eye state of the inference subject. The inference data may also represent, for example, the degree of fatigue of the eye, determined based on the state of the cornea, tears, and the like of the eye of the inference subject that can be read from the inference image data.

The inference function 202 may also cause the machine learning program 750 to output inference data representing measures for improving the eye state of the inference subject.

FIG. 7 is a diagram showing an example of an image displayed by the examination apparatus according to the embodiment. The image A7 shown in FIG. 7 is a face image of the inference subject captured by the camera connected to the examination apparatus 20 after the face of the inference subject has been recognized while the subject stands in front of the camera. The two rectangles drawn in the image A7 indicate the left eye and the right eye of the inference subject, respectively. The image A7 is an example of the inference image.

FIG. 8 is a diagram showing an example of an image displayed by the examination apparatus according to the embodiment. The image A8 shown in FIG. 8 is an image showing the result of inferring the eye state of the inference subject based on the inference image data representing the image A7, which is an example of the inference image. As shown in FIG. 8, the image A8 includes a score representing the corneal state of the inference subject and an icon "low" indicating that the score is low. As shown in FIG. 8, the image A8 also includes an image A81 in which a figure indicating the damaged portion of the cornea is superimposed on the eye image of the inference subject.

FIG. 9 is a diagram showing an example of an image displayed by the examination apparatus according to the embodiment. The image A9 shown in FIG. 9 is an image showing the result of inferring the eye state of the inference subject based on the inference image data representing the image A7, which is an example of the inference image. The image A9 shows a radar chart representing a score related to the corneal state of the inference subject, a score related to the aqueous layer of the tear fluid, a score related to the oil layer of the tear fluid, and a score related to the mucin layer. The image A9 may instead show numerical values, icons, or the like representing these four scores in place of the radar chart shown in FIG. 9. The image A9 also represents the degree of fatigue of the eye of the inference subject on a five-stage display. The image A9 may be displayed on the touch panel display 25 instead of the image A8.
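The four scores and the five-stage fatigue display shown in the image A9 could be derived from raw model outputs as sketched below. The 0-100 score scale and the level boundaries are assumptions for illustration; the embodiment does not specify them.

```python
# Hypothetical post-processing of raw model outputs in [0, 1] into the
# radar-chart scores and the five-stage fatigue display of image A9.

def to_scores(raw):
    """Map raw outputs for cornea / aqueous layer / oil layer / mucin layer
    to 0-100 scores for a radar chart (scale is an assumption)."""
    keys = ("cornea", "aqueous_layer", "oil_layer", "mucin_layer")
    return {k: round(100 * v) for k, v in zip(keys, raw)}

def fatigue_level(raw_fatigue):
    """Quantize a 0-1 fatigue estimate into five stages, levels 1..5
    (equal-width bins are an assumption)."""
    return min(4, int(raw_fatigue * 5)) + 1

scores = to_scores([0.42, 0.80, 0.65, 0.91])
level = fatigue_level(0.55)
```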

FIG. 10 is a diagram showing an example of an image displayed by the examination apparatus according to the embodiment. The image A11 shown in FIG. 10 shows the result of inferring the cause of the eye state of the inference subject shown in the image A10. For example, after the image A8 and the image A9 are displayed on the touch panel display 25, the image A11 is displayed on the touch panel display 25.

FIG. 11 is a diagram showing an example of an image displayed by the examination apparatus according to the embodiment. The image A13 shown in FIG. 11 shows an example of a countermeasure against the cause shown in the image A11. After the image A11 is displayed on the touch panel display 25, the image A13 is displayed on the touch panel display 25.

FIG. 12 is a diagram showing an example of an image displayed by the examination apparatus according to the embodiment. The image A14 shown in FIG. 12 shows an example of eye drops useful with respect to the image A11 or the image A13. After the image A12 or the image A13 is displayed on the touch panel display 25, the image A14 is displayed on the touch panel display 25.

FIGS. 13 to 16 are diagrams showing examples of eye images displayed in place of the eye image shown in FIG. 8. The inference function 202 outputs inference data representing an image in which, within the eye of the inference subject depicted in the inference image, the display mode of a region satisfying a prescribed condition differs from that of a region not satisfying the prescribed condition. The prescribed condition mentioned here is at least one of: a condition that the probability that the aqueous layer of the tear fluid is reduced exceeds a prescribed threshold, a condition that the probability that the oil layer of the tear fluid is reduced exceeds a prescribed threshold, and a condition that the probability that at least one of the cornea and the conjunctiva is damaged exceeds a prescribed threshold.

For example, as shown in FIG. 13, the inference function 202 may superimpose on the inference image diagonal hatching indicating a region where the aqueous layer of the tear fluid may be reduced. The diagonal hatching may have different densities depending on the degree of reduction of the aqueous layer of the tear fluid.

Alternatively, as shown in FIG. 14, the inference function 202 may superimpose on the inference image vertical hatching indicating a region where the oil layer of the tear fluid may be reduced. The vertical hatching may have different densities depending on the degree of reduction of the oil layer of the tear fluid.

Alternatively, as shown in FIG. 15, the inference function 202 may superimpose on the inference image dotted hatching indicating a region where the cornea and the conjunctiva may be damaged. The dotted hatching may have different densities depending on the degree of damage.

Alternatively, as shown in FIG. 16, the inference function 202 may superimpose the diagonal hatching, the vertical hatching, and the dotted hatching on the inference image.
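The superimposed hatching of FIGS. 13 to 16 can be sketched as a per-pixel overlay whose density increases with the estimated probability. The threshold value, hatch spacing, and hatch color below are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def hatch_overlay(image, prob_map, threshold=0.5, base_spacing=8):
    """Return a copy of `image` with diagonal hatching over pixels where
    prob_map > threshold; higher probability -> denser hatching."""
    out = image.copy()
    h, w = prob_map.shape
    for y in range(h):
        for x in range(w):
            p = prob_map[y, x]
            if p <= threshold:
                continue
            # spacing shrinks (hatching densifies) as p approaches 1
            spacing = max(2, int(base_spacing * (1.0 - p)) + 2)
            if (x + y) % spacing == 0:       # diagonal stripe pattern
                out[y, x] = [255, 0, 0]      # draw the hatch line in red
    return out

# Toy example: 16x16 gray image, high probability in the lower-right quadrant.
img = np.full((16, 16, 3), 128, dtype=np.uint8)
prob = np.zeros((16, 16))
prob[8:, 8:] = 0.9
marked = hatch_overlay(img, prob)
```

Vertical or dotted hatching as in FIGS. 14 and 15 would use a different stripe predicate (e.g. `x % spacing == 0`, or `x % spacing == 0 and y % spacing == 0`) in the same loop.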

Next, an example of the processing executed by the examination program 200 of the embodiment will be described with reference to FIG. 17. FIG. 17 is a flowchart showing an example of the processing executed by the examination program of the embodiment.

In step S21, the data acquisition function 201 acquires inference image data representing an inference image depicting the eye of the inference subject.

In step S22, the inference function 202 inputs the inference image data into the machine learning program 750, causes the machine learning program 750 to infer the eye state of the inference subject, and causes the machine learning program 750 to output inference data representing the eye state of the inference subject.

The machine learning execution method, the examination method, the machine learning execution apparatus, and the examination apparatus of the embodiment have been described above.

The machine learning execution program 100 includes the teacher data acquisition function 101 and the machine learning execution function 102.

The teacher data acquisition function 101 acquires teacher data in which the learning image data representing a learning image depicting the eye of the learning subject serves as the question and the learning state data representing the eye state of the learning subject serves as the answer.

The machine learning execution function 102 inputs the teacher data into the machine learning program 750 and causes the machine learning program 750 to learn.

Thereby, the machine learning execution program 100 can generate a machine learning program 750 that predicts the eye state based on learning image data.

The examination program 200 includes the data acquisition function 201 and the inference function 202.

The data acquisition function 201 acquires inference image data. The inference image data is data representing an image depicting the eye of the inference subject.

The inference function 202 inputs the inference image data into the machine learning program 750 that has finished learning with the machine learning execution program 100 to infer the eye state of the inference subject. The inference function 202 then causes the machine learning program 750 to output inference data representing the eye state of the inference subject.

Thereby, the examination program 200 can predict eye-related examination results without actually performing the examination.

The examination program 200 also outputs inference data representing an image in which, within the eye of the inference subject depicted in the inference image, the display mode of a region satisfying the prescribed condition differs from that of a region not satisfying the prescribed condition. Thereby, the examination program 200 can present each of the regions satisfying the prescribed condition and the regions not satisfying it in a more easily recognizable manner.

The examination program 200 also outputs, based on the inference data, inference data representing measures for improving the eye state of the inference subject. Thereby, the examination program 200 can notify the inference subject of measures for improving the eye state.

At least some of the functions of the machine learning execution program 100 may be realized by hardware including circuitry. Likewise, at least some of the functions of the examination program 200 may be realized by hardware including circuitry. Such hardware is, for example, a large scale integration (LSI) circuit, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a graphics processing unit (GPU).

At least some of the functions of the machine learning execution program 100 may also be realized by cooperation between software and hardware. Likewise, at least some of the functions of the examination program 200 may also be realized by cooperation between software and hardware. Such hardware may be integrated into one unit or divided into multiple units.

In the embodiment, the case where the machine learning execution device 10, the machine learning device 700, and the examination apparatus 20 are mutually independent devices has been described as an example, but the present invention is not limited to this. These devices may also be realized as one device.

The examination apparatus 20 may also be implemented in forms other than the above. For example, the examination apparatus 20 may be a learning tablet, a game machine, a computer, or the like. In this case, the examination apparatus 20 can predict the eye-related examination results of a learner, a game player, or a computer user.

Since the examination apparatus 20 does not require special equipment or instruments such as medical equipment and medical instruments, the labor of the inference subject going to a hospital can be saved and remote diagnosis and the like can be achieved. For example, when the examination apparatus 20 is a smartphone, the eye-related examination results of the inference subject can be predicted while saving the inference subject the labor of going to a hospital. Alternatively, when the examination apparatus 20 is a smartphone, the eye-related examination results of the inference subject can be predicted while saving this labor, allowing the inference subject to actually feel the effect of a treatment, or a prescription for contact lenses or the like to be issued. Examples of the effect of such treatment include a reduction in damage to the cornea or conjunctiva and moisturizing of the eye by the aqueous layer or oil layer of the tear fluid. Furthermore, the inference subject may be an animal such as a dog or a cat.

Next, a specific example of the first example of the embodiment will be described with reference to FIGS. 18 to 31.

FIG. 18 is a diagram showing an example of the configuration of the examination system of the first example. The examination system E1 includes a photographing apparatus 1 and an examination apparatus 2.

The photographing apparatus 1 includes a digital camera for photographing the eye of a user U1. As an example, the photographing apparatus 1 is a smartphone including a digital camera. The photographing apparatus 1 transmits the eye image of the user U1 captured by the digital camera, that is, user image data P1, to the examination apparatus 2. As an example, the examination apparatus 2 is a server.

The examination apparatus 2 stores a learning result L1 obtained by deep learning. The learning result L1 is obtained by using deep learning to learn the relationship between examinee image data, which is an eye image of an examinee captured with a digital camera, and examinee examination data, which includes the results of examining the eye state of the examinee. The examinee examination data may be measured values, score values based on predetermined criteria, or image data. In the first example, the eye state is at least one of the tear fluid state and the corneoconjunctival state. The tear fluid state includes at least one of the amount of tear fluid, the thickness of the tear oil layer, the damaged area of the mucin layer, the tear film break-up time, and the non-invasive tear film break-up time. The corneoconjunctival state includes the damaged area of the cornea and conjunctiva. For the score values, for example, the criteria described in the non-patent document "Jun Shimazaki, 2006 Dry Eye Diagnostic Criteria, 'Atarashii Ganka', Medical Aoi, February 28, 2007, Vol. 24, February issue, p. 181-184" may be used.

The examinee refers to the provider of the examinee image data and the examinee examination data used for learning.

The examination apparatus 2 predicts examination data, including the result that would be obtained when the eye state of the user U1 is examined, based on the learning result obtained by deep learning and the user image data P1. The examination apparatus 2 transmits the prediction result A1, which is the result of predicting the examination data, to the photographing apparatus 1. The photographing apparatus 1 presents the prediction result A1 to the user U1.

[Configuration of the photographing apparatus]
FIG. 19 is a diagram showing an example of the configuration of a photographing apparatus according to a modification of the first example. The photographing apparatus 1 includes a photographing apparatus control unit 100, a photographing unit 110, a photographing apparatus communication unit 120, a display unit 130, and an operation accepting unit 140.

The photographing apparatus control unit 100 includes an image acquisition unit 1000, a photographing condition determination unit 1010, a white eyeball and black eyeball portion extraction unit 1020, an image output unit 1030, a prediction result acquisition unit 1040, and a presentation unit 1050. As an example, the photographing apparatus control unit 100 is realized by a CPU, and the image acquisition unit 1000, the photographing condition determination unit 1010, the white eyeball and black eyeball portion extraction unit 1020, the image output unit 1030, the prediction result acquisition unit 1040, and the presentation unit 1050 are each realized by the CPU reading a program from a ROM and executing the processing.

The image acquisition unit 1000 acquires a photographed image P0 using the photographing unit 110. The photographed image P0 is an image obtained by photographing the eye of the user U1. As an example, the photographed image P0 is an image of the eye of the user U1 captured in a state where the eye is not stained with a staining reagent or the like. As an example, the photographed image P0 is a single image. The photographed image P0 may also be a set of multiple images. The photographing unit 110 may also photograph the eye of the user U1 in a stained state. Staining refers to, for example, fluorescein staining or lissamine green staining.

The photographing condition determination unit 1010 determines whether the photographed image P0 acquired by the image acquisition unit 1000 satisfies prescribed photographing conditions.

Here, the prescribed photographing conditions will be described with reference to FIG. 20. FIG. 20 is a diagram showing an example of the prescribed photographing conditions in the modification of the first example. Before describing the prescribed photographing conditions, the black eyeball and the white eyeball in the first example will be described. The black eyeball refers to the portion corresponding to the cornea when the eye is viewed from the front. When the eye is viewed from the front, the pupil and the iris lie inside the cornea; in other words, the black eyeball is also the portion including the pupil and the iris. On the other hand, the white eyeball refers to the portion corresponding to the bulbar conjunctiva when the eye is viewed from the front.

In the first example, the prescribed photographing conditions include the following three conditions.

The first photographing condition is that the entire black eyeball is captured.

The second photographing condition is that, for the portion in which the white eyeball and the black eyeball are combined, the ratio of the vertical length to the horizontal length (referred to as the aspect ratio in the following description) is within a prescribed range. As an example, the prescribed range of the aspect ratio is from 0.62 to 0.75. In FIG. 20, the photographed images R1, R2, and R3 are examples of photographed images for explaining the second photographing condition. The aspect ratios of the photographed images R1, R2, and R3 are 0.61, 0.64, and 0.76, respectively. In the photographed image R1, the eyelid is not sufficiently open, so the second photographing condition is not satisfied. In the photographed image R2, the eyelid is sufficiently open, so the second photographing condition is satisfied. In the photographed image R3, the eyelid is excessively open, so the second photographing condition is not satisfied.

The third photographing condition is that the ratio of the vertical length of the portion in which the white eyeball and the black eyeball are combined to the vertical length of the black eyeball within that portion (referred to as the white-to-black eyeball ratio in the following description) is within a prescribed range. As an example, the prescribed range of the white-to-black eyeball ratio is from 1.20 to 1.40. In FIG. 20, the photographed images R4, R5, and R6 are examples of photographed images for explaining the third photographing condition. The white-to-black eyeball ratios of the photographed images R4, R5, and R6 are 1.12, 1.31, and 1.51, respectively. The photographed image R5 satisfies the third photographing condition. The photographed images R4 and R6 do not satisfy the third photographing condition.
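The second and third photographing conditions can be expressed directly as range checks. The numeric ranges below come from the text above; the function names and the inclusive treatment of the range bounds are assumptions for illustration.

```python
def aspect_ratio_ok(eye_height, eye_width):
    """Second condition: the vertical/horizontal ratio of the combined
    white-and-black eyeball portion must lie in 0.62-0.75."""
    return 0.62 <= eye_height / eye_width <= 0.75

def white_black_ratio_ok(eye_height, black_eye_height):
    """Third condition: the combined-portion height divided by the
    black-eyeball height must lie in 1.20-1.40."""
    return 1.20 <= eye_height / black_eye_height <= 1.40

def photographing_conditions_met(whole_black_eye_visible, eye_height,
                                 eye_width, black_eye_height):
    """All three conditions illustrated in FIG. 20 must hold."""
    return (whole_black_eye_visible
            and aspect_ratio_ok(eye_height, eye_width)
            and white_black_ratio_ok(eye_height, black_eye_height))
```

With lengths in pixels, the example values from FIG. 20 behave as described: aspect ratios 0.61 and 0.76 fail while 0.64 passes, and white-to-black ratios 1.12 and 1.51 fail while 1.31 passes.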

Returning to FIG. 19, the description of the configuration of the photographing apparatus 1 will be continued.

The white eyeball and black eyeball portion extraction unit 1020 crops, from the captured image P0, the portion of the eye formed by the white eyeball and the black eyeball combined. This combined portion is the eyeball part of the eye, excluding the eyelids, the skin, and the like. The image obtained by cropping this portion from the captured image P0 is referred to as the user image data P1. FIG. 21 shows an example of the user image data P1, in which everything other than the combined white-and-black eyeball portion, including the eyelids and the skin, has been deleted. In the first embodiment, the user image data P1 is thus an image obtained by cropping the combined white-and-black eyeball portion of the eye.
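The cropping performed by the extraction unit 1020 can be sketched as masking: every pixel outside the combined eyeball region is blanked out. This is a minimal sketch, assuming the boolean mask is produced by an image-recognition step not shown here; it is not the patent's implementation.

```python
import numpy as np

def crop_eyeball_portion(image: np.ndarray, eyeball_mask: np.ndarray) -> np.ndarray:
    """Return user image data P1: pixels outside the combined white-and-black
    eyeball region (eyelids, skin, etc.) are deleted (set to zero).
    `eyeball_mask` is a boolean array with the same shape as `image`."""
    p1 = np.zeros_like(image)
    p1[eyeball_mask] = image[eyeball_mask]
    return p1
```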

Alternatively, the captured image P0 may be used directly as the user image data P1, without cropping the combined white-and-black eyeball portion. In that case, the white eyeball and black eyeball portion extraction unit 1020 can be omitted from the configuration of the photographing apparatus control unit 100.

The image output unit 1030 outputs the user image data P1 to the inspection apparatus 2.

The prediction result acquisition unit 1040 acquires the prediction result A1 from the inspection apparatus 2. The prediction result A1 expresses the state of the eye as, for example, scores, and includes at least one of a score for the tear fluid state and a score for the corneal and conjunctival state.

The presentation unit 1050 presents the prediction result A1 by displaying it on the display unit 130.

The photographing unit 110 includes a digital camera and photographs the eye of the user U1, preferably from the front. When the photographing unit 110 photographs the eye of the user U1, it is preferable that the focus is adjusted appropriately even when the distance between the lens of the digital camera and the eye of the user U1 is short.

The image captured by the photographing unit 110 may be a still image or a moving image. When the photographing unit 110 captures a moving image, the image acquisition unit 1000 selects the captured image P0 from among the plurality of frames included in the moving image; for example, it selects a frame that satisfies the predetermined imaging conditions. In that case, the image acquisition unit 1000 causes the imaging condition determination unit 1010 to determine, for each frame included in the moving image, whether the predetermined imaging conditions are satisfied.
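The frame-selection step above can be sketched as a simple scan over the frames of the moving image. This is an illustrative sketch: `satisfies_conditions` stands in for the per-frame determination made by the imaging condition determination unit 1010, whose details are not specified here.

```python
def select_frame(frames, satisfies_conditions):
    """Pick the captured image P0 from a moving image: return the first frame
    that satisfies the predetermined imaging conditions, or None if no frame
    qualifies."""
    for frame in frames:
        if satisfies_conditions(frame):
            return frame
    return None
```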

The photographing apparatus communication unit 120 communicates with the inspection apparatus 2 via a wireless network, and includes the hardware required for this communication.

The display unit 130 displays various kinds of information, including the captured image P0 taken by the photographing unit 110 and the prediction result A1. The display unit 130 is a liquid crystal display or an organic electroluminescence (EL) display.

The operation accepting unit 140 accepts operations from the user U1. It includes, for example, a touch panel and is formed integrally with the display unit 130.

[Structure of the Inspection Apparatus]
FIG. 22 is a diagram showing an example of the configuration of the inspection apparatus according to a modification of the first embodiment. The inspection apparatus 2 includes an inspection apparatus control unit 200, a storage unit 210, and an inspection apparatus communication unit 220.

The inspection apparatus control unit 200 includes an image data acquisition unit 2000, a prediction unit 2010, a prediction result output unit 2020, and a learning unit 2030. As an example, the inspection apparatus control unit 200 is implemented by a CPU, and each of these units is realized by the CPU reading a program from a ROM and executing the corresponding processing.

The image data acquisition unit 2000 acquires the user image data P1 from the photographing apparatus 1.

The prediction unit 2010 predicts user examination data based on the learning result L10 and the user image data P1. The user examination data include the results of examining at least one of the tear fluid state and the corneal and conjunctival state of the user U1.

The prediction result output unit 2020 outputs the user examination data predicted by the prediction unit 2010 to the photographing apparatus 1 as the prediction result A1.

The learning unit 2030 learns the relationship between subject image data and subject examination data. The subject image data are images of a subject's eye captured with a digital camera, which may differ from the digital camera included in the photographing apparatus 1.

The subject examination data include the examination results of the subject's eye state. There may be a plurality of subjects, and the subjects may or may not include the user U1. The photographing of a subject's eye and the examination of that subject's eye state are performed on the same date and at the same time.

The amount of tear fluid is examined, for example, with optical coherence tomography, using the tear meniscus height as an index. The tear lipid layer thickness is examined, for example, with an optical interferometer. The mucin layer damage area is examined, for example, using the stained area or staining density of an eye stained with lissamine green as an index. The tear film breakup time is examined, for example, using as an index the time from a blink until the fluorescein-stained tear film breaks up. The non-invasive tear film breakup time is examined using as an index the time from a blink until the tear film, visualized by optical interference imaging, breaks up. The corneal and conjunctival damage area is examined, for example, using the stained area or staining density of an eye stained with fluorescein as an index.

The learning unit 2030 performs learning based on a data set including combinations of subject image data and subject examination data. The data set is stored in advance in the storage unit 210, from which the learning unit 2030 can retrieve it; the learning unit 2030 may instead acquire the data set from an external device such as a server. The learning unit 2030 stores the learned result in the storage unit 210 as the learning result L10.

As an example, the learning unit 2030 performs learning based on deep learning. The learning result L10 is information representing a neural network whose weight and bias parameters have been updated by the learning.

The subject image data are images obtained by photographing the subjects' eyes in a state that satisfies the predetermined imaging conditions. Alternatively, the subject image data need not satisfy the predetermined imaging conditions, or may include both images that satisfy them and images that do not.

When images satisfying the predetermined imaging conditions are used as the user image data P1, as in the first embodiment, the subject image data preferably also satisfy those conditions in order to improve the prediction accuracy.

The subject image data are images obtained by cropping the combined white-and-black eyeball portion of the eye. Alternatively, the subject image data may be images from which this portion has not been cropped, or may include both cropped and uncropped images.

When images obtained by cropping the combined white-and-black eyeball portion are used as the user image data P1, as in the first embodiment, the subject image data are preferably also cropped in the same way in order to improve the prediction accuracy.

The subject image data include images of normal eyes and images of abnormal eyes. An image of an eye with an abnormal corneal and conjunctival damage area is, for example, an eye image captured on the same date and at the same time as an examination in which the corneal and conjunctival damage area was 33% or more; an image of a normal eye in this respect is one captured when that area was less than 33%. Similarly, images that are abnormal with respect to the mucin layer damage area, the tear film breakup time, the non-invasive tear film breakup time, the tear lipid layer thickness, or the amount of tear fluid are, for example, eye images captured on the same date and at the same time as examinations in which the mucin layer damage area was 33% or more, the tear film breakup time was 5 seconds or less, the non-invasive tear film breakup time was 10 seconds or less, the tear lipid layer thickness was less than 80 μm, or the amount of tear fluid was less than 310 μm, respectively. Images that are normal in these respects are eye images captured when the mucin layer damage area was less than 33%, the tear film breakup time exceeded 5 seconds, the non-invasive tear film breakup time exceeded 10 seconds, the tear lipid layer thickness was 80 μm or more, and the amount of tear fluid was 310 μm or more.
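The normal/abnormal distinction above reduces to a set of threshold tests. A minimal sketch, with the thresholds taken from the text; the dictionary key names for the six examination items are assumptions made for illustration.

```python
# Hypothetical normal/abnormal labeling of one subject examination record,
# using the thresholds stated above.  Key names are assumed.

def is_abnormal(exam: dict) -> bool:
    return (
        exam["corneal_conjunctival_damage_pct"] >= 33
        or exam["mucin_layer_damage_pct"] >= 33
        or exam["tear_breakup_time_s"] <= 5
        or exam["noninvasive_tear_breakup_time_s"] <= 10
        or exam["lipid_layer_thickness_um"] < 80
        or exam["tear_amount_um"] < 310
    )
```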

In the first embodiment, the number of normal-eye images and the number of abnormal-eye images included in the subject image data are equal, for example 1000 each. To improve the prediction accuracy, these two numbers are preferably equal, although they may also differ.

The learning unit 2030 may deform the subject image data before performing learning. Such deformation includes, for example, enlargement and reduction, rotation, vertical and horizontal translation, and shear strain. For example, by deforming the subject image data, the learning unit 2030 can place the black eyeball portion at the center of each image and make its size uniform across images.
The learning unit 2030 may also combine the original subject image data with new images obtained by deforming them, thereby enlarging the subject image data set.
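The data-enlargement step described above keeps the originals and adds deformed copies. A minimal sketch: each transform stands in for one of the deformations named in the text (scaling, rotation, translation, shear); the actual deformation functions are not specified by the patent.

```python
import random

def augment_dataset(images, transforms, rng=None):
    """Enlarge a set of subject images: keep the originals and append one
    deformed copy of each image, with the deformation picked from
    `transforms`."""
    rng = rng or random.Random(0)
    augmented = list(images)                  # keep the original images
    for image in images:
        transform = rng.choice(transforms)    # pick one deformation
        augmented.append(transform(image))
    return augmented
```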

In the neural network used by the learning unit 2030 for deep learning, the number of intermediate layers is three, although more than three intermediate layers may also be used.
The learning unit 2030 executes deep learning a predetermined number of times, for example five, although a number of times other than five may also be used.
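The network shape described above (an input layer taking pixel values, three intermediate layers, an output layer) can be sketched as follows. The layer widths, the 64x64 input size, and the ReLU activation are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [64 * 64, 128, 64, 32, 1]   # input, three intermediate layers, output

# The learning result L10 corresponds to exactly these weight and bias
# parameters after training has updated them.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(pixels: np.ndarray) -> np.ndarray:
    """Propagate a flattened pixel vector through the network."""
    x = pixels
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)        # ReLU on each intermediate layer
    return x @ weights[-1] + biases[-1]       # raw output value

EPOCHS = 5  # learning is repeated the predetermined number of times
```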

The learning unit 2030 may also perform learning based on machine learning other than deep learning.

A result learned by an external device may be stored in advance in the storage unit 210 as the learning result L10. In that case, the learning unit 2030 can be omitted from the configuration of the inspection apparatus control unit 200.

The storage unit 210 stores various kinds of information, including the learning result L10. The storage unit 210 is configured using a storage device such as a magnetic hard disk device or a semiconductor storage device.

The inspection apparatus communication unit 220 communicates with the photographing apparatus 1 via a wireless network, and includes the hardware required for this communication.

[Eye State Prediction Processing of the Photographing Apparatus]
Next, with reference to FIG. 23, the eye state prediction processing, in which the photographing apparatus 1 photographs the eye of the user U1 and presents the prediction result A1, is described. FIG. 23 is a diagram showing an example of the eye state prediction processing according to a modification of the first embodiment.

Step S10: The photographing apparatus control unit 100 causes the display unit 130 to display the photographing screen, which is shown while the photographing unit 110 photographs the eye of the user U1. The face image of the user U1 captured by the photographing unit 110 is displayed on the photographing screen in real time.

In addition to the face image of the user U1, the photographing screen may display a photographing guide: text, icons, and the like that indicate the photographing procedure to the user U1. The guide may include text and icons instructing the user U1 so that the eye is photographed in a state satisfying the predetermined imaging conditions. An example of such text is "Please open your eyes wide enough that the eyelids do not cover the black eyeball."; an example of such icons is arrows displayed above and below the eye image of the user U1 on the photographing screen, prompting the user to open the eyes.

Step S20: The photographing unit 110 photographs the eye of the user U1 and generates the captured image P0.

Step S30: The image acquisition unit 1000 acquires the captured image P0 from the photographing unit 110 and supplies it to the imaging condition determination unit 1010.

Step S40: The imaging condition determination unit 1010 determines whether the captured image P0 acquired by the image acquisition unit 1000 satisfies the predetermined imaging conditions. The imaging condition determination unit 1010 extracts the black eyeball portion and the white eyeball portion from the captured image P0, using image recognition techniques for this extraction, and makes the determination based on the extracted portions.

In the first embodiment, the imaging condition determination unit 1010 determines that the predetermined imaging conditions are satisfied when all three of the imaging conditions described above are satisfied. Alternatively, it may determine that the predetermined imaging conditions are satisfied when one or more of the three conditions are satisfied.

When the imaging condition determination unit 1010 determines that the predetermined imaging conditions are satisfied (step S40; YES), it supplies the captured image P0 to the white eyeball and black eyeball portion extraction unit 1020, and the photographing apparatus control unit 100 then executes the processing of step S50. When the imaging condition determination unit 1010 determines that the predetermined imaging conditions are not satisfied (step S40; NO), the photographing apparatus control unit 100 executes the processing of step S90.

Step S50: The white eyeball and black eyeball portion extraction unit 1020 crops the combined white-and-black eyeball portion of the eye from the captured image P0. Using image recognition techniques, it distinguishes the combined white-and-black eyeball portion from the remaining portions of the captured image P0, and crops the former by deleting the latter.

The white eyeball and black eyeball portion extraction unit 1020 supplies the image obtained by this cropping to the image output unit 1030 as the user image data P1.

Step S60: The image output unit 1030 outputs the user image data P1 to the inspection apparatus 2 by transmitting it via the photographing apparatus communication unit 120.

Before the image output unit 1030 outputs the user image data P1 to the inspection apparatus 2, the photographing apparatus control unit 100 may display the captured user image data P1 on the display unit 130 to prompt the user U1 for confirmation. If the user U1 approves the user image data P1, the image output unit 1030 outputs it to the inspection apparatus 2; if not, the photographing apparatus control unit 100 may execute the processing of step S10 again. The approval by the user U1 is given as an operation through the operation accepting unit 140.

Step S70: The prediction result acquisition unit 1040 acquires the prediction result A1 from the inspection apparatus 2, receiving it via the photographing apparatus communication unit 120.

Step S80: The presentation unit 1050 presents the prediction result A1 by displaying it on the display unit 130.

Step S90: The imaging condition determination unit 1010 controls the display unit 130 so that it displays a photographing guide. The guide displayed in step S90 includes, for example, content corresponding to whichever of the predetermined imaging conditions was not satisfied. For example, when the captured image P0 does not satisfy the first imaging condition, text such as "Please open your eyes wide enough that the eyelids do not cover the black eyeball." is displayed. The same photographing guide as in step S10 may also be displayed in step S90.

After the imaging condition determination unit 1010 finishes the processing of step S90, the photographing apparatus 1 executes the processing of step S20 again.

With the above, the photographing apparatus 1 ends the eye state prediction processing.

As described above, the user image data P1 are generated from a captured image P0 that satisfies the predetermined imaging conditions; the user image data P1 are therefore an image of the eye of the user U1 photographed in a state satisfying those conditions.

Alternatively, the user image data P1 need not satisfy the predetermined imaging conditions. In that case, the processing of step S40 is omitted, and the imaging condition determination unit 1010 can be omitted from the configuration of the photographing apparatus control unit 100.

Here, the various screens displayed on the display unit 130 of the photographing apparatus 1 are described with reference to FIGS. 24 to 27.

FIG. 24 is a diagram showing an example of the photographing screen according to a modification of the first embodiment. The photographing screen G1 displays a face image including the eyes of the user U1 together with a photographing guide, in which the photographing procedure is shown as text.

FIG. 25 is a diagram showing an example of the photographing screen according to a modification of the first embodiment. On the photographing screen G2, the photographing of the eye of the user U1 has been completed, and the captured user image data P2 are displayed.

The photographing screen G2 displays a "Start evaluation" button, with which the user U1 approves the captured user image data P2, and a "Retake" button, with which photographing is performed again.

FIG. 26 is a diagram showing an example of the prediction result display screen according to a modification of the first embodiment. Based on the prediction result A1, the prediction result display screen G3 displays examination data including the results of examining the eye state of the user U1. As an example, the screen G3 uses scores to display the respective states of the tear lipid layer thickness, the amount of tear fluid, the mucin layer damage area, and the corneal and conjunctival damage area. In addition, based on the prediction result A1, the screen G3 displays a comprehensive prediction of the health risk to the eyes of the user U1.

[Prediction Processing of the Inspection Apparatus]
Next, the prediction processing of the inspection apparatus 2 is described with reference to FIG. 27. FIG. 27 is a diagram showing an example of the prediction processing according to a modification of the first embodiment. The prediction processing shown in FIG. 27 is executed after the processing of step S60 in the eye state prediction processing shown in FIG. 23.

Step S100: The image data acquisition unit 2000 acquires the user image data P1 from the photographing apparatus 1, receiving it via the inspection apparatus communication unit 220, and supplies the acquired user image data P1 to the prediction unit 2010.

Step S110: The prediction unit 2010 acquires the learning result L10 from the storage unit 210.

Step S120: The prediction unit 2010 predicts the user examination data based on the learning result L10 and the user image data P1, using deep learning for the prediction.

As described above, the learning result L10 is the result of learning, by deep learning, the relationship between the subject image data, i.e., images of subjects' eyes captured with a digital camera, and the subject examination data, which include the examination results of the subjects' eye states, based on a data set including combinations of the two. That is, the learning result L10 is a result of learning based on a data set of combined subject image data and subject examination data. The prediction unit 2010 therefore predicts the user examination data based on this data set and on the user image data P1.

The prediction unit 2010 inputs the pixel values of the pixels constituting the user image data P1 to the input layer of the neural network used for deep learning. The output layer of the neural network outputs a value corresponding to the input pixel values and to the weights and biases. The value output by the output layer is associated in advance with a score for the eye state.
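The correspondence between the output-layer value and an eye-state score, established in advance, can be sketched as a lookup over pre-assigned intervals. The interval boundaries and the four-level score below are assumptions made for illustration; the patent does not specify them.

```python
# Hypothetical mapping from the network's output value to an eye-state score.
# Intervals and score levels are assumed, not given in the patent.

SCORE_BINS = [
    (0.00, 0.25, 1),   # e.g. poorest score
    (0.25, 0.50, 2),
    (0.50, 0.75, 3),
    (0.75, 1.01, 4),   # e.g. best score
]

def output_to_score(output_value: float) -> int:
    for low, high, score in SCORE_BINS:
        if low <= output_value < high:
            return score
    raise ValueError(f"output value {output_value} outside expected range")
```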

The prediction unit 2010 supplies the prediction result A1 to the prediction result output unit 2020.

Step S130: The prediction result output unit 2020 outputs the prediction result A1 to the photographing apparatus 1, transmitting it via the inspection apparatus communication unit 220.

With the above, the inspection apparatus 2 ends the prediction processing.

[Summary of the first embodiment]
As described above, the examination device 2 of the first embodiment includes the image data acquisition unit 2000 and the prediction unit 2010.

The image data acquisition unit 2000 acquires user image data P1, which is an image of the eye of the user U1 captured with a digital camera (the imaging device 1 in the first embodiment).

The prediction unit 2010 predicts user examination data, which include the results of examining at least one of the tear state and the corneal and conjunctival state of the user U1 (the eye state in the first embodiment), based on the user image data P1 and on the following data set (the learning result L10 in the first embodiment): a data set comprising combinations of subject image data, which are images of subjects' eyes captured with a digital camera, and subject examination data, which contain examination results of at least one of the tear state and the corneal and conjunctival state of those subjects (the eye state in the first embodiment).

With this configuration, the examination device 2 of the first embodiment can predict, from an image of the eye of the user U1 captured with a digital camera, user examination data including the results of examining at least one of the user U1's tear state and corneal and conjunctival state, and can therefore examine the eye state with a simple examination. In the first embodiment, the eye state is at least one of the tear state and the corneal and conjunctival state. The examination device 2 can examine the eye state with a simple examination by photographing the eye of the user U1 with a digital camera attached to a smartphone or the like, without using ophthalmic examination equipment.

In the examination device 2 of the first embodiment, the eye images of the subjects and the eye image of the user U1 are each images captured with the eye in an unstained state.

With this configuration, the examination device 2 of the first embodiment can examine the eye state with a simple examination by photographing the eye of the user U1 with a digital camera attached to a smartphone or the like, without using a staining agent.

In the examination device 2 of the first embodiment, the eye images of the subjects and the eye image of the user U1 are each images captured in a state satisfying predetermined imaging conditions.

With this configuration, the examination device 2 of the first embodiment can predict the user examination data more accurately than when the predetermined imaging conditions are not satisfied.

In the examination device 2 of the first embodiment, the eye images of the subjects and the eye image of the user U1 are each images obtained by cropping out the portion of the eye in which the white and the black of the eye are combined.

With this configuration, the examination device 2 of the first embodiment can predict the user examination data after excluding, from the eye images of the subjects and of the user U1, features of parts other than the eyeball, such as the eyelids and the skin. It can therefore predict the user examination data more accurately than when the combined white-and-black portion of the eye is not cropped out.

In the examination device 2 of the first embodiment, the prediction unit 2010 predicts the user examination data based on the user image data P1 and the learning result L10, which is obtained by learning, through deep learning, the relationship between the subject image data and the subject examination data from a data set comprising combinations of the two.
With this configuration, since the examination device 2 of the first embodiment makes its prediction based on the learning result L10 obtained through deep learning, the prediction accuracy can be higher than when deep learning is not used.

In the first embodiment, an example was described in which the imaging device 1 and the examination device 2 are provided as separate bodies in the examination system E1, but the configuration is not limited to this. The imaging device 1 and the examination device 2 may be an integrated device.

In the first embodiment, an example was described in which the imaging device 1 and the examination device 2 communicate by wireless communication, but the configuration is not limited to this. The imaging device 1 and the examination device 2 may also communicate by wired communication.

The imaging device 1 may also be provided with some of the functions of the examination device 2. For example, the imaging device 1 may include the prediction unit 2010. The learning result L10 may also be stored in the imaging device 1.

The examination device 2 may also be provided with some of the functions of the imaging device 1. For example, the examination device 2 may include the imaging condition determination unit 1010, the white-eye and black-eye portion extraction unit 1020, and so on. In the first embodiment, because the imaging device 1 includes the imaging condition determination unit 1010 and the white-eye and black-eye portion extraction unit 1020, the processing of determining the imaging conditions and of cropping out the white-eye and black-eye portion is performed by the imaging device 1. The load that these processes place on the examination device 2 (that is, the server) can therefore be reduced.

In the first embodiment, an example was described in which the imaging device 1 is a smartphone, but the device is not limited to this. The imaging device 1 may be a terminal device such as a personal computer (PC) equipped with a digital camera and installed in a store such as a drugstore. The imaging device 1 may also be a tablet terminal equipped with a digital camera, or a game machine equipped with a digital camera.

The imaging device 1 may also photograph the eyes of the user U1 while the user U1 is using the imaging device 1, and notify the user of eye fatigue based on the prediction result A1, for example by displaying an alarm or sounding a warning tone. In this case, the imaging device 1 photographs the eyes of the user U1 at a predetermined cycle.

(Modification of the first embodiment)
A modification of the first embodiment will be described with reference to FIG. 28 to FIG. 31. In the first embodiment described above, the case was explained in which the eye state is predicted based on user image data, that is, an image of the user's eye captured with a digital camera. In this modification, the case is explained in which the eye state is predicted based on the user image data and, in addition, on the results of the user's answers to a questionnaire.
The imaging device of this modification is referred to as imaging device 1a, and the examination device as examination device 2a. Components identical to those of the first embodiment are given the same reference signs, and descriptions of identical configurations and operations are omitted.

The configuration of the imaging device 1a is the same as that of the imaging device 1 of the first embodiment except that the imaging device 1a acquires a user answer result Q1, so a detailed description of the configuration is omitted.

The imaging device 1a acquires the user answer result Q1 by having the operation accepting unit 140 accept an operation by the user U1. The user answer result Q1 is the result of the user U1's answers to a questionnaire. The questionnaire is displayed, for example, on the imaging screen. It contains, for example, questions about subjective symptoms of the eyes of the user U1. In the questionnaire, the user reports, for example by choosing from options, the subjective eye symptoms at the time the eyes of the user U1 are photographed with the imaging device 1a.

FIG. 28 is a diagram showing an example of the configuration of the examination device 2a of this modification.
Comparing the examination device 2a of this modification (FIG. 28) with the examination device 2 of the first embodiment (FIG. 22), the examination device control unit 200a and the learning result L10a differ. The other structural units have the same functions as in the first embodiment. Descriptions of functions identical to those of the first embodiment are omitted, and this modification is described with a focus on the parts that differ from the first embodiment.

The examination device control unit 200a includes the image data acquisition unit 2000, a prediction unit 2010a, the prediction result output unit 2020, a learning unit 2030a, and an answer result acquisition unit 2040a.
The learning unit 2030a learns the relationship among the subject answer results, the subject image data, and the subject examination data. A subject answer result is the result of a subject's answers to the questionnaire.

The learning unit 2030a uses deep learning for the learning. The pixel values of the pixels constituting the subject image data and the subject answer result are input together to the neural network used for this deep learning.
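Inputting the pixel values together with the answer results can be sketched as simple feature concatenation into one input vector. The numeric encoding of the questionnaire answers is an assumption made here for illustration; the embodiment does not specify one.

```python
def build_input_vector(pixel_values, answer_results):
    """Concatenate the pixel values of the subject image data with the
    numerically encoded questionnaire answers into a single input
    vector for the neural network (encoding scheme assumed)."""
    return list(pixel_values) + list(answer_results)

# Hypothetical example: 6 pixel values plus 3 answers,
# where 1 = symptom reported and 0 = symptom not reported.
pixels = [0.12, 0.55, 0.98, 0.40, 0.33, 0.76]
answers = [1, 0, 1]
x = build_input_vector(pixels, answers)
```

The same construction would be used both when training on subject data and when predicting from the user image data P1 and the user answer result Q1.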

The answer result acquisition unit 2040a acquires the user answer result Q1 from the imaging device 1.
The prediction unit 2010a predicts the user examination data based on the learning result L10a, the user answer result Q1, and the user image data P1. The prediction unit 2010a performs the prediction based on deep learning.

The storage unit 210 stores the learning result L10a. The learning result L10a is the result of learning, by deep learning, the relationship between the subject answer results and subject image data on the one hand and the subject examination data on the other, based on a second subject data set comprising combinations of subject answer results, subject image data, and subject examination data. That is, the learning result L10a is a result obtained by learning based on the second subject data set comprising combinations of subject answer results, subject image data, and subject examination data. The prediction unit 2010a therefore predicts the user examination data based on this second subject data set, the user image data P1, and the user answer result Q1.

Before the imaging device 1a outputs the user image data P1 and the user answer result Q1 to the examination device 2, the imaging screen G4 shown in FIG. 29 is displayed.

On the imaging screen G4, the user image data P2 and the questionnaire are displayed together.

To present the prediction result A1 that the imaging device 1a has acquired from the examination device 2, the prediction result display screen G6 shown in FIG. 30 and the history screen G7 shown in FIG. 31 are displayed.

The prediction result display screen G6 shown in FIG. 30 displays advice for improving the eye state according to the prediction result A1.

On the history screen G7 shown in FIG. 31, the history of scores relating to the eye state is displayed as a graph.

The examination device 2a of this modification includes the answer result acquisition unit 2040a. The answer result acquisition unit 2040a acquires the user answer result Q1, which is the result of the user U1's answers to the questionnaire. The prediction unit 2010a predicts the user examination data based on the second subject data set, comprising combinations of subject answer results (the results of subjects' answers to the questionnaire), subject image data, and subject examination data, together with the user image data P1 and the user answer result Q1.

With this configuration, the examination device 2a of this modification can improve the prediction accuracy based on the user answer result Q1. In addition, the examination device 2a can generate, from the prediction result A1, advice to the user U1 for improving the eye state.

In the first embodiment and the modification described above, a subject and the user U1 may be the same person. In that case, the subject image data is an image of the eye of the user U1 captured with a digital camera at a time before the eye image used as the user image data P1 was captured. Likewise, when a subject and the user U1 are the same person, the subject answer result is the result of the user U1's answers to the questionnaire obtained at a time before the answers used as the user answer result Q1 were obtained.

When the subject and the user U1 are the same person, the examination device 2 or the examination device 2a can predict the user examination data more accurately than when the subject and the user U1 are not the same person.

When the subject and the user U1 are the same person, the examination device 2 or the examination device 2a is suitably used for remote treatment and/or remote diagnosis with the user U1 as a patient.

The user U1, as a patient, visits an ophthalmologist at least once, and image data and examination data (a data set) of the user's own eyes are acquired and accumulated. Even without going to the hospital afterward, the user U1 can have the tear and corneal state determined simply by capturing an image of the user's own eye with the digital camera of the imaging device 1 (for example, a smartphone) and sending the image to the ophthalmologist. By using the examination device 2 or the examination device 2a, the user U1 can reduce the burden of traveling to the hospital, for example over a long distance. Moreover, even without sending the eye image to an ophthalmologist, the user U1 can follow the course of the symptoms on his or her own. The user U1 therefore only needs to go to the hospital if the symptoms worsen, which can reduce medical costs.

A part of the imaging device 1, the imaging device 1a, the examination device 2, or the examination device 2a in the first embodiment, for example the imaging device control unit 100, the examination device control unit 200, or the examination device control unit 200a, may also be realized by a computer. In that case, this can be achieved by recording a program for realizing the control functions on a computer-readable recording medium and having a computer system read and execute the program recorded on that recording medium. The "computer system" referred to here is a computer system built into the imaging device 1, the imaging device 1a, the examination device 2, or the examination device 2a, and is assumed to include an operating system (OS) and hardware such as peripheral devices. The "computer-readable recording medium" refers to portable media such as flexible disks, magneto-optical disks, ROMs, and compact disc (CD)-ROMs, and to storage devices such as hard disks built into computer systems.
Furthermore, the "computer-readable recording medium" may also include media that hold the program dynamically for a short time, such as the communication line used when the program is transmitted over a network such as the Internet or over a communication line such as a telephone line, and media that hold the program for a certain time, such as the volatile memory inside the computer system serving as the server or the client in that case. The program may be one for realizing a part of the functions described above, or one that realizes those functions in combination with a program already recorded in the computer system.

A part or all of the imaging device 1, the imaging device 1a, the examination device 2, and the examination device 2a in the first embodiment may also be realized as an integrated circuit such as an LSI (Large Scale Integration). The functional blocks of the imaging device 1, the imaging device 1a, the examination device 2, and the examination device 2a may each be made into an individual processor, or some or all of them may be integrated into one processor. The method of circuit integration is not limited to LSI; it may be realized with a dedicated circuit or a general-purpose processor. Furthermore, if a circuit-integration technology replacing LSI emerges as semiconductor technology advances, an integrated circuit based on that technology may also be used.

Next, a specific example according to the second embodiment will be described with reference to FIG. 32 to FIG. 38.

FIG. 32 is a diagram showing an example of the hardware configuration of the machine learning execution device of the second embodiment. The machine learning execution device 10b shown in FIG. 32 is a device that causes a machine learning device 700b, described later, to execute machine learning in the learning phase of the machine learning device 700b. As shown in FIG. 32, the machine learning execution device 10b includes a processor 11b, a main storage device 12b, a communication interface 13b, an auxiliary storage device 14b, an input/output device 15b, and a bus 16b.

The processor 11b is, for example, a CPU; it reads out and executes a machine learning execution program 100b, described later, to realize the functions of the machine learning execution program 100b. The processor 11b may also read out and execute programs other than the machine learning execution program 100b to realize, in addition to the functions of the machine learning execution program 100b, other necessary functions.

The main storage device 12b is, for example, a RAM, and stores in advance the machine learning execution program 100b and the other programs read out and executed by the processor 11b.

The communication interface 13b is an interface circuit for communicating with the machine learning device 700b and other equipment via the network NW shown in FIG. 32. The network NW is, for example, a LAN or an intranet.

The auxiliary storage device 14b is, for example, a hard disk drive, a solid-state drive, a flash memory, or a ROM.

The input/output device 15b is, for example, an input/output port. The input/output device 15b is connected to, for example, the keyboard 811b, the mouse 812b, and the display 910b shown in FIG. 32. The keyboard 811b and the mouse 812b are used, for example, to input the data necessary to operate the machine learning execution device 10b. The display 910b displays, for example, the graphical user interface of the machine learning execution device 10b.

The bus 16b connects the processor 11b, the main storage device 12b, the communication interface 13b, the auxiliary storage device 14b, and the input/output device 15b so that they can exchange data with one another.

FIG. 33 is a diagram showing an example of the software configuration of the machine learning execution program of the second embodiment. The machine learning execution device 10b uses the processor 11b to read out and execute the machine learning execution program 100b, thereby realizing the teacher data acquisition function 101b and the machine learning execution function 102b shown in FIG. 33.

The teacher data acquisition function 101b acquires teacher data in which one piece of learning image data and one piece of learning hyperemia data serve as the question, and at least one of one piece of learning examination image data and learning examination result data serves as the answer.

The learning image data is data that constitutes a part of the question in the teacher data and represents a learning image in which the eye of a learning subject is depicted. The learning image is captured, for example, with a camera mounted on a smartphone.

The learning hyperemia data is data that constitutes a part of the question in the teacher data and indicates the degree of hyperemia of the eye of the learning subject.

For example, the learning hyperemia data may indicate a value that is calculated from the learning image data and expresses the degree of hyperemia of the eye of the learning subject. Specifically, the learning hyperemia data may indicate the following value: the learning image, in which the eye of the learning subject is depicted in color, is converted into an image of only the green channel, the luminance assigned to each pixel of that image is inverted to obtain a grayscale image, and the value corresponds to the area of the region in that grayscale image in which the blood vessels of the conjunctiva appear in white.

Furthermore, this value may be calculated using an image generated by cropping the region depicting the conjunctiva out of the grayscale image before evaluating the area. The value indicates that the larger the area of the region depicting the blood vessels of the conjunctiva, the greater the degree of hyperemia of the eye, and the smaller that area, the smaller the degree of hyperemia.
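The computation described in the two paragraphs above (keep the green channel, invert the luminance, measure the white vessel area) can be sketched as follows. The 0 to 255 value range, the brightness threshold deciding which inverted pixels count as "white", and the tiny sample patch are all assumptions for illustration.

```python
def hyperemia_value(rgb_pixels, white_threshold=200):
    """rgb_pixels: rows of (r, g, b) tuples with values 0-255, assumed
    to already be cropped to the conjunctiva region. Keeps only the
    green channel, inverts its luminance, and returns the fraction of
    pixels that appear white, a proxy for the vessel area."""
    total = 0
    vessel = 0
    for row in rgb_pixels:
        for r, g, b in row:
            inverted = 255 - g  # green channel only, luminance inverted
            total += 1
            if inverted >= white_threshold:  # bright pixel = blood vessel
                vessel += 1
    return vessel / total

# Hypothetical 2x3 conjunctiva crop: reddish vessel pixels have low
# green values, so they invert to bright (white) pixels.
patch = [
    [(180, 30, 40), (200, 240, 230), (190, 20, 35)],
    [(185, 250, 240), (178, 25, 30), (195, 245, 235)],
]
value = hyperemia_value(patch)
```

A larger returned fraction corresponds to a larger vessel area and hence a greater degree of hyperemia, matching the relationship stated above.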

Alternatively, the learning hyperemia data may indicate a value that is input through a user interface and expresses the degree of hyperemia of the eye of the learning subject. This user interface is displayed, for example, on the display 910b shown in FIG. 32, or on a touch-panel display mounted on a smartphone under contract to the learning subject.

The learning examination image data is data that constitutes a part of the answer in the teacher data and represents a learning examination image depicting the eye of the learning subject when an examination related to symptoms of dry eye is performed on that eye. Examples of such examinations include an examination of the degree of corneoconjunctival epithelial damage using a fluorescein staining reagent and an examination of the degree of mucin damage using a lissamine green staining reagent. Thus, for example, the learning examination image data is data representing an image depicting an eye stained with fluorescein or lissamine green.

The learning examination result data is data that constitutes a part of the answer in the teacher data and indicates the result of an examination related to symptoms of dry eye. An example of such an examination result is a numerical value of 0 or more and 1 or less expressing the degree of corneoconjunctival epithelial damage based on an examination using a fluorescein staining reagent. Another example is a numerical value of 0 or more and 1 or less expressing the degree of mucin damage based on an examination using a lissamine green staining reagent.

When these numerical values are 0 or more and less than 0.5, the area of corneoconjunctival epithelial damage or mucin damage stained with the fluorescein or lissamine green staining reagent is less than 30% of the total area of the eye, indicating that the eye of the learning subject is normal and does not need to be examined by an ophthalmologist. When these numerical values are 0.5 or more and 1 or less, the stained area of corneoconjunctival epithelial damage or mucin damage is 30% or more of the total area of the eye, indicating that the eye of the learning subject is abnormal and needs to be examined by an ophthalmologist.
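The 0.5 cut-off described above can be expressed as a small helper; the function name and boolean return convention are illustrative, not part of the embodiment.

```python
def needs_ophthalmologist(examination_value):
    """examination_value: a 0..1 value from the fluorescein or
    lissamine green examination. Below 0.5 the stained damage area is
    under 30% of the eye (normal, no examination needed); 0.5 or more
    means 30% or more (abnormal, examination needed)."""
    if not 0.0 <= examination_value <= 1.0:
        raise ValueError("examination value must be between 0 and 1")
    return examination_value >= 0.5
```

For example, a value of 0.2 indicates a normal eye, while 0.5 already indicates an abnormal eye requiring an ophthalmologist's examination.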

For example, the teacher data acquisition function 101b acquires 1280 pieces of teacher data. In this case, the 1280 pieces of learning image data represent 1280 learning images acquired by performing four rounds of a process in which each of the two eyes of eight learning subjects is photographed 20 times with a camera mounted on a smartphone. The 1280 pieces of learning hyperemia data represent 1280 values, each calculated from the learning image data and expressing the degree of hyperemia of the eye of the learning subject.

In this case, the 1280 pieces of learning examination image data represent the learning examination images captured when an examination related to symptoms of dry eye was performed on the eye of the learning subject depicted in each of the 1280 learning images. The 1280 pieces of learning examination result data represent 1280 numerical values of 0 or more and 1 or less, each indicating the result of the examination related to symptoms of dry eye performed when the 1280 learning examination images were captured.

再者，於以下的說明中，列舉以下情況為例進行說明：1280個學習用檢查結果資料包含表示表現學習用被檢查體的眼睛正常的為0以上且小於0.5的數值的640個學習用檢查結果資料、以及表示表現學習用被檢查體的眼睛異常的為0.5以上且1以下的數值的640個學習用檢查結果資料。In the following description, the case is taken as an example in which the 1280 pieces of examination result data for learning comprise 640 pieces with values of 0 or more and less than 0.5, indicating that the eyes of the subject for learning are normal, and 640 pieces with values of 0.5 or more and 1 or less, indicating that the eyes of the subject for learning are abnormal.

機械學習執行功能102b將教師資料輸入至機械學習裝置700b中所安裝的機械學習程式750b，使機械學習程式750b進行學習。例如，機械學習執行功能102b使包括卷積類神經網路的機械學習程式750b藉由後向傳播進行學習。The machine learning execution function 102b inputs the teacher data into the machine learning program 750b installed on the machine learning device 700b and causes the machine learning program 750b to learn. For example, the machine learning execution function 102b causes the machine learning program 750b, which includes a convolutional neural network, to learn by backpropagation.
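As a framework-free illustration of learning by backpropagation, the sketch below trains a single logistic unit by gradient descent. This is a toy stand-in for the convolutional network in machine learning program 750b, not the patent's actual model; all names and the one-feature data are invented for illustration.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=200):
    """Minimal backpropagation sketch: forward pass, gradient of the
    cross-entropy loss, and a weight update, on a single logistic unit.
    Each sample is a list of feature values; each label is 0 (normal)
    or 1 (abnormal)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # forward pass
            grad = p - y                     # dLoss/dz for cross-entropy
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]  # backward step
            b -= lr * grad
    return w, b

# Toy data: one feature (e.g. a hyperemia value); higher means abnormal.
xs = [[0.1], [0.2], [0.8], [0.9]]
ys = [0, 0, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + b)))

print(predict([0.1]) < 0.5, predict([0.9]) > 0.5)  # True True
```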

例如，機械學習執行功能102b將以下560個教師資料輸入至機械學習程式750b，即，所述560個教師資料包含表示表現學習用被檢查體的眼睛正常的、為0以上且小於0.5的數值的學習用檢查結果資料。該560個教師資料是自選自所述8名中的7名獲取的教師資料。For example, the machine learning execution function 102b inputs into the machine learning program 750b 560 pieces of teacher data whose examination result data for learning have values of 0 or more and less than 0.5, indicating that the eyes of the subject for learning are normal. These 560 pieces of teacher data were obtained from 7 subjects selected from the 8.

另外，例如，機械學習執行功能102b將以下560個教師資料輸入至機械學習程式750b，即，所述560個教師資料包含表示表現學習用被檢查體的眼睛異常的、為0.5以上且1以下的數值的學習用檢查結果資料。該560個教師資料是自選自所述8名中的7名獲取的教師資料。In addition, for example, the machine learning execution function 102b inputs into the machine learning program 750b 560 pieces of teacher data whose examination result data for learning have values of 0.5 or more and 1 or less, indicating that the eyes of the subject for learning are abnormal. These 560 pieces of teacher data were likewise obtained from the same 7 of the 8 subjects.

然後，機械學習執行功能102b利用所述1120個教師資料使機械學習程式750b進行學習。Then, the machine learning execution function 102b causes the machine learning program 750b to learn using these 1120 (560 + 560) pieces of teacher data.

另外，例如，機械學習執行功能102b亦可將以下80個教師資料作為測試資料輸入至機械學習程式750b，即，所述80個教師資料表現學習用被檢查體的眼睛正常，且未用於機械學習程式750b的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102b may input into the machine learning program 750b, as test data, 80 pieces of teacher data that indicate that the eyes of the subject for learning are normal and that were not used in the learning of the machine learning program 750b. These 80 pieces of teacher data were obtained from the one subject not among the 7 selected above.

另外，例如，機械學習執行功能102b亦可將以下80個教師資料作為測試資料輸入至機械學習程式750b，即，所述80個教師資料表現學習用被檢查體的眼睛異常，且未用於機械學習程式750b的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102b may input into the machine learning program 750b, as test data, 80 pieces of teacher data that indicate that the eyes of the subject for learning are abnormal and that were not used in the learning of the machine learning program 750b. These 80 pieces of teacher data were obtained from the same one subject not among the 7 selected above.

藉此，機械學習執行功能102b可對利用所述1120個教師資料使機械學習程式750b進行學習而獲得的機械學習程式750b的特性進行評價。Thereby, the machine learning execution function 102b can evaluate the characteristics of the machine learning program 750b obtained by training it on the 1120 pieces of teacher data.
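The split described above (7 subjects for training, 1 held out for testing) can be checked with a little arithmetic. The sketch assumes, as the counts above imply, 2 eyes × 20 shots × 4 sessions per subject, with half of each subject's images labeled normal and half abnormal:

```python
per_subject = 2 * 20 * 4        # eyes x shots x sessions = 160 images per subject

# Per the 640/640 normal/abnormal split, half of each subject's images
# fall in each class (an even spread implied by the 560/560 and 80/80 counts).
train_normal = 7 * per_subject // 2     # 560
train_abnormal = 7 * per_subject // 2   # 560
test_normal = 1 * per_subject // 2      # 80
test_abnormal = 1 * per_subject // 2    # 80

print(train_normal + train_abnormal, test_normal + test_abnormal)  # 1120 160
```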

再者，機械學習執行功能102b當使用測試資料對機械學習程式750b的特性進行評價時，例如可使用類激活映射（CAM：Class Activation Mapping）。類激活映射是使輸入至類神經網路的資料中成為由類神經網路輸出的結果的根據的部分明確的技術。作為類激活映射，例如可列舉梯度加權類激活映射。梯度加權類激活映射為以下技術：利用與由卷積類神經網路執行的卷積的特徵相關的分類得分的梯度，來確定輸入至卷積類神經網路的圖像中對分類給予了一定程度以上的影響的區域。Furthermore, when evaluating the characteristics of the machine learning program 750b with the test data, the machine learning execution function 102b may use, for example, class activation mapping (CAM). Class activation mapping is a technique that identifies which parts of the data input to a neural network form the basis of the result output by the network. One example is gradient-weighted class activation mapping (Grad-CAM). Grad-CAM uses the gradients of the classification score with respect to the feature maps produced by the convolutional neural network to locate the regions of the input image that influence the classification to more than a certain degree.
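As a rough, framework-free sketch of the Grad-CAM idea just described: each feature map is weighted by the global average of its gradients, the weighted maps are summed, and negative contributions are clipped. In practice the gradients come from a deep-learning library's autograd and the map is upsampled onto the input image; the tiny 2×2 maps below are invented for illustration.

```python
def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map over plain 2-D lists.

    `feature_maps[k]` is the k-th activation map of the last convolutional
    layer; `gradients[k]` holds d(class score)/d(activation) for that map.
    Returns ReLU(sum_k alpha_k * A_k), where alpha_k is the mean gradient."""
    height, width = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * width for _ in range(height)]
    for fmap, grad in zip(feature_maps, gradients):
        count = sum(len(row) for row in grad)
        alpha = sum(v for row in grad for v in row) / count  # global-average gradient
        for i in range(height):
            for j in range(width):
                cam[i][j] += alpha * fmap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]       # ReLU

# Two 2x2 maps: the first's gradients are positive, the second's negative.
maps = [[[1.0, 0.0], [0.0, 2.0]], [[3.0, 3.0], [3.0, 3.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-0.5, -0.5], [-0.5, -0.5]]]
print(grad_cam(maps, grads))  # [[0.0, 0.0], [0.0, 0.5]]
```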

圖34是表示第二實施例的機械學習程式於預測與角膜上皮損傷相關的檢查結果時，學習用被檢查體的眼睛圖像中所重點考慮的部分的一例的圖。例如，機械學習執行功能102b使用梯度加權類激活映射，評價為：圖34所示的學習用圖像中由圖34所示的橢圓C36包圍的區域對利用機械學習程式750b進行的、對與角膜上皮損傷相關的檢查結果的預測給予了一定程度以上的影響。另外，由圖34所示的橢圓C36包圍的區域所含的三個階段的灰階表示對與角膜上皮損傷相關的檢查結果的預測給予的影響程度，且均重疊顯示於描繪出學習用被檢查體的眼睛的結膜的血管的區域上。FIG. 34 shows an example of the portions of an eye image of a subject for learning that the machine learning program of the second embodiment weighted heavily when predicting an examination result related to corneal epithelial damage. For example, using Grad-CAM, the machine learning execution function 102b evaluated that the region enclosed by the ellipse C36 in the learning image shown in FIG. 34 influenced, to more than a certain degree, the machine learning program 750b's prediction of the examination result related to corneal epithelial damage. The three grayscale levels within the region enclosed by the ellipse C36 indicate the degree of influence on that prediction, and all of them are displayed superimposed on the region depicting the conjunctival blood vessels of the eye of the subject for learning.

接下來,參照圖35,對第二實施例的機械學習執行程式100b執行的處理一例進行說明。圖35是表示利用第二實施例的機械學習執行程式執行的處理一例的流程圖。機械學習執行程式100b執行至少一次圖35所示的處理。Next, an example of processing executed by the machine learning execution program 100b of the second embodiment will be described with reference to FIG. 35 . FIG. 35 is a flowchart showing an example of processing executed by the machine learning execution program of the second embodiment. The machine learning execution program 100b executes the processing shown in FIG. 35 at least once.

於步驟S31中,教師資料獲取功能101b獲取將學習用圖像資料以及學習用充血資料作為問題、將學習用檢查圖像資料以及學習用檢查結果資料中的至少一者作為答案的教師資料。In step S31, the teacher data acquisition function 101b acquires teacher data with at least one of the learning image data and the learning hyperemia data as the question and at least one of the learning inspection image data and the learning inspection result data as the answer.

於步驟S32中,機械學習執行功能102b將教師資料輸入至機械學習程式750b,使機械學習程式750b進行學習。In step S32, the machine learning execution function 102b inputs the teacher data into the machine learning program 750b, and makes the machine learning program 750b learn.

接下來,參照圖36及圖37,對第二實施例的乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法的具體例進行說明。36 and 37 , specific examples of the dry eye syndrome inspection program, the dry eye syndrome inspection apparatus, and the dry eye syndrome inspection method according to the second embodiment will be described.

圖36是表示第二實施例的乾眼症檢查裝置的硬體結構一例的圖。圖36所示的乾眼症檢查裝置20b是於已利用機械學習執行程式100b學習完畢的機械學習裝置700b的推論階段中，使用機械學習裝置700b推斷推論用被檢查體的眼睛中出現的乾眼症的症狀的裝置。另外，如圖36所示，乾眼症檢查裝置20b包括處理器21b、主儲存裝置22b、通訊介面23b、輔助儲存裝置24b、輸入輸出裝置25b、以及匯流排26b。FIG. 36 is a diagram showing an example of the hardware configuration of the dry eye disease inspection apparatus of the second embodiment. The dry eye disease inspection apparatus 20b shown in FIG. 36 is an apparatus that, in the inference stage of the machine learning device 700b that has finished learning under the machine learning execution program 100b, uses the machine learning device 700b to infer the symptoms of dry eye appearing in the eyes of the subject for inference. As shown in FIG. 36, the dry eye disease inspection apparatus 20b includes a processor 21b, a main storage device 22b, a communication interface 23b, an auxiliary storage device 24b, an input/output device 25b, and a bus 26b.

處理器21b例如為CPU,讀出並執行後述的乾眼症檢查程式200b,以達成乾眼症檢查程式200b所具有的各功能。另外,處理器21b亦可讀出並執行乾眼症檢查程式200b以外的程式,以於達成乾眼症檢查程式200b所具有的各功能的基礎上達成必要的功能。The processor 21b is, for example, a CPU, and reads out and executes the dry eye syndrome inspection program 200b described later, so as to achieve each function of the dry eye syndrome inspection program 200b. In addition, the processor 21b may read out and execute programs other than the dry eye disease inspection program 200b, so as to achieve necessary functions in addition to the respective functions possessed by the dry eye disease inspection program 200b.

主儲存裝置22b例如為RAM,預先儲存有由處理器21b讀出並執行的乾眼症檢查程式200b以及其他程式。The main storage device 22b is, for example, a RAM, and stores in advance the dry eye disease examination program 200b and other programs read and executed by the processor 21b.

通訊介面23b是用於經由圖36所示的網路NW而與機械學習裝置700b以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 23b is an interface circuit for performing communication with the machine learning apparatus 700b and other devices via the network NW shown in FIG. 36 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置24b例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 24b is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置25b例如為輸入輸出端口。輸入輸出裝置25b例如連接有圖36所示的鍵盤821b、滑鼠822b、顯示器920b。鍵盤821b及滑鼠822b例如用於輸入為了操作乾眼症檢查裝置20b所必需的資料的作業中。顯示器920b例如顯示乾眼症檢查裝置20b的圖形使用者介面。The input/output device 25b is, for example, an input/output port. The input/output device 25b is connected to, for example, a keyboard 821b, a mouse 822b, and a display 920b shown in FIG. 36 . The keyboard 821b and the mouse 822b are used, for example, in the operation of inputting data necessary to operate the dry eye disease inspection apparatus 20b. The display 920b displays, for example, a graphical user interface of the dry eye disease inspection apparatus 20b.

匯流排26b將處理器21b、主儲存裝置22b、通訊介面23b、輔助儲存裝置24b及輸入輸出裝置25b連接，以使該些能夠相互進行資料的收發。The bus 26b connects the processor 21b, the main storage device 22b, the communication interface 23b, the auxiliary storage device 24b, and the input/output device 25b so that they can exchange data with one another.

圖37是表示第二實施例的乾眼症檢查程式的軟體結構一例的圖。乾眼症檢查裝置20b使用處理器21b讀出並執行乾眼症檢查程式200b，以達成圖37所示的資料獲取功能201b及症狀推斷功能202b。FIG. 37 is a diagram showing an example of the software configuration of the dry eye syndrome examination program of the second embodiment. The dry eye disease inspection apparatus 20b uses the processor 21b to read out and execute the dry eye disease inspection program 200b, thereby achieving the data acquisition function 201b and the symptom estimation function 202b shown in FIG. 37.

資料獲取功能201b獲取推論用圖像資料、以及推論用充血資料。The data acquisition function 201b acquires image data for inference and congestion data for inference.

推論用圖像資料是表示描繪有推論用被檢查體的眼睛的圖像的資料。推論用圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。The image data for inference is data representing an image in which the eyes of the subject for inference are drawn. The inference image is captured using, for example, a camera mounted on a smartphone.

推論用充血資料是表示推論用被檢查體的眼睛的充血程度的資料。例如，推論用充血資料可表示基於推論用圖像資料而算出、且表現推論用被檢查體的眼睛的充血程度的值。具體而言，推論用充血資料可表示以下的值，即，於將以彩色描繪有推論用被檢查體的眼睛的推論用圖像轉換為僅綠色陣列的圖像、並使分配至該圖像所含的各畫素的亮度反轉而得的灰階圖像中，與以白色描繪出結膜的血管的區域的面積對應的值。The hyperemia data for inference are data expressing the degree of hyperemia of the eyes of the subject for inference. For example, the hyperemia data for inference may represent a value calculated from the image data for inference that expresses the degree of hyperemia of the eye of the subject for inference. Specifically, the hyperemia data for inference may represent a value corresponding to the area of the region in which the conjunctival blood vessels are drawn in white, in a grayscale image obtained by converting the color inference image depicting the eye of the subject for inference into a green-channel-only image and inverting the luminance assigned to each pixel of that image.

進而，該值亦可使用藉由在評價該面積之前自該灰階圖像剪切描繪出結膜的區域而生成的圖像來計算。該值為表示以下情況的值：描繪出結膜的血管的區域的面積越大，眼睛的充血程度越大，描繪出結膜的血管的區域的面積越小，眼睛的充血程度越小。Furthermore, this value may be calculated using an image generated by cropping, from the grayscale image, the region depicting the conjunctiva before the area is evaluated. The value indicates that the larger the area of the region depicting the conjunctival blood vessels, the greater the degree of hyperemia of the eye, and the smaller that area, the smaller the degree of hyperemia.
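A minimal sketch of the hyperemia-value computation described above, assuming 8-bit RGB pixels. The bright-pixel threshold and the use of a pixel fraction rather than a raw area are hypothetical choices not fixed by the source:

```python
def hyperemia_value(rgb_image, threshold=128):
    """Toy version of the hyperemia score described above. `rgb_image` is
    a 2-D list of (r, g, b) tuples with 8-bit channels. Blood vessels
    absorb green light, so after keeping only the green channel and
    inverting its brightness they appear as bright pixels; the score is
    the fraction of pixels brighter than `threshold`."""
    total = 0
    bright = 0
    for row in rgb_image:
        for r, g, b in row:
            inverted = 255 - g          # green channel only, brightness inverted
            total += 1
            if inverted > threshold:
                bright += 1
    return bright / total               # larger -> more vessel area -> more hyperemia

# 2x2 toy image: two "vessel" pixels (low green) and two "sclera" pixels (high green).
img = [[(200, 40, 40), (230, 230, 230)],
       [(190, 60, 50), (240, 240, 235)]]
print(hyperemia_value(img))  # 0.5
```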

或者，推論用充血資料亦可表示使用用戶介面而輸入、且表現推論用被檢查體的眼睛的充血程度的值。該用戶介面例如被顯示於圖36所示的顯示器920b上。或者，該用戶介面被顯示於推論用被檢查體所簽約的智慧型手機中搭載的觸控面板顯示器上。Alternatively, the hyperemia data for inference may represent a value entered through a user interface that expresses the degree of hyperemia of the eye of the subject for inference. This user interface is displayed, for example, on the display 920b shown in FIG. 36, or on the touch-panel display of a smartphone under contract to the subject for inference.

症狀推斷功能202b將推論用圖像資料及推論用充血資料輸入至已利用機械學習執行功能102b學習完畢的機械學習程式750b，並使機械學習程式750b推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。例如，症狀推斷功能202b將所述兩個資料輸入至機械學習程式750b，以推斷表現推論用被檢查體的眼睛中出現的角結膜上皮損傷的程度的數值。The symptom estimation function 202b inputs the image data for inference and the hyperemia data for inference into the machine learning program 750b that has finished learning under the machine learning execution function 102b, and causes the machine learning program 750b to infer the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom estimation function 202b inputs the two data into the machine learning program 750b to infer a numerical value expressing the degree of corneal and conjunctival epithelial damage appearing in the eye of the subject for inference.

然後，症狀推斷功能202b使機械學習程式750b輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。例如，症狀推斷功能202b使機械學習程式750b輸出表示表現推論用被檢查體的眼睛中出現的角結膜上皮損傷的程度的數值的症狀資料。該情況下，症狀資料例如用於在顯示器920b上顯示進行自我表示、且表現推論用被檢查體的眼睛中出現的角結膜上皮損傷的程度的數值。Then, the symptom estimation function 202b causes the machine learning program 750b to output symptom data indicating the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom estimation function 202b causes the machine learning program 750b to output symptom data representing a numerical value expressing the degree of corneal and conjunctival epithelial damage appearing in the eye of the subject for inference. In this case, the symptom data are used, for example, to display on the display 920b a numerical value expressing the degree of corneal and conjunctival epithelial damage appearing in the eye of the subject for inference.

接下來,參照圖38,對第二實施例的乾眼症檢查程式200b執行的處理一例進行說明。圖38是表示利用第二實施例的乾眼症檢查程式執行的處理一例的流程圖。Next, with reference to FIG. 38 , an example of the processing executed by the dry eye syndrome examination program 200 b of the second embodiment will be described. FIG. 38 is a flowchart showing an example of processing performed by the dry eye syndrome examination program of the second embodiment.

於步驟S41中,資料獲取功能201b獲取推論用圖像資料、以及推論用充血資料。In step S41, the data acquisition function 201b acquires the image data for inference and the hyperemia data for inference.

於步驟S42中，症狀推斷功能202b將推論用圖像資料及推論用充血資料輸入至已學習完畢的機械學習程式750b，以推斷推論用被檢查體的眼睛中出現的乾眼症的症狀，並使機械學習程式750b輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。In step S42, the symptom estimation function 202b inputs the image data for inference and the hyperemia data for inference into the machine learning program 750b that has finished learning, infers the symptoms of dry eye appearing in the eyes of the subject for inference, and causes the machine learning program 750b to output symptom data indicating those symptoms.

以上,對第二實施例的機械學習執行程式、乾眼症檢查程式、機械學習執行裝置、乾眼症檢查裝置、機械學習執行方法及乾眼症檢查方法進行了說明。The machine learning execution program, the dry eye disease inspection program, the machine learning execution device, the dry eye disease inspection device, the machine learning execution method, and the dry eye disease inspection method of the second embodiment have been described above.

機械學習執行程式100b具備教師資料獲取功能101b、以及機械學習執行功能102b。The machine learning execution program 100b includes a teacher data acquisition function 101b and a machine learning execution function 102b.

教師資料獲取功能101b獲取將學習用圖像資料以及學習用充血資料作為問題、將學習用檢查圖像資料以及學習用檢查結果資料中的至少一者作為答案的教師資料。學習用圖像資料是表示描繪有學習用被檢查體的眼睛的圖像的資料。學習用充血資料是表示學習用被檢查體的眼睛的充血程度的資料。學習用檢查圖像資料是表示描繪有對學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的學習用被檢查體的眼睛的圖像的資料。學習用檢查結果資料是表示與乾眼症的症狀相關的檢查結果的資料。The teacher data acquisition function 101b acquires teacher data in which the image data for learning and the hyperemia data for learning form the question, and at least one of the examination image data for learning and the examination result data for learning forms the answer. The image data for learning are data representing an image depicting the eyes of the subject for learning. The hyperemia data for learning are data expressing the degree of hyperemia of the eyes of the subject for learning. The examination image data for learning are data representing an image depicting the eyes of the subject for learning at the time an examination related to the symptoms of dry eye was performed on those eyes. The examination result data for learning are data representing the result of the examination related to the symptoms of dry eye.

機械學習執行功能102b將教師資料輸入至機械學習程式750b,使機械學習程式750b進行學習。The machine learning execution function 102b inputs the teacher data into the machine learning program 750b, and causes the machine learning program 750b to learn.

藉此,機械學習執行程式100b可生成基於學習用圖像資料及學習用充血資料來預測與乾眼症的症狀相關的檢查結果的機械學習程式750b。Thereby, the machine learning execution program 100b can generate the machine learning program 750b that predicts the test results related to the symptoms of dry eye based on the learning image data and the learning hyperemia data.

另外,機械學習執行程式100b使用不僅包含學習用圖像資料而且包含學習用充血資料作為問題的教師資料來使機械學習程式750b進行學習。因此,機械學習執行程式100b可生成能夠精度更良好地預測與乾眼症的症狀相關的檢查結果的機械學習程式750b。In addition, the machine learning execution program 100b causes the machine learning program 750b to learn using the teacher data including not only the learning image data but also the learning hyperemia data as questions. Therefore, the machine learning execution program 100b can generate the machine learning program 750b capable of predicting the test results related to the symptoms of dry eye more accurately.

另外，機械學習執行程式100b獲取將學習用充血資料作為問題的一部分的教師資料，所述學習用充血資料表示基於學習用圖像資料而算出、且表現學習用被檢查體的眼睛的充血程度的值。藉此，機械學習執行程式100b可省去學習用被檢查體以及其他人以目視確認學習用被檢查體的眼睛的充血程度的勞力。In addition, the machine learning execution program 100b acquires teacher data in which hyperemia data for learning, representing a value calculated from the image data for learning that expresses the degree of hyperemia of the eyes of the subject for learning, form part of the question. This spares the subject for learning and other people the labor of visually checking the degree of hyperemia of the subject's eyes.

另外，機械學習執行程式100b獲取將使用用戶介面輸入的學習用充血資料作為問題的一部分的教師資料。藉此，機械學習執行程式100b可省去基於學習用圖像資料算出表現學習用被檢查體的眼睛的充血程度的值的處理。In addition, the machine learning execution program 100b acquires teacher data in which hyperemia data for learning entered through a user interface form part of the question. This allows the machine learning execution program 100b to omit the process of calculating, from the image data for learning, a value expressing the degree of hyperemia of the eyes of the subject for learning.

乾眼症檢查程式200b具備資料獲取功能201b、以及症狀推斷功能202b。The dry eye syndrome examination program 200b includes a data acquisition function 201b and a symptom estimation function 202b.

資料獲取功能201b獲取推論用圖像資料、以及推論用充血資料。推論用圖像資料是表示描繪有推論用被檢查體的眼睛的圖像的資料。推論用充血資料是表示推論用被檢查體的眼睛的充血程度的資料。The data acquisition function 201b acquires image data for inference and congestion data for inference. The image data for inference is data representing an image in which the eyes of the subject for inference are drawn. The hyperemia data for inference are data showing the degree of hyperemia of the eyes of the subject for inference.

症狀推斷功能202b將推論用圖像資料及推論用充血資料輸入至已利用機械學習執行程式100b學習完畢的機械學習程式750b,以推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。然後,症狀推斷功能202b使機械學習程式750b輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。The symptom inference function 202b inputs the inference image data and the inference hyperemia data to the machine learning program 750b that has been learned by the machine learning execution program 100b to infer the symptoms of dry eye appearing in the eyes of the inference subject. Then, the symptom estimating function 202b causes the machine learning program 750b to output symptom data indicating the symptoms of dry eye syndrome appearing in the eyes of the subject for inference.

藉此,乾眼症檢查程式200b無需實際實施與乾眼症的症狀相關的檢查,便可預測與乾眼症的症狀相關的檢查結果。Thereby, the dry eye disease examination program 200b can predict the examination result related to the symptoms of dry eye disease without actually carrying out the examination related to the symptoms of dry eye disease.

另外，乾眼症檢查程式200b獲取推論用充血資料，所述推論用充血資料表示基於推論用圖像資料而算出、且表現推論用被檢查體的眼睛的充血程度的值。藉此，乾眼症檢查程式200b可省去推論用被檢查體以及其他人以目視確認推論用被檢查體的眼睛的充血程度的勞力。In addition, the dry eye examination program 200b acquires hyperemia data for inference representing a value calculated from the image data for inference that expresses the degree of hyperemia of the eye of the subject for inference. This spares the subject for inference and other people the labor of visually checking the degree of hyperemia of the subject's eyes.

另外,乾眼症檢查程式200b獲取使用用戶介面輸入的推論用充血資料。藉此,乾眼症檢查程式200b可省去基於推論用圖像資料算出表現推論用被檢查體的眼睛的充血程度的值的處理。In addition, the dry eye examination program 200b acquires the hyperemia data for inference input using the user interface. Thereby, the dry eye examination program 200b can omit the process of calculating the value representing the degree of hyperemia of the eye of the subject for inference based on the image data for inference.

接下來，列舉使機械學習程式750b實際進行學習來推斷乾眼症的症狀的例子，並將比較例與第二實施例的實施例加以對比來說明藉由機械學習執行程式100b起到的效果的具體例。Next, examples in which the machine learning program 750b was actually trained to infer the symptoms of dry eye are given, and specific examples of the effect of the machine learning execution program 100b are described by comparing comparative examples with working examples of the second embodiment.

第一，對在使用螢光素染色試劑實施了檢查的情況下藉由機械學習執行程式100b起到的效果的具體例進行說明。First, a specific example of the effect of the machine learning execution program 100b when the examination was performed using the fluorescein staining reagent will be described.

該情況下的比較例是將自所述表現學習用被檢查體的眼睛正常的560個教師資料及表現學習用被檢查體的眼睛異常的560個教師資料中除去學習用充血資料後的資料作為教師資料來使機械學習程式進行學習的例子。The comparative example in this case trained a machine learning program on teacher data obtained by removing the hyperemia data for learning from the 560 pieces of teacher data indicating that the eyes of the subject for learning are normal and the 560 pieces indicating that they are abnormal.

關於如此般進行了學習的機械學習程式，若使用所述表現學習用被檢查體的眼睛正常的80個測試資料及表現學習用被檢查體的眼睛異常的80個測試資料來評價特性，則示出預測精度為73%且假陰性率為34%。When the characteristics of the machine learning program trained in this way were evaluated using the 80 pieces of test data indicating that the eyes of the subject for learning are normal and the 80 pieces indicating that they are abnormal, the prediction accuracy was 73% and the false negative rate was 34%.

另一方面，關於利用機械學習執行程式100b進行了學習的機械學習程式750b，若使用相同的測試資料來評價特性，則示出預測精度為78%且假陰性率為11%。該特性優於比較例的機械學習程式的特性。On the other hand, when the characteristics of the machine learning program 750b trained by the machine learning execution program 100b were evaluated using the same test data, the prediction accuracy was 78% and the false negative rate was 11%. These characteristics are superior to those of the machine learning program of the comparative example.

再者，預測精度為異常群組所含的學習用圖像中由機械學習程式預測為異常的張數X、及正常群組所含的學習用圖像中由機械學習程式預測為正常的張數Y的合計X+Y相對於測試資料的總數的比例。因此，該比較例的情況下，預測精度是利用((X+Y)/160)×100來算出。Here, the prediction accuracy is the ratio, to the total number of test data, of the sum X + Y of the number X of learning images in the abnormal group predicted by the machine learning program to be abnormal and the number Y of learning images in the normal group predicted to be normal. In this comparative example, therefore, the prediction accuracy is calculated as ((X + Y) / 160) × 100.

另外，假陰性率為異常群組所含的學習用圖像中由機械學習程式預測為正常的張數相對於異常群組所含的學習用圖像的合計的比例。因此，該比較例的情況下，假陰性率利用((80-X)/80)×100來算出。The false negative rate is the ratio of the number of learning images in the abnormal group predicted by the machine learning program to be normal to the total number of learning images in the abnormal group. In this comparative example, therefore, the false negative rate is calculated as ((80 − X) / 80) × 100.
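The two formulas above can be written directly as functions. The counts X = 70 and Y = 60 in the usage example are hypothetical, not the patent's reported figures:

```python
def prediction_accuracy(x_abnormal_correct, y_normal_correct, n_test=160):
    """Accuracy as defined above: correctly predicted abnormal (X) plus
    correctly predicted normal (Y), over all test items, as a percentage."""
    return (x_abnormal_correct + y_normal_correct) / n_test * 100

def false_negative_rate(x_abnormal_correct, n_abnormal=80):
    """Share of the abnormal-group test images that the program
    predicted as normal, as a percentage: ((80 - X) / 80) x 100."""
    return (n_abnormal - x_abnormal_correct) / n_abnormal * 100

print(prediction_accuracy(70, 60))   # 81.25
print(false_negative_rate(70))       # 12.5
```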

第二，對在使用麗絲胺綠染色試劑實施了檢查的情況下藉由機械學習執行程式100b起到的效果的具體例進行說明。Second, a specific example of the effect of the machine learning execution program 100b when the examination was performed using the lissamine green staining reagent will be described.

該情況下的比較例是將自所述表現學習用被檢查體的眼睛正常的540個教師資料及表現學習用被檢查體的眼睛異常的580個教師資料中除去學習用充血資料後的資料作為教師資料來使機械學習程式進行學習的例子。The comparative example in this case trained a machine learning program on teacher data obtained by removing the hyperemia data for learning from the 540 pieces of teacher data indicating that the eyes of the subject for learning are normal and the 580 pieces indicating that they are abnormal.

關於如此般進行了學習的機械學習程式，若使用所述表現學習用被檢查體的眼睛正常的80個測試資料及表現學習用被檢查體的眼睛異常的80個測試資料來評價特性，則示出預測精度為73%且假陰性率為34%。When the characteristics of the machine learning program trained in this way were evaluated using the 80 pieces of test data indicating that the eyes of the subject for learning are normal and the 80 pieces indicating that they are abnormal, the prediction accuracy was 73% and the false negative rate was 34%.

另一方面，關於利用機械學習執行程式100b進行了學習的機械學習程式750b，若使用相同的測試資料來評價特性，則示出預測精度為78%且假陰性率為11%。該特性優於比較例的機械學習程式的特性。On the other hand, when the characteristics of the machine learning program 750b trained by the machine learning execution program 100b were evaluated using the same test data, the prediction accuracy was 78% and the false negative rate was 11%. These characteristics are superior to those of the machine learning program of the comparative example.

再者,機械學習執行程式100b所具有的功能的至少一部分可藉由包括電路部的硬體達成。同樣地,乾眼症檢查程式200b所具有的功能的至少一部分可藉由包括電路部的硬體達成。另外,此種硬體例如為LSI、ASIC、FPGA、GPU。Furthermore, at least a part of the functions of the machine learning execution program 100b can be realized by hardware including a circuit portion. Likewise, at least a part of the functions of the dry eye syndrome examination program 200b can be realized by hardware including a circuit unit. In addition, such hardware is, for example, LSI, ASIC, FPGA, and GPU.

另外,機械學習執行程式100b所具有的功能的至少一部分亦可藉由軟體與硬體的協作來達成。同樣地,乾眼症檢查程式200b所具有的功能的至少一部分亦可藉由軟體與硬體的協作來達成。另外,該些硬體可統合成一個,亦可分成多個。In addition, at least a part of the functions of the machine learning execution program 100b can also be achieved by the cooperation of software and hardware. Similarly, at least a part of the functions of the dry eye syndrome checking program 200b can also be achieved by the cooperation of software and hardware. In addition, these hardwares may be integrated into one, or may be divided into multiple pieces.

另外,於第二實施例中,列舉機械學習執行裝置10b、機械學習裝置700b以及乾眼症檢查裝置20b為相互獨立的裝置的情況為例進行了說明,但並不限定於此。該些裝置亦可作為一個裝置來達成。In the second embodiment, the case where the machine learning execution device 10b, the machine learning device 700b, and the dry eye disease inspection device 20b are independent devices has been described as an example, but the present invention is not limited to this. The devices can also be implemented as one device.

接下來,參照圖39至圖44,對所述實施形態的第三實施例的具體例進行說明。Next, a specific example of the third example of the above-described embodiment will be described with reference to FIGS. 39 to 44 .

首先，參照圖39及圖40，對第三實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法的具體例進行說明。與第二實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法不同，第三實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法使用包含後述的學習用回答資料而非學習用充血資料作為問題的教師資料。First, specific examples of the machine learning execution program, machine learning execution device, and machine learning execution method of the third embodiment will be described with reference to FIGS. 39 and 40. Unlike those of the second embodiment, the machine learning execution program, machine learning execution device, and machine learning execution method of the third embodiment use teacher data whose question includes answer data for learning, described later, instead of hyperemia data for learning.

圖39是表示第三實施例的機械學習執行裝置的硬體結構一例的圖。圖39所示的機械學習執行裝置10c是於後述的機械學習裝置700c的學習階段中使機械學習裝置700c執行機械學習的裝置。另外，如圖39所示，機械學習執行裝置10c包括處理器11c、主儲存裝置12c、通訊介面13c、輔助儲存裝置14c、輸入輸出裝置15c、以及匯流排16c。FIG. 39 is a diagram showing an example of the hardware configuration of the machine learning execution device of the third embodiment. The machine learning execution device 10c shown in FIG. 39 is a device that causes the machine learning device 700c, described later, to execute machine learning in the learning stage of the machine learning device 700c. As shown in FIG. 39, the machine learning execution device 10c includes a processor 11c, a main storage device 12c, a communication interface 13c, an auxiliary storage device 14c, an input/output device 15c, and a bus 16c.

處理器11c例如為CPU,讀出並執行後述的機械學習執行程式100c,以達成機械學習執行程式100c所具有的各功能。另外,處理器11c亦可讀出並執行機械學習執行程式100c以外的程式,以於達成機械學習執行程式100c所具有的各功能的基礎上達成必要的功能。The processor 11c is, for example, a CPU, and reads out and executes the machine learning execution program 100c described later to achieve each function of the machine learning execution program 100c. In addition, the processor 11c may read out and execute programs other than the machine learning execution program 100c, so as to achieve necessary functions in addition to the functions of the machine learning execution program 100c.

主儲存裝置12c例如為RAM,預先儲存有由處理器11c讀出並執行的機械學習執行程式100c以及其他程式。The main storage device 12c is, for example, a RAM, and stores in advance a machine learning execution program 100c and other programs that are read and executed by the processor 11c.

通訊介面13c是用於經由圖39所示的網路NW而與機械學習裝置700c以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 13c is an interface circuit for performing communication with the machine learning device 700c and other devices via the network NW shown in FIG. 39 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置14c例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 14c is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置15c例如為輸入輸出端口。輸入輸出裝置15c例如連接有圖39所示的鍵盤811c、滑鼠812c、顯示器910c。鍵盤811c及滑鼠812c例如用於輸入為了操作機械學習執行裝置10c所必需的資料的作業中。顯示器910c例如顯示機械學習執行裝置10c的圖形使用者介面。The input/output device 15c is, for example, an input/output port, to which, for example, the keyboard 811c, the mouse 812c, and the display 910c shown in FIG. 39 are connected. The keyboard 811c and the mouse 812c are used, for example, for entering data necessary to operate the machine learning execution device 10c. The display 910c displays, for example, the graphical user interface of the machine learning execution device 10c.

The bus 16c connects the processor 11c, the main storage device 12c, the communication interface 13c, the auxiliary storage device 14c, and the input/output device 15c so that they can exchange data with one another.

FIG. 40 is a diagram showing an example of the software configuration of the machine learning execution program of the third embodiment. The machine learning execution device 10c uses the processor 11c to read out and execute the machine learning execution program 100c, thereby realizing the teacher data acquisition function 101c and the machine learning execution function 102c shown in FIG. 40.

The teacher data acquisition function 101c acquires teacher data in which one piece of learning image data together with one piece of learning answer data forms the question, and at least one of one piece of examination image data for learning and examination result data for learning forms the answer.

The learning image data is data that forms part of the question in the teacher data and represents a learning image depicting an eye of the learning subject. The learning image is captured, for example, with a camera mounted on a smartphone.

The learning answer data is data representing the results of answers to questions about the subjective eye symptoms experienced by the learning subject.

Examples of the questions represented by the learning answer data include whether the learning subject experiences "blurred vision", "dazzled eyes", "difficulty keeping the eyes open", "a foreign-body sensation in the eyes", or "eye discomfort". Other examples include questions about "eye strain", "dry eyes", "heavy eyes", and "eye redness".

The answer to a question about subjective eye symptoms may, for example, be chosen from the five grades "0: no symptoms at all", "1: hardly bothered", "2: slightly bothered", "3: bothered", and "4: very bothered". Alternatively, the answer may be chosen from the two options "yes" and "no".
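The two answer formats above can be turned into numeric features for a learning algorithm. The following is a minimal sketch of such an encoding; the function name, the English answer labels, and the flat-vector layout are illustrative assumptions, not part of the specification.

```python
# Hypothetical encoding of questionnaire answers into a numeric feature list.
FIVE_GRADE = {
    "no symptoms at all": 0,
    "hardly bothered": 1,
    "slightly bothered": 2,
    "bothered": 3,
    "very bothered": 4,
}

def encode_answers(answers):
    """Map {question: answer} to numeric features.

    Five-grade answers become integers 0-4; yes/no answers become 1/0.
    """
    features = []
    for question, answer in answers.items():
        if answer in FIVE_GRADE:
            features.append(FIVE_GRADE[answer])
        elif answer in ("yes", "no"):
            features.append(1 if answer == "yes" else 0)
        else:
            raise ValueError(f"unrecognized answer for {question!r}: {answer!r}")
    return features

example = {
    "blurred vision": "slightly bothered",
    "dazzled eyes": "no symptoms at all",
    "eye discomfort": "yes",
}
print(encode_answers(example))  # [2, 0, 1]
```

An encoding of this kind would apply equally to the learning answer data here and to the inference answer data described later, since both use the same question formats.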

The examination image data for learning is data that forms part of the answer in the teacher data and represents a learning examination image depicting an eye of the learning subject at the time an examination related to dry eye symptoms was performed on that eye. Examples of such examinations include an examination of the degree of corneal and conjunctival epithelial damage using a fluorescein staining reagent and an examination of the degree of mucin damage using a lissamine green staining reagent. The examination image data for learning is therefore, for example, data representing an image depicting an eye stained with fluorescein or lissamine green.

The examination result data for learning is data that forms part of the answer in the teacher data and represents the result of an examination related to dry eye symptoms. Such a result is, for example, a numerical value of 0 or more and 1 or less expressing the degree of corneal and conjunctival epithelial damage based on an examination using a fluorescein staining reagent, or a numerical value of 0 or more and 1 or less expressing the degree of mucin damage based on an examination using a lissamine green staining reagent.

When either of these numerical values is 0 or more and less than 0.5, the area of corneal and conjunctival epithelial damage or mucin damage stained with the fluorescein or lissamine green staining reagent is less than 30% of the total area of the eye, indicating that the eye of the learning subject is normal and that examination by an ophthalmologist is unnecessary. When the numerical value is 0.5 or more and 1 or less, the stained area of damage is 30% or more of the total area of the eye, indicating that the eye of the learning subject is abnormal and that examination by an ophthalmologist is required.
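The normal/abnormal decision rule described above can be sketched as follows. The function name is an invented stand-in; the specification only fixes the threshold, namely that scores in [0, 0.5) correspond to a stained area below 30% (normal) and scores in [0.5, 1] to 30% or more (abnormal).

```python
def needs_ophthalmologist(score):
    """Return True if a 0-1 examination score indicates an abnormal eye."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1 inclusive")
    # Scores of 0.5 or more correspond to a stained damage area of 30% or
    # more of the eye, i.e. an abnormal eye requiring examination.
    return score >= 0.5

print(needs_ophthalmologist(0.3))  # False: normal, no examination needed
print(needs_ophthalmologist(0.7))  # True: abnormal, examination required
```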

For example, the teacher data acquisition function 101c acquires 1280 pieces of teacher data. In this case, the 1280 pieces of learning image data represent 1280 learning images obtained by photographing each of the two eyes of eight learning subjects 20 times with a camera mounted on a smartphone, in four separate sessions. The 1280 pieces of learning answer data represent the 1280 corresponding results of answers to the questions about the subjective eye symptoms experienced by the learning subjects.

In this case, the 1280 pieces of examination image data for learning represent the learning examination images captured when the examination related to dry eye symptoms was performed on the eye of the learning subject depicted in each of the 1280 learning images. The 1280 pieces of examination result data for learning represent 1280 numerical values of 0 or more and 1 or less, each expressing the result of the examination related to dry eye symptoms performed when the corresponding learning examination image was captured.

In the following description, it is assumed as an example that the 1280 pieces of examination result data for learning consist of 640 pieces with values of 0 or more and less than 0.5, indicating that the eye of the learning subject is normal, and 640 pieces with values of 0.5 or more and 1 or less, indicating that the eye of the learning subject is abnormal.

The machine learning execution function 102c inputs the teacher data into the machine learning program 750c installed in the machine learning device 700c and causes the machine learning program 750c to learn. For example, the machine learning execution function 102c causes the machine learning program 750c, which includes a convolutional neural network, to learn by backpropagation.
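As a toy stand-in for this learning step, the sketch below trains a single logistic unit by gradient descent on synthetic (question, answer) pairs. The real program 750c is a convolutional neural network trained by backpropagation on images plus questionnaire answers; the shapes, learning rate, and data here are all invented, and only the forward/backward update pattern is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 6))          # 64 samples, 6 features (image stats + answers)
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0, -0.5])
y = (X @ true_w > 0.25).astype(float)    # binary label: 1 = abnormal eye

w = np.zeros(6)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass (sigmoid)
    grad_logit = p - y                        # d(log loss)/d(logit)
    w -= lr * X.T @ grad_logit / len(y)       # backward pass: weight update
    b -= lr * grad_logit.mean()               # backward pass: bias update

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the labels come from a linear rule, the unit fits the training set well; a CNN plays the analogous role for image inputs.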

For example, the machine learning execution function 102c inputs into the machine learning program 750c 560 pieces of teacher data whose examination result data for learning are numerical values of 0 or more and less than 0.5, indicating that the eye of the learning subject is normal. These 560 pieces of teacher data are those acquired from seven subjects selected from the eight.

Likewise, for example, the machine learning execution function 102c inputs into the machine learning program 750c 560 pieces of teacher data whose examination result data for learning are numerical values of 0.5 or more and 1 or less, indicating that the eye of the learning subject is abnormal. These 560 pieces of teacher data are also those acquired from the seven subjects selected from the eight.

The machine learning execution function 102c then causes the machine learning program 750c to learn using these 1120 pieces of teacher data.
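The subject-level split described above (8 learning subjects, 2 eyes each, photographed 20 times in each of 4 sessions, with 7 subjects used for training and the remaining subject held out) can be sketched as follows. The identifiers are invented purely for illustration.

```python
from itertools import product

subjects = [f"subject{i}" for i in range(1, 9)]
samples = [
    {"subject": s, "eye": eye, "session": sess, "shot": shot}
    for s, eye, sess, shot in product(subjects, ("left", "right"), range(4), range(20))
]
assert len(samples) == 1280   # 8 subjects x 2 eyes x 4 sessions x 20 shots

held_out = "subject8"         # the one subject excluded from training
train = [x for x in samples if x["subject"] != held_out]
test = [x for x in samples if x["subject"] == held_out]
print(len(train), len(test))  # 1120 160
```

Splitting at the subject level, rather than at the image level, keeps every image of the held-out subject out of training, which matches how the test data are described.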

The machine learning execution function 102c may also input into the machine learning program 750c, as test data, 80 pieces of teacher data that indicate a normal eye of the learning subject and were not used in the learning of the machine learning program 750c. These 80 pieces of teacher data are those acquired from the one subject not selected among the seven.

Likewise, the machine learning execution function 102c may input into the machine learning program 750c, as test data, 80 pieces of teacher data that indicate an abnormal eye of the learning subject and were not used in the learning of the machine learning program 750c. These 80 pieces of teacher data are also those acquired from the one subject not selected among the seven.

In this way, the machine learning execution function 102c can evaluate the characteristics of the machine learning program 750c obtained by training it with the 1120 pieces of teacher data.

When evaluating the characteristics of the machine learning program 750c using the test data, the machine learning execution function 102c may use, for example, class activation mapping. Class activation mapping is a technique for identifying which portions of the data input to a neural network form the basis of the result output by the neural network. One example of class activation mapping is gradient-weighted class activation mapping (Grad-CAM). Grad-CAM uses the gradients of the classification score with respect to the feature maps produced by the convolutions of a convolutional neural network to identify the regions of the input image that influence the classification beyond a certain degree.
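The Grad-CAM weighting step can be sketched in a few lines. Real Grad-CAM takes the feature maps and gradients from a trained CNN; here both are random stand-ins, so only the combination rule is shown: global-average-pool the gradients per channel, weight the feature maps by the pooled values, sum over channels, and apply ReLU so that only positively contributing regions remain.

```python
import numpy as np

rng = np.random.default_rng(1)
feature_maps = rng.random((8, 7, 7))       # 8 channels of 7x7 activations (faked)
grads = rng.standard_normal((8, 7, 7))     # d(score)/d(activation) (faked)

channel_weights = grads.mean(axis=(1, 2))  # global average pooling of gradients
cam = np.tensordot(channel_weights, feature_maps, axes=1)  # weighted channel sum
cam = np.maximum(cam, 0.0)                 # ReLU: keep positive contributions
cam /= cam.max() + 1e-8                    # normalize to a [0, 1] heat map
print(cam.shape)
```

Upsampled to the input resolution, such a heat map highlights the eye regions that drove the normal/abnormal classification, which is how it supports the evaluation described above.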

Next, an example of the processing executed by the machine learning execution program 100c of the third embodiment will be described with reference to FIG. 41. FIG. 41 is a flowchart showing an example of the processing executed by the machine learning execution program of the third embodiment. The machine learning execution program 100c executes the processing shown in FIG. 41 at least once.

In step S51, the teacher data acquisition function 101c acquires teacher data in which the learning image data and the learning answer data form the question, and at least one of the examination image data for learning and the examination result data for learning forms the answer.

In step S52, the machine learning execution function 102c inputs the teacher data into the machine learning program 750c and causes the machine learning program 750c to learn.

Next, specific examples of the dry eye examination program, dry eye examination device, and dry eye examination method of the third embodiment will be described with reference to FIGS. 42 and 43. Unlike those of the second embodiment, the dry eye examination program, device, and method of the third embodiment acquire the inference answer data described later rather than inference hyperemia data.

FIG. 42 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the third embodiment. The dry eye examination device 20c shown in FIG. 42 is a device that, in the inference phase of the machine learning device 700c that has finished learning under the machine learning execution program 100c, uses the machine learning device 700c to infer the dry eye symptoms appearing in the eye of the inference subject. As shown in FIG. 42, the dry eye examination device 20c includes a processor 21c, a main storage device 22c, a communication interface 23c, an auxiliary storage device 24c, an input/output device 25c, and a bus 26c.

The processor 21c is, for example, a CPU, and reads out and executes the dry eye examination program 200c described later to realize each function of the dry eye examination program 200c. The processor 21c may also read out and execute programs other than the dry eye examination program 200c to realize any further functions required in addition to those of the dry eye examination program 200c.

The main storage device 22c is, for example, a RAM, and stores in advance the dry eye examination program 200c and the other programs read out and executed by the processor 21c.

The communication interface 23c is an interface circuit for communicating with the machine learning device 700c and other equipment via the network NW shown in FIG. 42. The network NW is, for example, a LAN or an intranet.

The auxiliary storage device 24c is, for example, a hard disk drive, a solid state drive, a flash memory, or a ROM.

The input/output device 25c is, for example, an input/output port. The input/output device 25c is connected to, for example, the keyboard 821c, the mouse 822c, and the display 920c shown in FIG. 42. The keyboard 821c and the mouse 822c are used, for example, to input the data necessary for operating the dry eye examination device 20c. The display 920c displays, for example, the graphical user interface of the dry eye examination device 20c.

The bus 26c connects the processor 21c, the main storage device 22c, the communication interface 23c, the auxiliary storage device 24c, and the input/output device 25c so that they can exchange data with one another.

FIG. 43 is a diagram showing an example of the software configuration of the dry eye examination program of the third embodiment. The dry eye examination device 20c uses the processor 21c to read out and execute the dry eye examination program 200c, thereby realizing the data acquisition function 201c and the symptom estimation function 202c shown in FIG. 43.

The data acquisition function 201c acquires inference image data and inference answer data.

The inference image data is data representing an image depicting an eye of the inference subject. The inference image is captured, for example, with a camera mounted on a smartphone.

The inference answer data is data representing the results of answers to questions about the subjective eye symptoms experienced by the inference subject.

Examples of the questions represented by the inference answer data include whether the inference subject experiences "blurred vision", "dazzled eyes", "difficulty keeping the eyes open", "a foreign-body sensation in the eyes", or "eye discomfort". Other examples include questions about "eye strain", "dry eyes", "heavy eyes", and "eye redness".

As above, the answer to a question about subjective eye symptoms may be chosen from the five grades "0: no symptoms at all", "1: hardly bothered", "2: slightly bothered", "3: bothered", and "4: very bothered", or alternatively from the two options "yes" and "no".

The symptom estimation function 202c inputs the inference image data and the inference answer data into the machine learning program 750c that has finished learning under the machine learning execution function 102c, and causes the machine learning program 750c to infer the dry eye symptoms appearing in the eye of the inference subject. For example, the symptom estimation function 202c inputs these two pieces of data into the machine learning program 750c to infer a numerical value expressing the degree of corneal and conjunctival epithelial damage appearing in the eye of the inference subject.

The symptom estimation function 202c then causes the machine learning program 750c to output symptom data representing the dry eye symptoms appearing in the eye of the inference subject. For example, the symptom estimation function 202c causes the machine learning program 750c to output symptom data representing a numerical value expressing the degree of corneal and conjunctival epithelial damage appearing in the eye of the inference subject. In this case, the symptom data is used, for example, to display on the display 920c the numerical value expressing the degree of corneal and conjunctival epithelial damage appearing in the eye of the inference subject.

Next, an example of the processing executed by the dry eye examination program 200c of the third embodiment will be described with reference to FIG. 44. FIG. 44 is a flowchart showing an example of the processing executed by the dry eye examination program of the third embodiment.

In step S61, the data acquisition function 201c acquires the inference image data and the inference answer data.

In step S62, the symptom estimation function 202c inputs the inference image data and the inference answer data into the trained machine learning program 750c to infer the dry eye symptoms appearing in the eye of the inference subject, and causes the machine learning program 750c to output symptom data representing those symptoms.
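The inference flow of steps S61 and S62 can be sketched as follows: acquire the image and the questionnaire answers, feed both to the trained model, and report the predicted 0-1 damage score. `predict_damage_score` is an invented stand-in for the trained machine learning program 750c; its fixed arithmetic rule exists purely so the example runs.

```python
def predict_damage_score(image_pixels, answer_features):
    # Stand-in for the trained CNN: a bounded heuristic, not a real model.
    raw = 0.1 * sum(answer_features) + 0.000001 * sum(image_pixels)
    return min(max(raw, 0.0), 1.0)

def examine(image_pixels, answer_features):
    score = predict_damage_score(image_pixels, answer_features)  # step S62
    verdict = "abnormal" if score >= 0.5 else "normal"
    return score, verdict

image = [128] * 100        # step S61: dummy grayscale pixel values
answers = [2, 0, 1, 3, 0]  # step S61: encoded questionnaire answers
score, verdict = examine(image, answers)
print(f"score={score:.2f} -> {verdict}")
```

The returned score corresponds to the symptom data displayed on the display 920c.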

The machine learning execution program, dry eye examination program, machine learning execution device, dry eye examination device, machine learning execution method, and dry eye examination method of the third embodiment have been described above.

The machine learning execution program 100c includes the teacher data acquisition function 101c and the machine learning execution function 102c.

The teacher data acquisition function 101c acquires teacher data in which the learning image data and the learning answer data form the question, and at least one of the examination image data for learning and the examination result data for learning forms the answer. The learning image data is data representing an image depicting an eye of the learning subject. The learning answer data is data representing the results of answers to questions about the subjective eye symptoms experienced by the learning subject. The examination image data for learning is data representing an image depicting the eye of the learning subject at the time an examination related to dry eye symptoms was performed on that eye. The examination result data for learning is data representing the result of an examination related to dry eye symptoms.

The machine learning execution function 102c inputs the teacher data into the machine learning program 750c and causes the machine learning program 750c to learn.

The machine learning execution program 100c can thereby generate a machine learning program 750c that predicts examination results related to dry eye symptoms based on the learning image data and the learning answer data.

Moreover, the machine learning execution program 100c causes the machine learning program 750c to learn using teacher data whose question includes not only the learning image data but also the learning answer data. The machine learning execution program 100c can therefore generate a machine learning program 750c capable of predicting examination results related to dry eye symptoms with higher accuracy.

The dry eye examination program 200c includes the data acquisition function 201c and the symptom estimation function 202c.

The data acquisition function 201c acquires inference image data and inference answer data. The inference image data is data representing an image depicting an eye of the inference subject. The inference answer data is data representing the results of answers to questions about the subjective eye symptoms experienced by the inference subject.

The symptom estimation function 202c inputs the inference image data and the inference answer data into the machine learning program 750c that has finished learning under the machine learning execution program 100c to infer the dry eye symptoms appearing in the eye of the inference subject. The symptom estimation function 202c then causes the machine learning program 750c to output symptom data representing the dry eye symptoms appearing in the eye of the inference subject.

The dry eye examination program 200c can thereby predict examination results related to dry eye symptoms without the examination actually being performed.

Next, examples in which the machine learning program 750c was actually trained to infer dry eye symptoms are given, and concrete examples of the effects achieved by the machine learning execution program 100c are described by comparing a comparative example with a working example of the third embodiment.

First, a concrete example of the effect achieved by the machine learning execution program 100c when the examination was performed using a fluorescein staining reagent will be described.

The comparative example in this case is an example in which a machine learning program was trained using, as teacher data, the 560 pieces of teacher data indicating a normal eye of the learning subject and the 560 pieces of teacher data indicating an abnormal eye, with the learning answer data removed.

When the characteristics of the machine learning program trained in this way were evaluated using the 80 pieces of test data indicating a normal eye of the learning subject and the 80 pieces of test data indicating an abnormal eye, the prediction accuracy was 73% and the false negative rate was 34%.

In contrast, when the machine learning program 750c trained with the machine learning execution program 100c was evaluated using the same test data, the prediction accuracy was 77% and the false negative rate was 12%, characteristics superior to those of the machine learning program of the comparative example. In this case, the questions represented by the learning answer data were the five questions on "blurred vision", "dazzled eyes", "difficulty keeping the eyes open", "a foreign-body sensation in the eyes", and "eye discomfort".

The prediction accuracy is the ratio of the sum X+Y to the total number of test data, where X is the number of learning images in the abnormal group predicted as abnormal by the machine learning program and Y is the number of learning images in the normal group predicted as normal. In this comparative example, the prediction accuracy is therefore calculated as ((X+Y)/160)×100.

The false negative rate is the ratio of the number of learning images in the abnormal group predicted as normal by the machine learning program to the total number of learning images in the abnormal group. In this comparative example, the false negative rate is therefore calculated as ((80−X)/80)×100.
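The two formulas above, for an 80-normal / 80-abnormal test set, can be written out directly. The sample counts in the usage lines are invented; only the arithmetic follows the text.

```python
def prediction_accuracy(x, y, total=160):
    """((X + Y) / total) * 100, where X = abnormal images predicted abnormal
    and Y = normal images predicted normal."""
    return (x + y) / total * 100

def false_negative_rate(x, abnormal_total=80):
    """((abnormal_total - X) / abnormal_total) * 100: the share of abnormal
    images wrongly predicted as normal."""
    return (abnormal_total - x) / abnormal_total * 100

# e.g. 70 of 80 abnormal images and 53 of 80 normal images classified correctly
print(prediction_accuracy(70, 53))   # 76.875
print(false_negative_rate(70))       # 12.5
```

For a screening application, the false negative rate is the more safety-relevant of the two, since a false negative means an abnormal eye is reported as not needing an ophthalmologist.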

Second, a concrete example of the effect achieved by the machine learning execution program 100c when the examination was performed using a lissamine green staining reagent will be described.

The comparative example in this case is an example in which a machine learning program was trained using, as teacher data, the 540 pieces of teacher data indicating a normal eye of the learning subject and the 580 pieces of teacher data indicating an abnormal eye, with the learning answer data removed.

When the characteristics of the machine learning program trained in this way were evaluated using the 80 pieces of test data indicating a normal eye of the learning subject and the 80 pieces of test data indicating an abnormal eye, the prediction accuracy was 73% and the false negative rate was 34%.

In contrast, when the machine learning program 750c trained with the machine learning execution program 100c was evaluated using the same test data, the prediction accuracy was 77% and the false negative rate was 12%, characteristics superior to those of the machine learning program of the comparative example. In this case as well, the questions represented by the learning answer data were the five questions on "blurred vision", "dazzled eyes", "difficulty keeping the eyes open", "a foreign-body sensation in the eyes", and "eye discomfort".

At least some of the functions of the machine learning execution program 100c may be realized by hardware including a circuit unit. Likewise, at least some of the functions of the dry eye examination program 200c may be realized by hardware including a circuit unit. Such hardware is, for example, an LSI, an ASIC, an FPGA, or a GPU.

At least some of the functions of the machine learning execution program 100c may also be realized by cooperation of software and hardware. Likewise, at least some of the functions of the dry eye examination program 200c may be realized by cooperation of software and hardware. Such hardware may be integrated into a single unit or divided into a plurality of units.

另外,於第三實施例中,列舉機械學習執行裝置10c、機械學習裝置700c以及乾眼症檢查裝置20c為相互獨立的裝置的情況為例進行了說明,但並不限定於此。該些裝置亦可作為一個裝置來達成。In addition, in the third embodiment, the case where the machine learning execution device 10c, the machine learning device 700c, and the dry eye disease inspection device 20c are independent devices has been described as an example, but the present invention is not limited to this. The devices can also be implemented as one device.

[第二實施例及第三實施例的變形例] 於所述第二實施例中，列舉了教師資料獲取功能101b獲取不包含第三實施例中所說明的學習用回答資料作為問題的一部分的教師資料的情況為例，但並不限定於此。教師資料獲取功能101b亦可獲取除了包含學習用充血資料之外亦包含學習用回答資料作為問題的教師資料。另外，該情況下，資料獲取功能201b除了獲取推論用充血資料之外亦獲取推論用回答資料。而且，該情況下，症狀推斷功能202b使機械學習程式750b在基於推論用充血資料之外亦基於推論用回答資料，推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。 [Variations of the second embodiment and the third embodiment] In the second embodiment, the case in which the teacher data acquisition function 101b acquires teacher data that do not include, as part of the question, the answer data for learning described in the third embodiment was taken as an example, but the present invention is not limited to this. The teacher data acquisition function 101b may also acquire teacher data that include, as the question, the answer data for learning in addition to the hyperemia data for learning. In this case, the data acquisition function 201b acquires the answer data for inference in addition to the hyperemia data for inference. Furthermore, in this case, the symptom estimation function 202b causes the machine learning program 750b to infer the symptoms of dry eye appearing in the eyes of the subject for inference based on the answer data for inference in addition to the hyperemia data for inference.

另外,於所述第三實施例中,列舉了教師資料獲取功能101c獲取不包含第二實施例中所說明的學習用充血資料作為問題的一部分的教師資料的情況為例,但並不限定於此。教師資料獲取功能101c亦可獲取除了包含學習用回答資料之外亦包含學習用充血資料作為問題的教師資料。另外,該情況下,資料獲取功能201c除了獲取推論用回答資料之外亦獲取推論用充血資料。而且,該情況下,症狀推斷功能202c使機械學習程式750c在基於推論用回答資料之外亦基於推論用充血資料,推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。In addition, in the third embodiment, the case where the teacher data acquisition function 101c acquires the teacher data that does not include the hyperemia data for learning described in the second embodiment as a part of the problem is exemplified, but it is not limited to this. The teacher data acquisition function 101c can also acquire teacher data that includes not only the answer data for learning but also the congestion data for learning as a question. Moreover, in this case, the data acquisition function 201c acquires the hyperemia data for inference in addition to the answer data for inference. In this case, the symptom estimating function 202c causes the machine learning program 750c to infer the symptoms of dry eye appearing in the eyes of the subject for inference based on the inference hyperemia data in addition to the inference response data.

接下來，對使用包含學習用充血資料及學習用回答資料兩者作為問題的教師資料進行學習、且基於推論用充血資料及推論用回答資料兩者推斷推論用被檢查體的眼睛中出現的乾眼症的症狀時起到的效果的具體例進行說明。另外，於以下的說明中，列舉教師資料獲取功能101b獲取學習用回答資料、資料獲取功能201b獲取推論用回答資料的情況為例進行說明。Next, a specific example of the effects obtained when training is performed using teacher data that include both the hyperemia data for learning and the answer data for learning as the question, and the symptoms of dry eye appearing in the eyes of the subject for inference are inferred based on both the hyperemia data for inference and the answer data for inference, will be described. In the following description, the case in which the teacher data acquisition function 101b acquires the answer data for learning and the data acquisition function 201b acquires the answer data for inference is taken as an example.

第一，對在使用螢光素染色試劑實施了檢查的情況下藉由機械學習執行程式100b起到的效果的具體例進行說明。First, a specific example of the effects obtained by the machine learning execution program 100b when the examination is performed using a fluorescein staining reagent will be described.

該情況下的比較例是將自所述表現學習用被檢查體的眼睛正常的560個教師資料及表現學習用被檢查體的眼睛異常的560個教師資料中除去學習用充血資料及學習用回答資料後的資料作為教師資料來使機械學習程式進行學習的例子。The comparative example in this case is an example in which the machine learning program is trained using, as teacher data, the data obtained by removing the hyperemia data for learning and the answer data for learning from the aforementioned 560 teacher data representing that the eyes of the subjects for learning are normal and the 560 teacher data representing that the eyes of the subjects for learning are abnormal.

關於如此般進行了學習的機械學習程式，若使用所述表現學習用被檢查體的眼睛正常的80個測試資料及表現學習用被檢查體的眼睛異常的80個測試資料來評價特性，則示出預測精度為73%且假陰性率為34%。When the characteristics of the machine learning program trained in this way are evaluated using the aforementioned 80 test data representing that the eyes of the subjects for learning are normal and the 80 test data representing that the eyes of the subjects for learning are abnormal, a prediction accuracy of 73% and a false negative rate of 34% are obtained.

另一方面，關於利用機械學習執行程式100b進行了學習的機械學習程式750b，若使用相同的測試資料來評價特性，則示出預測精度為84%且假陰性率為5%。該特性優於比較例的機械學習程式的特性。另外，該情況下，由學習用回答資料表示的詢問為「眼睛模糊」、「眼花」、「張著眼睛時辛苦」、「眼睛感到異物感」及「眼睛有不適感」這五個詢問。On the other hand, when the characteristics of the machine learning program 750b trained by the machine learning execution program 100b are evaluated using the same test data, a prediction accuracy of 84% and a false negative rate of 5% are obtained. These characteristics are superior to those of the machine learning program of the comparative example. In this case, the five questions represented by the answer data for learning are "blurred vision", "glare", "difficulty keeping the eyes open", "a foreign-body sensation in the eyes", and "eye discomfort".

再者，預測精度為異常群組所含的學習用圖像中由機械學習程式預測為異常的張數X、及正常群組所含的學習用圖像中由機械學習程式預測為正常的張數Y的合計X+Y相對於測試資料的總數的比例。因此，該比較例的情況下，預測精度是利用((X+Y)/160)×100來算出。Note that the prediction accuracy is the ratio, relative to the total number of test data, of the sum X+Y of the number X of learning images in the abnormal group that the machine learning program predicted to be abnormal and the number Y of learning images in the normal group that it predicted to be normal. Therefore, in the case of this comparative example, the prediction accuracy is calculated as ((X+Y)/160)×100.

另外，假陰性率為異常群組所含的學習用圖像中由機械學習程式預測為正常的張數相對於異常群組所含的學習用圖像的合計的比例。因此，該比較例的情況下，假陰性率利用((80-X)/80)×100來算出。In addition, the false negative rate is the ratio of the number of learning images in the abnormal group that the machine learning program predicted to be normal to the total number of learning images in the abnormal group. Therefore, in the case of this comparative example, the false negative rate is calculated as ((80-X)/80)×100.
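As an illustrative sketch (not part of the patent text), the two formulas above can be written directly in code. The counts X = 53 and Y = 64 below are hypothetical values chosen so that the results round to the 73% prediction accuracy and 34% false negative rate reported for the comparative example:

```python
def prediction_accuracy(x_abnormal_correct, y_normal_correct, total):
    """Percentage of test data classified correctly: ((X + Y) / total) * 100."""
    return (x_abnormal_correct + y_normal_correct) / total * 100

def false_negative_rate(x_abnormal_correct, abnormal_total):
    """Percentage of abnormal images predicted normal: ((N - X) / N) * 100."""
    return (abnormal_total - x_abnormal_correct) / abnormal_total * 100

# Hypothetical counts: X = 53 of 80 abnormal images correct, Y = 64 of 80 normal.
acc = prediction_accuracy(53, 64, 160)   # (117 / 160) * 100 = 73.125 -> "73%"
fnr = false_negative_rate(53, 80)        # (27 / 80) * 100 = 33.75 -> "34%"
```

Note that the false negative rate counts missed abnormal cases only, which is why the comparative example's 34% is flagged even though its overall accuracy is 73%.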

第二，對在使用麗絲胺綠染色試劑實施了檢查的情況下藉由機械學習執行程式100b起到的效果的具體例進行說明。Second, a specific example of the effects obtained by the machine learning execution program 100b when the examination is performed using a lissamine green staining reagent will be described.

該情況下的比較例是將自所述表現學習用被檢查體的眼睛正常的540個教師資料及表現學習用被檢查體的眼睛異常的580個教師資料中除去學習用充血資料及學習用回答資料後的資料作為教師資料來使機械學習程式進行學習的例子。The comparative example in this case is an example in which the machine learning program is trained using, as teacher data, the data obtained by removing the hyperemia data for learning and the answer data for learning from the aforementioned 540 teacher data representing that the eyes of the subjects for learning are normal and the 580 teacher data representing that the eyes of the subjects for learning are abnormal.

關於如此般進行了學習的機械學習程式，若使用所述表現學習用被檢查體的眼睛正常的80個測試資料及表現學習用被檢查體的眼睛異常的80個測試資料來評價特性，則示出預測精度為73%且假陰性率為34%。When the characteristics of the machine learning program trained in this way are evaluated using the aforementioned 80 test data representing that the eyes of the subjects for learning are normal and the 80 test data representing that the eyes of the subjects for learning are abnormal, a prediction accuracy of 73% and a false negative rate of 34% are obtained.

另一方面，關於利用機械學習執行程式100b進行了學習的機械學習程式750b，若使用相同的測試資料來評價特性，則示出預測精度為84%且假陰性率為5%。該特性優於比較例的機械學習程式的特性。另外，該情況下，由學習用回答資料表示的詢問為「眼睛模糊」、「眼花」、「張著眼睛時辛苦」、「眼睛感到異物感」及「眼睛有不適感」這五個詢問。On the other hand, when the characteristics of the machine learning program 750b trained by the machine learning execution program 100b are evaluated using the same test data, a prediction accuracy of 84% and a false negative rate of 5% are obtained. These characteristics are superior to those of the machine learning program of the comparative example. In this case, the five questions represented by the answer data for learning are "blurred vision", "glare", "difficulty keeping the eyes open", "a foreign-body sensation in the eyes", and "eye discomfort".

再者，所述乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法亦可於向推論用被檢查體的眼睛中滴入乾眼症滴眼劑的前後使用。藉由在此種時機下使用，所述乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法亦可成為驗證該乾眼症滴眼劑相對於推論用被檢查體擁有的乾眼症的症狀而具有的有效性的手段。The dry eye examination program, dry eye examination device, and dry eye examination method described above may also be used before and after dry eye eye drops are instilled into the eyes of the subject for inference. Used at such a timing, the dry eye examination program, dry eye examination device, and dry eye examination method can also serve as a means of verifying the effectiveness of the dry eye eye drops against the dry eye symptoms of the subject for inference.

接下來,參照圖45至圖50,對所述實施形態的第四實施例的具體例進行說明。Next, a specific example of the fourth example of the above-described embodiment will be described with reference to FIGS. 45 to 50 .

圖45是表示第四實施例的機械學習執行裝置的硬體結構一例的圖。圖45所示的機械學習執行裝置10d是於後述的機械學習裝置700d的學習階段中使機械學習裝置700d執行機械學習的裝置。另外，如圖45所示，機械學習執行裝置10d包括處理器11d、主儲存裝置12d、通訊介面13d、輔助儲存裝置14d、輸入輸出裝置15d、以及匯流排16d。FIG. 45 is a diagram showing an example of the hardware configuration of the machine learning execution device of the fourth embodiment. The machine learning execution device 10d shown in FIG. 45 is a device that causes a machine learning device 700d, described later, to execute machine learning in the learning phase of the machine learning device 700d. As shown in FIG. 45, the machine learning execution device 10d includes a processor 11d, a main storage device 12d, a communication interface 13d, an auxiliary storage device 14d, an input/output device 15d, and a bus 16d.

處理器11d例如為CPU,讀出並執行後述的機械學習執行程式100d,以達成機械學習執行程式100d所具有的各功能。另外,處理器11d亦可讀出並執行機械學習執行程式100d以外的程式,以於達成機械學習執行程式100d所具有的各功能的基礎上達成必要的功能。The processor 11d is, for example, a CPU, and reads out and executes the machine learning execution program 100d described later to achieve each function of the machine learning execution program 100d. In addition, the processor 11d may read out and execute programs other than the machine learning execution program 100d, so as to achieve necessary functions in addition to the functions of the machine learning execution program 100d.

主儲存裝置12d例如為RAM,預先儲存有由處理器11d讀出並執行的機械學習執行程式100d以及其他程式。The main storage device 12d is, for example, a RAM, and stores in advance a machine learning execution program 100d and other programs read and executed by the processor 11d.

通訊介面13d是用於經由圖45所示的網路NW而與機械學習裝置700d以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 13d is an interface circuit for performing communication with the machine learning apparatus 700d and other devices via the network NW shown in FIG. 45 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置14d例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 14d is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置15d例如為輸入輸出端口。輸入輸出裝置15d例如連接有圖45所示的鍵盤811d、滑鼠812d、顯示器910d。鍵盤811d及滑鼠812d例如用於輸入為了操作機械學習執行裝置10d所必需的資料的作業中。顯示器910d例如顯示機械學習執行裝置10d的圖形使用者介面。The input/output device 15d is, for example, an input/output port. To the input/output device 15d, for example, a keyboard 811d, a mouse 812d, and a display 910d shown in FIG. 45 are connected. The keyboard 811d and the mouse 812d are used, for example, for inputting data necessary for operating the machine learning execution device 10d. The display 910d, for example, displays a graphical user interface of the machine learning execution device 10d.

匯流排16d將處理器11d、主儲存裝置12d、通訊介面13d、輔助儲存裝置14d及輸入輸出裝置15d連接，以使該些能夠相互進行資料的收發。The bus 16d connects the processor 11d, the main storage device 12d, the communication interface 13d, the auxiliary storage device 14d, and the input/output device 15d so that they can exchange data with one another.

圖46是表示第四實施例的機械學習執行程式的軟體結構一例的圖。機械學習執行裝置10d使用處理器11d讀出並執行機械學習執行程式100d,以達成圖46所示的教師資料獲取功能101d及機械學習執行功能102d。FIG. 46 is a diagram showing an example of the software configuration of the machine learning execution program of the fourth embodiment. The machine learning execution device 10d uses the processor 11d to read out and execute the machine learning execution program 100d, so as to achieve the teacher data acquisition function 101d and the machine learning execution function 102d shown in FIG. 46 .

教師資料獲取功能101d獲取將一個學習用圖像資料及學習用開眼瞼資料作為問題、將一個學習用檢查圖像資料及學習用檢查結果資料中的至少一者作為答案的教師資料。The teacher data acquisition function 101d acquires teacher data having one image data for learning and one eyelid opening data for learning as the question, and at least one of one examination image data for learning and one examination result data for learning as the answer.

學習用圖像資料是構成教師資料的問題的一部分、且表示描繪有學習用被檢查體的眼睛的學習用圖像的資料。學習用圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。The image data for learning is a part of the problem constituting the teacher data, and represents the image for learning in which the eyes of the subject for learning are drawn. The learning image is captured by, for example, a camera mounted on a smartphone.

學習用開眼瞼資料是構成教師資料的問題的一部分、且表示最大開眼瞼時間的資料,所述最大開眼瞼時間為學習用被檢查體能夠連續張開被拍攝學習用圖像的眼睛的時間。The eyelid opening data for learning is a part of the problem constituting the teacher data and indicates the maximum eyelid opening time, which is the time during which the subject for learning can continuously open the eyes of which the learning image is captured.

例如，學習用開眼瞼資料可表示以下的最大開眼瞼時間，即，該最大開眼瞼時間是基於至少拍攝了自學習用被檢查體眨眼而張開眼睛的即刻起至下一次眨眼為止的期間的動畫中所反映出的眼瞼的活動來算出。For example, the eyelid opening data for learning may represent a maximum eyelid opening time calculated based on the eyelid movement captured in a moving image covering at least the period from immediately after the subject for learning blinks and opens the eyes until the next blink.

或者,學習用開眼瞼資料亦可表示基於學習用被檢查體的自我申報、且使用用戶介面而輸入的最大開眼瞼時間。該用戶介面例如被顯示於圖45所示的顯示器910d上。或者,該用戶介面被顯示於學習用被檢查體所簽約的智慧型手機中搭載的觸控面板顯示器上。Alternatively, the eyelid opening data for learning may represent the maximum eyelid opening time based on the self-report of the subject for learning and input using the user interface. The user interface is displayed, for example, on the display 910d shown in FIG. 45 . Alternatively, the user interface is displayed on a touch panel display mounted in a smartphone contracted by the subject for learning.
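One plausible way to compute such a maximum eyelid opening time from a moving image, assuming the blink events have already been detected and reduced to frame indices (the detection step itself, and the function name, are illustrative assumptions outside the patent text), is:

```python
def max_eyelid_opening_time(blink_frames, fps):
    """Longest interval, in seconds, between consecutive detected blinks.

    blink_frames: sorted frame indices at which a blink was detected
                  in the moving image.
    fps:          frame rate of the recording (frames per second).
    """
    if len(blink_frames) < 2:
        raise ValueError("need at least two blinks to bound an open-eye interval")
    # The eye is continuously open between consecutive blinks; take the longest gap.
    longest = max(b - a for a, b in zip(blink_frames, blink_frames[1:]))
    return longest / fps

# In a hypothetical 30 fps recording with blinks at these frame indices,
# the longest open interval runs from frame 150 to frame 480.
t = max_eyelid_opening_time([12, 150, 480, 510], 30)  # (480 - 150) / 30 = 11.0 s
```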

學習用檢查圖像資料是構成教師資料的答案的一部分、且表示描繪有對學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的學習用被檢查體的眼睛的學習用檢查圖像的資料。作為此種檢查，例如可列舉將螢光素染色試劑滴入眼中並使用狹縫燈(slit lamp)對眼睛進行觀察來調查淚液層破壞時間的檢查。因此，例如，學習用檢查圖像資料成為表示描繪有經螢光素染色並使用狹縫燈進行了觀察的眼睛的圖像或動畫的資料。The examination image data for learning is data that forms part of the answer of the teacher data and represents an examination image for learning depicting the eyes of the subject for learning at the time an examination related to the symptoms of dry eye was performed on those eyes. An example of such an examination is one in which a fluorescein staining reagent is instilled into the eye and the eye is observed with a slit lamp to investigate the tear film break-up time. Therefore, for example, the examination image data for learning is data representing an image or a moving image depicting an eye that has been stained with fluorescein and observed with a slit lamp.

學習用檢查結果資料是構成教師資料的答案的一部分、且表示與乾眼症的症狀相關的檢查結果的資料。作為此種檢查的結果，例如可列舉：基於將螢光素染色試劑滴入眼中並使用狹縫燈對眼睛進行觀察來調查淚液層破壞時間的檢查來表現淚液層破壞時間的長度的、為0以上且1以下的數值。The examination result data for learning is data that forms part of the answer of the teacher data and represents the result of an examination related to the symptoms of dry eye. An example of such an examination result is a numerical value of 0 or more and 1 or less that expresses the length of the tear film break-up time based on an examination in which a fluorescein staining reagent is instilled into the eye and the eye is observed with a slit lamp.

於該數值為0以上且小於0.5的情況下，淚液層破壞時間為10秒以上，因此表現出學習用被檢查體的眼睛正常，不需要由眼科醫生進行診察。另外，於該數值為0.5以上且1以下的情況下，淚液層破壞時間小於10秒，因此表現出學習用被檢查體的眼睛異常，需要由眼科醫生進行診察。When this numerical value is 0 or more and less than 0.5, the tear film break-up time is 10 seconds or more, indicating that the eye of the subject for learning is normal and that examination by an ophthalmologist is unnecessary. When the numerical value is 0.5 or more and 1 or less, the tear film break-up time is less than 10 seconds, indicating that the eye of the subject for learning is abnormal and that examination by an ophthalmologist is necessary.
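The mapping just described, from the 0-to-1 examination result value to a normal/abnormal screening outcome at the 0.5 threshold, can be sketched as follows (the function name is an illustrative assumption, not part of the patent):

```python
def interpret_result_value(v):
    """Map the examination result value (0 <= v <= 1) to a screening outcome.

    0 <= v < 0.5 : tear film break-up time >= 10 s -> "normal" (no referral)
    0.5 <= v <= 1: tear film break-up time < 10 s  -> "abnormal" (refer to an
                                                      ophthalmologist)
    """
    if not 0.0 <= v <= 1.0:
        raise ValueError("result value must be between 0 and 1")
    return "normal" if v < 0.5 else "abnormal"
```

The boundary value 0.5 falls on the abnormal side, matching the text's "0.5 or more and 1 or less" wording.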

例如，教師資料獲取功能101d獲取1280個教師資料。該情況下，1280個學習用圖像資料分別表示藉由實施四組使用搭載於智慧型手機上的照相機對8名學習用被檢查體的兩個眼睛分別進行20次拍攝的處理而獲取的1280張學習用圖像。另外，該情況下，1280個學習用開眼瞼資料表示分別示出表現學習用被檢查體的最大開眼瞼時間的值的1280個值。For example, the teacher data acquisition function 101d acquires 1280 teacher data. In this case, the 1280 image data for learning respectively represent the 1280 learning images obtained by performing, in four sessions, a process of photographing each of the two eyes of 8 subjects for learning 20 times with a camera mounted on a smartphone. In this case, the 1280 eyelid opening data for learning represent 1280 values, each expressing the maximum eyelid opening time of a subject for learning.

另外，該情況下，1280個學習用檢查圖像資料分別表示對所述1280張學習用圖像各個中描繪出的學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時所拍攝的學習用檢查圖像。另外，1280個學習用檢查結果資料表示1280個分別表示拍攝1280張學習用檢查圖像時所實施的與乾眼症的症狀相關的檢查結果的、為0以上且1以下的數值。In this case, the 1280 examination image data for learning respectively represent the examination images for learning captured when an examination related to the symptoms of dry eye was performed on the eye of the subject for learning depicted in each of the 1280 learning images. In addition, the 1280 examination result data for learning represent 1280 numerical values of 0 or more and 1 or less, each representing the result of the examination related to the symptoms of dry eye performed when the corresponding examination image for learning was captured.

再者，於以下的說明中，列舉以下情況為例進行說明：1280個學習用檢查結果資料包含表示表現學習用被檢查體的眼睛正常的為0以上且小於0.5的數值的620個學習用檢查結果資料、以及表示表現學習用被檢查體的眼睛異常的為0.5以上且1以下的數值的660個學習用檢查結果資料。In the following description, the case in which the 1280 examination result data for learning include 620 examination result data for learning with a numerical value of 0 or more and less than 0.5, representing that the eyes of the subjects for learning are normal, and 660 examination result data for learning with a numerical value of 0.5 or more and 1 or less, representing that the eyes of the subjects for learning are abnormal, is taken as an example.

機械學習執行功能102d將教師資料輸入至機械學習裝置700d中所安裝的機械學習程式750d,使機械學習程式750d進行學習。例如,機械學習執行功能102d使包括卷積類神經網路的機械學習程式750d藉由後向傳播進行學習。The machine learning execution function 102d inputs the teacher data into the machine learning program 750d installed in the machine learning device 700d, and causes the machine learning program 750d to learn. For example, the machine learning execution function 102d causes a machine learning program 750d including a convolutional neural network to learn by backpropagation.
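The patent does not disclose the network's code. As an illustrative, heavily simplified sketch of the gradient-based learning that backpropagation performs, the following trains a single logistic unit in place of the convolutional neural network, on hypothetical toy data; each weight is moved along the negative gradient of the cross-entropy loss, exactly the update rule backpropagation applies layer by layer in the full model:

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Gradient-descent training of one logistic unit (stand-in for backprop)."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                         # dLoss/dz for cross-entropy loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Probability that the input belongs to the 'abnormal' class (label 1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy, hypothetical features: label 1 ("abnormal") when the first value is large.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

A convolutional network adds convolution and pooling layers in front of such a unit, but the weight update propagated backward through those layers follows the same gradient rule.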

例如，機械學習執行功能102d將以下540個教師資料輸入至機械學習程式750d，即，所述540個教師資料包含表示表現學習用被檢查體的眼睛正常的、為0以上且小於0.5的數值的學習用檢查結果資料。該540個教師資料是自選自所述8名中的7名獲取的教師資料。For example, the machine learning execution function 102d inputs into the machine learning program 750d the 540 teacher data that include examination result data for learning with a numerical value of 0 or more and less than 0.5, representing that the eyes of the subjects for learning are normal. These 540 teacher data were acquired from 7 subjects selected from the 8.

另外，例如，機械學習執行功能102d將以下580個教師資料輸入至機械學習程式750d，即，所述580個教師資料包含表示表現學習用被檢查體的眼睛異常的、為0.5以上且1以下的數值的學習用檢查結果資料。該580個教師資料是自選自所述8名中的7名獲取的教師資料。In addition, for example, the machine learning execution function 102d inputs into the machine learning program 750d the 580 teacher data that include examination result data for learning with a numerical value of 0.5 or more and 1 or less, representing that the eyes of the subjects for learning are abnormal. These 580 teacher data were also acquired from the 7 subjects selected from the 8.

然後,機械學習執行功能102d利用所述1120個教師資料使機械學習程式750d進行學習。Then, the machine learning execution function 102d makes the machine learning program 750d learn by using the 1120 teacher data.

另外，例如，機械學習執行功能102d亦可將以下80個教師資料作為測試資料輸入至機械學習程式750d，即，所述80個教師資料表現學習用被檢查體的眼睛正常，且未用於機械學習程式750d的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102d may input into the machine learning program 750d, as test data, the 80 teacher data that represent normal eyes of a subject for learning and were not used in the training of the machine learning program 750d. These 80 teacher data were acquired from the 1 subject not selected among the 7.

另外，例如，機械學習執行功能102d亦可將以下80個教師資料作為測試資料輸入至機械學習程式750d，即，所述80個教師資料表現學習用被檢查體的眼睛異常，且未用於機械學習程式750d的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。Similarly, for example, the machine learning execution function 102d may input into the machine learning program 750d, as test data, the 80 teacher data that represent abnormal eyes of a subject for learning and were not used in the training of the machine learning program 750d. These 80 teacher data were also acquired from the 1 subject not selected among the 7.

藉此,機械學習執行功能102d可對利用所述1120個教師資料使機械學習程式750d進行學習而獲得的機械學習程式750d的特性進行評價。Thereby, the machine learning execution function 102d can evaluate the characteristics of the machine learning program 750d obtained by learning the machine learning program 750d using the 1120 teacher data.
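The evaluation described above relies on a subject-wise split: every record of one of the 8 subjects is held out for testing, so no subject contributes to both the training and the test set. A minimal sketch of such a split follows; the record format and helper name are illustrative assumptions, not part of the patent:

```python
def split_by_subject(records, held_out_subject):
    """Subject-wise train/test split, as in the 7-versus-1 split above.

    records: list of (subject_id, question, answer) tuples, one per image.
    All records of `held_out_subject` go to the test set; the rest train.
    """
    train = [r for r in records if r[0] != held_out_subject]
    test = [r for r in records if r[0] == held_out_subject]
    return train, test

# Hypothetical records from subjects 1..8, four images each (32 total).
records = [(s, {"image": "s%d_img%d.png" % (s, i)}, None)
           for s in range(1, 9) for i in range(4)]
train, test = split_by_subject(records, held_out_subject=8)
# 7 subjects * 4 images train; the held-out subject's 4 images test.
```

Splitting by subject rather than by image prevents the test score from being inflated by near-duplicate images of the same eye appearing in both sets.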

再者，機械學習執行功能102d當使用測試資料對機械學習程式750d的特性進行評價時，例如可使用類激活映射。類激活映射是使輸入至類神經網路的資料中成為由類神經網路輸出的結果的根據的部分明確的技術。作為類激活映射，例如可列舉梯度加權類激活映射。梯度加權類激活映射為以下技術：利用與由卷積類神經網路執行的卷積的特徵相關的分類得分的梯度，來確定輸入至卷積類神經網路的圖像中對分類給予了一定程度以上的影響的區域。When the machine learning execution function 102d evaluates the characteristics of the machine learning program 750d using the test data, it may use, for example, class activation mapping. Class activation mapping is a technique that makes clear which part of the data input to a neural network served as the basis for the result the neural network output. An example of class activation mapping is gradient-weighted class activation mapping. Gradient-weighted class activation mapping is a technique that uses the gradient of the classification score with respect to the convolutional features computed by a convolutional neural network to identify the regions of the input image that influenced the classification by more than a certain degree.
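A minimal sketch of the gradient-weighted class activation map computation described above, using made-up activation and gradient tensors in place of a real network's forward and backward pass (in practice the gradients come from backpropagating the class score):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map.

    feature_maps: (K, H, W) activations of the last convolutional layer.
    gradients:    (K, H, W) gradient of the class score w.r.t. those activations.
    Returns an (H, W) map; larger values mark regions with more influence
    on the classification.
    """
    weights = gradients.mean(axis=(1, 2))              # global-average-pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0.0)                        # ReLU: keep positive influence only

# Made-up 2-channel, 4x4 example tensors.
fmaps = np.ones((2, 4, 4))
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -0.25)])
cam = grad_cam(fmaps, grads)  # 0.5*1 + (-0.25)*1 = 0.25 at every position
```

Thresholding or normalizing the resulting map gives the "regions that influenced the classification by more than a certain degree" mentioned in the text.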

接下來,參照圖47,對第四實施例的機械學習執行程式100d執行的處理一例進行說明。圖47是表示利用第四實施例的機械學習執行程式執行的處理一例的流程圖。機械學習執行程式100d執行至少一次圖47所示的處理。Next, an example of processing executed by the machine learning execution program 100d of the fourth embodiment will be described with reference to FIG. 47 . 47 is a flowchart showing an example of processing executed by the machine learning execution program of the fourth embodiment. The machine learning execution program 100d executes the processing shown in FIG. 47 at least once.

於步驟S71中，教師資料獲取功能101d獲取將學習用圖像資料以及學習用開眼瞼資料作為問題、將學習用檢查圖像資料以及學習用檢查結果資料中的至少一者作為答案的教師資料。In step S71, the teacher data acquisition function 101d acquires teacher data having the image data for learning and the eyelid opening data for learning as the question, and at least one of the examination image data for learning and the examination result data for learning as the answer.

於步驟S72中,機械學習執行功能102d將教師資料輸入至機械學習程式750d,使機械學習程式750d進行學習。In step S72, the machine learning execution function 102d inputs the teacher data into the machine learning program 750d, so that the machine learning program 750d learns.

接下來，參照圖48至圖50，對第四實施例的乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法的具體例進行說明。Next, referring to FIG. 48 to FIG. 50, specific examples of the dry eye examination program, dry eye examination device, and dry eye examination method of the fourth embodiment will be described.

圖48是表示第四實施例的乾眼症檢查裝置的硬體結構一例的圖。圖48所示的乾眼症檢查裝置20d是於已利用機械學習執行程式100d學習完畢的機械學習裝置700d的推論階段中，使用機械學習裝置700d推斷推論用被檢查體的眼睛中出現的乾眼症的症狀的裝置。另外，如圖48所示，乾眼症檢查裝置20d包括處理器21d、主儲存裝置22d、通訊介面23d、輔助儲存裝置24d、輸入輸出裝置25d、以及匯流排26d。FIG. 48 is a diagram showing an example of the hardware configuration of the dry eye examination device of the fourth embodiment. The dry eye examination device 20d shown in FIG. 48 is a device that, in the inference phase of the machine learning device 700d trained by the machine learning execution program 100d, uses the machine learning device 700d to infer the symptoms of dry eye appearing in the eyes of the subject for inference. As shown in FIG. 48, the dry eye examination device 20d includes a processor 21d, a main storage device 22d, a communication interface 23d, an auxiliary storage device 24d, an input/output device 25d, and a bus 26d.

處理器21d例如為CPU,讀出並執行後述的乾眼症檢查程式200d,以達成乾眼症檢查程式200d所具有的各功能。另外,處理器21d亦可讀出並執行乾眼症檢查程式200d以外的程式,以於達成乾眼症檢查程式200d所具有的各功能的基礎上達成必要的功能。The processor 21d is, for example, a CPU, and reads out and executes a dry eye disease inspection program 200d described later, so as to achieve each function of the dry eye disease inspection program 200d. In addition, the processor 21d may read out and execute programs other than the dry eye disease inspection program 200d, so as to achieve necessary functions in addition to the functions of the dry eye disease inspection program 200d.

主儲存裝置22d例如為RAM,預先儲存有由處理器21d讀出並執行的乾眼症檢查程式200d以及其他程式。The main storage device 22d is, for example, a RAM, and stores the dry eye examination program 200d and other programs read out and executed by the processor 21d in advance.

通訊介面23d是用於經由圖48所示的網路NW而與機械學習裝置700d以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 23d is an interface circuit for performing communication with the machine learning apparatus 700d and other devices via the network NW shown in FIG. 48 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置24d例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 24d is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置25d例如為輸入輸出端口。輸入輸出裝置25d例如連接有圖48所示的鍵盤821d、滑鼠822d、顯示器920d。鍵盤821d及滑鼠822d例如用於輸入為了操作乾眼症檢查裝置20d所必需的資料的作業中。顯示器920d例如顯示乾眼症檢查裝置20d的圖形使用者介面。The input/output device 25d is, for example, an input/output port. The input/output device 25d is connected to, for example, a keyboard 821d, a mouse 822d, and a display 920d shown in FIG. 48 . The keyboard 821d and the mouse 822d are used, for example, in the operation of inputting data necessary to operate the dry eye examination apparatus 20d. The display 920d displays, for example, a graphical user interface of the dry eye disease inspection apparatus 20d.

匯流排26d將處理器21d、主儲存裝置22d、通訊介面23d、輔助儲存裝置24d及輸入輸出裝置25d連接，以使該些能夠相互進行資料的收發。The bus 26d connects the processor 21d, the main storage device 22d, the communication interface 23d, the auxiliary storage device 24d, and the input/output device 25d so that they can exchange data with one another.

圖49是表示第四實施例的乾眼症檢查程式的軟體結構一例的圖。乾眼症檢查裝置20d使用處理器21d讀出並執行乾眼症檢查程式200d,以達成圖49所示的資料獲取功能201d及症狀推斷功能202d。FIG. 49 is a diagram showing an example of a software configuration of a dry eye syndrome test program according to the fourth embodiment. The dry eye disease inspection device 20d uses the processor 21d to read out and execute the dry eye disease inspection program 200d, so as to achieve the data acquisition function 201d and the symptom estimation function 202d shown in FIG. 49 .

資料獲取功能201d獲取推論用圖像資料、以及推論用開眼瞼資料。The data acquisition function 201d acquires image data for inference and eyelid opening data for inference.

推論用圖像資料是表示描繪有推論用被檢查體的眼睛的圖像的資料。推論用圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。The image data for inference is data representing an image in which the eyes of the subject for inference are drawn. The inference image is captured by, for example, a camera mounted on a smartphone.

推論用開眼瞼資料是表示最大開眼瞼時間的資料,所述最大開眼瞼時間為推論用被檢查體能夠連續張開被拍攝推論用圖像的眼睛的時間。The eyelid opening data for inference is data indicating the maximum eyelid opening time, which is the time during which the subject for inference can continuously open the eyes of which the inference image is captured.

例如，推論用開眼瞼資料可表示以下的最大開眼瞼時間，即，該最大開眼瞼時間是基於至少拍攝了自推論用被檢查體眨眼而張開眼睛的即刻起至下一次眨眼為止的期間的動畫中所反映出的眼瞼的活動來算出。For example, the eyelid opening data for inference may represent a maximum eyelid opening time calculated based on the eyelid movement captured in a moving image covering at least the period from immediately after the subject for inference blinks and opens the eyes until the next blink.

或者,推論用開眼瞼資料亦可表示基於推論用被檢查體的自我申報、且使用用戶介面而輸入的最大開眼瞼時間。該用戶介面例如被顯示於圖48所示的顯示器920d上。或者,該用戶介面被顯示於推論用被檢查體所簽約的智慧型手機中搭載的觸控面板顯示器上。Alternatively, the eyelid opening data for inference may represent the maximum eyelid opening time based on the self-report of the subject for inference and input using the user interface. The user interface is displayed, for example, on the display 920d shown in FIG. 48 . Alternatively, the user interface is displayed on a touch panel display mounted on a smartphone contracted by the subject for inference.

症狀推斷功能202d將推論用圖像資料及推論用開眼瞼資料輸入至已利用機械學習執行功能102d學習完畢的機械學習程式750d，並使機械學習程式750d推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。例如，症狀推斷功能202d將所述兩個資料輸入至機械學習程式750d，以推斷表現推論用被檢查體的眼睛的淚液層破壞時間的長度的數值。The symptom estimation function 202d inputs the image data for inference and the eyelid opening data for inference into the machine learning program 750d that has been trained by the machine learning execution function 102d, and causes the machine learning program 750d to infer the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom estimation function 202d inputs the two data into the machine learning program 750d to infer a numerical value expressing the length of the tear film break-up time of the eye of the subject for inference.

然後，症狀推斷功能202d使機械學習程式750d輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。例如，症狀推斷功能202d使機械學習程式750d輸出表示表現推論用被檢查體的眼睛的淚液層破壞時間的長度的數值的症狀資料。該情況下，症狀資料例如用於在顯示器920d上顯示進行自我表示、且表現推論用被檢查體的眼睛的淚液層破壞時間的長度的數值。Then, the symptom estimation function 202d causes the machine learning program 750d to output symptom data representing the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom estimation function 202d causes the machine learning program 750d to output symptom data representing a numerical value expressing the length of the tear film break-up time of the eye of the subject for inference. In this case, the symptom data are used, for example, to display that numerical value on the display 920d.

接下來,參照圖50,對第四實施例的乾眼症檢查程式200d執行的處理一例進行說明。圖50是表示利用第四實施例的乾眼症檢查程式執行的處理一例的流程圖。Next, with reference to FIG. 50 , an example of the processing executed by the dry eye syndrome examination program 200d of the fourth embodiment will be described. FIG. 50 is a flowchart showing an example of processing performed by the dry eye syndrome examination program of the fourth embodiment.

於步驟S81中,資料獲取功能201d獲取推論用圖像資料、以及推論用開眼瞼資料。In step S81, the data acquisition function 201d acquires image data for inference and eyelid opening data for inference.

於步驟S82中,症狀推斷功能202d將推論用圖像資料及推論用開眼瞼資料輸入至已學習完畢的機械學習程式750d,以推斷推論用被檢查體的眼睛中出現的乾眼症的症狀,並使機械學習程式750d輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。In step S82, the symptom inference function 202d inputs the image data for inference and the eyelid opening data for inference into the machine learning program 750d that has been learned, so as to infer the symptoms of dry eye appearing in the eyes of the subject for inference, The machine learning program 750d is caused to output symptom data indicating the symptoms of dry eye appearing in the eyes of the subject for inference.

以上,對第四實施例的機械學習執行程式、乾眼症檢查程式、機械學習執行裝置、乾眼症檢查裝置、機械學習執行方法及乾眼症檢查方法進行了說明。The machine learning execution program, the dry eye disease inspection program, the machine learning execution device, the dry eye disease inspection device, the machine learning execution method, and the dry eye disease inspection method of the fourth embodiment have been described above.

機械學習執行程式100d具備教師資料獲取功能101d、以及機械學習執行功能102d。The machine learning execution program 100d includes a teacher data acquisition function 101d and a machine learning execution function 102d.

教師資料獲取功能101d獲取將學習用圖像資料以及學習用開眼瞼資料作為問題、將學習用檢查圖像資料以及學習用檢查結果資料中的至少一者作為答案的教師資料。學習用圖像資料是表示描繪有學習用被檢查體的眼睛的圖像的資料。學習用開眼瞼資料是表示最大開眼瞼時間的資料，所述最大開眼瞼時間為學習用被檢查體能夠連續張開被拍攝學習用圖像的眼睛的時間。學習用檢查圖像資料是表示描繪有對學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的學習用被檢查體的眼睛的圖像的資料。學習用檢查結果資料是表示與乾眼症的症狀相關的檢查結果的資料。The teacher data acquisition function 101d acquires teacher data having the image data for learning and the eyelid opening data for learning as a question, and at least one of the examination image data for learning and the examination result data for learning as an answer. The image data for learning is data representing an image in which the eyes of the subject for learning are drawn. The eyelid opening data for learning is data indicating the maximum eyelid opening time, which is the time during which the subject for learning can continuously keep open the eye of which the image for learning is captured. The examination image data for learning is data representing an image in which the eyes of the subject for learning are drawn when an examination related to the symptoms of dry eye is performed on the eyes of the subject for learning. The examination result data for learning is data representing the results of an examination related to the symptoms of dry eye.
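As a concrete illustration of this question/answer structure, one teacher datum could be represented as below. The field names and types are invented for this sketch; the patent does not prescribe a data layout. The record enforces the stated constraint that the answer contains at least one of the examination image and the examination result.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TeacherRecord:
    """One teacher datum of the fourth embodiment (hypothetical field names)."""
    # question: both items are present
    image: List[List[float]]        # image for learning (eye of the subject)
    max_eyelid_open_time: float     # maximum eyelid opening time, in seconds
    # answer: at least one of the two items must be present
    exam_image: Optional[List[List[float]]] = None  # examination image for learning
    exam_result: Optional[float] = None             # examination result, 0 to 1

    def __post_init__(self):
        if self.exam_image is None and self.exam_result is None:
            raise ValueError("the answer needs an examination image or result")
```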

機械學習執行功能102d將教師資料輸入至機械學習程式750d,使機械學習程式750d進行學習。The machine learning execution function 102d inputs the teacher data into the machine learning program 750d, and causes the machine learning program 750d to learn.

藉此,機械學習執行程式100d可生成基於學習用圖像資料及學習用開眼瞼資料來預測與乾眼症的症狀相關的檢查結果的機械學習程式750d。Thereby, the machine learning execution program 100d can generate the machine learning program 750d for predicting the test result related to the symptoms of dry eye based on the image data for learning and the eyelid opening data for learning.

另外，機械學習執行程式100d使用不僅包含學習用圖像資料而且包含學習用開眼瞼資料作為問題的教師資料來使機械學習程式750d進行學習。因此，機械學習執行程式100d可生成能夠精度更良好地預測與乾眼症的症狀相關的檢查結果的機械學習程式750d。In addition, the machine learning execution program 100d causes the machine learning program 750d to learn using teacher data that includes not only the image data for learning but also the eyelid opening data for learning as a question. Therefore, the machine learning execution program 100d can generate a machine learning program 750d capable of predicting the results of an examination related to the symptoms of dry eye with higher accuracy.

乾眼症檢查程式200d具備資料獲取功能201d、以及症狀推斷功能202d。The dry eye syndrome examination program 200d includes a data acquisition function 201d and a symptom estimation function 202d.

資料獲取功能201d獲取推論用圖像資料、以及推論用開眼瞼資料。推論用圖像資料是表示描繪有推論用被檢查體的眼睛的圖像的資料。推論用開眼瞼資料是表示最大開眼瞼時間的資料，所述最大開眼瞼時間為推論用被檢查體能夠連續張開被拍攝推論用圖像的眼睛的時間。The data acquisition function 201d acquires image data for inference and eyelid opening data for inference. The image data for inference is data representing an image in which the eyes of the subject for inference are drawn. The eyelid opening data for inference is data indicating the maximum eyelid opening time, which is the time during which the subject for inference can continuously keep open the eye of which the image for inference is captured.

症狀推斷功能202d將推論用圖像資料及推論用開眼瞼資料輸入至已利用機械學習執行程式100d學習完畢的機械學習程式750d，以推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。然後，症狀推斷功能202d使機械學習程式750d輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。The symptom inference function 202d inputs the image data for inference and the eyelid opening data for inference into the machine learning program 750d that has finished learning by means of the machine learning execution program 100d, to infer the symptoms of dry eye appearing in the eyes of the subject for inference. Then, the symptom inference function 202d causes the machine learning program 750d to output symptom data indicating the symptoms of dry eye appearing in the eyes of the subject for inference.

藉此,乾眼症檢查程式200d無需實際實施與乾眼症的症狀相關的檢查,便可預測與乾眼症的症狀相關的檢查結果。Thereby, the dry eye syndrome test program 200d can predict the test results related to the symptoms of the dry eye syndrome without actually carrying out the test related to the symptoms of the dry eye syndrome.

接下來，列舉使機械學習程式750d實際進行學習來推斷乾眼症的症狀的例子，並將比較例與第四實施例的實施例加以對比來說明藉由機械學習執行程式100d起到的效果的具體例。Next, an example in which the machine learning program 750d is actually made to learn and infer the symptoms of dry eye is described, and a specific example of the effect achieved by the machine learning execution program 100d is explained by comparing a comparative example with the example of the fourth embodiment.

該情況下的比較例是將自所述表現學習用被檢查體的眼睛正常的540個教師資料及表現學習用被檢查體的眼睛異常的580個教師資料中除去學習用開眼瞼資料後的資料作為教師資料來使機械學習程式進行學習的例子。The comparative example in this case is an example in which a machine learning program is made to learn using, as teacher data, data obtained by removing the eyelid opening data for learning from the 540 teacher data representing that the eyes of the subject for learning are normal and the 580 teacher data representing that the eyes of the subject for learning are abnormal.

關於如此般進行了學習的機械學習程式，若使用所述表現學習用被檢查體的眼睛正常的80個測試資料及表現學習用被檢查體的眼睛異常的80個測試資料來評價特性，則示出預測精度為72%且假陰性率為28%。When the characteristics of the machine learning program that has learned in this way are evaluated using the 80 test data representing that the eyes of the subject for learning are normal and the 80 test data representing that the eyes of the subject for learning are abnormal, the prediction accuracy is 72% and the false negative rate is 28%.

另一方面,關於利用機械學習執行程式100d進行了學習的機械學習程式750d,若使用相同的測試資料來評價特性,則示出預測精度為80%且假陰性率為20%。該特性優於比較例的機械學習程式的特性。On the other hand, the machine learning program 750d learned by the machine learning execution program 100d shows that the prediction accuracy is 80% and the false negative rate is 20% when the characteristics are evaluated using the same test data. This characteristic is superior to that of the machine learning program of the comparative example.

再者，預測精度為異常群組所含的學習用圖像中由機械學習程式預測為異常的張數X、及正常群組所含的學習用圖像中由機械學習程式預測為正常的張數Y的合計X+Y相對於測試資料的總數的比例。因此，該比較例的情況下，預測精度是利用((X+Y)/160)×100來算出。The prediction accuracy is the ratio of the sum X+Y to the total number of test data, where X is the number of learning images in the abnormal group that the machine learning program predicts as abnormal and Y is the number of learning images in the normal group that the machine learning program predicts as normal. Therefore, in the case of this comparative example, the prediction accuracy is calculated by ((X+Y)/160)×100.

另外，假陰性率為異常群組所含的學習用圖像中由機械學習程式預測為正常的張數相對於異常群組所含的學習用圖像的合計的比例。因此，該比較例的情況下，假陰性率利用((80-X)/80)×100來算出。The false negative rate is the ratio of the number of learning images in the abnormal group that the machine learning program predicts as normal to the total number of learning images in the abnormal group. Therefore, in the case of this comparative example, the false negative rate is calculated by ((80-X)/80)×100.
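The two formulas above can be written directly as functions. The counts X = 64 and Y = 64 used in the usage note are not stated in the text; they are one pair of integer counts consistent with the fourth embodiment's reported 80% accuracy and 20% false-negative rate on the 80 + 80 test images.

```python
# Accuracy and false-negative rate exactly as defined in the text, for a
# test set of 80 abnormal and 80 normal images (160 in total).

def prediction_accuracy(x, y, total=160):
    """((X + Y) / 160) x 100: X correct abnormal + Y correct normal."""
    return (x + y) / total * 100

def false_negative_rate(x, abnormal_total=80):
    """((80 - X) / 80) x 100: abnormal images mispredicted as normal."""
    return (abnormal_total - x) / abnormal_total * 100
```

For example, X = 64 and Y = 64 give an accuracy of 80.0 and a false-negative rate of 20.0, matching the embodiment's reported figures.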

再者,機械學習執行程式100d所具有的功能的至少一部分可藉由包括電路部的硬體達成。同樣地,乾眼症檢查程式200d所具有的功能的至少一部分可藉由包括電路部的硬體達成。另外,此種硬體例如為LSI、ASIC、FPGA、GPU。Furthermore, at least a part of the functions of the machine learning execution program 100d can be realized by hardware including a circuit portion. Likewise, at least a part of the functions of the dry eye disease examination program 200d can be realized by hardware including a circuit unit. In addition, such hardware is, for example, LSI, ASIC, FPGA, and GPU.

另外,機械學習執行程式100d所具有的功能的至少一部分亦可藉由軟體與硬體的協作來達成。同樣地,乾眼症檢查程式200d所具有的功能的至少一部分亦可藉由軟體與硬體的協作來達成。另外,該些硬體可統合成一個,亦可分成多個。In addition, at least a part of the functions of the machine learning execution program 100d can also be achieved by the cooperation of software and hardware. Similarly, at least a part of the functions of the dry eye syndrome checking program 200d can also be achieved by the cooperation of software and hardware. In addition, these hardwares may be integrated into one, or may be divided into multiple pieces.

另外,於第四實施形態中,列舉機械學習執行裝置10d、機械學習裝置700d以及乾眼症檢查裝置20d為相互獨立的裝置的情況為例進行了說明,但並不限定於此。該些裝置亦可作為一個裝置來達成。In addition, in the fourth embodiment, the case where the machine learning execution device 10d, the machine learning device 700d, and the dry eye disease inspection device 20d are independent devices has been described as an example, but the present invention is not limited to this. The devices can also be implemented as one device.

接下來，參照圖51至圖59，對所述實施形態的第五實施例的具體例進行說明。與第四實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法不同，第五實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法使用以下的教師資料，即，所述教師資料包含自後述的學習用圖像資料中剪切的學習用淚液彎液面圖像資料及學習用照明圖像資料中的至少一者作為問題。Next, a specific example of the fifth embodiment of the above-described embodiment will be described with reference to FIGS. 51 to 59. Unlike the machine learning execution program, machine learning execution device, and machine learning execution method of the fourth embodiment, those of the fifth embodiment use teacher data that includes, as a question, at least one of tear meniscus image data for learning and illumination image data for learning cut out from the image data for learning described later.

圖51是表示第五實施例的機械學習執行裝置的硬體結構一例的圖。圖51所示的機械學習執行裝置10e是於後述的機械學習裝置700e的學習階段中使機械學習裝置700e執行機械學習的裝置。另外,如圖51所示,機械學習執行裝置10e包括處理器11e、主儲存裝置12e、通訊介面13e、輔助儲存裝置14e、輸入輸出裝置15e、以及匯流排16e。FIG. 51 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the fifth embodiment. The machine learning execution device 10e shown in FIG. 51 is a device that causes the machine learning device 700e to execute machine learning in a learning phase of the machine learning device 700e to be described later. In addition, as shown in FIG. 51 , the machine learning execution device 10e includes a processor 11e, a main storage device 12e, a communication interface 13e, an auxiliary storage device 14e, an input/output device 15e, and a bus bar 16e.

處理器11e例如為CPU,讀出並執行後述的機械學習執行程式100e,以達成機械學習執行程式100e所具有的各功能。另外,處理器11e亦可讀出並執行機械學習執行程式100e以外的程式,以於達成機械學習執行程式100e所具有的各功能的基礎上達成必要的功能。The processor 11e is, for example, a CPU, and reads out and executes the machine learning execution program 100e described later to achieve each function of the machine learning execution program 100e. In addition, the processor 11e may read out and execute programs other than the machine learning execution program 100e, so as to achieve necessary functions in addition to each function possessed by the machine learning execution program 100e.

主儲存裝置12e例如為RAM,預先儲存有由處理器11e讀出並執行的機械學習執行程式100e以及其他程式。The main storage device 12e is, for example, a RAM, and stores in advance a machine learning execution program 100e and other programs read and executed by the processor 11e.

通訊介面13e是用於經由圖51所示的網路NW而與機械學習裝置700e以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 13e is an interface circuit for performing communication with the machine learning apparatus 700e and other devices via the network NW shown in FIG. 51 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置14e例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 14e is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置15e例如為輸入輸出端口。輸入輸出裝置15e例如連接有圖51所示的鍵盤811e、滑鼠812e、顯示器910e。鍵盤811e及滑鼠812e例如用於輸入為了操作機械學習執行裝置10e所必需的資料的作業中。顯示器910e例如顯示機械學習執行裝置10e的圖形使用者介面。The input/output device 15e is, for example, an input/output port. The input/output device 15e is connected to, for example, a keyboard 811e, a mouse 812e, and a display 910e shown in FIG. 51 . The keyboard 811e and the mouse 812e are used, for example, for inputting data necessary for operating the machine learning execution device 10e. The display 910e, for example, displays a graphical user interface of the machine learning execution device 10e.

匯流排16e將處理器11e、主儲存裝置12e、通訊介面13e、輔助儲存裝置14e及輸入輸出裝置15e連接,以使該些能夠相互進行資料的收發。The bus bar 16e connects the processor 11e, the main storage device 12e, the communication interface 13e, the auxiliary storage device 14e, and the input/output device 15e, so that these can exchange data with each other.

圖52是表示第五實施例的機械學習執行程式的軟體結構一例的圖。機械學習執行裝置10e使用處理器11e讀出並執行機械學習執行程式100e,以達成圖52所示的教師資料獲取功能101e及機械學習執行功能102e。FIG. 52 is a diagram showing an example of the software configuration of the machine learning execution program of the fifth embodiment. The machine learning execution device 10e uses the processor 11e to read out and execute the machine learning execution program 100e, so as to achieve the teacher data acquisition function 101e and the machine learning execution function 102e shown in FIG. 52 .

教師資料獲取功能101e獲取將一個學習用淚液彎液面圖像資料及一個學習用照明圖像資料中的至少一者作為問題、將一個學習用檢查圖像資料及學習用檢查結果資料中的至少一者作為答案的教師資料。The teacher data acquisition function 101e acquires teacher data having at least one of one tear meniscus image data for learning and one illumination image data for learning as a question, and at least one of one examination image data for learning and examination result data for learning as an answer.

學習用圖像資料是表示描繪有學習用被檢查體的眼睛的學習用圖像的資料。學習用圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。The learning image data is data representing a learning image in which the eyes of the subject for learning are drawn. The learning image is captured by, for example, a camera mounted on a smartphone.

學習用淚液彎液面圖像資料是表示學習用淚液彎液面圖像的資料，且為學習用圖像的一種，所述學習用淚液彎液面圖像是自描繪有學習用被檢查體的眼睛的學習用圖像中剪切描繪出學習用被檢查體的淚液彎液面的區域而得。淚液彎液面為角膜與下眼瞼之間形成的淚的層，且反映了淚液的量。於淚液彎液面低的情況下，淚液少，於淚液彎液面高的情況下，淚液多。另外，學習用淚液彎液面圖像例如藉由在學習用圖像中剪切描繪出淚液彎液面的區域而生成。圖53是表示自第五實施例的學習用圖像中剪切描繪出淚液彎液面的區域而得的、學習用淚液彎液面圖像一例的圖。教師資料獲取功能101e例如獲取表示圖53所示的學習用淚液彎液面圖像的學習用淚液彎液面圖像資料。The tear meniscus image data for learning is data representing a tear meniscus image for learning, which is a kind of image for learning obtained by cutting out, from an image for learning in which the eyes of the subject for learning are drawn, the region in which the tear meniscus of the subject for learning is drawn. The tear meniscus is a layer of tear fluid formed between the cornea and the lower eyelid, and reflects the amount of tear fluid: when the tear meniscus is low, there is little tear fluid, and when it is high, there is much tear fluid. The tear meniscus image for learning is generated, for example, by cutting out the region in which the tear meniscus is drawn from the image for learning. FIG. 53 is a diagram showing an example of a tear meniscus image for learning obtained by cutting out the region in which the tear meniscus is drawn from the image for learning of the fifth embodiment. The teacher data acquisition function 101e acquires, for example, tear meniscus image data for learning representing the tear meniscus image for learning shown in FIG. 53.

學習用照明圖像資料是表示學習用照明圖像的資料，且為學習用圖像的一種，所述學習用照明圖像是自描繪有學習用被檢查體的眼睛的學習用圖像中剪切描繪出映入至學習用被檢查體的角膜上的照明的區域而得的。此種照明例如為設置於拍攝學習用圖像的房間的天花板上的螢光燈。另外，學習用照明圖像例如藉由在學習用圖像中剪切描繪出映入至學習用被檢查體的角膜上的照明的區域而生成。圖54是表示自第五實施例的學習用圖像中剪切描繪出映入至學習用被檢查體的角膜上的照明的區域而得的、學習用照明圖像一例的圖。教師資料獲取功能101e例如獲取表示圖54所示的學習用照明圖像的學習用照明圖像資料。The illumination image data for learning is data representing an illumination image for learning, which is a kind of image for learning obtained by cutting out, from an image for learning in which the eyes of the subject for learning are drawn, the region in which the illumination reflected on the cornea of the subject for learning is drawn. Such illumination is, for example, a fluorescent lamp installed on the ceiling of the room in which the image for learning is captured. The illumination image for learning is generated, for example, by cutting out the region in which the illumination reflected on the cornea of the subject for learning is drawn from the image for learning. FIG. 54 is a diagram showing an example of an illumination image for learning obtained by cutting out, from the image for learning of the fifth embodiment, the region in which the illumination reflected on the cornea of the subject for learning is drawn. The teacher data acquisition function 101e acquires, for example, illumination image data for learning representing the illumination image for learning shown in FIG. 54.
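The cutting-out of the two regions described above can be illustrated with a minimal sketch. The region coordinates below are invented for illustration; the text does not specify how the tear meniscus or the corneal reflection is located in the image.

```python
# Hypothetical sketch of cutting out the two regions used in the fifth
# example: the tear meniscus region and the region of illumination
# reflected on the cornea. Coordinates are invented for illustration.

def crop(image, top, left, height, width):
    """Return the sub-image covering rows top..top+height-1 and
    columns left..left+width-1 of a row-major image."""
    return [row[left:left + width] for row in image[top:top + height]]

# 10x10 toy "image" whose pixel value encodes its position (row*10 + col)
image = [[r * 10 + c for c in range(10)] for r in range(10)]

meniscus = crop(image, top=7, left=2, height=2, width=6)      # lower-eyelid area
illumination = crop(image, top=2, left=4, height=2, width=2)  # corneal reflection
```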

學習用檢查圖像資料是構成教師資料的答案的一部分、且表示描繪有對學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的學習用被檢查體的眼睛的學習用檢查圖像的資料。作為此種檢查，例如可列舉將螢光素染色試劑滴入眼中並使用狹縫燈對眼睛進行觀察來調查淚液層破壞時間的檢查。因此，例如，學習用檢查圖像資料成為表示描繪有經螢光素染色並使用狹縫燈進行了觀察的眼睛的圖像或動畫的資料。The examination image data for learning is data that constitutes a part of the answer of the teacher data and represents an examination image for learning in which the eyes of the subject for learning are drawn when an examination related to the symptoms of dry eye is performed on the eyes of the subject for learning. An example of such an examination is one in which a fluorescein staining reagent is dropped into the eye and the eye is observed with a slit lamp to investigate the tear layer destruction time. Therefore, the examination image data for learning is, for example, data representing an image or a moving image in which an eye stained with fluorescein and observed with a slit lamp is drawn.

學習用檢查結果資料是構成教師資料的答案的一部分、且表示與乾眼症的症狀相關的檢查結果的資料。作為此種檢查的結果，例如可列舉：基於將螢光素染色試劑滴入眼中並使用狹縫燈對眼睛進行觀察來調查淚液層破壞時間的檢查來表現淚液層破壞時間的長度的、為0以上且1以下的數值。The examination result data for learning is data that constitutes a part of the answer of the teacher data and represents the results of an examination related to the symptoms of dry eye. An example of such an examination result is a numerical value of 0 or more and 1 or less representing the length of the tear layer destruction time, based on an examination in which a fluorescein staining reagent is dropped into the eye and the eye is observed with a slit lamp to investigate the tear layer destruction time.

於該數值為0以上且小於0.5的情況下，淚液層破壞時間為10秒以上，因此表現出學習用被檢查體的眼睛正常，不需要由眼科醫生進行診察。另外，於該數值為0.5以上且1以下的情況下，淚液層破壞時間小於10秒，因此表現出學習用被檢查體的眼睛異常，需要由眼科醫生進行診察。When this numerical value is 0 or more and less than 0.5, the tear layer destruction time is 10 seconds or more, which indicates that the eyes of the subject for learning are normal and examination by an ophthalmologist is unnecessary. When the numerical value is 0.5 or more and 1 or less, the tear layer destruction time is less than 10 seconds, which indicates that the eyes of the subject for learning are abnormal and examination by an ophthalmologist is necessary.
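The mapping above, from the 0-to-1 examination value to the referral decision, is simple enough to transcribe directly (the string labels are chosen for this sketch):

```python
def interpret_exam_score(score):
    """Map the 0..1 examination result to the outcome described in the text:
    below 0.5 -> tear layer destruction time of 10 s or more (normal);
    0.5 or more -> destruction time under 10 s (abnormal, needs an
    ophthalmologist's examination)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    return "abnormal" if score >= 0.5 else "normal"
```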

例如，教師資料獲取功能101e獲取1280個教師資料。該情況下，1280個學習用淚液彎液面圖像資料及學習用照明圖像資料分別表示以下圖像，即，藉由實施四組使用搭載於智慧型手機上的照相機對8名學習用被檢查體的兩個眼睛分別進行20次拍攝的處理而獲取1280張學習用圖像，藉由剪切所述1280張學習用圖像的描繪出淚液彎液面的區域及描繪出映入至角膜上的照明的區域而生成的圖像。For example, the teacher data acquisition function 101e acquires 1280 teacher data. In this case, the 1280 tear meniscus image data for learning and the 1280 illumination image data for learning respectively represent images generated by cutting out the region in which the tear meniscus is drawn and the region in which the illumination reflected on the cornea is drawn from 1280 images for learning, which were acquired by performing four sessions of a process of photographing each of the two eyes of 8 subjects for learning 20 times with a camera mounted on a smartphone.

另外，該情況下，1280個學習用檢查圖像資料分別表示對所述1280張學習用圖像各個中描繪出的學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時所拍攝的學習用檢查圖像。另外，1280個學習用檢查結果資料表示1280個分別表示拍攝1280張學習用檢查圖像時所實施的與乾眼症的症狀相關的檢查結果的、為0以上且1以下的數值。In this case, the 1280 examination image data for learning respectively represent examination images for learning captured when an examination related to the symptoms of dry eye was performed on the eyes of the subject for learning drawn in each of the 1280 images for learning. In addition, the 1280 examination result data for learning represent 1280 numerical values of 0 or more and 1 or less, each representing the result of the examination related to the symptoms of dry eye performed when the 1280 examination images for learning were captured.

再者，於以下的說明中，列舉以下情況為例進行說明：1280個學習用檢查結果資料包含表示表現學習用被檢查體的眼睛正常的為0以上且小於0.5的數值的620個學習用檢查結果資料、以及表示表現學習用被檢查體的眼睛異常的為0.5以上且1以下的數值的660個學習用檢查結果資料。In the following description, a case is described as an example in which the 1280 examination result data for learning include 620 examination result data for learning showing a numerical value of 0 or more and less than 0.5, which indicates that the eyes of the subject for learning are normal, and 660 examination result data for learning showing a numerical value of 0.5 or more and 1 or less, which indicates that the eyes of the subject for learning are abnormal.
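The dataset sizes quoted above are internally consistent, as a quick arithmetic check shows:

```python
# Check of the dataset counts stated in the text: 8 subjects, both eyes,
# 20 shots per eye, four sessions -> 1280 images; the examination results
# split into 620 normal and 660 abnormal records.

subjects, eyes_per_subject, shots_per_eye, sessions = 8, 2, 20, 4
total_images = subjects * eyes_per_subject * shots_per_eye * sessions

normal_records, abnormal_records = 620, 660
print(total_images, normal_records + abnormal_records)  # 1280 1280
```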

機械學習執行功能102e將教師資料輸入至機械學習裝置700e中所安裝的機械學習程式750e,使機械學習程式750e進行學習。例如,機械學習執行功能102e使包括卷積類神經網路的機械學習程式750e藉由後向傳播進行學習。The machine learning execution function 102e inputs the teacher data into the machine learning program 750e installed in the machine learning device 700e, and causes the machine learning program 750e to learn. For example, the machine learning execution function 102e causes a machine learning program 750e comprising a convolutional neural network to learn by back propagation.

例如，機械學習執行功能102e將以下540個教師資料輸入至機械學習程式750e，即，所述540個教師資料包含表示表現學習用被檢查體的眼睛正常的、為0以上且小於0.5的數值的學習用檢查結果資料。該540個教師資料是自選自所述8名中的7名獲取的教師資料。For example, the machine learning execution function 102e inputs into the machine learning program 750e 540 teacher data that include examination result data for learning showing a numerical value of 0 or more and less than 0.5, which indicates that the eyes of the subject for learning are normal. These 540 teacher data are teacher data acquired from 7 subjects selected from the 8 subjects.

另外，例如，機械學習執行功能102e將以下580個教師資料輸入至機械學習程式750e，即，所述580個教師資料包含表示表現學習用被檢查體的眼睛異常的、為0.5以上且1以下的數值的學習用檢查結果資料。該580個教師資料是自選自所述8名中的7名獲取的教師資料。In addition, for example, the machine learning execution function 102e inputs into the machine learning program 750e 580 teacher data that include examination result data for learning showing a numerical value of 0.5 or more and 1 or less, which indicates that the eyes of the subject for learning are abnormal. These 580 teacher data are teacher data acquired from the 7 subjects selected from the 8 subjects.

然後,機械學習執行功能102e利用所述1120個教師資料使機械學習程式750e進行學習。Then, the machine learning execution function 102e uses the 1120 teacher data to make the machine learning program 750e learn.

另外，例如，機械學習執行功能102e亦可將以下80個教師資料作為測試資料輸入至機械學習程式750e，即，所述80個教師資料表現學習用被檢查體的眼睛正常，且未用於機械學習程式750e的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102e may input into the machine learning program 750e, as test data, 80 teacher data that represent that the eyes of the subject for learning are normal and that were not used in the learning of the machine learning program 750e. These 80 teacher data are teacher data acquired from the 1 subject not selected as one of the 7 subjects.

另外，例如，機械學習執行功能102e亦可將以下80個教師資料作為測試資料輸入至機械學習程式750e，即，所述80個教師資料表現學習用被檢查體的眼睛異常，且未用於機械學習程式750e的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102e may input into the machine learning program 750e, as test data, 80 teacher data that represent that the eyes of the subject for learning are abnormal and that were not used in the learning of the machine learning program 750e. These 80 teacher data are teacher data acquired from the 1 subject not selected as one of the 7 subjects.

藉此,機械學習執行功能102e可對利用所述1120個教師資料使機械學習程式750e進行學習而獲得的機械學習程式750e的特性進行評價。Thereby, the machine learning execution function 102e can evaluate the characteristics of the machine learning program 750e obtained by learning the machine learning program 750e using the 1120 teacher data.
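The evaluation described above holds out one of the eight subjects entirely: 1120 training records (540 + 580) come from 7 subjects, and the held-out subject contributes the 160 test records (80 + 80). A minimal sketch of such a subject-wise split, with placeholder record contents:

```python
# Subject-wise hold-out as described in the text: records from 7 of the
# 8 subjects train the model, the remaining subject's records are the
# test set. Record contents are placeholders; only the counts matter here.

def split_by_subject(records, held_out):
    train = [r for r in records if r["subject"] != held_out]
    test = [r for r in records if r["subject"] == held_out]
    return train, test

# 8 subjects x 160 records each = 1280 records in total
records = [{"subject": s, "index": i} for s in range(8) for i in range(160)]
train, test = split_by_subject(records, held_out=7)
```

Splitting by subject rather than by image keeps any one subject's eyes out of both sets at once, so the evaluation measures generalization to an unseen person.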

再者，機械學習執行功能102e當使用測試資料對機械學習程式750e的特性進行評價時，例如可使用類激活映射。類激活映射是使輸入至類神經網路的資料中成為由類神經網路輸出的結果的根據的部分明確的技術。作為類激活映射，例如可列舉梯度加權類激活映射。梯度加權類激活映射為以下技術：利用與由卷積類神經網路執行的卷積的特徵相關的分類得分的梯度，來確定輸入至卷積類神經網路的圖像中對分類給予了一定程度以上的影響的區域。When evaluating the characteristics of the machine learning program 750e using the test data, the machine learning execution function 102e can use, for example, class activation mapping. Class activation mapping is a technique that clarifies which part of the data input to a neural network forms the basis of the result output by the neural network. An example of class activation mapping is gradient-weighted class activation mapping, a technique that uses the gradient of the classification score with respect to the features of the convolution performed by a convolutional neural network to identify the regions of the image input to the convolutional neural network that influence the classification to at least a certain degree.
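The gradient-weighted scheme described above can be illustrated with a pure-Python toy: each convolutional feature map is weighted by the global average of the class-score gradient over that map, the weighted maps are summed, and negative values are clipped (ReLU). Real implementations pull the feature maps and gradients from a trained convolutional network; the 2x2 arrays below are invented for illustration.

```python
def grad_cam(feature_maps, gradients):
    """Toy gradient-weighted class activation map over KxHxW nested lists."""
    # alpha_k: global-average-pooled gradient for feature map k
    weights = [sum(sum(row) for row in g) / (len(g) * len(g[0]))
               for g in gradients]
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for k, fmap in enumerate(feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weights[k] * fmap[i][j]
    # ReLU keeps only regions that push the classification score up
    return [[max(0.0, v) for v in row] for row in cam]

fmaps = [[[1.0, 0.0], [0.0, 1.0]],      # activation map 1
         [[0.0, 2.0], [0.0, 0.0]]]      # activation map 2
grads = [[[0.4, 0.4], [0.4, 0.4]],      # positive influence on the score
         [[-0.2, -0.2], [-0.2, -0.2]]]  # negative influence
```

Here `grad_cam(fmaps, grads)` highlights the diagonal of map 1 (positive weight 0.4) and suppresses map 2's contribution (negative weight, clipped by the ReLU).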

接下來,參照圖55,對第五實施例的機械學習執行程式100e執行的處理一例進行說明。圖55是表示利用第五實施例的機械學習執行程式執行的處理一例的流程圖。機械學習執行程式100e執行至少一次圖55所示的處理。Next, an example of processing executed by the machine learning execution program 100e of the fifth embodiment will be described with reference to FIG. 55 . FIG. 55 is a flowchart showing an example of processing executed by the machine learning execution program of the fifth embodiment. The machine learning execution program 100e executes the processing shown in FIG. 55 at least once.

於步驟S91中,教師資料獲取功能101e獲取將學習用淚液彎液面圖像資料及學習用照明圖像資料中的至少一者作為問題、將學習用檢查圖像資料及學習用檢查結果資料中的至少一者作為答案的教師資料。In step S91, the teacher data acquisition function 101e acquires at least one of the tear meniscus image data for learning and the illumination image data for learning as a question, and the inspection image data for learning and the inspection result data for learning. at least one of the teacher profiles for the answer.

於步驟S92中,機械學習執行功能102e將教師資料輸入至機械學習程式750e,使機械學習程式750e進行學習。In step S92, the machine learning execution function 102e inputs the teacher data into the machine learning program 750e, and makes the machine learning program 750e learn.

接下來，參照圖56至圖58，對第五實施例的乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法的具體例進行說明。與第四實施例的乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法不同，第五實施例的乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法獲取自後述的推論用學習用圖像資料中剪切的推論用淚液彎液面圖像資料及推論用照明圖像資料中的至少一者。Next, specific examples of the dry eye examination program, dry eye examination device, and dry eye examination method of the fifth embodiment will be described with reference to FIGS. 56 to 58. Unlike the dry eye examination program, dry eye examination device, and dry eye examination method of the fourth embodiment, those of the fifth embodiment acquire at least one of tear meniscus image data for inference and illumination image data for inference cut out from the image data for inference described later.

圖56是表示第五實施例的乾眼症檢查裝置的硬體結構一例的圖。圖56所示的乾眼症檢查裝置20e是於已利用機械學習執行程式100e學習完畢的機械學習裝置700e的推論階段中，使用機械學習裝置700e推斷推論用被檢查體的眼睛中出現的乾眼症的症狀的裝置。另外，如圖56所示，乾眼症檢查裝置20e包括處理器21e、主儲存裝置22e、通訊介面23e、輔助儲存裝置24e、輸入輸出裝置25e、以及匯流排26e。FIG. 56 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the fifth embodiment. The dry eye examination device 20e shown in FIG. 56 is a device that, in the inference phase of the machine learning device 700e that has finished learning by means of the machine learning execution program 100e, uses the machine learning device 700e to infer the symptoms of dry eye appearing in the eyes of the subject for inference. As shown in FIG. 56, the dry eye examination device 20e includes a processor 21e, a main storage device 22e, a communication interface 23e, an auxiliary storage device 24e, an input/output device 25e, and a bus bar 26e.

處理器21e例如為CPU,讀出並執行後述的乾眼症檢查程式200e,以達成乾眼症檢查程式200e所具有的各功能。另外,處理器21e亦可讀出並執行乾眼症檢查程式200e以外的程式,以於達成乾眼症檢查程式200e所具有的各功能的基礎上達成必要的功能。The processor 21e is, for example, a CPU, and reads out and executes a dry eye disease inspection program 200e described later, so as to achieve each function of the dry eye disease inspection program 200e. In addition, the processor 21e may read out and execute programs other than the dry eye disease inspection program 200e, so as to achieve necessary functions in addition to the respective functions of the dry eye disease inspection program 200e.

主儲存裝置22e例如為RAM,預先儲存有由處理器21e讀出並執行的乾眼症檢查程式200e以及其他程式。The main storage device 22e is, for example, a RAM, and stores in advance the dry eye syndrome examination program 200e and other programs read and executed by the processor 21e.

通訊介面23e是用於經由圖56所示的網路NW而與機械學習裝置700e以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 23e is an interface circuit for performing communication with the machine learning apparatus 700e and other devices via the network NW shown in FIG. 56 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置24e例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 24e is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置25e例如為輸入輸出端口。輸入輸出裝置25e例如連接有圖56所示的鍵盤821e、滑鼠822e、顯示器920e。鍵盤821e及滑鼠822e例如用於輸入為了操作乾眼症檢查裝置20e所必需的資料的作業中。顯示器920e例如顯示乾眼症檢查裝置20e的圖形使用者介面。The input/output device 25e is, for example, an input/output port. The input/output device 25e is connected to, for example, a keyboard 821e, a mouse 822e, and a display 920e shown in FIG. 56 . The keyboard 821e and the mouse 822e are used, for example, in the operation of inputting data necessary to operate the dry eye disease inspection apparatus 20e. The display 920e displays, for example, a graphical user interface of the dry eye examination apparatus 20e.

匯流排26e將處理器21e、主儲存裝置22e、通訊介面23e、輔助儲存裝置24e及輸入輸出裝置25e連接,以使該些能夠相互進行資料的收發。The bus bar 26e connects the processor 21e, the main storage device 22e, the communication interface 23e, the auxiliary storage device 24e, and the input/output device 25e, so that these can exchange data with each other.

圖57是表示第五實施例的乾眼症檢查程式的軟體結構一例的圖。乾眼症檢查裝置20e使用處理器21e讀出並執行乾眼症檢查程式200e,以達成圖57所示的資料獲取功能201e及症狀推斷功能202e。FIG. 57 is a diagram showing an example of the software configuration of the dry eye syndrome examination program according to the fifth embodiment. The dry eye disease inspection apparatus 20e uses the processor 21e to read out and execute the dry eye disease inspection program 200e, so as to achieve the data acquisition function 201e and the symptom estimation function 202e shown in FIG. 57 .

資料獲取功能201e獲取推論用淚液彎液面圖像資料及推論用照明圖像資料中的至少一者。The data acquisition function 201e acquires at least one of tear meniscus image data for inference and illumination image data for inference.

推論用圖像資料是表示描繪有推論用被檢查體的眼睛的圖像的資料。推論用圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。The image data for inference is data representing an image in which the eyes of the subject for inference are drawn. The inference image is captured by, for example, a camera mounted on a smartphone.

推論用淚液彎液面圖像資料是表示自描繪有推論用被檢查體的眼睛的推論用圖像中剪切描繪出推論用被檢查體的淚液彎液面的區域而得的推論用淚液彎液面圖像的資料。另外，推論用淚液彎液面圖像例如藉由在推論用圖像中剪切描繪出淚液彎液面的區域而生成。The tear meniscus image data for inference is data representing a tear meniscus image for inference obtained by cutting out, from an image for inference in which the eyes of the subject for inference are drawn, the region in which the tear meniscus of the subject for inference is drawn. The tear meniscus image for inference is generated, for example, by cutting out the region in which the tear meniscus is drawn from the image for inference.

推論用照明圖像資料是表示自描繪有推論用被檢查體的眼睛的推論用圖像中剪切描繪出映入至推論用被檢查體的角膜上的照明的區域而得的推論用照明圖像的資料。另外，推論用照明圖像例如藉由在推論用圖像中剪切描繪出映入至推論用被檢查體的角膜上的照明的區域而生成。The illumination image data for inference is data representing an illumination image for inference obtained by cutting out, from an image for inference in which the eyes of the subject for inference are drawn, the region in which the illumination reflected on the cornea of the subject for inference is drawn. The illumination image for inference is generated, for example, by cutting out the region in which the illumination reflected on the cornea of the subject for inference is drawn from the image for inference.

症狀推斷功能202e將推論用淚液彎液面圖像資料及推論用照明圖像資料中的至少一者輸入至已利用機械學習執行功能102e學習完畢的機械學習程式750e，並使機械學習程式750e推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。例如，症狀推斷功能202e將所述資料輸入至機械學習程式750e，以推斷表現推論用被檢查體的眼睛的淚液層破壞時間的長度的數值。The symptom inference function 202e inputs at least one of the tear meniscus image data for inference and the illumination image data for inference into the machine learning program 750e that has finished learning by means of the machine learning execution function 102e, and causes the machine learning program 750e to infer the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom inference function 202e inputs the data into the machine learning program 750e to infer a numerical value representing the length of the tear layer destruction time of the eye of the subject for inference.

Then, the symptom estimation function 202e causes the machine learning program 750e to output symptom data indicating the symptoms of dry eye appearing in the eye of the subject for inference. For example, the symptom estimation function 202e causes the machine learning program 750e to output symptom data representing a numerical value expressing the length of the tear film break-up time of the eye of the subject for inference. In this case, the symptom data is used, for example, to display on the display 920e the numerical value expressing the length of the tear film break-up time of the eye of the subject for inference.

Next, an example of the processing executed by the dry eye examination program 200e of the fifth embodiment will be described with reference to FIG. 58. FIG. 58 is a flowchart showing an example of the processing executed by the dry eye examination program of the fifth embodiment.

In step S101, the data acquisition function 201e acquires at least one of the tear meniscus image data for inference and the illumination image data for inference.

In step S102, the symptom estimation function 202e inputs at least one of the tear meniscus image data for inference and the illumination image data for inference into the trained machine learning program to estimate the symptoms of dry eye appearing in the eye of the subject for inference, and causes the machine learning program to output symptom data indicating the symptoms of dry eye appearing in the eye of the subject for inference.
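Steps S101 and S102 can be sketched as the following two-function flow. The model call is a stand-in (a dummy regressor returning a fixed value), since the actual trained machine learning program 750e is not reproduced here; the function and field names are assumptions for illustration:

```python
import numpy as np

def acquire_inference_data():
    """Step S101: acquire the cropped image arrays (dummy data here)."""
    tear_meniscus = np.zeros((60, 400, 3), dtype=np.uint8)
    illumination = np.zeros((80, 80, 3), dtype=np.uint8)
    return tear_meniscus, illumination

def trained_model_predict(tear_meniscus, illumination):
    """Stand-in for the trained program 750e: returns a tear film break-up time estimate in seconds."""
    return 4.2  # placeholder value, not a real prediction

def estimate_symptom():
    """Step S102: feed the acquired data to the trained model and emit symptom data."""
    tear_meniscus, illumination = acquire_inference_data()
    tbut_seconds = trained_model_predict(tear_meniscus, illumination)
    return {"tear_film_break_up_time_s": tbut_seconds}

print(estimate_symptom())  # {'tear_film_break_up_time_s': 4.2}
```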

The machine learning execution program, dry eye examination program, machine learning execution device, dry eye examination device, machine learning execution method, and dry eye examination method of the fifth embodiment have been described above.

The machine learning execution program 100e includes a teacher data acquisition function 101e and a machine learning execution function 102e.

The teacher data acquisition function 101e acquires teacher data whose question is at least one of the tear meniscus image data for learning and the illumination image data for learning, and whose answer is at least one of the examination image data for learning and the examination result data for learning. The tear meniscus image data for learning is data representing a tear meniscus image for learning, which depicts the tear meniscus of the subject for learning. The illumination image data for learning is data representing an illumination image for learning, which depicts the illumination reflected on the cornea of the subject for learning. The examination image data for learning is data representing an image depicting the eye of the subject for learning at the time an examination related to the symptoms of dry eye was performed on that eye. The examination result data for learning is data representing the result of an examination related to the symptoms of dry eye.
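One teacher datum described above pairs a question (one or both cropped learning images) with an answer (an examination image and/or an examination result). A minimal sketch of that pairing, with hypothetical field names, including the "at least one of" validity rule:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class TeacherDatum:
    # Question: at least one of the two cropped learning images must be present.
    tear_meniscus_image: Optional[Any] = None
    illumination_image: Optional[Any] = None
    # Answer: at least one of examination image and examination result must be present.
    examination_image: Optional[Any] = None
    examination_result: Optional[float] = None

    def is_valid(self) -> bool:
        has_question = self.tear_meniscus_image is not None or self.illumination_image is not None
        has_answer = self.examination_image is not None or self.examination_result is not None
        return has_question and has_answer

datum = TeacherDatum(tear_meniscus_image="meniscus_crop.png", examination_result=0.7)
print(datum.is_valid())  # True
```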

The machine learning execution function 102e inputs the teacher data into the machine learning program 750e and causes the machine learning program 750e to learn.

In this way, the machine learning execution program 100e can generate a machine learning program 750e that predicts examination results related to the symptoms of dry eye based on at least one of the tear meniscus image data for learning and the illumination image data for learning.

In addition, the machine learning execution program 100e causes the machine learning program 750e to learn using teacher data whose question includes at least one of the tear meniscus image data for learning and the illumination image data for learning, each cropped from the image data for learning. Therefore, the machine learning execution program 100e can generate a machine learning program 750e capable of predicting examination results related to the symptoms of dry eye with higher accuracy.

The dry eye examination program 200e includes a data acquisition function 201e and a symptom estimation function 202e.

The data acquisition function 201e acquires at least one of the tear meniscus image data for inference and the illumination image data for inference. The tear meniscus image data for inference is data representing a tear meniscus image for inference, which depicts the tear meniscus of the subject for inference. The illumination image data for inference is data representing an illumination image for inference, which depicts the illumination reflected on the cornea of the subject for inference.

The symptom estimation function 202e inputs at least one of the tear meniscus image data for inference and the illumination image data for inference into the machine learning program 750e that has been trained by the machine learning execution program 100e, so as to estimate the symptoms of dry eye appearing in the eye of the subject for inference. Then, the symptom estimation function 202e causes the machine learning program 750e to output symptom data indicating the symptoms of dry eye appearing in the eye of the subject for inference.

In this way, the dry eye examination program 200e can predict examination results related to the symptoms of dry eye without actually performing an examination related to the symptoms of dry eye.

Next, taking as an example a case in which the machine learning program 750e was actually trained to estimate the symptoms of dry eye, a specific example of the effect achieved by the machine learning execution program 100e will be described by comparing a comparative example with an example of the fifth embodiment.

The comparative example in this case is an example in which a machine learning program was trained using, as teacher data, data obtained from the above-described 540 teacher data representing normal eyes of subjects for learning and 580 teacher data representing abnormal eyes of subjects for learning by removing the tear meniscus image data for learning and the illumination image data for learning and adding the image data for learning.

When the characteristics of the machine learning program trained in this way were evaluated using data obtained from the above-described 80 test data representing normal eyes of subjects for learning and 80 test data representing abnormal eyes of subjects for learning by removing the tear meniscus image data for learning and the illumination image data for learning and adding the image data for learning, the program showed a prediction accuracy of 72% and a false negative rate of 28%.

On the other hand, when the characteristics of the machine learning program 750e trained by the machine learning execution program 100e using the tear meniscus image data for learning were evaluated using test data containing only the tear meniscus image data for learning, the program showed a prediction accuracy of 77% and a false negative rate of 23%. These characteristics are superior to those of the machine learning program of the comparative example.

In addition, when the characteristics of the machine learning program 750e trained by the machine learning execution program 100e using the illumination image data for learning were evaluated using test data containing only the illumination image data for learning, the program showed a prediction accuracy of 77% and a false negative rate of 23%. These characteristics are superior to those of the machine learning program of the comparative example.

In addition, when the characteristics of the machine learning program 750e trained by the machine learning execution program 100e using both the tear meniscus image data for learning and the illumination image data for learning were evaluated using test data containing both the tear meniscus image data for learning and the illumination image data for learning, the program showed a prediction accuracy of 82% and a false negative rate of 18%. These characteristics are superior to those of the machine learning program of the comparative example, of the machine learning program 750e trained using the tear meniscus image data for learning, and of the machine learning program 750e trained using the illumination image data for learning.

Note that the prediction accuracy is the ratio of the sum X + Y to the total number of test data, where X is the number of learning images in the abnormal group predicted to be abnormal by the machine learning program and Y is the number of learning images in the normal group predicted to be normal by the machine learning program. Therefore, in the case of this comparative example, the prediction accuracy is calculated as ((X + Y)/160) × 100.

In addition, the false negative rate is the ratio of the number of learning images in the abnormal group predicted to be normal by the machine learning program to the total number of learning images in the abnormal group. Therefore, in the case of this comparative example, the false negative rate is calculated as ((80 − X)/80) × 100.
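The two formulas above can be transcribed directly. The counts X = 58 and Y = 57 below are hypothetical values chosen for illustration because they round to the comparative example's reported 72% accuracy and 28% false negative rate; the actual per-image counts are not given in this disclosure:

```python
def prediction_accuracy(x_abnormal_correct: int, y_normal_correct: int, n_test: int) -> float:
    """((X + Y) / total number of test data) x 100"""
    return (x_abnormal_correct + y_normal_correct) / n_test * 100

def false_negative_rate(x_abnormal_correct: int, n_abnormal: int) -> float:
    """((n_abnormal - X) / n_abnormal) x 100, i.e. abnormal images predicted normal."""
    return (n_abnormal - x_abnormal_correct) / n_abnormal * 100

X, Y = 58, 57  # hypothetical: 58/80 abnormal and 57/80 normal images predicted correctly
print(prediction_accuracy(X, Y, 160))   # 71.875, reported as 72%
print(false_negative_rate(X, 80))       # approximately 27.5, reported as 28%
```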

Note that at least some of the functions of the machine learning execution program 100e may be realized by hardware including a circuit unit. Similarly, at least some of the functions of the dry eye examination program 200e may be realized by hardware including a circuit unit. Such hardware is, for example, an LSI, an ASIC, an FPGA, or a GPU.

In addition, at least some of the functions of the machine learning execution program 100e may be realized by cooperation between software and hardware. Similarly, at least some of the functions of the dry eye examination program 200e may be realized by cooperation between software and hardware. Such hardware may be integrated into a single unit or divided into a plurality of units.

In the fifth embodiment, the case where the machine learning execution device 10e, the machine learning device 700e, and the dry eye examination device 20e are mutually independent devices has been described as an example, but the present invention is not limited to this. These devices may also be realized as a single device.

[Variations of the Fourth and Fifth Embodiments] In the fourth embodiment, the case where the teacher data acquisition function 101d acquires teacher data that does not include, as part of the question, the tear meniscus image data for learning and the illumination image data for learning described in the fifth embodiment has been given as an example, but the present invention is not limited to this. The teacher data acquisition function 101d may also acquire teacher data that includes, as the image data for learning, at least one of the tear meniscus image data for learning and the illumination image data for learning in addition to the eyelid opening data for learning. In this case, the data acquisition function 201d acquires, in addition to the eyelid opening data for inference, at least one of the tear meniscus image data for inference and the illumination image data for inference as the image data for inference. Furthermore, in this case, the symptom estimation function 202d causes the machine learning program 750d to estimate the symptoms of dry eye appearing in the eye of the subject for inference based not only on the eyelid opening data for inference but also on at least one of the tear meniscus image data for inference and the illumination image data for inference.

In addition, in the fifth embodiment, the case where the teacher data acquisition function 101e acquires teacher data that does not include, as part of the question, the eyelid opening data for learning described in the fourth embodiment has been given as an example, but the present invention is not limited to this. The teacher data acquisition function 101e may also acquire teacher data whose question includes the eyelid opening data for learning in addition to the tear meniscus image data for learning and the illumination image data for learning. In this case, the data acquisition function 201e acquires the eyelid opening data for inference in addition to at least one of the tear meniscus image data for inference and the illumination image data for inference. Furthermore, in this case, the symptom estimation function 202e causes the machine learning program 750e to estimate the symptoms of dry eye appearing in the eye of the subject for inference based not only on at least one of the tear meniscus image data for inference and the illumination image data for inference but also on the eyelid opening data for inference.

Furthermore, when evaluating the characteristics of the machine learning program 750d using test data, the machine learning execution function 102d of the variations of the fourth and fifth embodiments may use, for example, class activation mapping. Similarly, when evaluating the characteristics of the machine learning program 750e using test data, the machine learning execution function 102e of the variations of the fourth and fifth embodiments may use, for example, class activation mapping.

FIG. 59 is a diagram showing an example of the portions of a learning image depicting the eye of a subject for learning that are weighted heavily by the machine learning program of the variations of the fourth and fifth embodiments when predicting the result of an examination investigating the tear film break-up time. For example, using gradient-weighted class activation mapping, the machine learning execution function 102d of the variations of the fourth and fifth embodiments evaluates that the region surrounded by the ellipse C611, the region surrounded by the ellipse C612, and the region surrounded by the ellipse C61 in the learning image shown in FIG. 59 exert at least a certain degree of influence on the prediction, by the machine learning program 750d, of the examination investigating the tear film break-up time. The matters described with reference to FIG. 59 also apply to the machine learning execution function 102e of the variations of the fourth and fifth embodiments.

Note that the three levels of gray scale contained in the regions surrounded by these ellipses indicate the degree of influence exerted on the prediction of the result of the examination investigating the tear film break-up time. The region surrounded by the ellipse C611 and the region surrounded by the ellipse C612 are both displayed superimposed on the region depicting the tear meniscus of the subject for learning. The region surrounded by the ellipse C61 is displayed superimposed on the region depicting the illumination reflected on the cornea of the subject for learning.
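Gradient-weighted class activation mapping produces such overlays by taking each feature map A^k of the last convolutional layer, computing a weight α_k as the global average of the gradient of the prediction with respect to A^k, and overlaying ReLU(Σ_k α_k A^k) on the input image. A minimal sketch of that weighting step on fabricated arrays (the feature maps and gradients below are random dummies, not values from the program 750d):

```python
import numpy as np

def grad_cam_heatmap(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """feature_maps, gradients: (K, H, W) arrays from the last conv layer.
    Returns an (H, W) relevance map: ReLU of the alpha-weighted sum of feature maps."""
    alphas = gradients.mean(axis=(1, 2))              # global average pooling -> (K,)
    cam = np.tensordot(alphas, feature_maps, axes=1)  # sum_k alpha_k * A^k -> (H, W)
    return np.maximum(cam, 0.0)                       # ReLU keeps positive evidence only

rng = np.random.default_rng(0)
feats = rng.random((8, 14, 14))  # 8 dummy feature maps
grads = rng.random((8, 14, 14))  # dummy gradients of the prediction w.r.t. the feature maps
heatmap = grad_cam_heatmap(feats, grads)
print(heatmap.shape)  # (14, 14)
```

Upsampling the heatmap to the input resolution and thresholding it into a few gray levels would yield overlays like the elliptical regions described above.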

Next, a specific example will be described of the effect achieved when learning is performed using teacher data whose question includes the eyelid opening data for learning and at least one of the tear meniscus image data for learning and the illumination image data for learning, and the symptoms of dry eye appearing in the eye of the subject for inference are estimated based on the eyelid opening data for inference and at least one of the tear meniscus image data for inference and the illumination image data for inference. In the following description, the case where the teacher data acquisition function 101d acquires the eyelid opening data for learning and the data acquisition function 201d acquires the eyelid opening data for inference is given as an example.

The comparative example in this case is an example in which a machine learning program was trained using, as teacher data, data obtained from the above-described 540 teacher data representing normal eyes of subjects for learning and 580 teacher data representing abnormal eyes of subjects for learning by removing the eyelid opening data for learning, the tear meniscus image data for learning, and the illumination image data for learning and adding the image data for learning.

When the characteristics of the machine learning program trained in this way were evaluated using data obtained from the above-described 80 test data representing normal eyes of subjects for learning and 80 test data representing abnormal eyes of subjects for learning by removing the eyelid opening data for learning, the tear meniscus image data for learning, and the illumination image data for learning and adding the image data for learning, the program showed a prediction accuracy of 72% and a false negative rate of 28%.

On the other hand, when the characteristics of the machine learning program 750d trained by the machine learning execution program 100d using the eyelid opening data for learning, the tear meniscus image data for learning, and the illumination image data for learning were evaluated using test data containing the tear meniscus image data for learning and the illumination image data for learning, the program showed a prediction accuracy of 90% and a false negative rate of 10%. These characteristics are superior to those of the machine learning program of the comparative example, of the machine learning program 750d of the fourth embodiment, and of the machine learning program 750e of the fifth embodiment.

Note that the prediction accuracy is the ratio of the sum X + Y to the total number of test data, where X is the number of learning images in the abnormal group predicted to be abnormal by the machine learning program and Y is the number of learning images in the normal group predicted to be normal by the machine learning program. Therefore, in the case of this comparative example, the prediction accuracy is calculated as ((X + Y)/160) × 100.

In addition, the false negative rate is the ratio of the number of learning images in the abnormal group predicted to be normal by the machine learning program to the total number of learning images in the abnormal group. Therefore, in the case of this comparative example, the false negative rate is calculated as ((80 − X)/80) × 100.

Note that the dry eye examination program, the dry eye examination device, and the dry eye examination method described above may also be used before and after dry eye ophthalmic drops are instilled into the eye of the subject for inference. When used at such timing, the dry eye examination program, the dry eye examination device, and the dry eye examination method can also serve as a means of verifying the effectiveness of the dry eye ophthalmic drops against the symptoms of dry eye possessed by the subject for inference.

Next, specific examples of the sixth embodiment of the above-described embodiment will be described with reference to FIGS. 60 to 66.

FIG. 60 is a diagram showing an example of the hardware configuration of the machine learning execution device of the sixth embodiment. The machine learning execution device 10f shown in FIG. 60 is a device that causes the machine learning device 700f, described later, to execute machine learning in the learning phase of the machine learning device 700f. As shown in FIG. 60, the machine learning execution device 10f includes a processor 11f, a main storage device 12f, a communication interface 13f, an auxiliary storage device 14f, an input/output device 15f, and a bus 16f.

The processor 11f is, for example, a CPU, and reads out and executes the machine learning execution program 100f, described later, to realize the functions of the machine learning execution program 100f. The processor 11f may also read out and execute programs other than the machine learning execution program 100f to realize necessary functions in addition to the functions of the machine learning execution program 100f.

The main storage device 12f is, for example, a RAM, and stores in advance the machine learning execution program 100f and other programs read out and executed by the processor 11f.

The communication interface 13f is an interface circuit for communicating with the machine learning device 700f and other equipment via the network NW shown in FIG. 1. The network NW is, for example, a LAN or an intranet.

The auxiliary storage device 14f is, for example, a hard disk drive, a solid state drive, a flash memory, or a ROM.

The input/output device 15f is, for example, an input/output port. The input/output device 15f is connected to, for example, the keyboard 811f, the mouse 812f, and the display 910f shown in FIG. 1. The keyboard 811f and the mouse 812f are used, for example, for the work of inputting data necessary to operate the machine learning execution device 10f. The display 910f displays, for example, a graphical user interface of the machine learning execution device 10f.

The bus 16f connects the processor 11f, the main storage device 12f, the communication interface 13f, the auxiliary storage device 14f, and the input/output device 15f so that they can exchange data with one another.

FIG. 61 is a diagram showing an example of the software configuration of the machine learning execution program of the sixth embodiment. The machine learning execution device 10f uses the processor 11f to read out and execute the machine learning execution program 100f, thereby realizing the teacher data acquisition function 101f and the machine learning execution function 102f shown in FIG. 61.

The teacher data acquisition function 101f acquires teacher data whose question is one image datum for learning and whose answer is one examination result datum for learning.

The image data for learning is data representing an image for learning that depicts the eye of the subject for learning as it would appear if irradiated with light of a lower color temperature than the light actually illuminating it. Such an image for learning is, for example, generated by adjusting the white balance of an image depicting the eye of a subject for learning actually irradiated with light having a color temperature of 4000 K, so that the image simulatively depicts that eye as irradiated with light having a color temperature of 3000 K. The image for learning is captured with, for example, a camera mounted on a smartphone.
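The white-balance adjustment described above (rendering a photograph taken under roughly 4000 K light as if it had been taken under warmer 3000 K light) can be sketched as per-channel gain scaling; the gain values below are illustrative placeholders, not values used in this disclosure:

```python
import numpy as np

def shift_color_temperature(image: np.ndarray, r_gain: float, g_gain: float, b_gain: float) -> np.ndarray:
    """Scale the RGB channels to simulate a warmer (lower color temperature) illuminant."""
    gains = np.array([r_gain, g_gain, b_gain], dtype=np.float64)
    shifted = image.astype(np.float64) * gains
    return np.clip(shifted, 0, 255).astype(np.uint8)

eye_4000k = np.full((480, 640, 3), 128, dtype=np.uint8)  # dummy photo under ~4000 K light
# Warmer light: boost red, cut blue (placeholder gains).
eye_3000k = shift_color_temperature(eye_4000k, r_gain=1.15, g_gain=1.0, b_gain=0.8)
print(eye_3000k[0, 0])  # [147 128 102]
```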

The examination result data for learning is data that constitutes part of the answer of the teacher data and represents the result of an examination related to the symptoms of dry eye. An example of such an examination result is a numerical value of 0 or more and 1 or less expressing the thickness of the tear oil layer based on an examination in which the thickness of the tear oil layer is evaluated using an optical interferometer.

When this numerical value is 0 or more and less than 0.5, the thickness of the tear oil layer evaluated using the optical interferometer is less than 75 μm, indicating that the eye of the subject for learning is normal and does not require examination by an ophthalmologist. When this numerical value is 0.5 or more and 1 or less, the thickness of the tear oil layer evaluated using the optical interferometer is 75 μm or more, indicating that the eye of the subject for learning is abnormal and requires examination by an ophthalmologist.
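The decision rule above maps the 0-to-1 interferometer score to a normal/abnormal call at the 0.5 boundary, which corresponds to a tear oil layer thickness of 75 μm. A direct transcription (the function name is a hypothetical label):

```python
def needs_ophthalmologist(oil_layer_score: float) -> bool:
    """score in [0, 1]; 0.5 or more means oil layer thickness of 75 micrometers or more -> abnormal."""
    if not 0.0 <= oil_layer_score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    return oil_layer_score >= 0.5

print(needs_ophthalmologist(0.3))  # False: eye is normal, no examination needed
print(needs_ophthalmologist(0.5))  # True: eye is abnormal, examination recommended
```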

For example, the teacher data acquisition function 101f acquires 1280 teacher data. In this case, the 1280 image data for learning represent 1280 images for learning acquired by performing four sets of a process in which each of the two eyes of eight subjects for learning is photographed 20 times with a camera mounted on a smartphone. In this case, the 1280 examination result data for learning represent 1280 numerical values of 0 or more and 1 or less, each expressing the thickness of the tear oil layer evaluated using the optical interferometer.

In the following description, the case is given as an example in which the 1280 examination result data for learning comprise 600 examination result data for learning with numerical values of 0 or more and less than 0.5, indicating that the eyes of the subjects for learning are normal, and 680 examination result data for learning with numerical values of 0.5 or more and 1 or less, indicating that the eyes of the subjects for learning are abnormal.

The machine learning execution function 102f inputs the teacher data into the machine learning program 750f installed in the machine learning device 700f and causes the machine learning program 750f to learn. For example, the machine learning execution function 102f causes the machine learning program 750f, which includes a convolutional neural network, to learn by backpropagation.

For example, the machine learning execution function 102f inputs into the machine learning program 750f the 520 teacher data that contain examination result data for learning with numerical values of 0 or more and less than 0.5, indicating that the eyes of the subjects for learning are normal. These 520 teacher data are teacher data acquired from seven subjects selected from the eight subjects.

另外,例如,機械學習執行功能102f將以下600個教師資料輸入至機械學習程式750f,即,所述600個教師資料包含表示表現學習用被檢查體的眼睛異常的、為0.5以上且1以下的數值的學習用檢查結果資料。該600個教師資料是自選自所述8名中的7名獲取的教師資料。In addition, for example, the machine learning execution function 102f inputs into the machine learning program 750f the 600 pieces of teacher data whose learning examination result data have a numerical value of 0.5 or more and 1 or less, indicating that the eyes of the learning subject are abnormal. These 600 pieces of teacher data were obtained from the same 7 subjects selected from the 8 subjects.

然後,機械學習執行功能102f利用所述1120個教師資料使機械學習程式750f進行學習。Then, the machine learning execution function 102f makes the machine learning program 750f learn by using the 1120 teacher data.

另外,例如,機械學習執行功能102f亦可將以下80個教師資料作為測試資料輸入至機械學習程式750f,即,所述80個教師資料表現學習用被檢查體的眼睛正常,且未用於機械學習程式750f的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102f may input into the machine learning program 750f, as test data, 80 pieces of teacher data that indicate that the eyes of the learning subject are normal and that were not used in the learning of the machine learning program 750f. These 80 pieces of teacher data were obtained from the 1 subject not selected among the 7 subjects.

另外,例如,機械學習執行功能102f亦可將以下80個教師資料作為測試資料輸入至機械學習程式750f,即,所述80個教師資料表現學習用被檢查體的眼睛異常,且未用於機械學習程式750f的學習中。該80個教師資料是自未被選作所述7名的1名獲取的教師資料。In addition, for example, the machine learning execution function 102f may input into the machine learning program 750f, as test data, 80 pieces of teacher data that indicate that the eyes of the learning subject are abnormal and that were not used in the learning of the machine learning program 750f. These 80 pieces of teacher data were obtained from the 1 subject not selected among the 7 subjects.

藉此,機械學習執行功能102f可對利用所述1120個教師資料使機械學習程式750f進行學習而獲得的機械學習程式750f的特性進行評價。Thereby, the machine learning execution function 102f can evaluate the characteristics of the machine learning program 750f obtained by training the machine learning program 750f with the 1120 pieces of teacher data.
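
The evaluation described above keeps every image of one subject out of training, so the test set never shares a subject with the training set (a leave-one-subject-out split). A sketch of that split with hypothetical record names; the per-subject count of 160 images follows from 2 eyes × 20 shots × 4 sets:

```python
# Subject-wise hold-out: all data from one subject are excluded from
# training so that training and test sets share no subject.
# Names and counts are illustrative, taken from the description above.
records = [
    {"subject": s, "image": f"s{s}_img{i}"}
    for s in range(8)      # 8 learning subjects
    for i in range(160)    # 2 eyes x 20 shots x 4 sets = 160 images each
]

held_out = 7               # the one subject not selected for training
train = [r for r in records if r["subject"] != held_out]
test = [r for r in records if r["subject"] == held_out]

assert len(train) == 7 * 160   # teacher data from 7 subjects (1120)
assert len(test) == 160        # 80 normal + 80 abnormal test images
```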

再者,機械學習執行功能102f當使用測試資料對機械學習程式750f的特性進行評價時,例如可使用類激活映射。類激活映射是使輸入至類神經網路的資料中成為由類神經網路輸出的結果的根據的部分明確的技術。作為類激活映射,例如可列舉梯度加權類激活映射。梯度加權類激活映射為以下技術:利用與由卷積類神經網路執行的卷積的特徵相關的分類得分的梯度,來確定輸入至卷積類神經網路的圖像中對分類給予了一定程度以上的影響的區域。Furthermore, when evaluating the characteristics of the machine learning program 750f using the test data, the machine learning execution function 102f may use, for example, class activation mapping. Class activation mapping is a technique for identifying the portion of the data input to a neural network that forms the basis of the result output by the neural network. An example of class activation mapping is gradient-weighted class activation mapping (Grad-CAM), a technique that uses the gradient of the classification score with respect to the convolutional feature maps computed by a convolutional neural network to determine the regions of the input image that influence the classification to more than a certain degree.
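
The Grad-CAM computation described above reduces to two steps: average the class-score gradient over each feature map to obtain a channel weight, then take the ReLU of the weighted sum of the feature maps. A pure-Python sketch with dummy activations and gradients (in practice both come from the last convolutional layer via automatic differentiation):

```python
import random

# Grad-CAM reduced to its core arithmetic with plain Python lists.
# `activations` and `gradients` are dummy stand-ins for the feature maps
# of the last convolutional layer and the class-score gradients w.r.t. them.
random.seed(0)
K, H, W = 4, 7, 7   # number of feature maps and spatial grid (illustrative)
activations = [[[random.random() for _ in range(W)] for _ in range(H)]
               for _ in range(K)]
gradients = [[[random.gauss(0, 1) for _ in range(W)] for _ in range(H)]
             for _ in range(K)]

# alpha_k: spatial average of the class-score gradient over feature map k
weights = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]

# CAM(i, j) = ReLU( sum_k alpha_k * A_k(i, j) )
cam = [[max(0.0, sum(weights[k] * activations[k][i][j] for k in range(K)))
        for j in range(W)] for i in range(H)]

assert len(cam) == H and len(cam[0]) == W
assert all(v >= 0 for row in cam for v in row)  # negative influence is cut off
```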

圖62是表示第六實施例的機械學習程式於預測與淚液油層的厚度相關的檢查結果時,學習用被檢查體的眼睛圖像中所重點考慮的部分的一例的圖。例如,機械學習執行功能102f使用梯度加權類激活映射,評價為:圖62所示的學習用圖像中由圖62所示的橢圓C64包圍的區域對利用機械學習程式750f進行的、對與淚液油層的厚度相關的檢查結果的預測給予了一定程度以上的影響。另外,由圖62所示的橢圓C64包圍的區域所含的三個階段的灰階表示對與淚液油層的厚度相關的檢查結果的預測給予的影響程度,且重疊顯示於描繪出學習用被檢查體的眼睛的角膜的靠近下瞼的區域上。進而,該灰階重疊於學習用被檢查體的眼睛的角膜中下瞼側的約一半上。FIG. 62 is a diagram showing an example of the portion of the eye image of the learning subject that the machine learning program of the sixth embodiment weighs heavily when predicting the examination result related to the thickness of the tear oil layer. For example, using gradient-weighted class activation mapping, the machine learning execution function 102f evaluates that the region enclosed by the ellipse C64 in the learning image shown in FIG. 62 influences, to more than a certain degree, the prediction by the machine learning program 750f of the examination result related to the thickness of the tear oil layer. In addition, the three levels of gray scale contained in the region enclosed by the ellipse C64 in FIG. 62 indicate the degree of influence on that prediction, and are displayed superimposed on the region of the cornea of the learning subject's eye near the lower eyelid. Furthermore, this gray scale overlaps about half of the cornea of the learning subject's eye on the lower-eyelid side.

接下來,圖63是表示利用第六實施例的機械學習執行程式執行的處理一例的流程圖。機械學習執行程式100f執行至少一次圖63所示的處理。Next, FIG. 63 is a flowchart showing an example of the processing executed by the machine learning execution program of the sixth embodiment. The machine learning execution program 100f executes the processing shown in FIG. 63 at least once.

於步驟S111中,教師資料獲取功能101f獲取將學習用圖像資料作為問題、將學習用檢查結果資料作為答案的教師資料。In step S111, the teacher data acquisition function 101f acquires teacher data with the image data for learning as the question and the examination result data for learning as the answer.

於步驟S112中,機械學習執行功能102f將教師資料輸入至機械學習程式750f,使機械學習程式750f進行學習。In step S112, the machine learning execution function 102f inputs the teacher data into the machine learning program 750f, so that the machine learning program 750f learns.

接下來,參照圖64至圖66,對第六實施例的乾眼症檢查程式、乾眼症檢查裝置及乾眼症檢查方法的具體例進行說明。Next, referring to FIG. 64 to FIG. 66, specific examples of the dry eye syndrome inspection program, the dry eye syndrome inspection apparatus, and the dry eye syndrome inspection method of the sixth embodiment will be described.

圖64是表示第六實施例的乾眼症檢查裝置的硬體結構一例的圖。圖64所示的乾眼症檢查裝置20f是於已利用機械學習執行程式100f學習完畢的機械學習裝置700f的推論階段中,使用機械學習裝置700f推斷推論用被檢查體的眼睛中出現的乾眼症的症狀的裝置。另外,如圖64所示,乾眼症檢查裝置20f包括處理器21f、主儲存裝置22f、通訊介面23f、輔助儲存裝置24f、輸入輸出裝置25f、以及匯流排26f。FIG. 64 is a diagram showing an example of the hardware configuration of the dry eye disease inspection apparatus of the sixth embodiment. The dry eye disease inspection apparatus 20f shown in FIG. 64 is an apparatus that, in the inference stage of the machine learning apparatus 700f trained by the machine learning execution program 100f, uses the machine learning apparatus 700f to infer the symptoms of dry eye appearing in the eyes of the inference subject. In addition, as shown in FIG. 64, the dry eye disease inspection apparatus 20f includes a processor 21f, a main storage device 22f, a communication interface 23f, an auxiliary storage device 24f, an input/output device 25f, and a bus bar 26f.

處理器21f例如為CPU,讀出並執行後述的乾眼症檢查程式200f,以達成乾眼症檢查程式200f所具有的各功能。另外,處理器21f亦可讀出並執行乾眼症檢查程式200f以外的程式,以於達成乾眼症檢查程式200f所具有的各功能的基礎上達成必要的功能。The processor 21f is, for example, a CPU, and reads out and executes a dry eye syndrome inspection program 200f described later, so as to achieve each function of the dry eye syndrome inspection program 200f. In addition, the processor 21f may read out and execute programs other than the dry eye disease inspection program 200f, so as to achieve necessary functions in addition to each function of the dry eye disease inspection program 200f.

主儲存裝置22f例如為RAM,預先儲存有由處理器21f讀出並執行的乾眼症檢查程式200f以及其他程式。The main storage device 22f is, for example, a RAM, and stores in advance a dry eye syndrome examination program 200f and other programs read and executed by the processor 21f.

通訊介面23f是用於經由圖64所示的網路NW而與機械學習裝置700f以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 23f is an interface circuit for performing communication with the machine learning apparatus 700f and other devices via the network NW shown in FIG. 64. In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置24f例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 24f is, for example, a hard disk drive, a solid state drive, a flash memory, and a ROM.

輸入輸出裝置25f例如為輸入輸出端口。輸入輸出裝置25f例如連接有圖5所示的鍵盤821f、滑鼠822f、顯示器920f。鍵盤821f及滑鼠822f例如用於輸入為了操作乾眼症檢查裝置20f所必需的資料的作業中。顯示器920f例如顯示乾眼症檢查裝置20f的圖形使用者介面。The input/output device 25f is, for example, an input/output port. The input/output device 25f is connected to, for example, a keyboard 821f, a mouse 822f, and a display 920f shown in FIG. 5. The keyboard 821f and the mouse 822f are used, for example, in the operation of inputting data necessary to operate the dry eye disease inspection apparatus 20f. The display 920f displays, for example, a graphical user interface of the dry eye disease inspection apparatus 20f.

匯流排26f將處理器21f、主儲存裝置22f、通訊介面23f、輔助儲存裝置24f及輸入輸出裝置25f連接,以使該些能夠相互進行資料的收發。The bus bar 26f connects the processor 21f, the main storage device 22f, the communication interface 23f, the auxiliary storage device 24f and the input/output device 25f, so that these can transmit and receive data to and from each other.

圖65是表示第六實施例的乾眼症檢查程式的軟體結構一例的圖。乾眼症檢查裝置20f使用處理器21f讀出並執行乾眼症檢查程式200f,以達成圖65所示的資料獲取功能201f及症狀推斷功能202f。FIG. 65 is a diagram showing an example of a software configuration of the dry eye syndrome examination program of the sixth embodiment. The dry eye disease inspection device 20f uses the processor 21f to read out and execute the dry eye disease inspection program 200f, so as to achieve the data acquisition function 201f and the symptom estimation function 202f shown in FIG. 65 .

資料獲取功能201f獲取推論用圖像資料。推論用圖像資料是表示推論用圖像的資料,所述推論用圖像描繪有假定推論用被檢查體的眼睛被色溫較照射著推論用被檢查體的眼睛的光低的光照射時的推論用被檢查體的眼睛。此種推論用圖像例如為:藉由對描繪有實際上被具有色溫4000 K的光照射的推論用被檢查體的眼睛的圖像調整白平衡而生成、且模擬地描繪有被具有色溫3000 K的光照射的該眼睛的圖像。另外,推論用圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。The data acquisition function 201f acquires inference image data. The inference image data is data representing an inference image that depicts the eye of the inference subject as if it were irradiated with light of a lower color temperature than the light actually irradiating it. Such an inference image is, for example, generated by adjusting the white balance of an image depicting the eye of the inference subject actually irradiated with light having a color temperature of 4000 K, so as to simulate the same eye irradiated with light having a color temperature of 3000 K. In addition, the inference image is photographed using, for example, a camera mounted on a smartphone.
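
The white-balance adjustment described above shifts an image captured under 4000 K light toward the warmer appearance of 3000 K light, which amounts to boosting the red channel and attenuating the blue channel. A per-pixel sketch; the gain values are illustrative, not calibrated to those color temperatures:

```python
# Simulating illumination of a lower colour temperature by white-balance
# adjustment. Lower colour temperature (e.g. 3000 K instead of 4000 K)
# shifts the image toward red; the gains below are illustrative only.

def warm_shift(pixel, r_gain=1.15, b_gain=0.85):
    """Scale an (R, G, B) pixel toward a warmer white balance, clipped to 255."""
    r, g, b = pixel
    return (min(255, round(r * r_gain)), g, min(255, round(b * b_gain)))

pixel_4000k = (180, 170, 160)        # hypothetical pixel under 4000 K light
pixel_3000k = warm_shift(pixel_4000k)

assert pixel_3000k[0] > pixel_4000k[0]   # red channel boosted
assert pixel_3000k[2] < pixel_4000k[2]   # blue channel reduced
```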

症狀推斷功能202f將推論用圖像資料輸入至已利用機械學習執行功能102f學習完畢的機械學習程式750f,並使機械學習程式750f推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。例如,症狀推斷功能202f將推論用圖像資料輸入至機械學習程式750f,以推斷表現推論用被檢查體的眼睛的淚液油層厚度的數值。The symptom inference function 202f inputs the image data for inference to the machine learning program 750f that has been learned by the machine learning execution function 102f, and causes the machine learning program 750f to infer the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom estimation function 202f inputs the image data for inference into the machine learning program 750f, and infers a numerical value representing the thickness of the tear oil layer of the eye of the subject for inference.

然後,症狀推斷功能202f使機械學習程式750f輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。例如,症狀推斷功能202f使機械學習程式750f輸出表示表現推論用被檢查體的眼睛的淚液油層厚度的數值的症狀資料。該情況下,症狀資料例如用於在顯示器920f上顯示進行自我表示、且表現推論用被檢查體的眼睛的淚液油層厚度的數值。Then, the symptom estimating function 202f causes the machine learning program 750f to output symptom data indicating the symptoms of dry eye appearing in the eyes of the subject for inference. For example, the symptom estimation function 202f causes the machine learning program 750f to output symptom data representing a numerical value representing the thickness of the tear oil layer of the eye of the subject for estimation. In this case, the symptom data is used to display on the display 920f, for example, a numerical value that expresses itself and expresses the thickness of the tear oil layer of the eye of the subject for inference.

接下來,參照圖66,對第六實施例的乾眼症檢查程式200f執行的處理一例進行說明。圖66是表示利用第六實施例的乾眼症檢查程式執行的處理一例的流程圖。Next, with reference to FIG. 66 , an example of the processing executed by the dry eye syndrome examination program 200f of the sixth embodiment will be described. FIG. 66 is a flowchart showing an example of processing performed by the dry eye syndrome examination program of the sixth embodiment.

於步驟S121中,資料獲取功能201f獲取推論用圖像資料。In step S121, the data acquisition function 201f acquires image data for inference.

於步驟S122中,症狀推斷功能202f將推論用圖像資料輸入至已學習完畢的機械學習程式750f,以推斷推論用被檢查體的眼睛中出現的乾眼症的症狀,並使機械學習程式750f輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。In step S122, the symptom estimation function 202f inputs the inference image data into the trained machine learning program 750f to infer the symptoms of dry eye appearing in the eyes of the inference subject, and causes the machine learning program 750f to output symptom data representing those symptoms.

以上,對第六實施例的機械學習執行程式、乾眼症檢查程式、機械學習執行裝置、乾眼症檢查裝置、機械學習執行方法及乾眼症檢查方法進行了說明。The machine learning execution program, the dry eye disease inspection program, the machine learning execution device, the dry eye disease inspection device, the machine learning execution method, and the dry eye disease inspection method of the sixth embodiment have been described above.

機械學習執行程式100f具備教師資料獲取功能101f、以及機械學習執行功能102f。The machine learning execution program 100f includes a teacher data acquisition function 101f and a machine learning execution function 102f.

教師資料獲取功能101f獲取將學習用圖像資料作為問題、將學習用檢查結果資料作為答案的教師資料。學習用圖像資料是表示學習用圖像的資料,所述學習用圖像描繪有假定學習用被檢查體的眼睛被色溫較照射著學習用被檢查體的眼睛的光低的光照射時的學習用被檢查體的眼睛。學習用檢查結果資料是表示調查學習用被檢查體的眼睛的淚液油層厚度的檢查結果的資料。The teacher data acquisition function 101f acquires teacher data with the learning image data as the question and the learning examination result data as the answer. The learning image data is data representing a learning image that depicts the eye of the learning subject as if it were irradiated with light of a lower color temperature than the light actually irradiating it. The learning examination result data is data representing the result of an examination investigating the thickness of the tear oil layer of the eye of the learning subject.

機械學習執行功能102f將教師資料輸入至機械學習程式750f,使機械學習程式750f進行學習。The machine learning execution function 102f inputs the teacher data into the machine learning program 750f, and causes the machine learning program 750f to perform learning.

藉此,機械學習執行程式100f可生成基於學習用圖像資料來預測與乾眼症的症狀相關的檢查結果的機械學習程式750f。Thereby, the machine learning execution program 100f can generate the machine learning program 750f for predicting the test results related to the symptoms of dry eye based on the image data for learning.

乾眼症檢查程式200f具備資料獲取功能201f、以及症狀推斷功能202f。The dry eye syndrome examination program 200f includes a data acquisition function 201f and a symptom estimation function 202f.

資料獲取功能201f獲取推論用圖像資料。推論用圖像資料是表示推論用圖像的資料,所述推論用圖像描繪有假定推論用被檢查體的眼睛被色溫較照射著推論用被檢查體的眼睛的光低的光照射時的推論用被檢查體的眼睛。The data acquisition function 201f acquires inference image data. The inference image data is data representing an inference image that depicts the eye of the inference subject as if it were irradiated with light of a lower color temperature than the light actually irradiating it.

症狀推斷功能202f將推論用圖像資料輸入至已利用機械學習執行程式100f學習完畢的機械學習程式750f,以推斷推論用被檢查體的眼睛中出現的乾眼症的症狀。然後,症狀推斷功能202f使機械學習程式750f輸出表示推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料。The symptom estimating function 202f inputs the image data for inference to the machine learning program 750f that has been learned by the machine learning execution program 100f, and thereby infers the symptoms of dry eye appearing in the eyes of the subject for inference. Then, the symptom estimating function 202f causes the machine learning program 750f to output symptom data indicating the symptoms of dry eye appearing in the eyes of the subject for inference.

藉此,乾眼症檢查程式200f無需實際實施與乾眼症的症狀相關的檢查,便可預測與乾眼症的症狀相關的檢查結果。Thereby, the dry eye syndrome test program 200f can predict the test results related to the symptoms of dry eye syndrome without actually carrying out the test related to the symptoms of dry eye syndrome.

接下來,列舉使機械學習程式750f實際進行學習來推斷乾眼症的症狀的例子,並將比較例與第六實施例的實施例加以對比來說明藉由機械學習執行程式100f起到的效果的具體例。Next, an example in which the machine learning program 750f is actually trained to infer the symptoms of dry eye is given, and a specific example of the effect achieved by the machine learning execution program 100f is described by comparing a comparative example with an example of the sixth embodiment.

於該情況下的比較例中,教師資料所含的問題並非學習用圖像資料,而是表示描繪有學習用被檢查體的眼睛被色溫與照射著學習用被檢查體的眼睛的光相等的光照射時的學習用被檢查體的眼睛的圖像的圖像資料。另外,該情況下的比較例是使用所述表現學習用被檢查體的眼睛正常的520個教師資料及表現學習用被檢查體的眼睛異常的600個教師資料來使機械學習程式進行學習的例子。In the comparative example in this case, the question contained in the teacher data is not the learning image data, but image data representing an image of the eye of the learning subject when irradiated with light whose color temperature is equal to that of the light actually irradiating the eye. In addition, the comparative example in this case is an example in which the machine learning program is trained using the 520 pieces of teacher data indicating that the eyes of the learning subject are normal and the 600 pieces of teacher data indicating that the eyes of the learning subject are abnormal.

關於如此般進行了學習的機械學習程式,若使用包含所述圖像資料且表現學習用被檢查體的眼睛正常的80個測試資料、及包含所述圖像資料且表現學習用被檢查體的眼睛異常的80個測試資料來評價特性,則示出預測精度為50%且假陰性率為50%。When the characteristics of the machine learning program trained in this way were evaluated using 80 pieces of test data containing such image data and indicating that the eyes of the learning subject are normal, and 80 pieces of test data containing such image data and indicating that the eyes of the learning subject are abnormal, the prediction accuracy was 50% and the false negative rate was 50%.

另一方面,關於利用機械學習執行程式100f進行了學習的機械學習程式750f,若使用包含學習用圖像資料的測試資料來評價特性,則示出預測精度為74%且假陰性率為26%。該特性優於比較例的機械學習程式的特性。On the other hand, about the machine learning program 750f learned by the machine learning execution program 100f, when the characteristics were evaluated using the test data including the learning image data, the prediction accuracy was 74% and the false negative rate was 26%. . This characteristic is superior to that of the machine learning program of the comparative example.

再者,預測精度為異常群組所含的學習用圖像中由機械學習程式預測為異常的張數X、及正常群組所含的學習用圖像中由機械學習程式預測為正常的張數Y的合計X+Y相對於測試資料的總數的比例。因此,該比較例的情況下,預測精度是利用((X+Y)/160)×100來算出。The prediction accuracy is the ratio of the sum X+Y, where X is the number of images in the abnormal group predicted to be abnormal by the machine learning program and Y is the number of images in the normal group predicted to be normal by the machine learning program, to the total number of test data. Therefore, in the case of this comparative example, the prediction accuracy is calculated as ((X+Y)/160)×100.

另外,假陰性率為異常群組所含的學習用圖像中由機械學習程式預測為正常的張數相對於異常群組所含的學習用圖像的合計的比例。因此,該比較例的情況下,假陰性率利用((80-X)/80)×100來算出。In addition, the false negative rate is the ratio of the number of images in the abnormal group predicted to be normal by the machine learning program to the total number of learning images in the abnormal group. Therefore, in the case of this comparative example, the false negative rate is calculated as ((80-X)/80)×100.
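
The two formulas above can be written directly as functions. X is the number of abnormal-group images predicted abnormal, Y the number of normal-group images predicted normal, with 80 test images per group; the counts in the example call are hypothetical:

```python
# Evaluation formulas quoted above, for 80 test images per group (160 total).
def prediction_accuracy(x, y, total=160):
    """((X + Y) / total) x 100: correct predictions over all test data."""
    return (x + y) * 100 / total

def false_negative_rate(x, per_group=80):
    """((80 - X) / 80) x 100: abnormal images wrongly predicted normal."""
    return (per_group - x) * 100 / per_group

# Hypothetical counts: 60 of 80 abnormal and 58 of 80 normal images correct.
assert prediction_accuracy(60, 58) == 73.75
assert false_negative_rate(60) == 25.0
```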

再者,機械學習執行程式100f所具有的功能的至少一部分可藉由包括電路部的硬體達成。同樣地,乾眼症檢查程式200f所具有的功能的至少一部分可藉由包括電路部的硬體達成。另外,此種硬體例如為LSI、ASIC、FPGA、GPU。Furthermore, at least a part of the functions of the machine learning execution program 100f can be realized by hardware including a circuit portion. Likewise, at least a part of the functions of the dry eye syndrome examination program 200f can be realized by hardware including a circuit unit. In addition, such hardware is, for example, LSI, ASIC, FPGA, and GPU.

另外,機械學習執行程式100f所具有的功能的至少一部分亦可藉由軟體與硬體的協作來達成。同樣地,乾眼症檢查程式200f所具有的功能的至少一部分亦可藉由軟體與硬體的協作來達成。另外,該些硬體可統合成一個,亦可分成多個。In addition, at least a part of the functions of the machine learning execution program 100f can also be achieved by the cooperation of software and hardware. Similarly, at least a part of the functions of the dry eye disease checking program 200f can also be achieved by the cooperation of software and hardware. In addition, these hardwares may be integrated into one, or may be divided into multiple pieces.

另外,於第六實施例中,列舉機械學習執行裝置10f、機械學習裝置700f以及乾眼症檢查裝置20f為相互獨立的裝置的情況為例進行了說明,但並不限定於此。該些裝置亦可作為一個裝置來達成。In addition, in the sixth embodiment, the case where the machine learning execution device 10f, the machine learning device 700f, and the dry eye disease inspection device 20f are independent devices has been described as an example, but the present invention is not limited to this. The devices can also be implemented as one device.

接下來,參照圖67至圖72,對所述實施形態的第七實施例的具體例進行說明。與第六實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法不同,第七實施例的機械學習執行程式、機械學習執行裝置及機械學習執行方法使用包含後述的學習用角膜圖像資料而非學習用圖像資料作為問題的教師資料。Next, a specific example of the seventh example of the above-described embodiment will be described with reference to FIG. 67 to FIG. 72. Unlike the machine learning execution program, machine learning execution apparatus, and machine learning execution method of the sixth embodiment, the machine learning execution program, machine learning execution apparatus, and machine learning execution method of the seventh embodiment use teacher data whose question is the later-described learning corneal image data instead of the learning image data.

圖67是表示第七實施例的機械學習執行裝置的硬體結構一例的圖。圖67所示的機械學習執行裝置10g是於後述的機械學習裝置700g的學習階段中使機械學習裝置700g執行機械學習的裝置。另外,如圖67所示,機械學習執行裝置10g包括處理器11g、主儲存裝置12g、通訊介面13g、輔助儲存裝置14g、輸入輸出裝置15g、以及匯流排16g。FIG. 67 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the seventh embodiment. The machine learning execution device 10g shown in FIG. 67 is a device for causing the machine learning device 700g to execute machine learning in a learning phase of the machine learning device 700g described later. In addition, as shown in FIG. 67, the machine learning execution device 10g includes a processor 11g, a main storage device 12g, a communication interface 13g, an auxiliary storage device 14g, an input/output device 15g, and a bus bar 16g.

處理器11g例如為CPU,讀出並執行後述的機械學習執行程式100g,以達成機械學習執行程式100g所具有的各功能。另外,處理器11g亦可讀出並執行機械學習執行程式100g以外的程式,以於達成機械學習執行程式100g所具有的各功能的基礎上達成必要的功能。The processor 11g is, for example, a CPU, and reads out and executes the machine learning execution program 100g described later to achieve each function of the machine learning execution program 100g. In addition, the processor 11g may read out and execute programs other than the machine learning execution program 100g, so as to achieve necessary functions in addition to each function of the machine learning execution program 100g.

主儲存裝置12g例如為RAM,預先儲存有由處理器11g讀出並執行的機械學習執行程式100g以及其他程式。The main storage device 12g is, for example, a RAM, and stores in advance a machine learning execution program 100g and other programs read and executed by the processor 11g.

通訊介面13g是用於經由圖8所示的網路NW而與機械學習裝置700g以及其他設備執行通訊的介面電路。另外,網路NW例如為LAN、內部網路。The communication interface 13g is an interface circuit for performing communication with the machine learning apparatus 700g and other devices via the network NW shown in FIG. 8 . In addition, the network NW is, for example, a LAN or an intranet.

輔助儲存裝置14g例如為硬碟驅動機、固態驅動機、快閃記憶體、ROM。The auxiliary storage device 14g is, for example, a hard disk drive, a solid state drive, a flash memory, or a ROM.

輸入輸出裝置15g例如為輸入輸出端口。輸入輸出裝置15g例如連接有圖1所示的鍵盤811g、滑鼠812g、顯示器910g。鍵盤811g及滑鼠812g例如用於輸入為了操作機械學習執行裝置10g所必需的資料的作業中。顯示器910g例如顯示機械學習執行裝置10g的圖形使用者介面。The input/output device 15g is, for example, an input/output port. The input/output device 15g is connected to, for example, a keyboard 811g, a mouse 812g, and a display 910g shown in FIG. 1 . The keyboard 811g and the mouse 812g are used, for example, for inputting data necessary for operating the machine learning execution device 10g. The display 910g displays, for example, a graphical user interface of the machine learning execution device 10g.

匯流排16g將處理器11g、主儲存裝置12g、通訊介面13g、輔助儲存裝置14g以及輸入輸出裝置15g連接,以使該些能夠相互進行資料的收發。The bus bar 16g connects the processor 11g, the main storage device 12g, the communication interface 13g, the auxiliary storage device 14g, and the input/output device 15g, so that these can send and receive data to and from each other.

圖68是表示第七實施例的機械學習執行程式的軟體結構一例的圖。機械學習執行裝置10g使用處理器11g讀出並執行機械學習執行程式100g,以達成圖68所示的教師資料獲取功能101g及機械學習執行功能102g。FIG. 68 is a diagram showing an example of the software configuration of the machine learning execution program of the seventh embodiment. The machine learning execution device 10g uses the processor 11g to read out and execute the machine learning execution program 100g, so as to achieve the teacher data acquisition function 101g and the machine learning execution function 102g shown in FIG. 68 .

教師資料獲取功能101g獲取將一個學習用角膜圖像資料作為問題、將一個學習用檢查結果資料作為答案的教師資料。The teacher data acquisition function 101g acquires teacher data with a corneal image data for learning as a question and a learning examination result data as an answer.

學習用角膜圖像資料是表示學習用角膜圖像的資料,所述學習用角膜圖像描繪出學習用被檢查體的眼睛的角膜的至少一部分。學習用角膜圖像可為僅描繪出學習用被檢查體的眼睛的角膜的至少一部分的圖像,亦可為亦描繪出學習用被檢查體的眼睛的角膜以外部分的圖像。另外,學習用角膜圖像可為自描繪有學習用被檢查體的眼睛的學習用圖像中剪切描繪出學習用被檢查體的眼睛的角膜的至少一部分的區域而得的圖像。The corneal image data for learning is data representing a corneal image for learning that depicts at least a part of the cornea of the eye of the subject for learning. The corneal image for learning may be an image that depicts only at least a part of the cornea of the eye of the subject for learning, or an image that also depicts parts other than the cornea of the eye of the subject for learning. In addition, the corneal image for learning may be an image obtained by cutting out an area in which at least a part of the cornea of the eye of the subject for learning is drawn from the image for learning in which the eye of the subject for learning is drawn.

另外,關於學習用角膜圖像,與描繪出學習用被檢查體的眼睛的角膜中靠近學習用被檢查體的上瞼的部分及靠近下瞼的部分兩者的情況相比,僅描繪出靠近學習用被檢查體的下瞼的部分時可提高機械學習程式750g的預測精度。例如,於學習用角膜圖像僅描繪出學習用被檢查體的眼睛的角膜中下瞼側的一半部分時,可進一步提高使用機械學習程式750g預測乾眼症的症狀時的精度。再者,學習用角膜圖像例如是使用搭載於智慧型手機上的照相機拍攝而成。Regarding the learning corneal image, the prediction accuracy of the machine learning program 750g can be improved when only the portion of the cornea of the learning subject's eye near the lower eyelid is depicted, compared with when both the portion near the upper eyelid and the portion near the lower eyelid are depicted. For example, when the learning corneal image depicts only the lower-eyelid-side half of the cornea of the learning subject's eye, the accuracy of predicting the symptoms of dry eye using the machine learning program 750g can be further improved. The learning corneal image is photographed using, for example, a camera mounted on a smartphone.
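
Producing a learning corneal image that shows only the lower-eyelid-side half of the cornea amounts to cropping the lower half of the cornea's bounding box out of the full eye image. A sketch with a nested list standing in for pixel rows; the bounding-box coordinates are assumed to come from some detector and are illustrative:

```python
# Cropping the lower-eyelid half of the cornea from a full eye image.
# The cornea bounding box (top, bottom, left, right) is assumed known;
# all values below are illustrative.

def crop_lower_cornea(image, top, bottom, left, right):
    """Return only the lower half of the cornea bounding box (rows x cols)."""
    mid = (top + bottom) // 2          # vertical midpoint of the cornea
    return [row[left:right] for row in image[mid:bottom]]

image = [[(r, c) for c in range(100)] for r in range(100)]  # 100x100 "image"
lower = crop_lower_cornea(image, top=20, bottom=80, left=30, right=70)

assert len(lower) == 30        # half of the 60-row cornea box
assert len(lower[0]) == 40     # full cornea width retained
```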

學習用檢查結果資料是構成教師資料的答案的一部分、且表示與乾眼症的症狀相關的檢查結果的資料。作為此種檢查的結果,例如可列舉:基於使用光干涉計評價淚液油層的厚度的檢查來表現淚液油層的厚度的、為0以上且1以下的數值。The learning examination result data is data that constitutes part of the answer of the teacher data and represents an examination result related to the symptoms of dry eye. An example of such an examination result is a numerical value of 0 or more and 1 or less expressing the thickness of the tear oil layer, based on an examination that evaluates the thickness of the tear oil layer using an optical interferometer.

於該數值為0以上且小於0.5的情況下,使用光干涉計進行評價而得的淚液油層的厚度小於75 μm,因此表現出學習用被檢查體的眼睛正常,不需要由眼科醫生進行診察。另外,於該數值為0.5以上且1以下的情況下,使用光干涉計進行評價而得的淚液油層的厚度為75 μm以上,因此表現出學習用被檢查體的眼睛異常,需要由眼科醫生進行診察。When the numerical value is 0 or more and less than 0.5, the thickness of the tear oil layer evaluated using the optical interferometer is less than 75 μm, and therefore the eyes of the subject for study appear to be normal, and examination by an ophthalmologist is unnecessary. In addition, when the numerical value is 0.5 or more and 1 or less, the thickness of the tear oil layer evaluated using the optical interferometer is 75 μm or more, and therefore, the eye of the subject for study is abnormal, and an ophthalmologist is required to perform the evaluation. diagnosis.
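
The decision rule above maps the numeric label to a binary judgment: values below 0.5 correspond to an oil-layer thickness under 75 μm (normal, no ophthalmologist needed), values of 0.5 or more to 75 μm or more (abnormal, examination advised). A direct sketch:

```python
# Threshold rule stated above: score in [0, 0.5) -> normal (oil layer
# under 75 um), score in [0.5, 1] -> abnormal (examination advised).

def interpret(score):
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    return "abnormal" if score >= 0.5 else "normal"

assert interpret(0.3) == "normal"     # no examination by an ophthalmologist
assert interpret(0.5) == "abnormal"   # boundary belongs to the abnormal class
assert interpret(0.9) == "abnormal"
```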

例如,教師資料獲取功能101g獲取1280個教師資料。該情況下,1280個學習用角膜圖像資料分別表示藉由實施四組使用搭載於智慧型手機上的照相機對8名學習用被檢查體的兩個眼睛分別進行20次拍攝的處理而獲取的1280張學習用圖像。另外,該情況下,1280個學習用檢查結果資料表示1280個分別表示使用光干涉計進行評價而得的淚液油層的厚度的、為0以上且1以下的數值。For example, the teacher data acquisition function 101g acquires 1280 pieces of teacher data. In this case, the 1280 pieces of learning corneal image data respectively represent the 1280 learning images obtained by performing four sets of a process in which each of the two eyes of 8 learning subjects is photographed 20 times using a camera mounted on a smartphone. In addition, in this case, the 1280 pieces of learning examination result data represent 1280 numerical values of 0 or more and 1 or less, each expressing the thickness of the tear oil layer evaluated using an optical interferometer.

再者,於以下的說明中,列舉以下情況為例進行說明:1280個學習用檢查結果資料包含表示表現學習用被檢查體的眼睛正常的為0以上且小於0.5的數值的600個學習用檢查結果資料、以及表示表現學習用被檢查體的眼睛異常的為0.5以上且1以下的數值的680個學習用檢查結果資料。In the following description, the case is taken as an example in which the 1280 pieces of learning examination result data include 600 pieces of learning examination result data with a numerical value of 0 or more and less than 0.5, indicating that the eyes of the learning subject are normal, and 680 pieces of learning examination result data with a numerical value of 0.5 or more and 1 or less, indicating that the eyes of the learning subject are abnormal.

機械學習執行功能102g將教師資料輸入至機械學習裝置700g中所安裝的機械學習程式750g,使機械學習程式750g進行學習。例如,機械學習執行功能102g使包括卷積類神經網路的機械學習程式750g藉由後向傳播進行學習。The machine learning execution function 102g inputs the teacher data into the machine learning program 750g installed in the machine learning device 700g, and causes the machine learning program 750g to learn. For example, the machine learning execution function 102g causes a machine learning program 750g including a convolutional neural network to learn by backpropagation.

例如,機械學習執行功能102g將以下520個教師資料輸入至機械學習程式750g,即,所述520個教師資料包含表示表現學習用被檢查體的眼睛正常的、為0以上且小於0.5的數值的學習用檢查結果資料。該520個教師資料是自選自所述8名中的7名獲取的教師資料。For example, the machine learning execution function 102g inputs into the machine learning program 750g the 520 pieces of teacher data whose learning examination result data have a numerical value of 0 or more and less than 0.5, indicating that the eyes of the learning subject are normal. These 520 pieces of teacher data were obtained from 7 subjects selected from the 8 subjects.

另外,例如,機械學習執行功能102g將以下600個教師資料輸入至機械學習程式750g,即,所述600個教師資料包含表示表現學習用被檢查體的眼睛異常的、為0.5以上且1以下的數值的學習用檢查結果資料。該600個教師資料是自選自所述8名中的7名獲取的教師資料。In addition, for example, the machine learning execution function 102g inputs into the machine learning program 750g the 600 pieces of teacher data whose learning examination result data have a numerical value of 0.5 or more and 1 or less, indicating that the eyes of the learning subject are abnormal. These 600 pieces of teacher data were obtained from the same 7 subjects selected from the 8 subjects.

然後,機械學習執行功能102g利用所述1120個教師資料使機械學習程式750g進行學習。Then, the machine learning execution function 102g makes the machine learning program 750g learn by using the 1120 teacher data.

另外,機械學習執行功能102g中,越是包含以下學習用角膜圖像資料的教師資料,越優先輸入至機械學習程式,即,所述學習用角膜圖像資料表示描繪出學習用被檢查體的眼睛的角膜中靠近學習用被檢查體的下瞼的部分的學習用角膜圖像。其原因在於:如上所述,關於學習用角膜圖像,與描繪出學習用被檢查體的眼睛的角膜中靠近學習用被檢查體的上瞼的部分及靠近下瞼的部分兩者的情況相比,僅描繪出靠近學習用被檢查體的下瞼的部分時可提高機械學習程式750g的預測精度。In addition, the machine learning execution function 102g preferentially inputs into the machine learning program teacher data containing learning corneal image data that represent a learning corneal image depicting the portion of the cornea of the learning subject's eye near the lower eyelid. This is because, as described above, the prediction accuracy of the machine learning program 750g can be improved when the learning corneal image depicts only the portion near the lower eyelid of the learning subject, compared with when it depicts both the portion near the upper eyelid and the portion near the lower eyelid.

Further, for example, the machine learning execution function 102g may input into the machine learning program 750g, as test data, the following 80 items of teacher data: teacher data indicating that the eye of the learning subject is normal and not used for training the machine learning program 750g. These 80 items of teacher data were obtained from the 1 subject not selected among the 7.

Likewise, for example, the machine learning execution function 102g may input into the machine learning program 750g, as test data, the following 80 items of teacher data: teacher data indicating that the eye of the learning subject is abnormal and not used for training the machine learning program 750g. These 80 items of teacher data were also obtained from the 1 subject not selected among the 7.

Thereby, the machine learning execution function 102g can evaluate the characteristics of the machine learning program 750g obtained by training it with the 1120 items of teacher data.

Furthermore, when evaluating the characteristics of the machine learning program 750g using the test data, the machine learning execution function 102g may use, for example, class activation mapping. Class activation mapping is a technique for making explicit which part of the data input to a neural network forms the basis of the result the neural network outputs. One example of class activation mapping is gradient-weighted class activation mapping (Grad-CAM). Grad-CAM is a technique that uses the gradients of a classification score with respect to the convolutional features computed by a convolutional neural network to identify the regions of the input image that influenced the classification to at least a certain degree.
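The core Grad-CAM step can be illustrated in a few lines: the gradients of the class score are pooled into per-channel weights, the feature maps are combined with those weights, and a ReLU keeps only positively contributing regions. The following is a minimal NumPy sketch of that computation, not the patented evaluation procedure itself.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM core. `feature_maps` and `gradients` are assumed to be
    (K, H, W) arrays taken from the last convolutional layer: the activations
    and the gradients of the classification score with respect to them."""
    # Per-channel weights: global average pooling of the gradients.
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted combination of the feature maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Upsampled to the input resolution, the resulting map highlights the image regions that influenced the classification to at least a certain degree.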

Next, an example of the processing executed by the machine learning execution program 100g of the seventh embodiment will be described with reference to FIG. 69. FIG. 69 is a flowchart showing an example of the processing executed by the machine learning execution program of the seventh embodiment. The machine learning execution program 100g executes the processing shown in FIG. 69 at least once.

In step S131, the teacher data acquisition function 101g acquires teacher data in which the learning corneal image data is the question and the learning examination result data is the answer.

In step S132, the machine learning execution function 102g inputs the teacher data into the machine learning program 750g and trains the machine learning program 750g.

Next, specific examples of the dry eye examination program, dry eye examination apparatus, and dry eye examination method of the seventh embodiment will be described with reference to FIG. 70 to FIG. 72. Unlike their counterparts of the sixth embodiment, the dry eye examination program, apparatus, and method of the seventh embodiment do not acquire inference image data; instead, they acquire the inference corneal image data described later.

FIG. 70 is a diagram showing an example of the hardware configuration of the dry eye examination apparatus of the seventh embodiment. The dry eye examination apparatus 20g shown in FIG. 70 is an apparatus that, in the inference stage of the machine learning apparatus 700g trained by the machine learning execution program 100g, uses the machine learning apparatus 700g to infer the symptoms of dry eye appearing in the eye of the inference subject. As shown in FIG. 70, the dry eye examination apparatus 20g includes a processor 21g, a main storage device 22g, a communication interface 23g, an auxiliary storage device 24g, an input/output device 25g, and a bus 26g.

The processor 21g is, for example, a CPU; it reads out and executes the dry eye examination program 200g described later to realize the functions of the dry eye examination program 200g. The processor 21g may also read out and execute programs other than the dry eye examination program 200g in order to realize, in addition to the functions of the dry eye examination program 200g, any other necessary functions.

The main storage device 22g is, for example, a RAM, and stores in advance the dry eye examination program 200g and the other programs read out and executed by the processor 21g.

The communication interface 23g is an interface circuit for communicating with the machine learning apparatus 700g and other devices via the network NW shown in FIG. 70. The network NW is, for example, a LAN or an intranet.

The auxiliary storage device 24g is, for example, a hard disk drive, a solid state drive, flash memory, or a ROM.

The input/output device 25g is, for example, an input/output port. The keyboard 821g, mouse 822g, and display 920g shown in FIG. 70 are connected to the input/output device 25g. The keyboard 821g and mouse 822g are used, for example, for entering the data necessary to operate the dry eye examination apparatus 20g. The display 920g displays, for example, the graphical user interface of the dry eye examination apparatus 20g.

The bus 26g connects the processor 21g, the main storage device 22g, the communication interface 23g, the auxiliary storage device 24g, and the input/output device 25g so that they can exchange data with one another.

FIG. 71 is a diagram showing an example of the software configuration of the dry eye examination program of the seventh embodiment. The dry eye examination apparatus 20g uses the processor 21g to read out and execute the dry eye examination program 200g, thereby realizing the data acquisition function 201g and the symptom inference function 202g shown in FIG. 71.

The data acquisition function 201g acquires inference corneal image data. The inference corneal image data is data representing an inference corneal image that depicts at least a part of the cornea of the eye of the inference subject. The inference corneal image may be an image depicting only at least a part of the cornea of the inference subject's eye, or an image that also depicts portions other than the cornea of the inference subject's eye. The inference corneal image may also be an image obtained by cropping, from an inference image in which the inference subject's eye is depicted, a region depicting at least a part of the cornea of the inference subject's eye.

When the machine learning program 750g has been trained with teacher data whose question is learning corneal image data representing a learning corneal image depicting only the portion of the learning subject close to the lower eyelid, the inference corneal image preferably likewise depicts only the portion of the inference subject close to the lower eyelid. This allows the dry eye examination program 200g to use the machine learning program 750g to predict the symptoms of dry eye of the inference subject with higher accuracy. For example, when the inference corneal image depicts only the lower-eyelid half of the cornea of the inference subject's eye, the accuracy with which the machine learning program 750g predicts the symptoms of dry eye can be further improved. The inference corneal image is captured, for example, with a camera mounted on a smartphone.
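The lower-eyelid-half preprocessing mentioned above can be sketched as a simple crop, assuming the corneal image is stored as an array oriented with the lower eyelid at the bottom of the frame. The function name and orientation assumption are illustrative, not taken from the patent.

```python
import numpy as np

def lower_eyelid_half(cornea_img):
    """Keep only the lower half of a cornea image of shape (H, W, C) or (H, W),
    i.e. the half on the lower-eyelid side when the image is oriented with the
    lower eyelid at the bottom."""
    height = cornea_img.shape[0]
    return cornea_img[height // 2:, ...]
```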

The symptom inference function 202g inputs the inference corneal image data into the machine learning program 750g trained by the machine learning execution function 102g, and causes the machine learning program 750g to infer the symptoms of dry eye appearing in the eye of the inference subject. For example, the symptom inference function 202g inputs the inference corneal image data into the machine learning program 750g to infer a numerical value representing the tear oil layer thickness of the inference subject's eye.

The symptom inference function 202g may also input the inference corneal image data into a machine learning program into which teacher data containing learning corneal image data, representing a learning corneal image that depicts the portion of the cornea of the learning subject's eye close to the lower eyelid, was input with higher priority.

The symptom inference function 202g then causes the machine learning program 750g to output symptom data representing the symptoms of dry eye appearing in the eye of the inference subject. For example, the symptom inference function 202g causes the machine learning program 750g to output symptom data representing a numerical value of the tear oil layer thickness of the inference subject's eye. In this case, the symptom data is used, for example, to display on the display 920g the numerical value representing the tear oil layer thickness of the inference subject's eye.

Next, an example of the processing executed by the dry eye examination program 200g of the seventh embodiment will be described with reference to FIG. 72. FIG. 72 is a flowchart showing an example of the processing executed by the dry eye examination program of the seventh embodiment.

In step S141, the data acquisition function 201g acquires inference corneal image data.

In step S142, the symptom inference function 202g inputs the inference corneal image data into the trained machine learning program 750g to infer the symptoms of dry eye appearing in the eye of the inference subject, and causes the machine learning program 750g to output symptom data representing those symptoms.

The machine learning execution program, dry eye examination program, machine learning execution apparatus, dry eye examination apparatus, machine learning execution method, and dry eye examination method of the seventh embodiment have been described above.

The machine learning execution program 100g includes the teacher data acquisition function 101g and the machine learning execution function 102g.

The teacher data acquisition function 101g acquires teacher data in which the learning corneal image data is the question and the learning examination result data is the answer. The learning corneal image data is data representing a learning corneal image depicting at least a part of the cornea of the eye of the learning subject. The learning examination result data is data representing the result of an examination investigating the tear oil layer thickness of the learning subject's eye.

The machine learning execution function 102g inputs the teacher data into the machine learning program 750g and trains the machine learning program 750g.

Thereby, the machine learning execution program 100g can generate a machine learning program 750g that predicts examination results related to the symptoms of dry eye on the basis of learning corneal image data.

In addition, in the machine learning execution program 100g, teacher data containing learning corneal image data that represents a learning corneal image depicting the portion of the cornea of the learning subject's eye close to the lower eyelid is input to the machine learning program with higher priority.

Thereby, the machine learning execution program 100g can generate a machine learning program 750g that predicts examination results related to the symptoms of dry eye with still higher accuracy on the basis of learning corneal image data.

The dry eye examination program 200g includes the data acquisition function 201g and the symptom inference function 202g.

The data acquisition function 201g acquires inference corneal image data. The inference corneal image data is data representing an inference corneal image that depicts at least a part of the cornea of the eye of the inference subject.

The symptom inference function 202g inputs the inference corneal image data into the machine learning program 750g trained by the machine learning execution program 100g to infer the symptoms of dry eye appearing in the eye of the inference subject. The symptom inference function 202g then causes the machine learning program 750g to output symptom data representing those symptoms.

Thereby, the dry eye examination program 200g can predict examination results related to the symptoms of dry eye without actually performing an examination related to those symptoms.

The dry eye examination program 200g may also input the inference corneal image data into a machine learning program 750g into which teacher data containing learning corneal image data, representing a learning corneal image that depicts the portion of the cornea of the learning subject's eye close to the lower eyelid, was input with higher priority, and cause that machine learning program 750g to infer the symptoms of dry eye appearing in the eye of the inference subject.

Thereby, the dry eye examination program 200g can predict examination results related to the symptoms of dry eye with still higher accuracy without actually performing an examination related to those symptoms.

Next, an example in which the machine learning program 750g is actually trained to infer the symptoms of dry eye is given, and a concrete example of the effects achieved by the machine learning execution program 100g is described by comparing a comparative example with an example of the seventh embodiment.

In the comparative example in this case, the question contained in the teacher data is not learning corneal image data but image data representing an image in which the eye of the learning subject is depicted. The comparative example trains the machine learning program using the aforementioned 520 items of teacher data indicating that the learning subject's eye is normal and 600 items indicating that it is abnormal.

When the characteristics of the machine learning program trained in this way were evaluated using 80 items of test data containing such image data and indicating that the learning subject's eye is normal, together with 80 items containing such image data and indicating that the eye is abnormal, the program showed a prediction accuracy of 50% and a false negative rate of 50%.

In contrast, the machine learning program 750g trained by the machine learning execution program 100g using only learning corneal image data depicting just the entire cornea of the learning subject's eye showed a prediction accuracy of 69% and a false negative rate of 31% when evaluated with test data containing the same kind of learning corneal image data. These characteristics are superior to those of the machine learning program of the comparative example.

Likewise, the machine learning program 750g trained by the machine learning execution program 100g using only learning corneal image data depicting just the lower-eyelid half of the cornea of the learning subject's eye showed a prediction accuracy of 78% and a false negative rate of 22% when evaluated with test data containing the same kind of learning corneal image data. These characteristics are also superior to those of the machine learning program of the comparative example.

Here, the prediction accuracy is the ratio, relative to the total number of test data, of the sum X+Y, where X is the number of learning images in the abnormal group that the machine learning program predicted as abnormal and Y is the number of learning images in the normal group that it predicted as normal. In this comparative example, the prediction accuracy is therefore calculated as ((X+Y)/160)×100.

The false negative rate is the ratio of the number of learning images in the abnormal group that the machine learning program predicted as normal to the total number of learning images in the abnormal group. In this comparative example, the false negative rate is therefore calculated as ((80−X)/80)×100.
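The two formulas above translate directly into code; the function and parameter names below are illustrative:

```python
def prediction_accuracy(x, y, total=160):
    """((X + Y) / total) * 100: X abnormal images predicted abnormal,
    Y normal images predicted normal, out of `total` test data."""
    return (x + y) / total * 100

def false_negative_rate(x, abnormal_total=80):
    """((80 - X) / 80) * 100: share of abnormal images predicted normal."""
    return (abnormal_total - x) / abnormal_total * 100
```

For instance, X = Y = 40 reproduces the comparative example's 50% prediction accuracy and 50% false negative rate.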

At least some of the functions of the machine learning execution program 100g may be realized by hardware including a circuit unit. Likewise, at least some of the functions of the dry eye examination program 200g may be realized by hardware including a circuit unit. Such hardware is, for example, an LSI, an ASIC, an FPGA, or a GPU.

At least some of the functions of the machine learning execution program 100g may also be realized by cooperation between software and hardware. Likewise, at least some of the functions of the dry eye examination program 200g may be realized by cooperation between software and hardware. Such hardware may be integrated into a single unit or divided into multiple units.

In the seventh embodiment, the case where the machine learning execution apparatus 10g, the machine learning apparatus 700g, and the dry eye examination apparatus 20g are mutually independent apparatuses was described as an example, but the embodiment is not limited to this. These apparatuses may also be realized as a single apparatus.

[Modifications of the sixth and seventh embodiments]
In the sixth embodiment described above, the case where the teacher data acquisition function 101f acquires teacher data that does not contain, as the question, the learning corneal image data described in the seventh embodiment was given as an example, but the embodiment is not limited to this. The teacher data acquisition function 101f may also acquire teacher data containing, as the question, learning corneal image data in place of the learning image data, the learning corneal image data depicting the cornea of the learning subject as it would appear if the eye of the learning subject were illuminated by light of a lower color temperature than the light actually illuminating it. In this case, the data acquisition function 201f acquires, in place of the inference image data, inference corneal image data depicting the cornea of the inference subject as it would appear if the eye of the inference subject were illuminated by light of a lower color temperature than the light actually illuminating it. Further, in this case, the symptom inference function 202f causes the machine learning program 750f to infer the symptoms of dry eye appearing in the eye of the inference subject on the basis of that inference corneal image data instead of the inference image data.

In the seventh embodiment described above, the case where the teacher data acquisition function 101g acquires teacher data that does not contain, as the question, the learning image data described in the sixth embodiment, that is, image data depicting the eye of the learning subject as it would appear if illuminated by light of a lower color temperature than the light actually illuminating it, was given as an example, but the seventh embodiment is not limited to this. The teacher data acquisition function 101g may also acquire, as the question, teacher data containing learning corneal image data depicting the cornea of the learning subject as it would appear if the eye of the learning subject were illuminated by light of a lower color temperature than the light actually illuminating it. In this case, the data acquisition function 201g acquires inference corneal image data depicting the cornea of the inference subject as it would appear if the eye of the inference subject were illuminated by light of a lower color temperature than the light actually illuminating it. Further, in this case, the symptom inference function 202g infers the symptoms of dry eye appearing in the eye of the inference subject on the basis of that inference corneal image data.
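The patent does not state how an image is made to depict the cornea as if lit by a lower-color-temperature (warmer) source; one simple way to approximate such re-lighting in preprocessing is to re-scale the red and blue channels of an RGB image. The gain values below are purely illustrative assumptions.

```python
import numpy as np

def simulate_lower_color_temperature(img, red_gain=1.15, blue_gain=0.85):
    """Approximate warmer illumination of an (H, W, 3) RGB image by boosting
    the red channel and attenuating the blue channel. The gain values are
    illustrative assumptions, not taken from the patent."""
    warmed = img.astype(np.float64)  # work on a float copy of the input
    warmed[..., 0] *= red_gain       # red channel
    warmed[..., 2] *= blue_gain      # blue channel
    return np.clip(warmed, 0.0, 255.0)
```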

Next, a concrete example will be described of the effects achieved when training is performed using teacher data containing, as the question, learning corneal image data depicting the cornea of the learning subject as it would appear if the eye of the learning subject were illuminated by light of a lower color temperature than the light actually illuminating it, and the symptoms of dry eye appearing in the eye of the inference subject are then inferred on the basis of inference corneal image data depicting the cornea of the inference subject as it would appear if the eye of the inference subject were illuminated by light of a lower color temperature than the light actually illuminating it. The following description takes as an example the case where the teacher data acquisition function 101f acquires such learning corneal image data and the data acquisition function 201f acquires such inference corneal image data.

In the comparative example in this case, the question contained in the teacher data is not learning corneal image data depicting the cornea of the learning subject as it would appear under light of a lower color temperature than the light actually illuminating the learning subject's eye, but image data representing an image depicting the cornea of the learning subject's eye under light of the same color temperature as the light actually illuminating it. The comparative example trains the machine learning program using the aforementioned 520 items of teacher data indicating that the learning subject's eye is normal and 600 items indicating that it is abnormal.

When the characteristics of the machine learning program trained in this way were evaluated using 80 items of test data containing such image data and indicating that the learning subject's eye is normal, together with 80 items containing such image data and indicating that the eye is abnormal, the program showed a prediction accuracy of 50% and a false negative rate of 50%.

In contrast, the machine learning program 750f trained by the machine learning execution program 100f using only learning corneal image data depicting just the entire cornea of the eye of the learning subject as it would appear under light of a lower color temperature than the light actually illuminating it showed a prediction accuracy of 81% and a false negative rate of 15% when evaluated with test data containing the same kind of learning corneal image data. These characteristics are superior to those of the machine learning program of the comparative example.

Furthermore, the machine learning program 750f was also trained by the machine learning execution program 100f using only learning corneal image data depicting the half of the cornea on the lower eyelid side of the eye of the learning subject as it would appear under the same assumed lower-color-temperature illumination. When its characteristics were evaluated using test data containing learning corneal image data depicting the cornea of the same learning subject under the same assumed illumination, the program showed a prediction accuracy of 89% and a false negative rate of 5%. These characteristics are also superior to those of the machine learning program of the comparative example.

Here, the prediction accuracy is the ratio, relative to the total number of test data items, of the sum X + Y, where X is the number of learning images in the abnormal group predicted to be abnormal by the machine learning program and Y is the number of learning images in the normal group predicted to be normal. Therefore, in the case of this comparative example, the prediction accuracy is calculated as ((X + Y) / 160) × 100.

The false negative rate is the ratio of the number of learning images in the abnormal group predicted to be normal by the machine learning program to the total number of learning images in the abnormal group. Therefore, in the case of this comparative example, the false negative rate is calculated as ((80 − X) / 80) × 100.
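The two formulas above can be checked with a short calculation. The sketch below is illustrative only and is not part of the patent text; the group sizes (80 abnormal and 80 normal test images, 160 total) follow the comparative example, while the counts X and Y are hypothetical values chosen to reproduce figures close to those reported for the lower-eyelid-side model.

```python
def prediction_accuracy(x, y, total):
    """Prediction accuracy = ((X + Y) / total) * 100, as defined above."""
    return (x + y) / total * 100

def false_negative_rate(x, abnormal_total):
    """False negatives are abnormal images predicted normal: ((N - X) / N) * 100."""
    return (abnormal_total - x) / abnormal_total * 100

# Hypothetical counts: of 80 abnormal test images, X = 76 are predicted abnormal;
# of 80 normal test images, Y = 66 are predicted normal.
X, Y = 76, 66
print(prediction_accuracy(X, Y, 160))  # 88.75
print(false_negative_rate(X, 80))      # 5.0
```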

The dry eye examination program, dry eye examination device, and dry eye examination method described above may also be used before and after dry eye eye drops are instilled into the eye of the inference subject. Used at such timing, they can also serve as a means of verifying the effectiveness of the dry eye eye drops against the dry eye symptoms of the inference subject.

1: Imaging device
2, 2a, 20: Examination device
10, 10b, 10c, 10d, 10e, 10f, 10g: Machine learning execution device
11, 11b, 11c, 11d, 11e, 11f, 11g, 21, 21b, 21c, 21d, 21e, 21f, 21g: Processor
12, 12b, 12c, 12d, 12e, 12f, 12g, 22, 22b, 22c, 22d, 22e, 22f, 22g: Main storage device
13, 13b, 13c, 13d, 13e, 13f, 13g, 23, 23b, 23c, 23d, 23e, 23f, 23g: Communication interface
14, 14b, 14c, 14d, 14e, 14f, 14g, 24, 24b, 24c, 24d, 24e, 24f, 24g: Auxiliary storage device
15, 15b, 15c, 15d, 15e, 15f, 15g, 25b, 25c, 25d, 25e, 25f, 25g: Input/output device
16, 16b, 16c, 16d, 16e, 16f, 16g, 26, 26b, 26c, 26d, 26e, 26f, 26g: Bus
20b, 20c, 20d, 20e, 20f, 20g: Dry eye examination device
25: Touch panel display
100: Machine learning execution program / imaging device control unit
100b, 100c, 100d, 100e, 100f, 100g: Machine learning execution program
101, 101b, 101c, 101d, 101e, 101f, 101g: Teacher data acquisition function
102, 102b, 102c, 102d, 102e, 102f, 102g: Machine learning execution function
110: Imaging unit
120: Imaging device communication unit
130: Display unit
140: Operation reception unit
200: Examination program / examination device control unit
200a: Examination device control unit
200b, 200c, 200d, 200e, 200f, 200g: Dry eye examination program
201, 201b, 201c, 201d, 201e, 201f, 201g: Data acquisition function
202: Inference function
202b, 202c, 202d, 202e, 202f, 202g: Symptom inference function
210: Storage unit
220: Examination device communication unit
700, 700b, 700c, 700d, 700e, 700f, 700g: Machine learning device
750, 750b, 750c, 750d, 750e, 750f, 750g: Machine learning program
811, 811b, 811c, 811d, 811e, 811f, 811g, 821b, 821c, 821d, 821e, 821f, 821g: Keyboard
812, 812b, 812c, 812d, 812e, 812f, 812g, 822b, 822c, 822d, 822e, 822f, 822g: Mouse
910, 910b, 910c, 910d, 910e, 910f, 910g, 920b, 920c, 920d, 920e, 920f, 920g: Display
1000: Image acquisition unit
1010: Imaging condition determination unit
1020: White eyeball and black eyeball portion extraction unit
1030: Image output unit
1040: Prediction result acquisition unit
1050: Presentation unit
2000: Image data acquisition unit
2010, 2010a: Prediction unit
2020: Prediction result output unit
2030, 2030a: Learning unit
2040a: Answer result acquisition unit
A1: Prediction result
A7, A8, A9, A11, A13, A14, A81: Image
B: Base
C36, C61, C64, C611, C612: Ellipse
E1: Examination system
G1, G2, G4: Imaging screen
G3, G6: Prediction result display screen
G7: History screen
L1, L10, L10a: Learning result
NW: Network
P: Support column
P0: Captured image
P1, P2: User image data
R1, R2, R3, R4, R5, R6: Captured image
S10, S11, S12, S20, S21, S22, S30, S31, S32, S40, S41, S42, S50, S51, S52, S60, S61, S62, S70, S71, S72, S80, S81, S82, S90, S91, S92, S100, S101, S102, S110, S111, S112, S120, S121, S122, S130, S131, S132, S141, S142: Step
U1: User

FIG. 1 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the embodiment.
FIG. 2 is a diagram showing an example of the software configuration of the machine learning execution device according to the embodiment.
FIG. 3 is a flowchart showing an example of processing executed by the machine learning execution program according to the embodiment.
FIG. 4 is a diagram showing an example of the hardware configuration of the examination device according to the embodiment.
FIG. 5 is a diagram showing an example of the appearance of the examination device according to the embodiment.
FIG. 6 is a diagram showing an example of the software configuration of the examination device according to the embodiment.
FIG. 7 is a diagram showing an example of an image displayed by the examination device according to the embodiment.
FIG. 8 is a diagram showing an example of an image displayed by the examination device according to the embodiment.
FIG. 9 is a diagram showing an example of an image displayed by the examination device according to the embodiment.
FIG. 10 is a diagram showing an example of an image displayed by the examination device according to the embodiment.
FIG. 11 is a diagram showing an example of an image displayed by the examination device according to the embodiment.
FIG. 12 is a diagram showing an example of an image displayed by the examination device according to the embodiment.
FIG. 13 is a diagram showing an example of an eye image displayed instead of the eye image shown in FIG. 8.
FIG. 14 is a diagram showing an example of an eye image displayed instead of the eye image shown in FIG. 8.
FIG. 15 is a diagram showing an example of an eye image displayed instead of the eye image shown in FIG. 8.
FIG. 16 is a diagram showing an example of an eye image displayed instead of the eye image shown in FIG. 8.
FIG. 17 is a flowchart showing an example of processing executed by the examination program according to the embodiment.
FIG. 18 is a diagram showing an example of the configuration of the examination system according to the first embodiment.
FIG. 19 is a diagram showing an example of the configuration of the imaging device according to the first embodiment.
FIG. 20 is a diagram showing an example of the predetermined imaging conditions according to the first embodiment.
FIG. 21 is a diagram showing an example of user image data according to the first embodiment.
FIG. 22 is a diagram showing an example of the configuration of the examination device according to the first embodiment.
FIG. 23 is a diagram showing an example of eye state prediction processing according to the first embodiment.
FIG. 24 is a diagram showing an example of an imaging screen according to the first embodiment.
FIG. 25 is a diagram showing an example of an imaging screen according to the first embodiment.
FIG. 26 is a diagram showing an example of a prediction result display screen according to the first embodiment.
FIG. 27 is a diagram showing an example of prediction processing according to the first embodiment.
FIG. 28 is a diagram showing an example of the configuration of a device according to a modification of the first embodiment.
FIG. 29 is a diagram showing an example of an imaging screen according to the modification of the first embodiment.
FIG. 30 is a diagram showing an example of a prediction result display screen according to the modification of the first embodiment.
FIG. 31 is a diagram showing an example of a history screen according to the modification of the first embodiment.
FIG. 32 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the second embodiment.
FIG. 33 is a diagram showing an example of the software configuration of the machine learning execution program according to the second embodiment.
FIG. 34 is a diagram showing an example of the portion of the eye image of the learning subject that is weighted heavily when the machine learning program of the second embodiment predicts examination results related to corneal epithelial damage.
FIG. 35 is a flowchart showing an example of processing executed by the machine learning execution program according to the second embodiment.
FIG. 36 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the second embodiment.
FIG. 37 is a diagram showing an example of the software configuration of the dry eye examination program according to the second embodiment.
FIG. 38 is a flowchart showing an example of processing executed by the dry eye examination program according to the second embodiment.
FIG. 39 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the third embodiment.
FIG. 40 is a diagram showing an example of the software configuration of the machine learning execution program according to the third embodiment.
FIG. 41 is a flowchart showing an example of processing executed by the machine learning execution program according to the third embodiment.
FIG. 42 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the third embodiment.
FIG. 43 is a diagram showing an example of the software configuration of the dry eye examination program according to the third embodiment.
FIG. 44 is a flowchart showing an example of processing executed by the dry eye examination program according to the third embodiment.
FIG. 45 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the fourth embodiment.
FIG. 46 is a diagram showing an example of the software configuration of the machine learning execution program according to the fourth embodiment.
FIG. 47 is a flowchart showing an example of processing executed by the machine learning execution program according to the fourth embodiment.
FIG. 48 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the fourth embodiment.
FIG. 49 is a diagram showing an example of the software configuration of the dry eye examination program according to the fourth embodiment.
FIG. 50 is a flowchart showing an example of processing executed by the dry eye examination program according to the fourth embodiment.
FIG. 51 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the fifth embodiment.
FIG. 52 is a diagram showing an example of the software configuration of the machine learning execution program according to the fifth embodiment.
FIG. 53 is a diagram showing an example of a learning tear meniscus image obtained by cropping, from the learning image of the fifth embodiment, the region depicting the tear meniscus.
FIG. 54 is a diagram showing an example of a learning illumination image obtained by cropping, from the learning image of the fifth embodiment, the region depicting the illumination reflected on the cornea of the learning subject.
FIG. 55 is a flowchart showing an example of processing executed by the machine learning execution program according to the fifth embodiment.
FIG. 56 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the fifth embodiment.
FIG. 57 is a diagram showing an example of the software configuration of the dry eye examination program according to the fifth embodiment.
FIG. 58 is a flowchart showing an example of processing executed by the dry eye examination program according to the fifth embodiment.
FIG. 59 is a diagram showing an example of the portion of the learning image depicting the eye of the learning subject that is weighted heavily when the machine learning program according to modifications of the fourth and fifth embodiments predicts results of an examination investigating the tear film break-up time.
FIG. 60 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the sixth embodiment.
FIG. 61 is a diagram showing an example of the software configuration of the machine learning execution program according to the sixth embodiment.
FIG. 62 is a diagram showing an example of the portion of the eye image of the learning subject that is weighted heavily when the machine learning program of the sixth embodiment predicts examination results related to the thickness of the tear oil layer.
FIG. 63 is a flowchart showing an example of processing executed by the machine learning execution program according to the sixth embodiment.
FIG. 64 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the sixth embodiment.
FIG. 65 is a diagram showing an example of the software configuration of the dry eye examination program according to the sixth embodiment.
FIG. 66 is a flowchart showing an example of processing executed by the dry eye examination program according to the sixth embodiment.
FIG. 67 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the seventh embodiment.
FIG. 68 is a diagram showing an example of the software configuration of the machine learning execution program according to the seventh embodiment.
FIG. 69 is a flowchart showing an example of processing executed by the machine learning execution program according to the seventh embodiment.
FIG. 70 is a diagram showing an example of the hardware configuration of the dry eye examination device according to the seventh embodiment.
FIG. 71 is a diagram showing an example of the software configuration of the dry eye examination program according to the seventh embodiment.
FIG. 72 is a flowchart showing an example of processing executed by the dry eye examination program according to the seventh embodiment.

20: Examination device

200: Examination program

201: Data acquisition function

202: Inference function

700: Machine learning device

750: Machine learning program

Claims (15)

一種檢查方法,獲取推論用圖像資料,所述推論用圖像資料表示描繪有推論用被檢查體的眼睛的推論用圖像,並且 將所述推論用圖像資料輸入至機械學習程式,所述機械學習程式使用將表示描繪有學習用被檢查體的眼睛的學習用圖像的學習用圖像資料作為問題、將表示所述學習用被檢查體的眼睛狀態的學習用狀態資料作為答案的教師資料進行了學習,所述檢查方法使所述機械學習程式推斷所述推論用被檢查體的眼睛狀態,並使所述機械學習程式輸出表示所述推論用被檢查體的眼睛狀態的推論用資料。 An inspection method that acquires image data for inference, the image data for inference representing an image for inference depicting an eye of a subject for inference, and The image data for inference is input into a machine learning program, and the machine learning program uses, as a question, the image data for learning representing the image for learning in which the eyes of the subject for learning are drawn, and the learning program is used to represent the learning. Learning is performed using the state data for learning of the eye state of the subject as an answer to the teacher data, the inspection method causes the machine learning program to infer the eye state of the inference subject, and causes the machine learning program to infer the eye state of the subject for inference Data for inference indicating the eye state of the subject for inference is output. 如請求項1所述的檢查方法,其中, 輸出所述推論用資料,所述推論用資料表示所述推論用圖像所描繪出的所述推論用被檢查體的眼睛中滿足規定條件的區域的顯示態樣、以及與不滿足所述規定條件的區域的顯示態樣不同的圖像。 The inspection method according to claim 1, wherein, Outputting the inference data indicating the display state of the area in the eye of the inference subject drawn by the inference image that satisfies the predetermined condition, and the display state of the area that does not satisfy the predetermined condition Different images are displayed in the area of the condition. 
如請求項1或請求項2所述的檢查方法,其中, 所述機械學習程式使用更包含表示改善所述學習用被檢查體的眼睛狀態的措施的學習用措施資料作為所述答案的一部分的所述教師資料進行了學習,所述檢查方法使所述機械學習程式輸出表示改善所述推論用被檢查體的眼睛狀態的措施的所述推論用資料。 The inspection method according to claim 1 or claim 2, wherein, The machine learning program has learned using the teacher data that further includes, as a part of the answer, a learning measure data indicating a measure to improve the eye state of the learning subject, and the inspection method makes the machine learn. The learning program outputs the inference data indicating measures to improve the eye state of the inference subject. 如請求項1至請求項3中任一項所述的檢查方法,其中, 獲取表示描繪有所述推論用被檢查體的眼睛的角膜的至少一部分的所述推論用圖像的所述推論用圖像資料,並且 使所述機械學習程式輸出所述推論用資料,所述機械學習程式使用將所述學習用圖像資料作為所述問題的所述教師資料進行了學習,所述學習用圖像資料表示描繪有所述學習用被檢查體的眼睛的角膜的至少一部分的所述學習用圖像。 The inspection method according to any one of claim 1 to claim 3, wherein, acquiring the inference image data representing the inference image in which at least a part of the cornea of the eye of the inference subject is depicted, and The machine learning program is caused to output the data for inference, and the machine learning program has learned using the teacher data that uses the image data for learning as the problem, and the image data for learning indicates that there are The image for learning of at least a part of the cornea of the eye of the subject for learning. 如請求項4所述的檢查方法,其中, 所述機械學習程式使用更包含表示調查所述學習用被檢查體的眼睛的淚液油層的厚度的檢查結果的學習用檢查結果資料作為所述答案的所述教師資料進行了學習,所述檢查方法使所述機械學習程式輸出所述推論用資料。 The inspection method according to claim 4, wherein, The machine learning program is learned by using the teacher data further including, as the answer, the test result data for learning that shows the test result of the test result of investigating the thickness of the tear oil layer of the eye of the test subject, and the test method The machine learning program is caused to output the data for inference. 
如請求項1至請求項5中任一項所述的檢查方法,其中, 獲取表示描繪有所述推論用被檢查體的眼睛的結膜的至少一部分的所述推論用圖像的所述推論用圖像資料,並且 使所述機械學習程式輸出所述推論用資料,所述機械學習程式使用將所述學習用圖像資料作為所述問題的所述教師資料進行了學習,所述學習用圖像資料表示描繪有所述學習用被檢查體的眼睛的結膜的至少一部分的所述學習用圖像。 The inspection method according to any one of claim 1 to claim 5, wherein, acquiring the inference image data representing the inference image in which at least a part of the conjunctiva of the eye of the inference subject is depicted, and The machine learning program is caused to output the data for inference, and the machine learning program has learned using the teacher data that uses the image data for learning as the problem, and the image data for learning indicates that there are The image for learning of at least a part of the conjunctiva of the eye of the subject for learning. 如請求項1至請求項6中任一項所述的檢查方法,其中, 進而獲取表示所述推論用被檢查體的眼睛的充血程度的推論用充血資料, 所述機械學習程式使用更包含表示所述學習用被檢查體的眼睛的充血程度的學習用充血資料作為所述問題的一部分、且更包含表示描繪有對所述學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的所述學習用被檢查體的眼睛的圖像的學習用檢查圖像資料以及表示所述檢查結果的學習用檢查結果資料中的至少一者作為所述答案的所述教師資料進行了學習,所述檢查方法使所述機械學習程式推斷所述推論用被檢查體的眼睛中出現的乾眼症的症狀,並輸出表示所述推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料作為所述推論用資料。 The inspection method according to any one of claim 1 to claim 6, wherein, Furthermore, the hyperemia data for inference indicating the degree of hyperemia of the eyes of the subject for inference is acquired, The machine learning program uses, as a part of the question, the hyperemia data for learning which further includes the hyperemia degree of the eyes of the subject for learning, and further includes the data showing that the eyes of the subject for learning are drawn. At least one of the examination image data for learning of the image of the eye of the subject for learning and the examination result data for learning indicating the examination result at the time of the examination related to the symptoms of dry eye is used as the object. 
The teacher data of the answer is learned, and the inspection method causes the machine learning program to infer the symptoms of dry eye appearing in the eyes of the inference subject, and outputs an output indicating the inference subject The symptom data of the symptoms of dry eye appearing in the eyes of 2000 were used as the data for the inference. 如請求項1至請求項7中任一項所述的檢查方法,其中, 進而獲取推論用回答資料,所述推論用回答資料表示與所述推論用被檢查體所擁有的眼睛的自覺症狀相關的詢問的回答結果, 所述機械學習程式使用更包含表示與所述學習用被檢查體所擁有的眼睛的自覺症狀相關的詢問的回答結果的學習用回答資料作為所述問題的一部分、且更包含表示描繪有對所述學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的所述學習用被檢查體的眼睛的圖像的學習用檢查圖像資料以及表示所述檢查結果的學習用檢查結果資料中的至少一者作為所述答案的所述教師資料進行了學習,所述檢查方法使所述機械學習程式推斷所述推論用被檢查體的眼睛中出現的乾眼症的症狀,並輸出表示所述推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料作為所述推論用資料。 The inspection method according to any one of claim 1 to claim 7, wherein, Further, answer data for inference are obtained, and the answer data for inference indicates the answer result of the inquiry related to the subjective symptoms of the eyes possessed by the subject for inference, The machine learning program uses, as a part of the question, response data for learning that further includes an answer result of an inquiry about the subjective symptoms of the eyes possessed by the subject for learning, and further includes a representation that depicts the subject. The learning test image data of images of the eyes of the learning subject when an examination related to the symptoms of dry eye is performed on the eyes of the learning subject, and the learning test showing the test results At least one of the result data is learned as the teacher data for the answer, the inspection method causes the machine learning program to infer the symptoms of dry eye appearing in the eyes of the inference subject, and Symptom data representing the symptoms of dry eye appearing in the eyes of the subject for inference are output as the data for inference. 
如請求項1至請求項6中任一項所述的檢查方法,其中, 進而獲取表示最大開眼瞼時間的推論用開眼瞼資料,所述最大開眼瞼時間為所述推論用被檢查體能夠連續張開被拍攝所述推論用圖像的眼睛的時間, 所述機械學習程式使用更包含表示所述學習用被檢查體能夠連續張開被拍攝所述學習用圖像的眼睛的時間即最大開眼瞼時間的學習用開眼瞼資料作為所述問題的一部分、且更包含表示描繪有對所述學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的所述學習用被檢查體的眼睛的圖像的學習用檢查圖像資料以及表示與乾眼症的症狀相關的檢查結果的學習用檢查結果資料中的至少一者作為所述答案的所述教師資料進行了學習,所述檢查方法使所述機械學習程式推斷所述推論用被檢查體的眼睛中出現的乾眼症的症狀,並輸出表示所述推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料作為所述推論用資料。 The inspection method according to any one of claim 1 to claim 6, wherein, And then obtain the eyelid opening data for inference representing the maximum eyelid opening time, the maximum eyelid opening time is the time when the subject for inference can continuously open the eyes of which the image for inference is captured, The machine learning program uses, as a part of the problem, the eyelid opening data for learning that indicates the maximum eyelid opening time that the subject for learning can continuously open the eyes of which the learning image is captured, that is, the maximum eyelid opening time, Furthermore, it further includes examination image data for learning and a representation showing an image of the eye of the subject for learning when the eye of the subject for learning was subjected to an examination related to the symptoms of dry eye syndrome. At least one of the test result data for learning of test results related to symptoms of dry eye is learned as the teacher data for the answer, and the test method causes the machine learning program to infer the inference subject. Symptoms of dry eye syndrome appearing in the eyes of the test subject, and symptom data indicating the symptoms of dry eye syndrome appearing in the eyes of the subject for inference are output as the data for inference. 
如請求項1至請求項6及請求項9中任一項所述的檢查方法,其中, 獲取表示推論用淚液彎液面圖像的推論用淚液彎液面圖像資料作為所述推論用圖像資料,所述推論用淚液彎液面圖像是自描繪有所述推論用被檢查體的眼睛的淚液彎液面的所述推論用圖像中剪切描繪出所述推論用被檢查體的淚液彎液面的區域而得, 所述機械學習程式使用將表示學習用淚液彎液面圖像的學習用淚液彎液面圖像資料將所述學習用圖像資料作為所述問題、且更包含表示描繪有對所述學習用被檢查體的眼睛實施了與乾眼症的症狀相關的檢查時的所述學習用被檢查體的眼睛的圖像的學習用檢查圖像資料以及表示與乾眼症的症狀相關的檢查結果的學習用檢查結果資料中的至少一者作為所述答案的所述教師資料進行了學習,所述學習用淚液彎液面圖像是自描繪有所述學習用被檢查體的眼睛的淚液彎液面的所述學習用圖像中剪切描繪出所述學習用被檢查體的淚液彎液面的區域而得,所述檢查方法使所述機械學習程式推斷所述推論用被檢查體的眼睛中出現的乾眼症的症狀,並輸出表示所述推論用被檢查體的眼睛中出現的乾眼症的症狀的症狀資料作為所述推論用資料。 The inspection method according to any one of claim 1 to claim 6 and claim 9, wherein, The inference tear meniscus image data representing the inference tear meniscus image is acquired as the inference image data, and the inference tear meniscus image is self-drawn with the inference subject. The tear meniscus of the eye is obtained by cutting out the area of the tear meniscus of the subject for inference in the image for the inference, The machine learning program uses learning tear meniscus image data representing a learning tear meniscus image as the problem, and further includes a representation that depicts the learning image data as the problem. The image data of the learning test image for learning the image of the eye of the test subject when the test related to the symptoms of dry eye is performed on the eye of the test subject, and the test results showing the test results related to the symptoms of dry eye At least one of the examination result data for learning has been studied as the teacher data of the answer, and the tear meniscus image for learning is a tear meniscus drawn from the eye of the subject for learning A region in which the tear meniscus of the subject for learning is drawn in the image for learning on the surface is cut out, and the inspection method causes the machine learning program to infer the eye of the subject for inference. Symptoms of dry eye symptoms appearing in the eyes of the subject for inference are output as the data for inferences. 
The examination method according to any one of claims 1 to 6, claim 9 and claim 10, wherein: inference illumination image data representing an inference illumination image is acquired as the inference image data, the inference illumination image being obtained by cutting out, from an inference image depicting illumination reflected on the cornea of the inference subject, the region in which the illumination reflected on the cornea of the inference subject is depicted; the machine learning program has been trained with teacher data in which learning illumination image data representing a learning illumination image serves as the question, the learning illumination image being obtained by cutting out, from a learning image depicting illumination reflected on the cornea of the learning subject, the region in which the illumination reflected on the cornea of the learning subject is depicted, and in which at least one of learning examination image data, representing an image of the eye of the learning subject taken when an examination related to dry eye symptoms was performed on that eye, and learning examination result data, representing the results of an examination related to dry eye symptoms, serves as the answer; and the examination method causes the machine learning program to infer the dry eye symptoms appearing in the eye of the inference subject and to output, as the inference data, symptom data representing the dry eye symptoms appearing in the eye of the inference subject.
The examination method according to any one of claims 1 to 6, wherein: inference image data is acquired representing an inference image that depicts the eye of the inference subject as it would appear if illuminated with light of a lower color temperature than the light actually illuminating the eye of the inference subject; the machine learning program has been trained with teacher data in which learning image data, representing a learning image that depicts the eye of the learning subject as it would appear if illuminated with light of a lower color temperature than the light actually illuminating the eye of the learning subject, serves as the question, and learning examination result data, representing the results of an examination investigating the thickness of the tear fluid oil layer of the eye of the learning subject, serves as the answer; and the examination method causes the machine learning program to infer the dry eye symptoms appearing in the eye of the inference subject and to output, as the inference data, symptom data representing the dry eye symptoms appearing in the eye of the inference subject.
A machine learning execution method comprising: acquiring teacher data in which learning image data representing a learning image depicting the eye of a learning subject serves as the question and learning state data representing the state of the eye of the learning subject serves as the answer; and inputting the teacher data into a machine learning program to cause the machine learning program to learn.
An examination device comprising: a data acquisition unit that acquires inference image data representing an inference image depicting the eye of an inference subject; and an inference unit that inputs the inference image data into a machine learning program trained with teacher data in which learning image data representing a learning image depicting the eye of a learning subject serves as the question and learning state data representing the state of the eye of the learning subject serves as the answer, causes the machine learning program to infer the state of the eye of the inference subject, and causes the machine learning program to output inference data representing the state of the eye of the inference subject.
A machine learning execution device comprising: a teacher data acquisition unit that acquires teacher data in which learning image data representing a learning image depicting the eye of a learning subject serves as the question and learning state data representing the state of the eye of the learning subject serves as the answer; and a machine learning execution unit that inputs the teacher data into a machine learning program to cause the machine learning program to learn.
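The question/answer (teacher data) structure shared by these claims is ordinary supervised learning: image data as the question, eye-state data as the answer. As an illustrative sketch only (the patent does not specify a model; the nearest-centroid classifier, the 2-D feature vectors standing in for eye images, and the label strings below are all assumptions):

```python
import numpy as np

def train(teacher_data):
    """Fit one centroid per answer label from (question, answer) pairs;
    each question is an image flattened to a feature vector."""
    centroids = {}
    for label in {answer for _, answer in teacher_data}:
        questions = [q for q, a in teacher_data if a == label]
        centroids[label] = np.mean(questions, axis=0)
    return centroids

def infer(centroids, question):
    """Return the answer label whose centroid is nearest to the input."""
    return min(centroids, key=lambda lab: np.linalg.norm(question - centroids[lab]))

# Hypothetical teacher data: feature vectors paired with assumed eye-state labels.
teacher_data = [
    (np.array([0.1, 0.2]), "normal"),
    (np.array([0.0, 0.1]), "normal"),
    (np.array([0.9, 1.0]), "dry_eye"),
    (np.array([1.0, 0.8]), "dry_eye"),
]
model = train(teacher_data)
prediction = infer(model, np.array([0.95, 0.9]))  # an unseen "inference" input
```

A production system would replace the centroid model with a trained neural network operating on the image data itself, but the division of roles matches the claims: a teacher data acquisition step, a learning step, and an inference step that outputs state data for a new subject.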
TW110139818A 2020-10-28 2021-10-27 Examination method, machine learning execution method, examination device, and machine learning execution method TW202222245A (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
JP2020-180593 2020-10-28
JP2020180593A JP2022071558A (en) 2020-10-28 2020-10-28 Lachrymal fluid keratoconjunctive state examination device, lachrymal fluid keratoconjunctive state examination method and program
JP2020-217685 2020-12-25
JP2020-217683 2020-12-25
JP2020-217684 2020-12-25
JP2020217683A JP2022102757A (en) 2020-12-25 2020-12-25 Machine learning execution program, dry eye inspection program, machine learning execution device and dry eye inspection device
JP2020217684A JP2022102758A (en) 2020-12-25 2020-12-25 Machine learning execution program, dry eye inspection program, machine learning execution device and dry eye inspection device
JP2020217685A JP2022102759A (en) 2020-12-25 2020-12-25 Machine learning execution program, dry eye inspection program, machine learning execution device and dry eye inspection device

Publications (1)

Publication Number Publication Date
TW202222245A true TW202222245A (en) 2022-06-16

Family

ID=81382585

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110139818A TW202222245A (en) 2020-10-28 2021-10-27 Examination method, machine learning execution method, examination device, and machine learning execution method

Country Status (3)

Country Link
KR (1) KR20230096957A (en)
TW (1) TW202222245A (en)
WO (1) WO2022092134A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024069762A1 (en) * 2022-09-27 2024-04-04 日本電気株式会社 Information processing system, information processing method, and recording medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008137863A2 (en) * 2007-05-04 2008-11-13 Advanced Medical Optics, Inc. Methods and devices for measuring tear film and diagnosing tear disorders
US9962072B2 (en) * 2015-12-31 2018-05-08 Anantha Pradeep System and method for analyzing eye and contact lens condition
US11514570B2 (en) * 2017-08-07 2022-11-29 Kowa Company, Ltd. Tear fluid state evaluation method, computer program, and device
KR102469720B1 (en) * 2017-10-31 2022-11-23 삼성전자주식회사 Electronic device and method for determining hyperemia grade of eye using the same
JP7448470B2 (en) * 2018-03-02 2024-03-12 興和株式会社 Image classification device operating method, device and program
JP7096020B2 (en) * 2018-03-16 2022-07-05 株式会社トプコン Mobile terminal and control method of mobile terminal
US10468142B1 (en) * 2018-07-27 2019-11-05 University Of Miami Artificial intelligence-based system and methods for corneal diagnosis
JP2020036835A (en) 2018-09-05 2020-03-12 株式会社クレスコ Ophthalmologic diagnostic support apparatus, ophthalmologic diagnostic support method, and ophthalmologic diagnostic support program
CN111700582A (en) * 2020-06-23 2020-09-25 温州医科大学附属眼视光医院 Common ocular surface disease diagnosis system based on intelligent terminal

Also Published As

Publication number Publication date
WO2022092134A1 (en) 2022-05-05
KR20230096957A (en) 2023-06-30
