WO2023286199A1 - Processing device, processing program, processing method, and processing system
- Publication number: WO2023286199A1
- Application: PCT/JP2021/026442
- Authority: WO (WIPO/PCT)
- Prior art keywords: image, determination, information, processing, processor
Classifications
- A61B1/267 — Endoscopes for the respiratory tract, e.g. laryngoscopes, bronchoscopes
- A61B1/00004 — Operational features of endoscopes characterised by electronic signal processing
- A61B1/000096 — Electronic signal processing of image signals during use of the endoscope using artificial intelligence
- A61B1/00006 — Electronic signal processing of control signals
- A61B1/00034 — Power management: internally powered, rechargeable power supply
- A61B1/0004 — Input arrangements for the user for electronic operation
- A61B1/0005 — Display arrangement combining images, e.g. side-by-side, superimposed or tiled
- A61B1/00052 — Display arrangement positioned at proximal end of the endoscope body
- A61B1/00064 — Constructional details of the endoscope body
- A61B1/00087 — Insertion part distal tip features: tools
- A61B1/00101 — Insertion part distal tip features being detachable
- A61B1/00105 — Constructional details characterised by modular construction
- A61B1/04 — Combined with photographic or television appliances
- A61B1/045 — Control thereof
- A61B1/05 — Image sensor, e.g. camera, in the distal end portion
- A61B1/0655 — Control of illuminating arrangements
- A61B1/0684 — Endoscope light sources using light emitting diodes [LED]
- A61B1/07 — Illuminating arrangements using light-conductive means, e.g. optical fibres
- A61B1/24 — Instruments for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; instruments for opening or keeping open the mouth
- G16H30/20 — ICT for handling medical images, e.g. DICOM, HL7 or PACS
- G16H30/40 — ICT for processing medical images, e.g. editing
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/50 — ICT for simulation or modelling of medical disorders
- G16H50/70 — ICT for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- The present disclosure relates to a processing device, a processing program, a processing method, and a processing system for processing an image of a subject captured by a camera.
- Non-Patent Document 1 reports that lymphoid follicles appearing in the deepest part of the pharynx, which is visible through the oral cavity, show a pattern peculiar to influenza. Lymphoid follicles with this unique pattern are called influenza follicles; they are a characteristic sign of influenza and are said to appear about 2 hours after onset. To date, however, the pharynx has been diagnosed by a doctor's direct visual inspection, and diagnosis using images has not been practiced.
- An object of the present disclosure is therefore to provide a processing device, a processing program, a processing method, and a processing system for determining the possibility of affliction with a predetermined disease using a determination image of a subject obtained by imaging the oral cavity of a user.
- According to one aspect of the present disclosure, a processing device is provided in which at least one processor is configured to acquire, via a camera, one or more determination images of a subject including at least a portion of the oral cavity of a user, to determine the possibility of affliction with a predetermined disease based on the acquired determination images and a learned determination model stored in memory, and to output information indicating the determined possibility of affliction.
- According to another aspect of the present disclosure, a processing program is provided that causes at least one processor to acquire, via a camera, one or more determination images of a subject including at least a portion of the oral cavity of a user, to determine the possibility of affliction with a predetermined disease based on the acquired determination images and a learned determination model stored in memory, and to output information indicating the determined possibility of affliction.
- According to another aspect, a processing method executed by at least one processor is provided, the method comprising: acquiring, via a camera, one or more determination images of a subject including at least a portion of the oral cavity of a user; determining the possibility of affliction with a predetermined disease based on the acquired determination images and a learned determination model stored in memory; and outputting information indicating the determined possibility of affliction.
- According to another aspect, a processing system is provided comprising an imaging device that includes a camera for capturing an image of a subject including at least part of the oral cavity of a user, and a processing device as described above connected to the imaging device via a wired or wireless network.
- The present disclosure thus provides a processing device, a processing program, a processing method, and a processing system suitable for processing an image of the inside of the oral cavity for use in intraoral diagnosis.
- FIG. 1 is a diagram showing a usage state of a processing system 1 according to an embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating a usage state of the processing system 1 according to an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of a processing system 1 according to one embodiment of the present disclosure.
- FIG. 4 is a block diagram showing the configuration of the processing system 1 according to one embodiment of the present disclosure.
- FIG. 5 is a schematic diagram showing the configuration of the top surface of the imaging device 200 according to an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram showing a cross-sectional configuration of an imaging device 200 according to an embodiment of the present disclosure.
- FIG. 7A is a diagram conceptually showing an image management table stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 7B is a diagram conceptually showing a user table stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 8 is a diagram illustrating a processing sequence executed between the processing device 100 and the imaging device 200 according to an embodiment of the present disclosure.
- FIG. 9 is a diagram showing a processing flow performed by the processing device 100 according to one embodiment of the present disclosure.
- FIG. 10 is a diagram showing a processing flow performed in the imaging device 200 according to one embodiment of the present disclosure.
- FIG. 11 is a diagram showing a processing flow performed by the processing device 100 according to one embodiment of the present disclosure.
- FIG. 12 is a diagram showing a processing flow performed by the processing device 100 according to one embodiment of the present disclosure.
- FIG. 13 is a diagram illustrating a processing flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 14 is a diagram showing a processing flow performed by the processing device 100 according to an embodiment of the present disclosure.
- FIG. 15 is a diagram illustrating a processing flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 16 is a diagram illustrating a processing flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 17 is a diagram illustrating a processing flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 18 is a diagram showing an example of a screen displayed on the processing device 100 according to an embodiment of the present disclosure.
- FIG. 19 is a diagram showing an example of a screen displayed on the processing device 100 according to an embodiment of the present disclosure.
- FIG. 20 is a schematic diagram of a processing system 1 according to one embodiment of the present disclosure.
- the processing system 1 according to the present disclosure is mainly used to photograph the inside of the user's oral cavity and obtain a subject image.
- In one embodiment, the processing system 1 is used to image the back of the throat within the oral cavity, specifically the pharynx. Therefore, the case where the processing system 1 according to the present disclosure is used for imaging the pharynx will mainly be described below.
- the pharynx is an example of an imaged region, and naturally, the processing system 1 according to the present disclosure can be preferably used for other regions such as tonsils in the oral cavity.
- The processing system 1 determines the possibility of affliction with a predetermined disease from a subject image obtained by photographing a subject including at least the pharynx region of the user's oral cavity, and is used to diagnose the predetermined disease or to assist in such diagnosis.
- An example of a disease determined by the processing system 1 is influenza. The possibility of influenza is usually diagnosed by examining the user's pharynx and tonsil regions and checking for findings such as follicles in the pharynx region. Using the processing system 1 to determine the possibility of affliction with influenza and output the result makes it possible to diagnose influenza or assist in its diagnosis. Note that determining the possibility of contracting influenza is only an example.
- The processing system 1 can be suitably used to determine any disease that produces a difference in intraoral findings when contracted. Such a difference in findings is not limited to one discovered by a doctor or the like and medically established. For example, any difference that can be recognized by a person other than a doctor, or that can be detected by artificial intelligence or image recognition technology, can be suitably handled by the processing system 1.
- Such diseases include, in addition to influenza, infectious diseases such as streptococcal infection, adenovirus infection, EB virus infection, mycoplasma infection, hand-foot-and-mouth disease, herpangina, and candidiasis; diseases exhibiting vascular or mucosal disorders, such as arteriosclerosis, diabetes, and hypertension; and tumors such as tongue cancer and pharyngeal cancer.
- The processing system 1 of the present disclosure may be used by the user himself or herself, or by an operator other than a doctor, with the determination or diagnosis made by the processing device 100 included in the processing system 1.
- The user imaged by the imaging device 200 may be any person, including patients, test subjects, users seeking diagnosis, and healthy individuals.
- an operator who holds the imaging device 200 and performs imaging operations is not limited to medical professionals such as doctors, nurses, and laboratory technicians, and may include any person such as the user himself/herself.
- the processing system 1 according to the present disclosure is typically assumed to be used in medical institutions. However, it is not limited to this case, and may be used at any location such as the user's home, school, or workplace.
- the subject should include at least part of the user's oral cavity.
- any disease can be determined as long as it causes a difference in findings in the oral cavity.
- In the following description, the subject includes the pharynx or its vicinity, and influenza is the disease whose possibility of affliction is determined.
- the subject image and the determination image may be one or more moving images or one or more still images.
- In through-image mode, a through image (live preview) is continuously captured by the camera and displayed on the display 203.
- When the operator presses the shooting button, one or more still images are captured by the camera and displayed on the display 203.
- In moving-image mode, pressing the shooting button starts recording of a moving image, the image being captured is shown on the display 203 during recording, and pressing the shooting button again ends the recording.
- In this way, various images such as through images, still images, and moving images are captured by the camera and displayed on the display; the term "subject image" may encompass all of these images captured by the camera.
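As a rough illustration of this capture behavior, the following Python sketch models the three modes around a single shooting button. The `camera` and `display` objects and their `grab_frame()`/`show()` methods are hypothetical stand-ins; the patent does not specify an implementation.

```python
from enum import Enum, auto

class CaptureMode(Enum):
    THROUGH = auto()  # live preview only
    STILL = auto()    # shooting button captures single still images
    VIDEO = auto()    # shooting button toggles moving-image recording

class CaptureController:
    """Sketch of the shooting-button behavior described above."""

    def __init__(self, camera, display, mode=CaptureMode.THROUGH):
        self.camera = camera      # assumed to expose grab_frame()
        self.display = display    # assumed to expose show(frame)
        self.mode = mode
        self.recording = False
        self.captured = []        # subject images later sent to the processing device

    def tick(self):
        # A through image is captured continuously and shown on the display.
        frame = self.camera.grab_frame()
        self.display.show(frame)
        if self.recording:        # while recording, frames form the moving image
            self.captured.append(frame)

    def on_shooting_button(self):
        if self.mode is CaptureMode.STILL:
            # One press captures one still image.
            self.captured.append(self.camera.grab_frame())
        elif self.mode is CaptureMode.VIDEO:
            # One press starts recording; the next press ends it.
            self.recording = not self.recording
```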
- FIG. 1 is a diagram showing a usage state of a processing system 1 according to an embodiment of the present disclosure.
- a processing system 1 according to the present disclosure includes a processing device 100 and an imaging device 200 .
- The operator first attaches the auxiliary tool 300 to the tip of the imaging device 200 so as to cover it, and then inserts the imaging device 200, with the auxiliary tool 300 attached, into the oral cavity 710 of the user.
- At this time, the tip of the auxiliary tool 300 is inserted past the incisors 711 to the vicinity of the soft palate 713; that is, the imaging device 200 is likewise inserted up to the vicinity of the soft palate 713.
- the auxiliary tool 300 (which functions as a tongue depressor) pushes the tongue 714 downward and restricts the movement of the tongue 714 .
- In addition, the tip of the auxiliary tool 300 pushes the soft palate 713 upward.
- the operator can secure a good field of view of the imaging device 200 and can take a good image of the pharynx 715 located in front of the imaging device 200 .
- a captured subject image (typically, an image including the pharynx 715) is transmitted from the imaging device 200 to the processing device 100 communicably connected via a wired or wireless network.
- The processor of the processing device 100 that has received the subject image executes the program stored in the memory, thereby selecting a determination image to be used for determination from the subject image and determining the possibility of affliction with a predetermined disease. The result is then output to a display or the like.
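The overall flow from received subject image to output can be summarized in a short sketch. This is a minimal, hypothetical Python illustration: the model objects, their `predict` interfaces, and the keep-top-3 rule are assumptions, not the patent's actual method.

```python
def determine_affliction(subject_images, selection_model, determination_model,
                         interview_info=None, threshold=0.5):
    """Minimal sketch: pick determination images, then score disease likelihood."""
    # 1. Score every captured frame for suitability as a determination image.
    scores = [selection_model.predict(img) for img in subject_images]

    # 2. Keep the highest-scoring frames as determination images
    #    (the similarity-based duplicate filter is sketched later).
    ranked = sorted(zip(scores, subject_images), key=lambda p: p[0], reverse=True)
    determination_images = [img for _, img in ranked[:3]]

    # 3. Determine the possibility of affliction, optionally combining
    #    the user's interview/attribute information with the images.
    p = determination_model.predict(determination_images, interview_info)
    return {"probability": float(p), "positive": p >= threshold}
```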
- FIG. 2 is a diagram showing a usage state of the processing system 1 according to one embodiment of the present disclosure. Specifically, FIG. 2 is a diagram showing a state in which the operator 600 holds the imaging device 200 of the processing system 1 .
- Starting from the side inserted into the oral cavity, the imaging device 200 is composed of a main body 201, a grip 202, and a display 203.
- the main body 201 and the grip 202 are formed in a substantially columnar shape with a predetermined length along the insertion direction H into the oral cavity.
- the display 203 is arranged on the side of the grip 202 opposite to the main body 201 side.
- The imaging device 200 is formed in a substantially columnar shape as a whole and is held by the operator 600 like a pencil. Since the display panel of the display 203 faces the operator 600 during use, the operator can easily handle the imaging device 200 while checking the captured subject image in real time.
- the shooting button 220 is arranged on the upper surface side of the grip. Therefore, when the operator 600 holds it, the operator 600 can easily press the shooting button 220 with his index finger or the like.
- FIG. 3 is a schematic diagram of the processing system 1 according to one embodiment of the present disclosure.
- the processing system 1 includes a processing device 100 and an imaging device 200 communicably connected to the processing device 100 via a wired or wireless network.
- the processing device 100 receives an operation input by an operator and controls photography by the photography device 200 .
- the processing device 100 also processes the subject image captured by the imaging device 200 to determine the possibility of the user contracting influenza. Furthermore, the processing device 100 outputs the determined result and notifies the user, operator, doctor, or the like of the result.
- the imaging device 200 has its tip inserted into the user's oral cavity and images the oral cavity, especially the pharynx. The specific shooting process will be described later.
- a captured subject image is transmitted to the processing device 100 via a wired or wireless network.
- the processing system 1 can further include a mounting table 400 as necessary.
- the mounting table 400 can stably mount the imaging device 200 .
- By connecting the mounting table 400 to a power supply via a wired cable, power can be supplied from the power supply terminal of the mounting table 400 to the imaging device 200 through the power supply port of the imaging device 200.
- FIG. 4 is a block diagram showing the configuration of the processing system 1 according to one embodiment of the present disclosure.
- The processing system 1 includes the processing device 100, which comprises a processor 111, a memory 112, an input interface 113, an output interface 114, and a communication interface 115, and the imaging device 200, which comprises a camera 211, a light source 212, a processor 213, a memory 214, a display panel 215, an input interface 210, and a communication interface 216.
- Each of these components is electrically connected to the others via control lines and data lines.
- the processing system 1 does not need to include all of the components shown in FIG. 4, and may be configured by omitting some or adding other components.
- processing system 1 may include a battery or the like for driving each component.
- the processor 111 of the processing device 100 functions as a control unit that controls other components of the processing system 1 based on programs stored in the memory 112 .
- Specifically, the processor 111 controls, based on the program stored in the memory 112, the driving of the camera 211 and the light source 212, stores the subject image received from the imaging device 200 in the memory 112, and plays back the stored subject image.
- The processor 111 also executes a process of acquiring a subject image of a subject from the camera 211, a process of inputting the acquired subject image into a determination image selection model to obtain candidates for a determination image, a process of selecting a determination image from the candidates, a process of determining the possibility of affliction with a disease based on the learned determination model, and a process of outputting the result.
- the processor 111 is mainly composed of one or more CPUs, but may be combined with a GPU, FPGA, or the like as appropriate.
- the memory 112 is composed of RAM, ROM, nonvolatile memory, HDD, etc., and functions as a storage unit.
- The memory 112 stores, as programs, instruction commands for the various controls of the processing system 1 according to this embodiment. Specifically, the memory 112 stores programs executed by the processor 111 for: acquiring a subject image of a subject from the camera 211; inputting the acquired subject image into a determination image selection model to obtain candidates for a determination image; selecting a determination image from the candidates based on the degree of similarity between the images; acquiring at least one of the user's interview information and attribute information; determining the possibility of affliction with influenza based on the learned determination model stored in the memory 112, the acquired one or more determination images, and, as necessary, the user's interview information and attribute information; and outputting information indicating the determined possibility of affliction, in order to diagnose affliction with the predetermined disease or to assist such diagnosis.
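The similarity-based selection step mentioned above can be pictured with the following sketch. Cosine similarity over raw pixels and the specific thresholds are illustrative assumptions; the patent does not specify the similarity measure.

```python
import numpy as np

def select_determination_images(candidates, max_images=3, sim_threshold=0.9):
    """Keep the highest-ranked candidates while discarding near-duplicates.

    `candidates` is assumed to be a best-first list of same-shaped numpy
    image arrays produced by the determination image selection model.
    """
    selected, vecs = [], []
    for img in candidates:
        v = img.astype(np.float64).ravel()
        v /= np.linalg.norm(v) + 1e-12          # normalize for cosine similarity
        # Accept only images sufficiently dissimilar to those already kept.
        if all(float(v @ u) < sim_threshold for u in vecs):
            selected.append(img)
            vecs.append(v)
        if len(selected) == max_images:
            break
    return selected
```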
- The memory 112 also stores subject images captured by the camera 211 of the imaging device 200, an image management table for managing those images, and a user table holding user attribute information, interview information, determination results, and the like.
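As one way to picture these two tables, here is a hypothetical record layout in Python; all field names are illustrative guesses rather than the patent's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ImageRecord:
    """One row of the image management table (illustrative fields)."""
    image_id: str
    user_id: str
    captured_at: datetime
    file_path: str                         # where the subject image is stored
    is_determination_image: bool = False   # selected for determination or not

@dataclass
class UserRecord:
    """One row of the user table (illustrative fields)."""
    user_id: str
    attributes: dict = field(default_factory=dict)  # e.g. age, sex
    interview: dict = field(default_factory=dict)   # e.g. body temperature, symptoms
    determination_result: Optional[float] = None    # possibility of affliction
```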
- Furthermore, the memory 112 stores learned models such as a learned determination image selection model used for selecting a determination image from subject images and a learned determination model for determining the possibility of affliction with a disease from a determination image.
- the input interface 113 functions as an input unit that receives an operator's instruction input to the processing device 100 and the imaging device 200 .
- Examples of the input interface 113 include physical key buttons such as a "shooting button" for instructing the imaging device 200 to start or end recording, a "confirmation button" for making various selections, a "return/cancel button" for returning to the previous screen or canceling an input confirmation operation, a cross-key button for moving a pointer or the like shown on the output interface 114, an on/off key for turning the power of the processing device 100 on and off, and character input key buttons for entering various characters.
- As the input interface 113, it is also possible to use a touch panel superimposed on the display functioning as the output interface 114, with an input coordinate system corresponding to the display coordinate system of the display.
- In that case, icons corresponding to the physical keys are displayed on the display, and the operator selects them by inputting instructions through the touch panel. Any method, such as a capacitive method or a resistive-film method, may be used to detect the user's instruction input through the touch panel.
- the input interface 113 does not always need to be physically provided in the processing device 100, and may be connected as necessary via a wired or wireless network.
- the output interface 114 functions as an output unit for outputting a subject image captured by the imaging device 200 and outputting the results determined by the processor 111 .
- An example of the output interface 114 is a display composed of a liquid crystal panel, an organic EL display, a plasma display, or the like.
- the processing device 100 itself does not necessarily have to have a display.
- an interface for connecting to a display or the like that can be connected to the processing device 100 via a wired or wireless network can function as the output interface 114 that outputs display data to the display or the like.
- the communication interface 115 functions as a communication unit for transmitting and receiving various commands related to the start of shooting and image data captured by the imaging device 200 to and from the imaging device 200 connected via a wired or wireless network.
- Examples of the communication interface 115 include connectors for wired communication such as USB and SCSI, transmitting/receiving devices for wireless communication such as wireless LAN, Bluetooth (registered trademark), and infrared, and various connection terminals for printed circuit boards and flexible circuit boards.
- the camera 211 of the imaging device 200 functions as an imaging unit that detects reflected light reflected by the oral cavity, which is a subject, and generates a subject image.
- the camera 211 includes, for example, a CMOS image sensor to detect the light, and a lens system and a drive system for realizing desired functions.
- the image sensor is not limited to a CMOS image sensor, and other sensors such as a CCD image sensor can also be used.
- the camera 211 can have an autofocus function, and is preferably set to focus on a specific part in front of the lens, for example.
- the camera 211 can also have a zoom function, and is preferably set to image at an appropriate magnification according to the size of the pharynx or influenza follicles.
- As noted above, the lymphoid follicles that appear in the deepest part of the pharynx have a pattern unique to influenza. Lymphoid follicles with this unique pattern are called influenza follicles; they are a characteristic sign of influenza and are said to appear about 2 hours after onset.
- the processing system 1 of the present embodiment is used, for example, to image the pharynx of the oral cavity and detect the follicles, thereby determining the possibility of the user being infected with influenza. Therefore, when the photographing device 200 is inserted into the oral cavity, the distance between the camera 211 and the subject becomes relatively short.
- The camera 211 therefore preferably has an angle of view (2θ) such that the value calculated by [(distance from the tip of the camera 211 to the posterior wall of the pharynx) × tan θ] is 20 mm or more vertically and 40 mm or more horizontally.
- the main subject imaged by the camera 211 is the pharynx or influenza follicles formed in the pharynx.
- On the other hand, the pharynx is recessed in the depth direction, so if the depth of field is shallow, the focus shifts between the anterior and posterior pharynx, making it difficult to obtain an image in which the whole region is sharp.
- The camera 211 therefore has a depth of field of at least 20 mm, preferably 30 mm or more. With a camera having such a depth of field, a subject image can be obtained in which every part from the anterior pharynx to the posterior pharynx is in focus.
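For background, the achievable depth of field can be estimated with the standard thin-lens approximation (general optics, not taken from the patent). For focus distance \(u\), f-number \(N\), circle of confusion \(c\), and focal length \(f\), with \(u\) well below the hyperfocal distance:

\[
\mathrm{DoF} \approx \frac{2\,u^{2}\,N\,c}{f^{2}}
\]

A short focal length and a moderately large f-number, typical of compact camera modules, thus make a depth of field of 20 to 30 mm attainable even at the short working distances inside the oral cavity.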
- the light source 212 is driven by an instruction from the processing device 100 or the imaging device 200, and functions as a light source unit for irradiating light into the oral cavity.
- Light source 212 includes one or more light sources.
- the light source 212 is composed of one or a plurality of LEDs, and each LED emits light having a predetermined frequency band toward the oral cavity.
- the light source 212 uses light in a desired band from among an ultraviolet light band, a visible light band, and an infrared light band, or a combination thereof. It should be noted that when the processing apparatus 100 determines the possibility of influenza, it is preferable to use light in the short wavelength band of the ultraviolet light band.
- the processor 213 functions as a control unit that controls other components of the imaging device 200 based on programs stored in the memory 214 .
- the processor 213 controls the driving of the camera 211 and the light source 212 based on the program stored in the memory 214 , and controls the storage of the subject image captured by the camera 211 in the memory 214 .
- the processor 213 also controls the output of the subject image and user information stored in the memory 214 to the display 203 and the transmission to the processing device 100 .
- the processor 213 is mainly composed of one or more CPUs, but may be appropriately combined with other processors.
- the memory 214 is composed of RAM, ROM, nonvolatile memory, HDD, etc., and functions as a storage unit.
- the memory 214 stores instruction commands for various controls of the imaging device 200 as programs.
- the memory 214 stores a subject image captured by the camera 211, various user information, and the like.
- the display panel 215 functions as a display unit provided on the display 203 to display the subject image captured by the imaging device 200 .
- Although the display panel 215 is configured here as a liquid crystal panel, it is not limited thereto and may be an organic EL display, a plasma display, or the like.
- the input interface 210 functions as an input unit that receives user's instruction input to the processing device 100 and the imaging device 200 .
- Examples of the input interface 210 include a "shooting button" for instructing the imaging device 200 to start or end recording, a "power button" for turning the power of the imaging device 200 on and off, a "confirmation button" for making various selections, and a "return/cancel button" for returning to the previous screen or canceling an input confirmation operation.
- These various buttons and keys may be provided physically, or may be displayed as icons on the display panel 215 and made selectable via a touch panel or the like that is superimposed on the display panel 215 and arranged as the input interface 210. Any method, such as a capacitive method or a resistive-film method, may be used to detect the user's instruction input through the touch panel.
- The communication interface 216 functions as a communication unit for transmitting and receiving information to and from the processing device 100 and/or other devices.
- Examples of the communication interface 216 include connectors for wired communication such as USB and SCSI, transmitting/receiving devices for wireless communication such as wireless LAN, Bluetooth (registered trademark), and infrared, and various connection terminals for printed circuit boards and flexible circuit boards.
- FIG. 5 is a top view showing the configuration of the imaging device 200 according to one embodiment of the present disclosure.
- FIG. 5 is a diagram showing a state in which the photographing device 200 including the main body 201, the grip 202, and the display 203 is viewed from above from the side that is inserted into the oral cavity.
- The body 201 has a proximal end 225 and a distal end 222, and is formed as a columnar body with a predetermined length in the direction in which light is emitted from the light source 212, i.e., substantially parallel to the direction H of insertion into the oral cavity. At least the distal end 222 of the main body 201 is inserted into the oral cavity.
- the main body 201 is formed in the shape of a hollow cylindrical column whose cross section is a perfect circle.
- The wall portion 224 may be made of any material capable of guiding light through its interior; for example, it can be formed from a thermoplastic resin.
- Thermoplastic resins include polyolefin resins such as chain polyolefin resins (polypropylene resins, etc.) and cyclic polyolefin resins (norbornene resins, etc.), cellulose ester resins such as triacetyl cellulose and diacetyl cellulose, polyester resins, polycarbonate resins, (meth)acrylic resins, polystyrene resins, and mixtures or copolymers thereof. The wall portion 224 of the main body 201 thus functions as a light guide for guiding the light emitted from the light source into the oral cavity or toward the diffusion plate.
- Since the main body 201 is formed hollow, an accommodation space 223 bounded by the wall portion 224 is formed inside it.
- the camera 211 is accommodated in this accommodation space 223 .
- the main body 201 only needs to be formed in a columnar shape having the accommodation space 223 . Therefore, the accommodation space 223 does not need to have a cylindrical shape with a perfectly circular cross section, and may have an elliptical or polygonal cross section. Further, the main body 201 does not necessarily have to be hollow inside.
- the tip of the grip 202 is connected to the proximal end 225 of the main body 201 .
- a user grips the grip 202 and performs an operation such as inserting/removing the photographing apparatus 200 .
- The grip 202 is a columnar body having a predetermined length in a direction substantially parallel to the insertion direction H into the oral cavity, that is, along the longitudinal direction of the main body 201, and is arranged coaxially with the main body 201 along the direction H.
- In this embodiment, the cross section perpendicular to the insertion direction is substantially elliptical, but it need not be elliptical and may instead be a perfect circle or a polygon.
- the grip 202 has a connecting portion 230 formed at a position closest to the proximal end 225 of the main body 201 and is connected to the main body 201 via the connecting portion 230 .
- The outer periphery of the connecting portion 230 has engaging projections 217 (217-1 to 217-4) and a positioning projection 218 for positioning the auxiliary tool 300.
- The engaging projections 217 engage with the engaging projections 318 (318-1 to 318-4) provided on the auxiliary tool 300.
- the positioning protrusion 218 is inserted into an insertion hole 321 provided in the auxiliary tool 300 to position the photographing device 200 and the auxiliary tool 300 relative to each other.
- The engaging projections 217 comprise a total of four projections (217-1 to 217-4) arranged at even intervals on the surface of the grip 202 near the proximal end 225 of the main body 201.
- One positioning projection 218 is arranged between the engaging projections 217 on the surface of the grip 202 and near the base end 225 of the main body 201 .
- However, the arrangement is not limited to this; only one of the engaging projections 217 and the positioning projection 218 may be provided.
- The number of engaging projections 217 and positioning projections 218 may be any number equal to or greater than one.
- the grip 202 includes an imaging button 220 at a position close to the proximal end 225 of the main body 201 on its upper surface, that is, near the distal end of the grip 202 in the insertion direction H into the oral cavity.
- The power button 221 is arranged on the upper surface of the grip 202 at a position close to the display 203, that is, on the side of the grip 202 opposite the imaging button 220. This makes it possible to prevent the operator 600 from accidentally pressing the power button 221 while gripping the grip 202 to capture an image.
- the display 203 has a substantially rectangular parallelepiped shape as a whole and is arranged on the same straight line in the direction H as the main body 201 .
- The display 203 also includes a display panel 215 on the surface facing away from the direction H of insertion into the oral cavity (that is, the surface facing the user). The display 203 is therefore formed so that the surface including the display panel 215 is substantially perpendicular to the longitudinal direction of the main body 201 and the grip 202, both of which extend substantially parallel to the direction H. The surface of the display 203 opposite the display panel 215 is connected to the end of the grip 202 on the side away from the oral cavity.
- the shape of the display is not limited to a substantially rectangular parallelepiped shape, and may be any shape such as a columnar shape.
- The diffuser plate 219 is arranged at the tip 222 of the main body 201, and diffuses the light emitted from the light source 212 and guided through the main body 201 into the oral cavity.
- the diffuser plate 219 has a shape corresponding to the cross-sectional shape of the portion of the main body 201 configured to be able to guide light.
- In the present embodiment, the main body 201 is formed in a hollow cylindrical shape. Accordingly, the cross section of the diffuser plate 219 is also formed hollow, corresponding to that shape.
- The camera 211 is used to generate a subject image by detecting light that has been diffused by the diffuser plate 219, emitted into the oral cavity, and reflected by the subject.
- the camera 211 is arranged on the same straight line in the direction H as the main body 201 on the inner surface of the wall portion 224 of the main body 201 , that is, in the housing space 223 formed inside the main body 201 .
- The imaging device 200 may include a plurality of cameras. By generating subject images using a plurality of cameras, the subject images can include information about the three-dimensional shape of the subject.
- In the present embodiment, the camera 211 is arranged in the housing space 223 of the main body 201; however, the camera 211 may be arranged at another position.
- FIG. 6 is a schematic diagram showing a cross-sectional configuration of the imaging device 200 according to one embodiment of the present disclosure.
- A total of four light sources 212-1 to 212-4 are arranged on a substrate 231 disposed on the tip side of the grip 202.
- the light sources 212 each include an LED, and light having a predetermined frequency band is emitted from each LED toward the oral cavity.
- the light emitted from the light source 212 enters the base end 225 of the main body 201 and is guided toward the diffusion plate 219 by the wall portion 224 of the main body 201 .
- the light reaching the diffuser plate 219 is diffused by the diffuser plate 219 into the oral cavity.
- the light diffused by the diffusion plate 219 is reflected by the object such as the pharynx 715 .
- When this reflected light reaches the camera 211, a subject image is generated.
- The light sources 212-1 to 212-4 may be configured to be controlled independently. For example, by illuminating only some of the light sources 212-1 to 212-4, the shadows of a subject having a three-dimensional shape (such as influenza follicles) can be included in the subject image. As a result, the subject image contains information about the three-dimensional shape of the subject, making it possible to identify the subject more clearly and to determine the possibility of influenza infection more accurately with the determination algorithm.
- In the present embodiment, the light sources 212-1 to 212-4 are arranged on the proximal end 225 side of the main body 201; however, they may be arranged at the tip 222 of the main body 201 or on the outer periphery of the main body 201.
- The diffuser plate 219 is used to prevent the light emitted from the light source 212 from illuminating only a part of the oral cavity and to generate uniform light. As an example, a lens-type diffuser plate is used, in which a fine lens array is formed on the surface of the diffuser plate 219 to obtain an arbitrary diffusion angle. Alternatively, a diffuser plate that diffuses light by another method may be used, such as one that achieves the light diffusion function with fine irregularities randomly arranged on its surface. Furthermore, the diffuser plate 219 may be formed integrally with the main body 201; for example, this can be realized by forming fine irregularities on the tip portion of the main body 201.
- the diffusion plate 219 is arranged on the tip 222 side of the main body 201 .
- However, the arrangement is not limited to this; the diffuser plate 219 may be placed anywhere between the light source 212 and the oral cavity to be irradiated.
- FIG. 7A is a diagram conceptually showing an image management table stored in the processing device 100 according to an embodiment of the present disclosure. The information stored in the image management table is updated and stored as needed according to the progress of the processing of the processor 111 of the processing device 100 .
- the image management table stores subject image information, candidate information, determination image information, etc. in association with user ID information.
- User ID information is information unique to each user and for specifying each user. User ID information is generated each time a new user is registered by the operator.
- Subject image information is information specifying a subject image captured by the operator for each user.
- a subject image is one or a plurality of images including a subject captured by the camera of the imaging device 200 , and is stored in the memory 112 by being received from the imaging device 200 .
- the “candidate information” is information specifying an image that is a candidate for selecting a determination image from one or more subject images.
- Determination image information is information for specifying a determination image used to determine the possibility of influenza infection.
- Such a determination image is selected based on the degree of similarity from the candidate images specified by the candidate information.
- In the image management table, information specifying each image is stored as the subject image information, the candidate information, and the determination image information.
- The information specifying each image is typically identification information for identifying the image, but it may instead be information indicating the storage location of the image, or the image data itself.
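- As a concrete illustration of the image management table described above, the following is a minimal sketch in Python; the class and field names are hypothetical, and whether image references are IDs, storage paths, or raw data is an implementation choice, as noted above.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ImageRecord:
    """One row of the image management table for a single user ID."""
    subject_images: List[str] = field(default_factory=list)        # all captured images
    candidate_images: List[str] = field(default_factory=list)      # determination image candidates
    determination_images: List[str] = field(default_factory=list)  # images used for determination

# The table associates user ID information with the image information.
image_management_table: Dict[str, ImageRecord] = {}

image_management_table["U0001"] = ImageRecord(
    subject_images=["img_001", "img_002", "img_003"],
    candidate_images=["img_002", "img_003"],
    determination_images=["img_003"],
)
```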
- FIG. 7B is a diagram conceptually showing a user table stored in the processing device 100 according to an embodiment of the present disclosure.
- the information stored in the user table is updated and stored as needed according to the progress of the processing of the processor 111 of the processing device 100 .
- attribute information, interview information, two-dimensional code information, determination result information, etc. are stored in the user table in association with user ID information.
- User ID information is information unique to each user and for specifying each user. User ID information is generated each time a new user is registered by the operator.
- attribute information is information input by an operator or a user, for example, and is information related to an individual user such as the user's name, sex, age, and address.
- “Interview information” is, for example, information input by an operator or a user, such as the user's medical history and symptoms, and is used as a reference for diagnosis by a doctor or the like.
- Examples of the interview information include patient background such as weight, allergies, and underlying diseases; body temperature; peak body temperature since onset; elapsed time from onset; heart rate; pulse rate; oxygen saturation; blood pressure; medication status; contact with other influenza patients; the presence or absence of subjective symptoms and physical findings such as joint pain, muscle pain, headache, malaise, loss of appetite, chills, sweating, cough, sore throat, nasal discharge or congestion, tonsillitis, digestive symptoms, rash on the hands and feet, pharyngeal redness and white coating, tonsil swelling, history of tonsillectomy, strawberry tongue, and swollen anterior cervical lymph nodes with tenderness; influenza vaccination history; and vaccination timing.
- “Two-dimensional code information” is information specifying a recording medium on which at least one of the user ID information, information specifying the user ID information, the attribute information, the interview information, or a combination thereof is recorded. Such a recording medium need not be a two-dimensional code; instead of a two-dimensional code, a one-dimensional bar code, another multi-dimensional code, text information such as specific numbers and characters, or image information may be used.
- “Determination result information” is information indicating the determination result of the possibility of contracting influenza based on the determination image. An example of such determination result information is the positive rate for influenza. However, it is not limited to the positive rate; any information indicating the possibility may be used, such as information specifying whether the result is positive or negative. Moreover, the determination result need not be a specific numerical value; it may take any form, such as a classification according to the positive rate or a classification indicating positive or negative.
- The attribute information and interview information need not be input each time by the user or operator; they may be received, for example, from an electronic medical record device or another terminal device connected via a wired or wireless network, or obtained by analyzing a subject image captured by the imaging device 200. Furthermore, although not specifically illustrated in FIGS. 7A and 7B, it is also possible to store external factor information, such as information on current epidemics of influenza and other infectious diseases to be diagnosed (or whose diagnosis is to be assisted), and the determination results and disease status of other users with respect to these infectious diseases.
- FIG. 8 is a diagram showing a processing sequence executed between the processing device 100 and the photographing device 200 according to an embodiment of the present disclosure. Specifically, FIG. 8 shows the processing sequence executed from when the shooting mode is selected on the processing device 100, through the capture of the subject image by the photographing device 200, to the output of the determination result by the processing device 100.
- the processing device 100 outputs a mode selection screen via the output interface 114, and receives mode selection by the operator via the input interface 113 (S11). Then, when the selection of the shooting mode is accepted, the processing device 100 outputs an attribute information input screen via the output interface 114 .
- the processing device 100 receives input from the operator or user via the input interface 113, acquires the attribute information, and stores it in the user table in association with the user ID information (S12). Further, when the attribute information is acquired, the processing device 100 outputs an input screen for inquiry information via the output interface 114 .
- the processing device 100 receives an input from an operator or a user via the input interface 113, acquires inquiry information, and stores it in the user table in association with the user ID information (S13).
- The acquisition of the attribute information and interview information need not be performed at this timing; it may be performed at another timing, such as before the determination process.
- Such information may be acquired not only by receiving input via the input interface 113, but also by receiving it from an electronic medical record device or another terminal device connected via a wired or wireless network.
- Alternatively, the information may be input on an electronic medical record device or another terminal device and recorded on a recording medium such as a two-dimensional code, and the recording medium may then be photographed by a camera connected to the processing device 100 or by the photographing device 200 to acquire the information. The information may also be obtained by having users, operators, patients, medical personnel, or the like fill in a paper medium such as a medical questionnaire, capturing the paper medium with a scanner connected to the processing device 100 or with the photographing device 200, and optically recognizing the characters.
- Under the control of the processor 111, the processing device 100 generates a two-dimensional code in which previously generated user ID information is recorded, and stores it in the memory 112 (S14). The processing device 100 then outputs the generated two-dimensional code via the output interface 114 (S15).
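- As an illustration of S14 and S15, the two-dimensional code could be generated, for example, with the third-party Python qrcode library; the function name and file path below are hypothetical.

```python
import qrcode  # third-party library: pip install qrcode[pil]

def generate_user_code(user_id: str, path: str) -> None:
    """Record the user ID information in a two-dimensional code (S14)
    and save it so it can be output on the display (S15)."""
    img = qrcode.make(user_id)
    img.save(path)

generate_user_code("U0001", "user_u0001_code.png")
```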
- the photographing device 200 activates the camera 211 and the like in response to an input from the operator to the input interface 210 (for example, a power button) (S21). Then, by photographing the two-dimensional code output via the output interface 114 with the activated camera 211, the photographing device 200 reads the user ID information recorded in the two-dimensional code (S22).
- the operator covers the tip of the imaging device 200 with the auxiliary tool 300 and inserts the imaging device 200 into the user's oral cavity to a predetermined position.
- Upon receiving an operator's input to the input interface 210 (for example, the imaging button), the imaging device 200 starts capturing subject images of the subject, including at least part of the oral cavity (S23).
- The photographing device 200 associates the captured subject image with the user ID information read from the two-dimensional code, stores it in the memory 214, and outputs it to the display panel 215 of the display 203 (S24).
- When the photographing device 200 receives the operator's input indicating the end of shooting via the input interface 210, it associates the stored subject image (T21) with the user ID information and transmits them to the processing device 100 via the communication interface 216.
- When the processing device 100 receives the subject image via the communication interface 115, it stores the image in the memory 112 and registers it in the image management table based on the user ID information.
- the processing device 100 selects a judgment image to be used for judging the possibility of contracting influenza from the stored subject images (S31).
- the processing device 100 uses the selected determination image to execute the processing of determining the possibility of contracting influenza (S32).
- the processing device 100 stores the obtained determination result in association with the user ID information in the user table and outputs it via the output interface 114 (S33). After that, the processing sequence ends.
- FIG. 9 is a diagram showing a processing flow performed by the processing device 100 according to one embodiment of the present disclosure. Specifically, FIG. 9 is a diagram showing a processing flow that is executed at predetermined intervals for the processing related to S11 to S15 of FIG. 8. The processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- FIG. 18 is a diagram showing an example of a screen displayed on the processing device 100 according to an embodiment of the present disclosure. Specifically, FIG. 18 shows an example of the mode selection screen output in S111 and S112 of FIG. 9. According to FIG. 18, approximately at the center of the display functioning as the output interface 114, a shooting mode icon 11 for shifting to the shooting mode for capturing a subject image and a determination result confirmation mode icon 12 for shifting to the determination result confirmation mode, in which previously obtained determination results of the possibility of contracting influenza are output, are displayed. The user can operate the input interface 113 to select which mode to shift to.
- the processor 111 determines whether or not the mode selection by the operator has been accepted via the input interface 113 (S112). At this time, if the processor 111 determines that neither the shooting mode icon 11 nor the determination result confirmation mode icon 12 shown in FIG. 18 has been input and no mode selection has been received, the processing flow ends.
- On the other hand, if a mode selection has been received, the processor 111 determines whether or not the shooting mode has been selected (S113).
- If the processor 111 determines that the determination result confirmation mode icon 12 shown in FIG. 18 has been selected, the desired determination result is displayed via the output interface 114 (S118).
- If the processor 111 determines that the shooting mode icon 11 shown in FIG. 18 has been selected, it displays a screen (not shown) for accepting input of the user's attribute information on the output interface 114.
- the screen includes items such as the user's name, gender, age, address, etc., which must be input as attribute information, and input boxes for inputting answers to each item.
- the processor 111 acquires the information input to each input box via the input interface 113 as attribute information (S114).
- The processor 111 then generates new user ID information corresponding to the user newly stored in the user table, and stores the attribute information in the user table in association with the user ID information. Note that if user ID information has been selected in advance, before the attribute information is input, the generation of new user ID information can be omitted.
- the processor 111 displays on the output interface 114 a screen for accepting the user's inquiry information input (not shown).
- the screen includes items such as the user's body temperature, heart rate, medication status, presence or absence of subjective symptoms, etc., which need to be input as interview information, and an input box for inputting answers to each item.
- the processor 111 acquires the information input to each input box via the input interface 113 as interview information, and stores it in the user table in association with the user ID information (S115).
- In the present embodiment, the attribute information and interview information are input on the processing device 100.
- However, the input is not limited to this; the information may be acquired by receiving information input to an electronic medical record device or another terminal device connected via a wired or wireless network.
- the processor 111 refers to the user table, reads the user ID information corresponding to the user whose information has been input, and generates a two-dimensional code recording this (S116).
- the processor 111 associates the generated two-dimensional code with the user ID information, stores it in the user table, and outputs it via the output interface 114 (S117). With the above, the processing flow ends.
- FIG. 19 is a diagram showing an example of a screen displayed on the processing device 100 according to an embodiment of the present disclosure.
- FIG. 19 is a diagram showing an example of the display screen of the two-dimensional code output in S117 of FIG. 9.
- The user ID information of the user whose attribute information and the like have been input is displayed at the top of the display functioning as the output interface 114.
- The two-dimensional code generated in S116 of FIG. 9 is displayed approximately in the center of the display.
- FIG. 10 is a diagram showing a processing flow performed by the imaging device 200 according to one embodiment of the present disclosure. Specifically, FIG. 10 is a diagram showing a processing flow that is executed at predetermined intervals for the processing related to S21 to S24 of FIG. 8. The processing flow is mainly performed by the processor 213 of the imaging device 200 reading and executing a program stored in the memory 214.
- The processor 213 determines whether or not an operator's input has been received via the input interface 210 (e.g., the power button) (S211). At this time, if the processor 213 determines that no operator input has been received, the processing flow ends.
- If the processor 213 determines that the operator's input has been received, it outputs a standby screen to the display panel 215 (S212).
- The standby screen (not shown) includes a through image (live preview) captured via the camera 211.
- the operator moves the photographing device 200 so that the angle of view of the camera 211 includes the two-dimensional code output to the output interface 114 of the processing device 100, and the processor 213 photographs the two-dimensional code with the camera 211. (S213).
- processor 213 reads the user ID information recorded in the two-dimensional code, and stores the read user ID information in memory 214 (S214).
- the processor 213 then outputs the standby screen to the display panel 215 again (S215).
- the operator covers the tip of the imaging device 200 with the auxiliary tool 300, and inserts the imaging device 200 into the oral cavity of the user to a predetermined position.
- When the processor 213 receives a shooting start operation from the operator via the input interface 210 (for example, the shooting button), it controls the camera 211 to start capturing subject images of the subject (S216).
- The subject images are captured by continuously shooting a predetermined number of images (for example, 30 images) at predetermined intervals in response to a press of the shooting button.
- processor 213 stores the photographed subject image in memory 214 in association with the read user ID information.
- The processor 213 then outputs the stored subject images to the display panel 215 (S217).
- The operator removes the imaging device 200, together with the auxiliary tool 300, from the oral cavity, checks the subject images output to the display panel 215, and, if a desired image has not been obtained, can input an instruction for re-imaging. The processor 213 therefore determines whether or not an input of a re-imaging instruction by the operator has been received via the input interface 210 (S218). If an input of the re-imaging instruction is received, the processor 213 displays the standby screen of S215 again to enable the subject images to be captured anew.
- If no re-imaging instruction is received, the processor 213 transmits the subject images stored in the memory 214 and the user ID information associated with them to the processing device 100 via the communication interface 216 (S219). The processing flow then ends.
- FIG. 11 is a diagram showing a processing flow performed by the processing device 100 according to one embodiment of the present disclosure. Specifically, FIG. 11 is a diagram showing a processing flow executed for the processing related to S31 to S33 of FIG. 8. The processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- When the processor 111 receives a subject image and the associated user ID information from the photographing device 200, it stores them in the memory 112 and registers them in the image management table (S311). The processor 111 then outputs, via the output interface 114, the received user ID information or the attribute information (for example, the name) corresponding to it, and accepts the selection of the user whose possibility of contracting influenza is to be determined (S312). When a plurality of pieces of user ID information and associated subject images are received from the photographing device 200, a plurality of pieces of user ID information or corresponding attribute information are output, and any one of the users can be selected.
- When the processor 111 receives the selection of the user to be determined via the input interface 113, it reads the attribute information associated with that user's user ID information from the user table in the memory 112 (S313). Similarly, the processor 111 reads the interview information associated with the user ID information of the user to be determined from the user table in the memory 112 (S314).
- Next, the processor 111 reads the subject images associated with the user ID information of the selected user from the memory 112, and selects the determination image used to determine the possibility of contracting influenza (S315; the details of this selection process are described later). The processor 111 then executes the process of determining the possibility of contracting influenza based on the selected determination image (S316; the details of this determination process are described later).
- When the determination result is obtained, the processor 111 stores it in the user table in association with the user ID information and outputs it via the output interface 114 (S317). The processing flow then ends.
- FIG. 12 is a diagram showing a processing flow executed by the processing device 100 according to one embodiment of the present disclosure. Specifically, FIG. 12 is a diagram showing the details of the determination image selection process executed in S315 of FIG. 11. The processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- the processor 111 reads the subject image associated with the user ID information of the selected user from the memory 112 (S411).
- Next, the processor 111 selects images that are candidates for the determination image from the read subject images (S412). This selection is performed, for example, using a trained determination image selection model.
- FIG. 13 is a diagram showing a processing flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 13 is a diagram showing a processing flow relating to the generation of the trained determination image selection model used in S412 of FIG. 12.
- the processing flow may be executed by the processor 111 of the processing device 100 or by a processor of another processing device.
- the processor executes a step of acquiring a subject image of a subject including at least part of the pharynx as a learning subject image (S511).
- the processor executes a processing step of adding label information indicating whether or not the acquired learning subject image is an image that can be used as a determination image (S512).
- Then, the processor executes a step of storing the assigned label information in association with the learning subject image (S513). Note that, in the labeling process and the label information storage process, a human determines in advance whether or not each learning subject image can be used as a determination image, and the processor stores the result in association with the learning subject image.
- Alternatively, the processor may analyze whether or not an image can be used as a determination image by known image analysis processing, and store the result in association with the learning subject image. The label information is assigned from viewpoints such as whether at least a partial region of the oral cavity, which is the subject, is captured, and whether the image quality is good, that is, free from camera shake, defocus, and fogging.
- the processor executes a step of performing machine learning of the selection pattern of the determination image using them (S514).
- The machine learning is performed, for example, by giving pairs of a learning subject image and its label information to a neural network configured by combining neurons, and repeating learning while adjusting the parameters of each neuron so that the output of the neural network matches the label information.
- Then, a step of acquiring a trained determination image selection model (for example, the neural network and its parameters) is executed.
- the acquired learned determination image selection model may be stored in the memory 112 of the processing device 100 or another processing device connected to the processing device 100 via a wired or wireless network.
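- The following is a minimal sketch of the training loop described above for the determination image selection model, assuming PyTorch and a generic ResNet-18 backbone; the specific architecture, loss, and hyperparameters are illustrative assumptions, not the disclosed configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary selector: 1 = usable as a determination image, 0 = not usable.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One update: adjust parameters so the network output approaches
    the label information assigned to each learning subject image."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch: 8 learning subject images with 0/1 label information.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```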
- The processor 111 inputs the subject images read in S411 to the trained determination image selection model, thereby obtaining, as output, candidate images for the determination image.
- This makes it possible to favorably select images showing at least a partial region of the oral cavity, which is the subject, and images with good image quality, that is, free from camera shake, defocus, subject motion blur, poor exposure, and fogging.
- the processor 111 registers the acquired candidate images, which are candidates for the determination image, in the image management table.
- the processor 111 executes selection processing of a judgment image based on the degree of similarity from the selected candidate images (S413). Specifically, the processor 111 compares the obtained candidate images and calculates the degree of similarity between the candidate images. Then, the processor 111 selects a candidate image judged to have a low degree of similarity to other candidate images as a judgment image.
- The degree of similarity between candidate images is calculated by, for example, a method using local feature amounts of each candidate image (the Bag-of-Keypoints method), a method using the EMD (Earth Mover's Distance), a method using an SVM (support vector machine), a method using the Hamming distance, or a method using cosine similarity.
- the processor 111 registers the candidate image selected based on the degree of similarity as the determination image in the image management table (S414).
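- As one illustration of the similarity-based selection in S413, the sketch below greedily keeps candidate images whose cosine similarity to every already-kept candidate is below a threshold; the feature representation and the threshold value are assumptions for illustration.

```python
import numpy as np

def select_determination_images(features: np.ndarray, threshold: float = 0.9) -> list:
    """Keep candidates whose cosine similarity to every already-kept
    candidate is below the threshold (low similarity to the others)."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = []
    for i in range(len(normed)):
        if all(float(normed[i] @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Each row is a feature vector computed from one candidate image.
candidate_features = np.random.rand(10, 128)
determination_indices = select_determination_images(candidate_features)
```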
- each of the subject image, the candidate image, and the determination image may be one or a plurality of images.
- By using a plurality of determination images in the determination process described later, it is possible to further improve determination accuracy compared with using only one determination image.
- Furthermore, each time a subject image is captured, the captured subject image may be transmitted to the processing device 100, after which the candidate images and the determination images are selected; alternatively, the candidate image selection and determination image selection may be performed by the imaging device 200 itself. In either case, shooting may be terminated when a predetermined number of determination images (for example, about five) have been obtained. By doing so, the time required for capturing subject images can be minimized while maintaining the improvement in determination accuracy described above; in other words, discomfort to the user, such as the gag reflex, can be reduced. A schematic sketch of this early termination is shown below.
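- The following sketch illustrates the early-termination behavior; the callbacks are hypothetical, and candidate selection and determination image selection are collapsed into a single predicate for brevity.

```python
from typing import Callable, List

def capture_until_enough(capture: Callable[[], object],
                         is_determination_image: Callable[[object], bool],
                         target: int = 5, max_shots: int = 30) -> List[object]:
    """Stop shooting as soon as the predetermined number of
    determination images (for example, about five) is collected."""
    kept: List[object] = []
    for _ in range(max_shots):
        image = capture()          # one shot by the imaging device
        if is_determination_image(image):
            kept.append(image)
        if len(kept) >= target:
            break                  # end shooting early to reduce discomfort
    return kept

# Usage with placeholder callbacks, purely for illustration.
images = capture_until_enough(lambda: object(), lambda img: True)
```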
- FIG. 14 is a diagram showing a processing flow performed by the processing device 100 according to one embodiment of the present disclosure. Specifically, FIG. 14 is a diagram showing the details of the process for determining the possibility of contracting influenza executed in S316 of FIG. 11. The processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- The processor 111 obtains the determination result by performing ensemble processing on the first positive rate, the second positive rate, and the third positive rate, which are obtained by different methods.
- The processor 111 reads from the memory 112 the determination image associated with the user ID information of the user to be determined (S611). Then, the processor 111 performs predetermined preprocessing on the read determination image (S612).
- The preprocessing includes, for example: filter processing using band-pass filters (including high-pass filters and low-pass filters), averaging filters, Gaussian filters, Gabor filters, Canny filters, Sobel filters, Laplacian filters, median filters, bilateral filters, and the like; blood vessel extraction processing using a Hessian matrix or the like; segmentation processing of specific regions (e.g., follicles) using machine learning; trimming processing of segmented regions; defogging processing; super-resolution processing; and combinations of these. The preprocessing is selected according to the purpose, such as high-definition enhancement, region extraction, noise removal, edge enhancement, image correction, and image conversion.
- By such preprocessing, a region of interest that is important for diagnosing the disease, such as follicles in the case of influenza, can be extracted and emphasized in advance, which improves determination accuracy.
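- A few of the listed filters could be chained with OpenCV as follows; this is only a sketch of one possible preprocessing pipeline (median filter, bilateral filter, Canny edge detection), and the file name is hypothetical.

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Chain a few of the listed filters: noise removal, edge-preserving
    smoothing, then edge enhancement."""
    denoised = cv2.medianBlur(image, 5)                  # median filter
    smoothed = cv2.bilateralFilter(denoised, 9, 75, 75)  # bilateral filter
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150)                      # Canny edge detection

image = cv2.imread("determination_image.png")  # hypothetical file name
if image is not None:
    edges = preprocess(image)
```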
- For the defogging processing, as an example, pairs of a learning subject image and a degraded learning image, generated from the learning subject image by applying a fog-adding filter or the like, are given to a learning device, and machine learning is performed.
- The processor 111 gives the read determination image as an input to the trained defogging image model stored in the memory 112, and acquires the determination image with the fog removed as an output.
- For the super-resolution processing, pairs of a high-resolution image of the subject and a low-resolution image obtained by degrading it, for example by scale reduction or blurring, are given to the learning device as training images, and a trained super-resolution image model obtained by machine learning is used.
- The processor 111 gives the defogged determination image as an input to the trained super-resolution image model stored in the memory 112, and acquires the super-resolution-processed determination image as an output.
- For the segmentation processing, pairs of a learning subject image and the position information of labels obtained by labeling a region of interest (for example, a follicle) on that image, based on operation inputs by a doctor or the like, are given to the learning device, and a trained segmentation image model obtained by machine learning is used.
- The processor 111 gives the super-resolution-processed determination image as an input to the trained segmentation image model stored in the memory 112, and obtains, as an output, a determination image in which the region of interest (e.g., follicles) is segmented.
- The processor 111 then stores the preprocessed determination image in the memory 112.
- In the present embodiment, the defogging processing, the super-resolution processing, and the segmentation processing are performed in this order; however, they may be performed in any order, and it suffices that at least one of them is performed.
- Other processes, such as a defogging filter, scale enlargement processing, and sharpening processing, may also be used.
- the processor 111 gives the preprocessed determination image as an input to the feature amount extractor (S613), and acquires the image feature amount of the determination image as an output (S614). Further, the processor 111 provides the feature amount of the obtained determination image as input to the classifier (S615), and obtains as output the first positive rate indicating the first possibility of contracting influenza (S616).
- The feature amount extractor obtains a predetermined number of feature amounts of the determination image, such as the presence or absence of follicles and the presence or absence of redness, as a vector. As an example, a 1024-dimensional feature amount vector is extracted from the determination image and stored as the feature amount of the determination image.
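- The following sketch illustrates the feature amount extractor and classifier of S613 to S616, assuming PyTorch: a CNN backbone projected to a 1024-dimensional feature vector, followed by a classifier whose sigmoid output is read as the first positive rate. The choice of ResNet-50 and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()               # expose 2048-dim penultimate features

feature_extractor = nn.Sequential(
    backbone,
    nn.Linear(2048, 1024),                # 1024-dimensional feature vector
)
classifier = nn.Sequential(
    nn.Linear(1024, 1),
    nn.Sigmoid(),                         # read as the first positive rate
)

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)         # preprocessed determination image
    features = feature_extractor(image)         # S613 -> S614
    first_positive_rate = classifier(features)  # S615 -> S616
```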
- FIG. 15 is a diagram showing a processing flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 15 is a diagram showing a processing flow relating to the generation of the trained positive rate determination model comprising the feature amount extractor of S613 and the classifier of S615 of FIG. 14.
- the processing flow may be executed by the processor 111 of the processing device 100 or by a processor of another processing device.
- the processor executes a step of obtaining, as a learning determination image, an image of a subject including at least part of the pharynx that has undergone preprocessing similar to S612 of FIG. 14 (S711).
- Next, the processor executes a step of assigning a correct label, which is based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation culture test, or the like performed on the user who is the subject of the acquired learning determination image (S712).
- the processor executes a step of storing the assigned correct label information as determination result information in association with the learning determination image (S713).
- the processor uses them to carry out machine learning of positive rate determination patterns (S714).
- The machine learning is performed, for example, by giving pairs of a learning determination image and its correct label information to a feature amount extractor composed of a convolutional neural network and a classifier composed of a neural network, and repeating learning while adjusting the parameters of each neuron so that the output of the classifier matches the correct label information.
- a step of acquiring a learned positive rate determination model is executed (S715).
- the acquired learned positive rate determination model may be stored in the memory 112 of the processing device 100 or another processing device connected to the processing device 100 via a wired or wireless network.
- The processor 111 inputs the determination image preprocessed in S612 to the trained positive rate determination model, thereby obtaining as outputs the feature amount of the determination image (S614) and the first positive rate indicating the first possibility of contracting influenza (S616), and stores them in the memory 112 in association with the user ID information.
- Next, the processor 111 reads from the memory 112 at least one of the interview information and attribute information associated with the user ID information of the user to be determined (S617). The processor 111 also reads from the memory 112 the feature amount of the determination image calculated in S614 and stored in association with the user ID information. Then, the processor 111 gives at least one of the read interview information and attribute information, together with the feature amount of the determination image, as inputs to a trained positive rate determination model (S618), and obtains as an output the second positive rate indicating the second possibility of contracting influenza (S619).
- FIG. 16 is a diagram showing a processing flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 16 is a diagram showing a processing flow relating to the generation of the trained positive rate determination model used in S618 of FIG. 14.
- the processing flow may be executed by the processor 111 of the processing device 100 or by a processor of another processing device.
- First, the processor executes a step of acquiring a learning feature amount from a determination image obtained by performing the same preprocessing as in S612 of FIG. 14 on an image of a subject including at least part of the pharynx (S721). The processor also executes a step of acquiring the interview information and attribute information stored in advance in association with the user ID information of the user who is the subject of the determination image (S721). Next, the processor executes a step of assigning a correct label, assigned in advance to the user who is the subject of the determination image based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation culture test, or the like (S722). Then, the processor executes a step of storing the assigned correct label information as determination result information in association with the learning feature amount of the determination image, the interview information, and the attribute information (S723).
- Then, the processor executes a step of performing machine learning of the positive rate determination pattern using these data (S724).
- The machine learning is performed by giving these sets of information to a neural network configured by combining neurons, and repeating learning while adjusting the parameters of each neuron so that the output of the neural network matches the correct label information. Then, a step of acquiring a trained positive rate determination model is executed (S725).
- the acquired learned positive rate determination model may be stored in the memory 112 of the processing device 100 or another processing device connected to the processing device 100 via a wired or wireless network.
- The processor 111 gives the feature amount of the determination image read in S614 and at least one of the interview information and attribute information read in S617 as inputs to this trained positive rate determination model, thereby obtaining as an output the second positive rate indicating the second possibility of contracting influenza (S619), which is stored in the memory 112 in association with the user ID information.
- the processor 111 reads from the memory 112 at least one of medical inquiry information and attribute information associated with the user ID information of the user to be determined (S617).
- The processor 111 also reads from the memory 112 the first positive rate calculated in S616 and stored in association with the user ID information. Then, the processor 111 gives at least one of the read interview information and attribute information, together with the first positive rate, as inputs to a trained positive rate determination model (S620), and obtains as an output the third positive rate indicating the third possibility of contracting influenza (S621).
- FIG. 17 is a diagram showing a processing flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 17 is a diagram showing a processing flow relating to the generation of the trained positive rate determination model used in S620 of FIG. 14.
- the processing flow may be executed by the processor 111 of the processing device 100 or by a processor of another processing device.
- First, the processor executes a step of acquiring first positive rate information, which is obtained by performing the same preprocessing as in S612 of FIG. 14 on an image of a subject including at least part of the pharynx and inputting the resulting determination image to the trained positive rate determination model comprising the feature amount extractor (S613 of FIG. 14) and the classifier (S615 of FIG. 14) (S731). The processor also executes a step of acquiring the interview information and attribute information stored in advance in association with the user ID information of the user who is the subject of the determination image (S731).
- Next, the processor executes a step of assigning a correct label, assigned in advance to the user who is the subject of the determination image based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation culture test, or the like (S732). Then, the processor executes a step of storing the assigned correct label information as determination result information in association with the first positive rate information, the interview information, and the attribute information (S733).
- Then, the processor executes a step of performing machine learning of the positive rate determination pattern using these data (S734).
- The machine learning is performed by giving these sets of information to a neural network configured by combining neurons, and repeating learning while adjusting the parameters of each neuron so that the output of the neural network matches the correct label information. Then, a step of acquiring a trained positive rate determination model is executed (S735).
- the acquired learned positive rate determination model may be stored in the memory 112 of the processing device 100 or another processing device connected to the processing device 100 via a wired or wireless network.
- The processor 111 gives the first positive rate information read in S616 and at least one of the interview information and attribute information read in S617 as inputs to this trained positive rate determination model, thereby obtaining as an output the third positive rate indicating the third possibility of contracting influenza (S621), which is stored in the memory 112 in association with the user ID information.
- the processor 111 reads each positive rate from the memory 112 and performs ensemble processing (S622).
- Specifically, the obtained first positive rate, second positive rate, and third positive rate are given as inputs to a ridge regression model, and the result of ensembling these positive rates is obtained as the determination result of the possibility of contracting influenza (S623).
- The ridge regression model used in S622 is generated by machine learning by the processor 111 of the processing device 100 or by a processor of another processing device. Specifically, the processor acquires the first positive rate, the second positive rate, and the third positive rate from learning determination images. In addition, the processor assigns in advance, to the user who is the subject of each learning determination image, a correct label based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation culture test, or the like.
- The processor gives each set of positive rates and the correct label associated with it to the ridge regression model, and repeats learning while adjusting the weight assigned to each positive rate so that the output of the ridge regression model matches the correct label information.
- In this way, the ridge regression model used for the ensemble processing is obtained, and it is stored in the memory 112 of the processing device 100 or in another processing device connected to the processing device 100 via a wired or wireless network.
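- The ensemble step could be sketched with scikit-learn's Ridge regression as follows; the training arrays contain dummy values purely so the example runs, not real data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Each row: (first, second, third) positive rate for one learning
# determination image; y is the correct label (1 = positive).
# Dummy values for illustration only.
X_train = np.array([[0.82, 0.75, 0.90],
                    [0.10, 0.20, 0.15],
                    [0.65, 0.70, 0.60]])
y_train = np.array([1.0, 0.0, 1.0])

ensemble = Ridge(alpha=1.0).fit(X_train, y_train)

# Ensemble the three positive rates obtained for a user (S622-S623).
rates = np.array([[0.78, 0.81, 0.74]])
determination_result = float(ensemble.predict(rates)[0])
```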
- the processor 111 associates the determination result thus obtained with the user ID information and stores it in the user table of the memory 112 (S624). This ends the processing flow.
- Note that any one of the positive rates may be used directly as the final determination result, or ensemble processing using any two of the positive rates may be performed to obtain the final determination result.
- ensemble processing may be performed by adding other positive rates obtained by other methods, and the final determination result may be obtained.
- the obtained determination results are output via the output interface 114, but only the final determination results may be output, or each positive rate may also be output together.
- In the example of FIG. 14, information indicating the possibility of contracting influenza is output using at least one of the interview information and the attribute information. However, instead of or in addition to these, external factor information related to influenza may be used to output the information indicating the possibility of contracting influenza. Examples of such external factor information include determination results obtained for other users, diagnosis results by doctors, and influenza epidemic information for the region to which the user belongs. The processor 111 can acquire such external factor information from another processing device or the like via the communication interface 115 and give it as an input to the trained positive rate determination model, thereby obtaining a positive rate that takes the external factor information into account.
- As described above, according to the present embodiment, it is possible to provide a processing device, a processing program, a processing method, and a processing system suitable for processing an image of the inside of the oral cavity for use in intraoral diagnosis.
- A trained information estimation model is obtained by assigning the attribute information and interview information associated with each learning subject image as correct labels and performing machine learning on these pairs using a neural network. The processor 111 can then obtain desired interview information and attribute information by giving a subject image as an input to the trained information estimation model. Examples of such interview information and attribute information include sex, age, degree of pharyngeal redness, degree of tonsil swelling, and the presence or absence of white coating. This saves the operator the trouble of inputting the interview information and attribute information.
- Follicles appearing in the pharyngeal region are a characteristic sign of influenza, and are confirmed by visual inspection even in diagnosis by a doctor. Therefore, a region of interest such as a follicle is labeled on the learning subject image by a doctor or the like. Then, the position information (shape information) of the label in the learning subject image is acquired as learning position information, and pairs of the learning subject image and the learning position information labeled on it are machine-learned by a neural network to obtain a trained region extraction model.
- The processor 111 obtains the position information (shape information) of the region of interest (that is, the follicles) as an output by giving a subject image as an input to the trained region extraction model. The processor 111 then stores the obtained position information (shape information) of the follicles as interview information.
- Furthermore, the processor 111 may receive, as the subject image, a moving image captured over a predetermined period.
- the processor 111 extracts each RGB color component from each frame constituting the received moving image to obtain the luminance of the G (green) component.
- the processor 111 generates a luminance waveform of the G component in the moving image from the obtained luminance of the G component of each frame, and estimates the heart rate from its peak value.
- This method utilizes the fact that hemoglobin in blood absorbs green light; the heart rate may of course be estimated by other methods.
- the processor 111 stores the heart rate estimated in this way as inquiry information.
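- The G-component heart rate estimation could be sketched as follows, using NumPy and SciPy peak detection; the peak-spacing constraint and the use of the per-frame mean luminance are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_heart_rate(frames: np.ndarray, fps: float) -> float:
    """Estimate beats per minute from the mean G-channel luminance of
    each frame by counting peaks in the luminance waveform."""
    g_lum = frames[..., 1].mean(axis=(1, 2))        # G component per frame
    g_lum = g_lum - g_lum.mean()                    # remove the DC offset
    peaks, _ = find_peaks(g_lum, distance=fps / 3)  # cap at roughly 180 bpm
    return 60.0 * len(peaks) / (len(frames) / fps)

# frames: (num_frames, height, width, 3) RGB moving image of the subject
video = np.random.rand(300, 64, 64, 3)
bpm = estimate_heart_rate(video, fps=30.0)
```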
- the processor 111 may read the determination image from the memory 112 and provide the read determination image without preprocessing as an input to the feature amount extractor. Moreover, even when preprocessing is performed, the processor 111 may provide both a preprocessed determination image and a non-preprocessed determination image as inputs to the feature amount extractor.
- In the above examples of trained model generation, determination images subjected to the same preprocessing as in S612 of FIG. 14 were used as learning data. However, determination images that have not been preprocessed may also be used as learning data.
- Each trained model described with reference to FIGS. 13 and 15 to 17 was generated using a neural network or a convolutional neural network. However, the models are not limited to these; they may also be generated using other machine learning methods, such as the nearest neighbor method, decision trees, regression trees, or random forests.
- FIG. 20 is a schematic diagram of a processing system 1 according to one embodiment of the present disclosure. Specifically, FIG. 20 is a diagram showing a connection example of the various devices that may constitute the processing system 1. According to FIG. 20, the processing system 1 includes the processing device 100, the imaging device 200, a terminal device 810 such as a smartphone, tablet, or laptop PC, an electronic medical record device 820, and a server device 830, which are connected to one another via wired or wireless networks. Note that not all of the devices shown in FIG. 20 need be provided; they may be provided as appropriate according to the distribution of processing illustrated below.
- For example, the imaging device 200 may execute all of the processes: capturing the subject image, acquiring the attribute information and interview information, selecting the determination image, performing the determination process, and outputting the determination result.
- the photographing device 200 captures the subject image and outputs the determination result, and the server device 830 (cloud server device) performs processing using machine learning, such as determination image selection and determination processing.
- The terminal device 810 executes the input of interview information and attribute information, the processing device 100 executes the determination image selection, the determination processing, and the output of the determination result, and the imaging device 200 captures the subject image.
- the photographing device 200 executes the input of inquiry information and attribute information and the photographing of the subject image, and the processing device 100 executes the selection of the determination image, the determination processing, and the output of the determination result.
- Input of inquiry information and attribute information and output of determination results are performed by the terminal device 810 , determination image selection and determination processing are performed by the processing device 100 , and subject images are captured by the imaging device 200 .
- The electronic medical record device 820 executes the input of interview information and attribute information, the processing device 100 executes the determination image selection and determination processing, the imaging device 200 captures the subject image, and the terminal device 810 outputs the determination result.
- The electronic medical record device 820 executes the input of interview information and attribute information and the output of the determination result, the processing device 100 executes the determination image selection and determination processing, and the imaging device 200 captures the subject image.
- Input of inquiry information and attribute information and output of determination results are performed by the terminal device 810 , selection of determination images and determination processing are performed by the server device 830 , and subject images are captured by the imaging device 200 .
- The terminal device 810 and the electronic medical record device 820 execute the input of interview information and attribute information and the output of the determination result, the server device 830 executes the determination image selection and determination processing, and the imaging device 200 captures the subject image.
- The electronic medical record device 820 executes the input of interview information and attribute information and the output of the determination result, the server device 830 executes the determination image selection and determination processing, and the imaging device 200 captures the subject image.
- As a specific example of the distribution of processing, a terminal device 810 such as a smartphone held by a patient, a tablet used in a medical institution, or a laptop PC used by a doctor executes the processing related to S11 to S15 shown in FIG. 8. Then, when the processing related to S21 to S24 is executed in the imaging device 200, the subject image is transmitted to the server device 830, either via the terminal device 810 or directly. The server device 830 that has received the subject image executes the processing related to S31 to S32, and the determination result is output from the server device 830 to the terminal device 810. The terminal device 810 that receives the determination result stores it in its memory and displays it on its display.
- In the present embodiment, the processing device 100 is referred to as a processing device because it executes the various processes related to the determination processing and the like. However, when such processes are executed by the imaging device 200, the terminal device 810, the electronic medical record device 820, the server device 830, or the like, these devices also function as processing devices and may likewise be referred to as processing devices.
- In the above embodiments, the subject image is captured using the substantially cylindrical imaging device 200 as the imaging device.
- However, it is also possible to use the terminal device 810 as an imaging device and capture the subject image with a camera provided in the terminal device 810, for example.
- In that case, the camera is not inserted into the oral cavity to the vicinity of the pharynx, but is placed outside the incisors (outside the body) to photograph the inside of the oral cavity.
- The processes and procedures described in this specification can be implemented not only by what is explicitly described in the embodiments but also by software, hardware, or a combination thereof. Specifically, the processes and procedures described herein are implemented by embodying logic corresponding to them in media such as integrated circuits, volatile memory, non-volatile memory, magnetic disks, and optical storage. Further, the processes and procedures described in this specification can be implemented as computer programs and executed by various computers, including the processing device and the server device.
- 1 Processing System; 100 Processing Device; 200 Imaging Device; 300 Auxiliary Tool; 400 Mounting Table; 600 Operator; 700 User; 810 Terminal Device; 820 Electronic Medical Record Device; 830 Server Device
Abstract
Description
1. Overview of Processing System 1
The processing system 1 according to the present disclosure is used mainly to image the inside of a user's oral cavity and obtain a subject image. In particular, the processing system 1 is used to image the area around the back of the throat, specifically the pharynx. The following therefore mainly describes the case where the processing system 1 according to the present disclosure is used to image the pharynx. However, the pharynx is only one example of an imaging site; the processing system 1 according to the present disclosure can, of course, also be suitably used for other intraoral sites such as the tonsils.
FIG. 3 is a schematic diagram of the processing system 1 according to an embodiment of the present disclosure. According to FIG. 3, the processing system 1 includes a processing device 100 and an imaging device 200 communicably connected to the processing device 100 via a wired or wireless network. The processing device 100 receives operation input from an operator and controls imaging by the imaging device 200. The processing device 100 also processes the subject image captured by the imaging device 200 to determine the possibility that the user has contracted influenza. Furthermore, the processing device 100 outputs the determined result to notify the user, the operator, a doctor, or the like.
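For orientation only, a minimal Python sketch of the three roles the processing device 100 plays here, controlling imaging, running the determination, and outputting the result, is given below; the class, method, and model names are hypothetical and stand in for whatever the actual implementation uses.

```python
# Hypothetical sketch of the processing device 100's roles: controlling the
# imaging device, running the influenza determination, and outputting the
# result. Names are illustrative only, not the disclosed implementation.

class ProcessingDevice:
    def __init__(self, imaging_device, trained_model):
        self.imaging_device = imaging_device  # connected over a wired or wireless network
        self.trained_model = trained_model    # trained determination model held in memory

    def capture(self, operator_input: dict) -> list:
        """Accept operator input and control imaging by the imaging device."""
        return self.imaging_device.capture(**operator_input)

    def determine(self, subject_images: list) -> float:
        """Process the subject images and estimate the possibility of influenza."""
        return self.trained_model.predict(subject_images)

    def output(self, possibility: float) -> str:
        """Notify the user, the operator, or a doctor of the result."""
        return f"Estimated possibility of influenza: {possibility:.0%}"
```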
FIG. 7A is a diagram conceptually showing an image management table stored in the processing device 100 according to an embodiment of the present disclosure. The information stored in the image management table is updated and stored as the processing by the processor 111 of the processing device 100 progresses.
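FIG. 7A is described only conceptually, so the following is an assumed in-memory shape for such a table; every field name here is an illustration rather than something the disclosure specifies.

```python
# Assumed structure for the image management table of FIG. 7A. All field
# names are illustrative; the disclosure shows the table only conceptually.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ImageRecord:
    image_id: str
    captured_at: str                       # capture timestamp
    quality_score: Optional[float] = None  # filled in by preprocessing
    is_determination_image: bool = False   # set once selection has run

@dataclass
class ImageManagementTable:
    records: dict = field(default_factory=dict)

    def update(self, record: ImageRecord) -> None:
        """Updated as the processing by processor 111 progresses."""
        self.records[record.image_id] = record

table = ImageManagementTable()
table.update(ImageRecord("img-001", "2021-07-14T10:00:00"))
```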
FIG. 8 is a diagram showing a processing sequence executed between the processing device 100 and the imaging device 200 according to an embodiment of the present disclosure. Specifically, FIG. 8 shows the processing sequence executed from when the imaging mode is selected on the processing device 100, through the capture of the subject image by the imaging device 200, until the determination result is output from the processing device 100.
FIG. 9 is a diagram showing a processing flow executed in the processing device 100 according to an embodiment of the present disclosure. Specifically, FIG. 9 shows a processing flow executed at a predetermined cycle for the processing related to S11 to S15 in FIG. 8. This processing flow is performed mainly by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
FIG. 10 is a diagram showing a processing flow executed in the imaging device 200 according to an embodiment of the present disclosure. Specifically, FIG. 10 shows a processing flow executed at a predetermined cycle for the processing related to S21 to S24 in FIG. 8. This processing flow is performed mainly by the processor 213 of the imaging device 200 reading and executing a program stored in the memory 214.
FIG. 11 is a diagram showing a processing flow executed in the processing device 100 according to an embodiment of the present disclosure. Specifically, FIG. 11 shows the processing flow executed for the processing related to S31 to S33 in FIG. 8. This processing flow is performed mainly by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
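Since each of these flows is said to run at a predetermined cycle, with the processor reading a program out of memory, a generic sketch of one such periodically executed flow may help; the cycle length and the step function below are assumptions.

```python
# Generic sketch of a processing flow executed at a predetermined cycle, as
# in FIGS. 9 to 11. The cycle length and the step body are assumptions.
import time

def run_periodically(step, interval_sec: float = 0.1, cycles: int = 5) -> None:
    """Invoke `step` once per cycle, mimicking a processor repeatedly
    executing a program read from memory."""
    for _ in range(cycles):
        step()
        time.sleep(interval_sec)

run_periodically(lambda: print("one pass of the processing flow"))
```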
The example of FIG. 14 described the case where information indicating the possibility of contracting influenza is output using at least one of interview information and attribute information. However, instead of, or in addition to, such information, external factor information related to influenza may be used to output information indicating the possibility of contracting influenza. Examples of such external factor information include determination results made for other users, diagnosis results by doctors, and influenza epidemic information for the region to which the user belongs. The processor 111 acquires such external factor information from another processing device or the like via the communication interface 115 and gives it as input to the trained positive-rate determination model, which makes it possible to obtain a positive rate that takes the external factor information into account.
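As a hedged illustration of feeding external factor information into a trained positive-rate determination model, consider the sketch below; the logistic form, the coefficients, and the feature names are placeholders, not the disclosed model.

```python
# Illustration only: combining an image-derived score with external factor
# information (for example, a regional epidemic level and the positive ratio
# among other users) as inputs to a positive-rate model. The logistic form
# and coefficients are placeholders standing in for learned parameters.
import math

def positive_rate(image_score: float, epidemic_level: float,
                  regional_positive_ratio: float) -> float:
    z = (2.0 * image_score            # weight on the image-based determination
         + 1.0 * epidemic_level       # weight on regional epidemic information
         + 1.5 * regional_positive_ratio
         - 2.5)                       # bias term
    return 1.0 / (1.0 + math.exp(-z))  # squash to a probability

print(positive_rate(image_score=0.8, epidemic_level=0.6,
                    regional_positive_ratio=0.3))
```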
(1) All processing, namely capture of the subject image, acquisition of attribute information and interview information, selection of the determination image, the determination processing, and output of the determination result, is executed by the imaging device 200.
(2) Capture of the subject image and output of the determination result are executed by the imaging device 200, and processing that uses machine learning, such as selection of the determination image and the determination processing, is executed by the server device 830 (a cloud server device).
(3) Input of interview information and attribute information is executed by the terminal device 810; selection of the determination image, the determination processing, and output of the determination result are executed by the processing device 100; and capture of the subject image is executed by the imaging device 200.
(4) Input of interview information and attribute information and capture of the subject image are executed by the imaging device 200, and selection of the determination image, the determination processing, and output of the determination result are executed by the processing device 100.
(5) Input of interview information and attribute information and output of the determination result are executed by the terminal device 810, selection of the determination image and the determination processing are executed by the processing device 100, and capture of the subject image is executed by the imaging device 200.
(6) Input of interview information and attribute information is executed by the electronic medical record device 820, selection of the determination image and the determination processing are executed by the processing device 100, capture of the subject image is executed by the imaging device 200, and output of the determination result is executed by the terminal device 810.
(7) Input of interview information and attribute information and output of the determination result are executed by the electronic medical record device 820, selection of the determination image and the determination processing are executed by the processing device 100, and capture of the subject image is executed by the imaging device 200.
(8) Input of interview information and attribute information and output of the determination result are executed by the terminal device 810, selection of the determination image and the determination processing are executed by the server device 830, and capture of the subject image is executed by the imaging device 200.
(9) Input of interview information and attribute information and output of the determination result are executed by the terminal device 810 and the electronic medical record device 820, selection of the determination image and the determination processing are executed by the server device 830, and capture of the subject image is executed by the imaging device 200.
(10) Input of interview information and attribute information and output of the determination result are executed by the electronic medical record device 820, selection of the determination image and the determination processing are executed by the server device 830, and capture of the subject image is executed by the imaging device 200 (a compact sketch of such task-to-device assignments follows this list).
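The ten arrangements above all reduce to assigning the same set of tasks (input, capture, selection, determination, output) to devices, so a compact, hypothetical role map suffices to express any of them; the device keys below simply mirror the reference numerals.

```python
# The configurations above differ only in which device performs which task.
# A hypothetical role map for configuration (8), using the reference
# numerals as device names:
CONFIG_8 = {
    "input_interview_and_attribute_info": "terminal_device_810",
    "capture_subject_image": "imaging_device_200",
    "select_determination_image": "server_device_830",
    "determination_processing": "server_device_830",
    "output_determination_result": "terminal_device_810",
}

def device_for(task: str, config: dict) -> str:
    """Look up which device a task is assigned to in a given configuration."""
    return config[task]

print(device_for("determination_processing", CONFIG_8))  # server_device_830
```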
100 Processing device
200 Imaging device
300 Auxiliary tool
400 Mounting table
600 Operator
700 User
810 Terminal device
820 Electronic medical record device
830 Server device
Claims (15)
- A processing device comprising at least one processor, wherein the at least one processor is configured to perform processing to:
acquire one or more determination images of a subject including at least part of a user's oral cavity, via a camera for capturing an image of the subject;
determine the possibility of contracting a predetermined disease based on a trained determination model, stored in a memory for determining the possibility of contracting the predetermined disease, and on the acquired one or more determination images; and
output information indicating the determined possibility of contracting the disease.
- The processing device according to claim 1, wherein the subject includes at least the pharynx.
- The processing device according to claim 1, wherein the subject includes at least the tonsils.
- The processing device according to any one of claims 1 to 3, wherein the one or more determination images are acquired by inputting one or more subject images captured by the camera into a trained determination-image selection model stored in the memory for selecting the one or more determination images from the one or more subject images.
- The processing device according to claim 4, wherein the trained determination-image selection model is obtained by learning using training subject images obtained by imaging the subject and label information indicating whether each training subject image can be used for determining the possibility of contracting the disease.
- The processing device according to any one of claims 1 to 3, wherein the one or more determination images are acquired from a plurality of subject images captured by the camera based on the degree of similarity between the subject images.
- The processing device according to any one of claims 1 to 6, wherein the processor:
calculates a predetermined feature quantity by inputting the one or more determination images into a trained feature extractor for extracting the predetermined feature quantity from the one or more determination images; and
determines the possibility of contracting the predetermined disease based on the predetermined feature quantity and the trained determination model.
- The processing device according to any one of claims 1 to 7, wherein the processor:
acquires at least one of interview information and attribute information of the user; and
determines the possibility of contracting the predetermined disease based on at least one of the interview information and the attribute information, in addition to the trained determination model and the one or more determination images.
- The processing device according to any one of claims 1 to 7, wherein the processor:
acquires at least one of interview information and attribute information of the user;
determines a first possibility of contracting the predetermined disease based on the trained determination model and the one or more determination images, without using either the interview information or the attribute information;
determines a second possibility of contracting the predetermined disease based on at least one of the interview information and the attribute information, in addition to the trained determination model and the one or more determination images; and
acquires information indicating the possibility of contracting the disease based on the first possibility and the second possibility.
- The processing device according to claim 8 or 9, wherein the attribute information is information acquired from one or more subject images captured by the camera.
- The processing device according to any one of claims 1 to 10, wherein the processor:
acquires external factor information related to the predetermined disease; and
determines the possibility of contracting the predetermined disease based on the external factor information, in addition to the trained determination model and the one or more determination images.
- The processing device according to any one of claims 1 to 11, wherein the processor determines the possibility of contracting the predetermined disease based on at least two or more determination images.
- A processing program that, when executed by at least one processor, causes the at least one processor to function so as to:
acquire one or more determination images of a subject including at least part of a user's oral cavity, via a camera for capturing an image of the subject;
determine the possibility of contracting a predetermined disease based on a trained determination model, stored in a memory for determining the possibility of contracting the predetermined disease, and on the acquired one or more determination images; and
output information indicating the determined possibility of contracting the disease.
- A processing method executed by at least one processor, comprising:
acquiring one or more determination images of a subject including at least part of a user's oral cavity, via a camera for capturing an image of the subject;
determining the possibility of contracting a predetermined disease based on a trained determination model, stored in a memory for determining the possibility of contracting the predetermined disease, and on the acquired one or more determination images; and
outputting information indicating the determined possibility of contracting the disease.
- A processing system comprising:
an imaging device including a camera for capturing an image of a subject including at least part of the user's oral cavity; and
the processing device according to any one of claims 1 to 12, connected to the imaging device via a wired or wireless network.
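Claim 9 recites deriving the output from a first possibility (images only) and a second possibility (images plus interview and/or attribute information). The following sketch uses a simple average as the combination rule, which is an assumption, since the claim does not specify how the two possibilities are combined; the constant return values are likewise placeholders for real model inference.

```python
# Sketch of the staged determination recited in claim 9. The constant
# return values stand in for real model inference, and the averaging rule
# is an assumption; the claim only requires that the output be based on
# both possibilities.
def first_possibility(determination_images: list) -> float:
    """First possibility: determination images only."""
    return 0.70  # placeholder for trained-model inference

def second_possibility(determination_images: list, interview: dict,
                       attributes: dict) -> float:
    """Second possibility: images plus interview/attribute information."""
    return 0.80  # placeholder for trained-model inference

def possibility_of_disease(images: list, interview: dict,
                           attributes: dict) -> float:
    p1 = first_possibility(images)
    p2 = second_possibility(images, interview, attributes)
    return (p1 + p2) / 2  # assumed combination rule

print(possibility_of_disease([], {"fever": True}, {"age": 34}))
```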
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237044815A KR20240032748A (ko) | 2021-07-14 | 2021-07-14 | 처리 장치, 처리 프로그램, 처리 방법 및 처리 시스템 |
PCT/JP2021/026442 WO2023286199A1 (ja) | 2021-07-14 | 2021-07-14 | 処理装置、処理プログラム、処理方法及び処理システム |
CN202180100203.6A CN117651517A (zh) | 2021-07-14 | 2021-07-14 | 处理装置、处理程序、处理方法以及处理系统 |
EP21950140.0A EP4371466A1 (en) | 2021-07-14 | 2021-07-14 | Processing device, processing program, processing method, and processing system |
JP2023534510A JPWO2023286199A1 (ja) | 2021-07-14 | 2021-07-14 | |
US18/404,970 US20240130604A1 (en) | 2021-07-14 | 2024-01-05 | Processing Device, Processing Program, Processing Method, And Processing System |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/026442 WO2023286199A1 (ja) | 2021-07-14 | 2021-07-14 | 処理装置、処理プログラム、処理方法及び処理システム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/404,970 Continuation US20240130604A1 (en) | 2021-07-14 | 2024-01-05 | Processing Device, Processing Program, Processing Method, And Processing System |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023286199A1 (ja) | 2023-01-19 |
Family
ID=84919744
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/026442 WO2023286199A1 (ja) | 2021-07-14 | 2021-07-14 | 処理装置、処理プログラム、処理方法及び処理システム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240130604A1 (ja) |
EP (1) | EP4371466A1 (ja) |
JP (1) | JPWO2023286199A1 (ja) |
KR (1) | KR20240032748A (ja) |
CN (1) | CN117651517A (ja) |
WO (1) | WO2023286199A1 (ja) |
2021
- 2021-07-14 WO PCT/JP2021/026442 patent/WO2023286199A1/ja active Application Filing
- 2021-07-14 CN CN202180100203.6A patent/CN117651517A/zh active Pending
- 2021-07-14 EP EP21950140.0A patent/EP4371466A1/en active Pending
- 2021-07-14 JP JP2023534510A patent/JPWO2023286199A1/ja active Pending
- 2021-07-14 KR KR1020237044815A patent/KR20240032748A/ko active Search and Examination
2024
- 2024-01-05 US US18/404,970 patent/US20240130604A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018527997A (ja) * | 2015-05-12 | 2018-09-27 | ジップライン ヘルス、インク. | 医療診断情報を取得するための装置、方法、およびシステム、ならびに遠隔医療サービスの提供 |
WO2017199408A1 (ja) * | 2016-05-19 | 2017-11-23 | オリンパス株式会社 | 画像処理装置、画像処理装置の作動方法及び画像処理装置の作動プログラム |
WO2018105062A1 (ja) * | 2016-12-07 | 2018-06-14 | オリンパス株式会社 | 画像処理装置及び画像処理方法 |
WO2019131327A1 (ja) * | 2017-12-28 | 2019-07-04 | アイリス株式会社 | 口内撮影装置、医療装置及びプログラム |
Non-Patent Citations (2)
Title |
---|
IRIS CO., LTD.: "Applied for approval of the world's first AI-equipped medical device for diagnosing infectious diseases that can detect influenza", PRTIMES.JP, JP, 16 June 2021, pages 1-2, XP093024065, Retrieved from the Internet <URL:https://prtimes.jp/main/html/rd/p/000000014.000035813.html> [retrieved on 20230215] *
MIYAMOTO; WATANABE: "Consideration of Meanings and Values of Examination Findings of Pharynx (Influenza Follicles)", JOURNAL OF THE JAPAN MEDICAL JOURNAL, vol. 72, no. 1, 2013, pages 11-18
Also Published As
Publication number | Publication date |
---|---|
JPWO2023286199A1 (ja) | 2023-01-19 |
US20240130604A1 (en) | 2024-04-25 |
KR20240032748A (ko) | 2024-03-12 |
CN117651517A (zh) | 2024-03-05 |
EP4371466A1 (en) | 2024-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7498521B2 (ja) | 口内撮影装置、医療装置及びプログラム | |
US9445713B2 (en) | Apparatuses and methods for mobile imaging and analysis | |
US11877843B2 (en) | Human body measurement using thermographic images | |
US6427022B1 (en) | Image comparator system and method for detecting changes in skin lesions | |
JP2021100555A (ja) | 医療画像処理装置、内視鏡システム、診断支援方法及びプログラム | |
JP2009077765A (ja) | 内視鏡システム | |
JP6830082B2 (ja) | 歯科分析システムおよび歯科分析x線システム | |
EP4091532A1 (en) | Medical image processing device, endoscope system, diagnosis assistance method, and program | |
JP7345023B2 (ja) | 内視鏡システム | |
US20220222817A1 (en) | Transfer learning for medical applications using limited data | |
JP4649965B2 (ja) | 健康度判定装置、及びプログラム | |
US20220084194A1 (en) | Computer program, processor for endoscope, and information processing method | |
WO2021085443A1 (ja) | 撮像装置及び撮像システム | |
WO2023286199A1 (ja) | 処理装置、処理プログラム、処理方法及び処理システム | |
WO2023073844A1 (ja) | 処理装置、処理プログラム及び処理方法 | |
WO2022009285A1 (ja) | 処理装置、処理プログラム、処理方法及び処理システム | |
KR101985378B1 (ko) | 치아 촬영용 엑스레이 촬영기 | |
WO2023181417A1 (ja) | 撮影装置、プログラム及び方法 | |
WO2019131328A1 (ja) | 口内撮影補助具、及びその口内撮影補助具を含む医療装置 | |
JP7427766B2 (ja) | 画像選択支援装置、画像選択支援方法、及び画像選択支援プログラム | |
WO2023195091A1 (ja) | 処理装置、処理プログラム及び処理方法 | |
CN117893953B (zh) | 一种软式消化道内镜操作规范动作评估方法及系统 | |
WO2023170788A1 (ja) | 補助具、撮影装置、プログラム及び方法 | |
CN114549524A (zh) | 牙齿图像数据处理方法、电子设备及可读存储介质 | |
Sengupta et al. | AUTOMATED DENTAL CARIES DETECTION |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21950140; Country of ref document: EP; Kind code of ref document: A1
| WWE | Wipo information: entry into national phase | Ref document number: 2023534510; Country of ref document: JP
| WWE | Wipo information: entry into national phase | Ref document number: 202180100203.6; Country of ref document: CN
| WWE | Wipo information: entry into national phase | Ref document number: 2021950140; Country of ref document: EP
| NENP | Non-entry into the national phase | Ref country code: DE
| ENP | Entry into the national phase | Ref document number: 2021950140; Country of ref document: EP; Effective date: 20240214