WO2020152815A1 - Deduction device, learning model, learning model generation method, and computer program - Google Patents

Deduction device, learning model, learning model generation method, and computer program

Info

Publication number
WO2020152815A1
Authority
WO
WIPO (PCT)
Prior art keywords
oral
oral cavity
learning model
image
estimation
Application number
PCT/JP2019/002196
Other languages
French (fr)
Japanese (ja)
Inventor
慎一郎 平岡
Original Assignee
国立大学法人大阪大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 国立大学法人大阪大学 (Osaka University)
Priority to PCT/JP2019/002196
Priority to PCT/JP2020/002491
Priority to JP2020567715A
Publication of WO2020152815A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/045 Control thereof
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/24 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00 Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons

Definitions

  • the present invention relates to an estimation device, a learning model, a learning model generation method, and a computer program.
  • There are various types of oral mucosal diseases, such as oral malignant tumors and stomatitis, and their diagnosis is often difficult. In particular, early-stage oral malignant tumors often present clinical findings similar to stomatitis and are frequently overlooked by non-specialist medical staff. Regarding oral malignant tumors, many institutions are currently studying causative genes and prognostic factors, and the inventors have also reported pathological prognostic factors for oral malignant tumors (see, for example, Non-Patent Document 1).
  • An object of the present invention is to provide an estimation device capable of estimating lesions in the oral mucosa, a learning model, a learning model generation method, and a computer program.
  • An estimation device includes: an acquisition unit that acquires an oral cavity image obtained by imaging the inside of the oral cavity of a subject; an estimation unit that, using a learning model configured to output information regarding lesions in the oral mucosa in response to the input of an oral cavity image, estimates the presence or absence of a lesion in the oral mucosa of the subject from the oral cavity image acquired by the acquisition unit; and an output unit that outputs the estimation result of the estimation unit.
  • A learning model includes: an input layer to which an oral cavity image obtained by imaging the inside of the oral cavity of a subject is input; an output layer that outputs information regarding lesions in the oral mucosa; and an intermediate layer trained, using oral cavity images and annotations for those images as teacher data, to learn the relationship between the oral cavity image input to the input layer and the information output by the output layer. When an oral cavity image is input to the input layer, the learning model causes a computer to function so as to perform computation in the intermediate layer and output information regarding lesions in the oral mucosa from the output layer.
  • A learning model generation method uses a computer to acquire teacher data including an oral cavity image obtained by imaging the inside of the oral cavity of a subject and an annotation for the oral cavity image, and, based on the acquired teacher data, generates a learning model that outputs information regarding lesions in the oral mucosa in response to the input of an oral cavity image.
  • A computer program causes a computer to execute a process of acquiring an oral cavity image obtained by imaging the inside of the oral cavity of a subject, estimating the presence or absence of a lesion in the oral mucosa from the acquired oral cavity image using a learning model configured to output information regarding lesions in the oral mucosa in response to the input of the oral cavity image, and outputting the estimation result.
  • According to the present application, lesions in the oral mucosa can be estimated.
  • FIG. 1 is a block diagram illustrating the configuration of the estimation device according to the first embodiment.
  • FIG. 2 is a schematic diagram showing an example of an oral cavity image.
  • FIG. 3 is a schematic diagram showing a configuration example of a learning model.
  • FIG. 4 is a flowchart illustrating the procedure of processing executed by the estimation device according to the first embodiment.
  • FIG. 5 is a schematic diagram showing an output example of the estimation device.
  • FIG. 6 is a block diagram illustrating the configuration of the estimation device according to the second embodiment.
  • FIG. 7 is a schematic diagram showing an example of extraction.
  • FIG. 8 is a flowchart illustrating the procedure of processing executed by the estimation device according to the second embodiment.
  • FIG. 9 is a block diagram illustrating the configuration of a server device.
  • FIG. 10 is a conceptual diagram showing an example of an oral cavity image database.
  • FIG. 11 is a flowchart illustrating the procedure for generating a learning model.
  • FIG. 1 is a block diagram illustrating the configuration of the estimation device 1 according to the first embodiment.
  • the estimation device 1 is a computer device installed in a facility such as a hospital, and estimates the presence/absence of a lesion in the oral mucosa of a subject from an oral image obtained by imaging the inside of the oral cavity of the subject.
  • the estimation device 1 provides diagnosis support by presenting an estimation result to a doctor or the like who is a diagnostician.
  • the estimation device 1 includes an input unit 11, a control unit 12, a storage unit 13, an output unit 14, a communication unit 15, and an operation unit 16.
  • the input unit 11 includes an input interface for inputting image data of an oral cavity image.
  • the input interface is, for example, an interface that connects an imaging device for imaging the inside of the oral cavity of the subject.
  • the imaging device is a digital camera or a digital video camera, and outputs, for example, image data in which each pixel is represented by RGB gradation values.
  • the input unit 11 acquires image data of an oral cavity image from an imaging device connected to the input interface.
  • the input interface may be an interface for accessing a recording medium in which captured image data is recorded.
  • the input unit 11 acquires the image data relating to the oral cavity image by reading the image data recorded in the recording medium.
  • the image data acquired by the input unit 11 is output to the control unit 12 and stored in the storage unit 13 via the control unit 12.
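  The following is a minimal sketch, assuming Pillow and NumPy, of acquiring image data in the form described above, where each pixel is represented by RGB gradation values; the file name is hypothetical.

```python
# A minimal sketch (Pillow + NumPy assumed) of reading an oral cavity image
# into the RGB form described above; "oral_image.jpg" is a hypothetical file
# obtained from the imaging device or a recording medium.
from PIL import Image
import numpy as np

img = Image.open("oral_image.jpg").convert("RGB")
pixels = np.asarray(img)           # shape (height, width, 3); 0-255 gradation per channel
print(pixels.shape, pixels[0, 0])  # image size and the RGB values of the top-left pixel
```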
  • the control unit 12 includes, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory).
  • the ROM included in the control unit 12 stores a control program or the like for controlling the operation of each hardware unit included in the estimation device 1.
  • The CPU in the control unit 12 executes the control program stored in the ROM and various computer programs stored in the storage unit 13, described later, and controls the operation of each hardware unit, thereby realizing the function of estimating the presence or absence of a lesion in the oral mucosa from an oral cavity image.
  • the RAM used by the control unit 12 temporarily stores data used during execution of the calculation.
  • Although the control unit 12 is configured to include a CPU, a ROM, and a RAM, it may instead be one or more arithmetic circuits including a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), a quantum processor, volatile or nonvolatile memory, or the like. Further, the control unit 12 may have functions such as a clock that outputs date and time information, a timer that measures the elapsed time from a measurement start instruction to a measurement end instruction, and a counter that counts numbers.
  • the storage unit 13 includes a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), and an EEPROM (Electronically Erasable Programmable Read Only Memory).
  • the storage unit 13 stores a computer program executed by the control unit 12, a learning model 130 used for the process of estimating the presence/absence of a lesion in the oral mucosa, and the like.
  • the computer program stored in the storage unit 13 includes an estimation processing program P1 for causing the estimation device 1 to perform processing for estimating the presence or absence of a lesion in the oral mucosa from the acquired oral image using the learning model 130.
  • The computer program stored in the storage unit 13 may be provided via a non-transitory recording medium M1 on which the computer program is readably recorded.
  • the recording medium M1 is, for example, a portable memory such as a CD-ROM, a USB memory, an SD (Secure Digital) card, a micro SD card, or a compact flash (registered trademark).
  • the control unit 12 reads various programs from the recording medium M1 through the input unit 11, and installs the read various programs in the storage unit 13, for example.
  • the learning model 130 is a learning model configured to output information regarding lesions in the oral mucosa in response to input of an oral image.
  • the learning model 130 is described by its definition information.
  • the definition information of the learning model 130 includes structural information of the learning model 130, various parameters such as weights and biases between nodes used in the learning model 130, and the like.
  • In the present embodiment, a learning model 130 trained in advance by a predetermined learning algorithm, using oral cavity images and annotations for those images as teacher data, is stored in the storage unit 13.
  • the control unit 12 executes the estimation processing program P1 stored in the storage unit 13 and supplies the image data of the oral cavity image to the learning model 130 to acquire the information regarding the lesion from the learning model 130.
  • the control unit 12 estimates the presence or absence of a lesion in the oral mucosa based on the information on the lesion acquired from the learning model 130.
  • the output unit 14 has an output interface for connecting an output device.
  • An example of the output device is a display device 140 including a liquid crystal panel, an organic EL (Electro-Luminescence) panel, or the like.
  • When outputting the estimation result, the control unit 12 generates display data to be displayed on the display device 140 and outputs the generated display data to the display device 140 through the output unit 14, thereby causing the display device 140 to display the estimation result.
  • the communication unit 15 has a communication interface for transmitting and receiving various data.
  • the communication interface included in the communication unit 15 is, for example, a communication interface conforming to the communication standard of LAN (Local Area Network) used in WiFi (registered trademark) or Ethernet (registered trademark).
  • the operation unit 16 includes input interfaces such as various operation buttons, switches, and touch panels, and receives various operation information and setting information.
  • the control unit 12 performs appropriate control based on the operation information input from the operation unit 16, and stores the setting information in the storage unit 13 as necessary.
  • FIG. 2 is a schematic diagram showing an example of an oral cavity image.
  • the oral cavity image in the present embodiment is an image obtained by capturing the inside of the oral cavity of the subject with an imaging device.
  • The example of FIG. 2 shows an oral cavity image captured so as to include the left side of the subject's tongue.
  • In the present embodiment, it suffices that at least part of the oral mucosa is included in the oral cavity image. The oral mucosa includes at least a portion of the subject's tongue, upper lip, hard palate, soft palate, uvula, palatine tonsils, buccal mucosa, floor of the mouth, gums, and lower lip.
  • The oral cavity image may also include objects other than the oral mucosa, such as the subject's teeth, the photographer's fingers, and other structures.
  • FIG. 3 is a schematic diagram showing a configuration example of the learning model 130.
  • the learning model 130 is, for example, a learning model based on CNN (Convolutional Neural Networks), and includes an input layer 131, an intermediate layer 132, and an output layer 133.
  • the learning model 130 is learned in advance so as to output information regarding lesions in the oral mucosa in response to the input of the oral cavity image.
  • Image data of an oral cavity image is input to the input layer 131.
  • the image data of the oral cavity image input to the input layer 131 is sent to the intermediate layer 132.
  • The intermediate layer 132 is composed of, for example, a convolutional layer 132a, a pooling layer 132b, and a fully connected layer 132c.
  • A plurality of convolutional layers 132a and pooling layers 132b may be provided alternately.
  • The convolutional layer 132a and the pooling layer 132b extract features of the oral cavity image input through the input layer 131 by computation using the nodes of each layer.
  • The fully connected layer 132c combines the data whose features have been extracted by the convolutional layer 132a and the pooling layer 132b into its nodes, and outputs feature variables transformed by an activation function.
  • The feature variables are output to the output layer 133 through the fully connected layer 132c.
  • the output layer 133 includes one or a plurality of nodes.
  • Based on the feature variables input from the fully connected layer 132c of the intermediate layer 132, the output layer 133 converts them into probabilities using a softmax function and outputs from each node the probability that the oral cavity image falls into the corresponding category. That is, in the present embodiment, the probability that the oral cavity image belongs to each category is output as the information regarding lesions.
  • The categories for classifying oral cavity images can be arbitrarily set to include at least one lesion belonging to oral malignant tumors, precancerous lesions, benign tumors, traumatic ulcers, inflammatory diseases, viral diseases, fungal infections, autoimmune diseases, stomatitis, angular cheilitis, decubitus ulcers, organic changes of the tongue surface mucosa, or graft-versus-host disease.
  • For example, the categories to be classified may include at least one of: oral cancer and oral sarcoma, belonging to oral malignant tumors; leukoplakia, erythroplakia, and lichen planus, belonging to precancerous lesions; gingivitis, periodontitis, jaw inflammation, osteomyelitis of the jaw, and medication-related osteonecrosis of the jaw, belonging to inflammatory diseases; herpes, herpes zoster, herpangina, and hand-foot-and-mouth disease, belonging to viral diseases; oral candidiasis, belonging to fungal infections; pemphigus, pemphigoid, and Behcet's disease, belonging to autoimmune diseases; and geographic tongue, fissured tongue, black hairy tongue, and median rhomboid glossitis, belonging to organic changes of the tongue surface mucosa.
  • The categories to be classified may further include pigmentation, which does not belong to a lesion, and may include a normal state, which belongs to neither a lesion nor pigmentation.
  • the example in FIG. 3 shows a learning model 130 in which n categories are set as categories for classifying oral cavity images.
  • The learning model 130 is configured to output, from the nodes of the output layer 133, a probability X1 of being an oral malignant tumor, a probability X2 of being leukoplakia, a probability X3 of being lichen planus, ..., and a probability Xn of being normal.
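  As an illustration only, the following is a minimal sketch, assuming PyTorch, of a CNN with the structure of FIG. 3: convolutional and pooling layers (132a, 132b), a fully connected layer (132c), and a softmax output giving the probabilities X1, ..., Xn. The layer sizes, image resolution, and category count are assumptions, not the patent's specification.

```python
# A minimal sketch (PyTorch assumed) of the CNN structure in FIG. 3.
import torch
import torch.nn as nn

class OralLesionCNN(nn.Module):
    def __init__(self, n_categories: int):
        super().__init__()
        # Intermediate layer 132: alternating convolution (132a) and pooling (132b)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layer 132c (sized for assumed 224x224 RGB input images)
        self.fc = nn.Linear(32 * 56 * 56, n_categories)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(start_dim=1))

model = OralLesionCNN(n_categories=4)   # e.g. oral malignancy, leukoplakia, lichen planus, normal
logits = model(torch.rand(1, 3, 224, 224))
probs = torch.softmax(logits, dim=1)    # output layer 133: probabilities X1..Xn, summing to 1
```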
  • The control unit 12 of the estimation device 1 acquires, from the output layer 133 of the learning model 130, the probability of each lesion set as a classification category, and estimates the presence or absence of a lesion in the oral mucosa based on the acquired probabilities. For example, when only the probability X1 of being an oral malignant tumor exceeds a threshold value (for example, 80%), the control unit 12 can estimate that a malignant tumor has developed in the oral cavity of the subject. The same applies when any one of the probabilities X2, X3, ..., Xn-1 exceeds the threshold value.
  • When only the probability Xn of being normal exceeds the threshold value, the control unit 12 can estimate that no lesion exists in the oral cavity of the subject.
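  A minimal sketch of the threshold-based estimation described above; the 80% threshold follows the example in the text, while the function and category names are hypothetical.

```python
# A minimal sketch of the thresholding logic: a category is reported only when
# exactly one category probability exceeds the threshold.
from typing import Optional

def estimate_lesion(probs: dict[str, float], threshold: float = 0.8) -> Optional[str]:
    """Return the single category whose probability exceeds the threshold,
    or None when no category (or more than one) is sufficiently certain."""
    above = [name for name, p in probs.items() if p > threshold]
    return above[0] if len(above) == 1 else None

result = estimate_lesion({"oral malignancy": 0.92, "leukoplakia": 0.05,
                          "lichen planus": 0.02, "normal": 0.01})
print(result)  # -> "oral malignancy"
```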
  • Although a CNN-based learning model 130 is shown in the example of FIG. 3, the machine learning model used to construct the learning model 130 can be set arbitrarily. For example, a learning model based on R-CNN (Region-based CNN), YOLO (You Only Look Once), SSD (Single Shot Detector), or the like may be used.
  • FIG. 4 is a flowchart illustrating a procedure of processing executed by the estimation device 1 according to the first embodiment.
  • the control unit 12 of the estimation device 1 executes the estimation process program P1 stored in the storage unit 13 to execute the following estimation process.
  • the control unit 12 acquires an oral cavity image through the input unit 11 (step S101), and applies the acquired oral cavity image to the input layer 131 of the learning model 130 to execute the calculation by the learning model 130 (step S102).
  • the image data of the oral cavity image given to the input layer 131 of the learning model 130 is sent to the intermediate layer 132.
  • In the intermediate layer 132, operations using activation functions, including the weights and biases between nodes, are executed.
  • Image characteristics are extracted in the convolutional layer 132a and the pooling layer 132b of the intermediate layer 132.
  • The data of the features extracted by the convolutional layer 132a and the pooling layer 132b are combined at the nodes of the fully connected layer 132c and converted into feature variables by an activation function.
  • the converted feature variable is output to the output layer 133 through the fully connected layer 132c.
  • the output layer 133 converts the feature variables input from the fully connected layer 132c of the intermediate layer 132 into probabilities using a softmax function, and outputs the probabilities belonging to each category from each node.
  • the control unit 12 acquires a calculation result from the learning model 130 and estimates the presence or absence of a lesion in the oral mucosa based on the acquired calculation result (step S103). As described above, the probability of each lesion set as a category to be classified is output from each node forming the output layer 133 of the learning model 130. The control unit 12 can estimate the presence or absence of a lesion based on the probability output from each node of the output layer 133.
  • The control unit 12 outputs the estimation result through the output unit 14 (step S104). Specifically, the control unit 12 generates display data for displaying the estimation result on the display device 140 and outputs the generated display data to the display device 140, thereby causing the display device 140 to display the estimation result.
  • The display mode of the estimation result can be set arbitrarily. For example, the control unit 12 may generate display data including text or graphics representing the presence or absence of a specific lesion (for example, an oral malignant tumor) and output the display data to the display device 140, causing the display device 140 to display the presence or absence of the specific lesion as text or graphics. The control unit 12 may also generate display data including the probability value corresponding to each lesion and output the display data to the display device 140, causing the display device 140 to display the probability value corresponding to each lesion as numerical information.
  • FIG. 5 is a schematic diagram showing an output example of the estimation device 1.
  • FIG. 5 shows a state in which the subject ID identifying the subject, the subject's name, the oral cavity image used in the estimation process, the probability of each category, and text information indicating the estimation result are displayed on the display device 140.
  • As described above, in the present embodiment, the presence or absence of a lesion in the oral mucosa is estimated using the learning model 130 based on machine learning including deep learning, and the estimation result is output. By using the estimation result for diagnosis support, the possibility of overlooking a lesion can therefore be reduced.
  • In the present embodiment, the estimation device 1 has been described as a computer device installed in a facility such as a hospital, but the estimation device 1 may be a server device accessible by communication from a computer device installed in a facility such as a hospital.
  • the estimation device 1 acquires an oral cavity image obtained by imaging the inside of the oral cavity of the subject by communication from a computer device such as a hospital, and estimates the presence or absence of a lesion in the oral mucosa based on the acquired oral cavity image.
  • the estimation device 1 transmits the estimation result to a computer device such as a hospital by communication.
  • the computer device displays the estimation result received from the estimation device 1 on the display device to support diagnosis for a doctor or the like.
  • FIG. 6 is a block diagram illustrating the configuration of the estimation device 1 according to the second embodiment.
  • the estimation device 1 includes an input unit 11, a control unit 12, a storage unit 13, an output unit 14, a communication unit 15, and an operation unit 16. Since these configurations are similar to those in the first embodiment, detailed description thereof will be omitted.
  • the storage unit 13 stores a region extraction program P2 in addition to the learning model 130 and the estimation processing program P1 described above.
  • the region extraction program P2 is a computer program for causing the estimation device 1 to execute a process of extracting a region corresponding to the oral mucosa of the subject from the oral image.
  • a well-known area extraction algorithm is used for the area extraction program P2.
  • For example, the GrabCut algorithm learns the distributions of pixel values in the foreground and background regions using a Gaussian mixture model (GMM), evaluates, for pixels set as an unknown region, their statistical relationship to the foreground and background distributions, and thereby separates and extracts the foreground region from the background region.
  • In the present embodiment, the estimation device 1 extracts, from the entire image region of the oral cavity image, the region corresponding to the oral mucosa as the foreground region, and the region excluding the oral mucosa as the background region. Although the subject's teeth, the photographer's fingers, various instruments, and so on may appear in the background region, the foreground region can be separated from the background region and extracted by using a region extraction algorithm such as the GrabCut algorithm.
  • In the present embodiment, the region corresponding to the oral mucosa (foreground region) and the other region (background region) are separated, but a region corresponding to a specific part of the oral mucosa (for example, the tongue) may be set as the foreground region instead.
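  The following is a minimal sketch, assuming OpenCV's implementation of the GrabCut algorithm, of separating the oral mucosa (foreground) from teeth, fingers, and other background; the input file and the initial rectangle are hypothetical.

```python
# A minimal sketch (OpenCV assumed) of foreground/background separation with
# GrabCut; the rectangle roughly enclosing the oral mucosa is hypothetical.
import cv2
import numpy as np

img = cv2.imread("oral_image.jpg")         # hypothetical input file
mask = np.zeros(img.shape[:2], np.uint8)   # per-pixel foreground/background labels
bgd_model = np.zeros((1, 65), np.float64)  # GMM state for the background
fgd_model = np.zeros((1, 65), np.float64)  # GMM state for the foreground
rect = (50, 50, 400, 300)                  # rough box around the mucosa (x, y, w, h)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground; zero out teeth, fingers, etc.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
mucosa_only = img * fg[:, :, np.newaxis]
```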
  • FIG. 7 is a schematic diagram showing an example of extraction.
  • the example of FIG. 7 shows the result of dividing the entire image region of the oral cavity image into a region (foreground region) corresponding to the oral mucosa of the subject and a region (background region) other than that.
  • the background area is shown as a hatched area. It can be seen that the background region includes a region corresponding to the tooth of the subject and a region outside the oral cavity.
  • Then, the estimation device 1 estimates the presence or absence of a lesion in the oral mucosa by giving the oral cavity image of the region corresponding to the subject's oral mucosa (the foreground region) to the learning model 130.
  • FIG. 8 is a flowchart illustrating a procedure of processing executed by the estimation device 1 according to the second embodiment.
  • the control unit 12 of the estimation device 1 executes the estimation process program P1 and the region extraction program P2 stored in the storage unit 13 to perform the following estimation process.
  • the control unit 12 acquires an oral cavity image through the input unit 11 (step S201), and extracts a region corresponding to the oral mucosa from the entire image region of the acquired oral cavity image (step S202).
  • In step S202, portions corresponding to the subject's teeth, the photographer's fingers, and various instruments are removed from the oral cavity image.
  • The control unit 12 gives the oral cavity image from which the region corresponding to the oral mucosa has been extracted (an image from which portions corresponding to the subject's teeth and the like have been removed) to the input layer 131 of the learning model 130, and executes the computation by the learning model 130 (step S203).
  • the data of the oral cavity image given to the input layer 131 of the learning model 130 is sent to the intermediate layer 132.
  • In the intermediate layer 132, operations using activation functions, including the weights and biases between nodes, are executed.
  • Image features are extracted in the convolutional layer 132a and the pooling layer 132b of the intermediate layer 132.
  • The data of the features extracted by the convolutional layer 132a and the pooling layer 132b are combined at the nodes of the fully connected layer 132c and converted into feature variables by an activation function.
  • the converted feature variable is output to the output layer 133 through the fully connected layer 132c.
  • the output layer 133 converts the feature variables input from the fully connected layer 132c of the intermediate layer 132 into probabilities using a softmax function, and outputs the probabilities belonging to each category from each node.
  • the control unit 12 acquires a calculation result from the learning model 130 and estimates the presence/absence of a lesion in the oral mucosa based on the acquired calculation result (step S204). As described above, the probability of each lesion set as a category to be classified is output from each node forming the output layer 133 of the learning model 130. The control unit 12 can estimate the presence or absence of a lesion based on the probability output from each node of the output layer 133.
  • The control unit 12 outputs the estimation result through the output unit 14 (step S205). Specifically, the control unit 12 generates display data for displaying the estimation result on the display device 140 and outputs the generated display data to the display device 140, thereby causing the display device 140 to display the estimation result.
  • The display mode of the estimation result can be set arbitrarily. For example, the control unit 12 may generate display data including text or graphics representing the presence or absence of a specific lesion (for example, an oral malignant tumor) and output the display data to the display device 140, causing the display device 140 to display the presence or absence of the specific lesion as text or graphics. The control unit 12 may also generate display data including the probability value corresponding to each lesion and output the display data to the display device 140, causing the display device 140 to display the probability value corresponding to each lesion as numerical information.
  • As described above, in the present embodiment, the estimation process can be executed after removing portions unnecessary for lesion estimation, so the estimation accuracy can be improved.
  • the learning model 130 used in the estimation device 1 is generated, for example, in the server device 2 communicatively connected to the estimation device 1.
  • FIG. 9 is a block diagram illustrating the configuration of the server device 2.
  • the server device 2 includes a control unit 21, a storage unit 22, an input unit 23, a communication unit 24, an operation unit 25, and a display unit 26.
  • the control unit 21 includes, for example, a CPU, ROM, RAM and the like.
  • the ROM included in the control unit 21 stores a control program or the like for controlling the operation of each hardware unit included in the server device 2.
  • the CPU in the control unit 21 executes the control program stored in the ROM and various programs stored in the storage unit 22 to control the operation of each unit of the hardware.
  • The control unit 21 is not limited to the configuration including the CPU, the ROM, and the RAM. The control unit 21 may be, for example, one or more control circuits or arithmetic circuits including a GPU, an FPGA, a DSP, volatile or nonvolatile memory, or the like. Further, the control unit 21 may have functions such as a clock that outputs date and time information, a timer that measures the elapsed time from a measurement start instruction to a measurement end instruction, and a counter that counts numbers.
  • the storage unit 22 includes a storage device such as a hard disk drive.
  • the storage unit 22 stores various computer programs executed by the control unit 21, various data used by the computer programs, data acquired from the outside, and the like.
  • An example of the computer program stored in the storage unit 22 is a model generation program P3 for generating a learning model.
  • the storage unit 22 also includes an oral cavity image database (oral cavity image DB) 220 that stores the oral cavity image and the annotation of the oral cavity image in association with each other.
  • the input unit 23 includes an input interface for acquiring data and programs from a recording medium that records various data or programs. Various data and programs input through the input unit 23 are stored in the storage unit 22.
  • the communication unit 24 includes a communication interface connected to the communication network N.
  • the communication network N is an internet network, a LAN or WAN (Wide Area Network) for a specific purpose, or the like.
  • The communication unit 24 transmits data addressed to the estimation device 1 via the communication network N.
  • The communication unit 24 also receives, via the communication network N, data transmitted from the estimation device 1 to the server device 2.
  • the operation unit 25 has an input interface such as a keyboard and a mouse, and receives various operation information and setting information.
  • the control unit 21 performs appropriate control based on the operation information input from the operation unit 25, and stores the setting information in the storage unit 22 as necessary.
  • The display unit 26 includes a display device such as a liquid crystal display panel or an organic EL display panel, and displays information to be notified to the administrator of the server device 2 based on control signals output from the control unit 21.
  • In the present embodiment, the server device 2 is configured to include the operation unit 25 and the display unit 26, but the operation unit 25 and the display unit 26 are not essential; operations may be accepted through an externally connected computer, and information to be notified may be output to the external computer.
  • FIG. 10 is a conceptual diagram showing an example of the oral cavity image database 220.
  • the oral cavity image database 220 stores the oral cavity image and the annotation for the oral cavity image in association with each other.
  • The oral cavity images include, for example, images of an oral cavity in which a malignant tumor has developed and images of an oral cavity having morphology specific to an oral mucosal disease (e.g., ulcer, erosion, swelling).
  • the annotation includes the doctor's diagnosis result.
  • The diagnosis result includes a pathological diagnosis result or a definitive diagnosis result, and in the present embodiment it is used as label data indicating whether the oral cavity image stored in association with it is normal or, if not, which lesion it shows.
  • the annotation may include information such as the subject ID and the subject name.
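  As an illustration, one record of the oral cavity image database 220 might pair an image with its annotation as sketched below; all field names and values are hypothetical.

```python
# A minimal sketch (hypothetical field names) of one record in the oral cavity
# image database 220: an image associated with its annotation label data.
record = {
    "subject_id": "P-0001",                 # subject ID (hypothetical format)
    "subject_name": "Taro Yamada",          # hypothetical name
    "image_path": "images/P-0001_tongue_left.jpg",
    "annotation": {
        "label": "oral malignancy",         # doctor's diagnosis used as label data
        "basis": "pathological diagnosis",  # or a definitive diagnosis result
    },
}
```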
  • FIG. 11 is a flowchart for explaining the learning model generation procedure.
  • the control unit 21 of the server device 2 accesses the oral cavity image database 220 of the storage unit 22 and acquires the teacher data used for generating the learning model (step S301).
  • The teacher data includes, for example, an oral cavity image and an annotation for the oral cavity image.
  • The teacher data is set, for example, by an operator of the server device 2.
  • the estimation result by the learning model 130 and the oral cavity image used for the estimation process may be acquired from the estimation device 1, and the acquired data may be set as the teacher data.
  • control unit 21 inputs the image data included as teacher data into the learning model for learning (step S302), and acquires the calculation result from the learning model (step S303).
  • Before learning, it is assumed that initial values are given to the definition information describing the learning model.
  • the calculation by this learning model is the same as the calculation of the learning model 130 in the estimation processing.
  • The control unit 21 evaluates the computation result obtained in step S303 (step S304) and determines whether learning is complete (step S305). Specifically, the control unit 21 can evaluate the computation result using an error function (also called an objective function, a loss function, or a cost function) based on the computation result obtained in step S303 and the teacher data.
  • The learning process optimizes (minimizes or maximizes) the error function by a gradient descent method such as the steepest descent method, and the control unit 21 determines that learning is complete when the error function becomes equal to or less than a threshold value (or equal to or greater than a threshold value). To avoid the problem of overfitting, techniques such as cross-validation and early stopping may be introduced so that learning ends at an appropriate timing.
  • When determining that learning is not complete, the control unit 21 updates the weights and biases between the nodes of the learning model (step S306) and returns the process to step S301.
  • the control unit 21 can update the weights and biases between the nodes by using the error backpropagation method that sequentially updates the weights and biases between the nodes from the output layer of the learning model to the input layer.
  • When determining that learning is complete, the control unit 21 stores the trained model in the storage unit 22 as a learned model (step S307) and ends the processing of this flowchart.
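  The following is a minimal sketch, assuming PyTorch and a model that returns raw logits (such as the CNN sketch above), of the generation procedure of FIG. 11: forward computation (steps S302-S303), error evaluation (S304), the completion check (S305), and the back-propagation update (S306). The loss threshold, learning rate, and file name are assumptions.

```python
# A minimal sketch (PyTorch assumed) of the training loop of FIG. 11.
import torch
import torch.nn as nn

def train(model, loader, threshold=0.05, max_epochs=100):
    """Train until the error function falls to the threshold or max_epochs is reached."""
    loss_fn = nn.CrossEntropyLoss()          # error function; applies softmax internally
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient descent
    for epoch in range(max_epochs):
        total = 0.0
        for images, labels in loader:        # teacher data: oral cavity image + annotation label
            logits = model(images)           # computation by the learning model (S302-S303)
            loss = loss_fn(logits, labels)   # evaluate the computation result (S304)
            optimizer.zero_grad()
            loss.backward()                  # error backpropagation from output layer toward input
            optimizer.step()                 # update inter-node weights and biases (S306)
            total += loss.item()
        if total / len(loader) <= threshold: # error at or below threshold: learning complete (S305)
            break
    torch.save(model.state_dict(), "learned_model.pt")  # store the trained model (S307)
```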
  • the learning model 130 used in the estimation device 1 can be generated in the server device 2.
  • the server device 2 transmits the generated learning model to the estimation device 1 in response to a request from the estimation device 1.
  • the estimation device 1 receives the learning model from the server device 2, stores it in the storage unit 13, and then executes the estimation processing program P1 to execute the lesion estimation process.
  • the server device 2 may be configured to newly collect an oral cavity image and an annotation for the oral cavity image and re-learn the learning model using these data at an appropriate timing after the learning is completed.
  • Here, the oral cavity image may be an oral cavity image obtained by capturing at least part of the oral mucosa (see FIG. 2), or an oral cavity image from which the region corresponding to the oral mucosa has been extracted (see FIG. 7).
  • the estimation device 1 may accept a selection (diagnosis result) as to whether the estimation result is correct and transmit the accepted diagnosis result to the server device 2 as an annotation.
  • The re-learning procedure is the same as the learning model generation procedure: the oral cavity image included in the teacher data is input to the learning model, and re-learning is performed by evaluating the error between the computation result obtained as the output of the learning model and the annotation included in the teacher data.

Abstract

Provided are a deduction device, a learning model, a learning model generation method, and a computer program. The present invention is provided with: an acquisition unit that acquires an oral cavity image obtained by taking an image of the oral cavity of a subject; a deduction unit that, by using a learning model configured to output information about lesions in oral cavity mucosa in response to an input of the oral cavity image, deduces the presence/absence of lesions in the oral cavity mucosa of the subject from the oral cavity image acquired by the acquisition unit; and an output unit that outputs the result of deduction by the deduction unit.

Description

Estimation device, learning model, learning model generation method, and computer program
 The present invention relates to an estimation device, a learning model, a learning model generation method, and a computer program.
 There are various types of oral mucosal diseases, such as oral malignant tumors and stomatitis, and their diagnosis is often difficult. In particular, early-stage oral malignant tumors often present clinical findings similar to stomatitis and are frequently overlooked by non-specialist medical staff. Regarding oral malignant tumors, many institutions are currently studying causative genes and prognostic factors, and the inventors have also reported pathological prognostic factors for oral malignant tumors (see, for example, Non-Patent Document 1).
 However, research in the oral region lags behind other fields, and there is as yet no prospect of establishing a simple diagnosis support system that can be put to practical use.
 An object of the present invention is to provide an estimation device, a learning model, a learning model generation method, and a computer program capable of estimating lesions in the oral mucosa.
 An estimation device according to one aspect of the present invention includes: an acquisition unit that acquires an oral cavity image obtained by imaging the inside of the oral cavity of a subject; an estimation unit that, using a learning model configured to output information regarding lesions in the oral mucosa in response to the input of an oral cavity image, estimates the presence or absence of a lesion in the oral mucosa of the subject from the oral cavity image acquired by the acquisition unit; and an output unit that outputs the estimation result of the estimation unit.
 A learning model according to one aspect of the present invention includes: an input layer to which an oral cavity image obtained by imaging the inside of the oral cavity of a subject is input; an output layer that outputs information regarding lesions in the oral mucosa; and an intermediate layer trained, using oral cavity images and annotations for those images as teacher data, to learn the relationship between the oral cavity image input to the input layer and the information output by the output layer. When an oral cavity image is input to the input layer, the learning model causes a computer to function so as to perform computation in the intermediate layer and output information regarding lesions in the oral mucosa from the output layer.
 A learning model generation method according to one aspect of the present invention uses a computer to acquire teacher data including an oral cavity image obtained by imaging the inside of the oral cavity of a subject and an annotation for the oral cavity image, and, based on the acquired teacher data, generates a learning model that outputs information regarding lesions in the oral mucosa in response to the input of an oral cavity image.
 A computer program according to one aspect of the present invention causes a computer to execute a process of acquiring an oral cavity image obtained by imaging the inside of the oral cavity of a subject, estimating the presence or absence of a lesion in the oral mucosa from the acquired oral cavity image using a learning model configured to output information regarding lesions in the oral mucosa in response to the input of the oral cavity image, and outputting the estimation result.
 According to the present application, lesions in the oral mucosa can be estimated.
FIG. 1 is a block diagram illustrating the configuration of the estimation device according to the first embodiment. FIG. 2 is a schematic diagram showing an example of an oral cavity image. FIG. 3 is a schematic diagram showing a configuration example of a learning model. FIG. 4 is a flowchart illustrating the procedure of processing executed by the estimation device according to the first embodiment. FIG. 5 is a schematic diagram showing an output example of the estimation device. FIG. 6 is a block diagram illustrating the configuration of the estimation device according to the second embodiment. FIG. 7 is a schematic diagram showing an example of extraction. FIG. 8 is a flowchart illustrating the procedure of processing executed by the estimation device according to the second embodiment. FIG. 9 is a block diagram illustrating the configuration of a server device. FIG. 10 is a conceptual diagram showing an example of an oral cavity image database. FIG. 11 is a flowchart illustrating the procedure for generating a learning model.
Hereinafter, the present invention will be specifically described with reference to the drawings showing its embodiments.
(Embodiment 1)
FIG. 1 is a block diagram illustrating the configuration of the estimation device 1 according to the first embodiment. The estimation device 1 is a computer device installed in a facility such as a hospital, and estimates the presence or absence of a lesion in the oral mucosa of a subject from an oral cavity image obtained by imaging the inside of the subject's oral cavity. The estimation device 1 provides diagnosis support by presenting the estimation result to a doctor or other diagnostician.
 The estimation device 1 includes an input unit 11, a control unit 12, a storage unit 13, an output unit 14, a communication unit 15, and an operation unit 16.
 The input unit 11 includes an input interface to which image data of an oral cavity image is input. The input interface is, for example, an interface for connecting an imaging device that images the inside of the subject's oral cavity. The imaging device is a digital camera or a digital video camera, and outputs, for example, image data in which each pixel is represented by RGB gradation values. The input unit 11 acquires image data of an oral cavity image from an imaging device connected to the input interface. The input interface may instead be an interface for accessing a recording medium on which captured image data is recorded. In this case, the input unit 11 acquires the image data of the oral cavity image by reading the image data recorded on the recording medium. The image data acquired by the input unit 11 is output to the control unit 12 and stored in the storage unit 13 via the control unit 12.
 The control unit 12 includes, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory). The ROM included in the control unit 12 stores a control program and the like for controlling the operation of each hardware unit of the estimation device 1. The CPU in the control unit 12 executes the control program stored in the ROM and various computer programs stored in the storage unit 13, described later, and controls the operation of each hardware unit, thereby realizing the function of estimating the presence or absence of a lesion in the oral mucosa from an oral cavity image. The RAM included in the control unit 12 temporarily stores data used during execution of computations.
 Although the control unit 12 is configured to include a CPU, a ROM, and a RAM, it may instead be one or more arithmetic circuits including a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), a quantum processor, volatile or nonvolatile memory, or the like. Further, the control unit 12 may have functions such as a clock that outputs date and time information, a timer that measures the elapsed time from a measurement start instruction to a measurement end instruction, and a counter that counts numbers.
 The storage unit 13 includes a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or an EEPROM (Electronically Erasable Programmable Read Only Memory). The storage unit 13 stores computer programs executed by the control unit 12, the learning model 130 used in the process of estimating the presence or absence of a lesion in the oral mucosa, and the like.
 The computer programs stored in the storage unit 13 include an estimation processing program P1 for causing the estimation device 1 to execute the process of estimating the presence or absence of a lesion in the oral mucosa from an acquired oral cavity image using the learning model 130.
 The computer program stored in the storage unit 13 may be provided via a non-transitory recording medium M1 on which the computer program is readably recorded. The recording medium M1 is, for example, a portable memory such as a CD-ROM, a USB memory, an SD (Secure Digital) card, a micro SD card, or CompactFlash (registered trademark). The control unit 12 reads various programs from the recording medium M1, for example through the input unit 11, and installs the read programs in the storage unit 13.
 The learning model 130 is a learning model configured to output information regarding lesions in the oral mucosa in response to the input of an oral cavity image. The learning model 130 is described by its definition information. The definition information of the learning model 130 includes structural information of the learning model 130 and various parameters such as the weights and biases between nodes used in the learning model 130. In the present embodiment, a learning model 130 trained in advance by a predetermined learning algorithm, using oral cavity images and annotations for those images as teacher data, is stored in the storage unit 13.
 The control unit 12 executes the estimation processing program P1 stored in the storage unit 13 and supplies the image data of an oral cavity image to the learning model 130, thereby acquiring information regarding lesions from the learning model 130. The control unit 12 estimates the presence or absence of a lesion in the oral mucosa based on the information regarding lesions acquired from the learning model 130.
 The output unit 14 includes an output interface for connecting an output device. An example of the output device is a display device 140 including a liquid crystal panel, an organic EL (Electro-Luminescence) panel, or the like. When outputting the estimation result, the control unit 12 generates display data to be displayed on the display device 140 and outputs the generated display data to the display device 140 through the output unit 14, thereby causing the display device 140 to display the estimation result.
 The communication unit 15 includes a communication interface for transmitting and receiving various data. The communication interface included in the communication unit 15 is, for example, a communication interface conforming to the LAN (Local Area Network) communication standards used in WiFi (registered trademark) or Ethernet (registered trademark). When data to be transmitted is input from the control unit 12, the communication unit 15 transmits the data to the designated destination. When the communication unit 15 receives data transmitted from an external device, it outputs the received data to the control unit 12.
 The operation unit 16 includes input interfaces such as various operation buttons, switches, and a touch panel, and receives various operation information and setting information. The control unit 12 performs appropriate control based on the operation information input from the operation unit 16, and stores the setting information in the storage unit 13 as necessary.
 Next, the oral cavity image input to the estimation device 1 will be described.
 FIG. 2 is a schematic diagram showing an example of an oral cavity image. The oral cavity image in the present embodiment is an image obtained by capturing the inside of the subject's oral cavity with an imaging device. The example of FIG. 2 shows an oral cavity image captured so as to include the left side of the subject's tongue.
 In the present embodiment, it suffices that at least part of the oral mucosa is included in the oral cavity image. The oral mucosa includes at least a portion of the subject's tongue, upper lip, hard palate, soft palate, uvula, palatine tonsils, buccal mucosa, floor of the mouth, gums, and lower lip. The oral cavity image may also include objects other than the oral mucosa, such as the subject's teeth, the photographer's fingers, and other structures.
 次に、推定装置1において用いられる学習モデル130について説明する。
 図3は学習モデル130の構成例を示す模式図である。学習モデル130は、例えば、CNN(Convolutional Neural Networks)による学習モデルであり、入力層131、中間層132、及び、出力層133を備える。学習モデル130は、口腔画像の入力に対して、口腔粘膜における病変に関する情報を出力するように予め学習される。
Next, the learning model 130 used in the estimation device 1 will be described.
FIG. 3 is a schematic diagram showing a configuration example of the learning model 130. The learning model 130 is, for example, a learning model based on CNN (Convolutional Neural Networks), and includes an input layer 131, an intermediate layer 132, and an output layer 133. The learning model 130 is learned in advance so as to output information regarding lesions in the oral mucosa in response to the input of the oral cavity image.
 Image data of an oral cavity image is input to the input layer 131 and sent on to the intermediate layer 132.
 The intermediate layer 132 is composed of, for example, a convolutional layer 132a, a pooling layer 132b, and a fully connected layer 132c. A plurality of convolutional layers 132a and pooling layers 132b may be provided alternately. The convolutional layer 132a and the pooling layer 132b extract features of the oral cavity image input through the input layer 131 by operations using the nodes of each layer. The fully connected layer 132c combines the data from which the feature portions have been extracted by the convolutional layer 132a and the pooling layer 132b into its nodes, and outputs feature variables transformed by an activation function. The feature variables are output to the output layer 133 through the fully connected layer 132c.
 The output layer 133 includes one or a plurality of nodes. Based on the feature variables input from the fully connected layer 132c of the intermediate layer 132, the output layer 133 converts them into probabilities using a softmax function and outputs from each node the probability that the oral cavity image falls into the corresponding category. That is, in the present embodiment, the probability that the oral cavity image belongs to each category is output as the information on lesions. The categories into which oral cavity images are classified can be set arbitrarily so as to include at least one lesion belonging to oral malignant tumors, precancerous lesions, benign tumors, traumatic ulcers, inflammatory diseases, viral diseases, fungal infections, autoimmune diseases, stomatitis, angular cheilitis, decubitus ulcers, organic changes of the tongue surface mucosa, or graft-versus-host disease. For example, the categories may include at least one of oral cancer and oral sarcoma belonging to oral malignant tumors; leukoplakia, erythroplakia, and lichen planus belonging to precancerous lesions; gingivitis, periodontitis, jaw inflammation, osteomyelitis of the jaw, and medication-related osteonecrosis of the jaw belonging to inflammatory diseases; herpes, herpes zoster, herpangina, and hand-foot-and-mouth disease belonging to viral diseases; oral candidiasis belonging to fungal infections; pemphigus, pemphigoid, and Behcet's disease belonging to autoimmune diseases; and geographic tongue, fissured tongue, black hairy tongue, and median rhomboid glossitis belonging to organic changes of the tongue surface mucosa. Further, the categories may include pigmentation, which does not belong to any lesion, and may include a normal state, which belongs to neither a lesion nor pigmentation.
 The example of FIG. 3 shows a learning model 130 in which n categories are set for classifying oral cavity images. This learning model 130 is configured to output, from the respective nodes of the output layer 133, the probability X1 of an oral malignant tumor, the probability X2 of leukoplakia, the probability X3 of lichen planus, ..., and the probability Xn of a normal state. The number n of categories to be set may be one or more.
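 For illustration, the following is a minimal sketch of such a CNN in Python with PyTorch; the layer counts, kernel sizes, the 224x224 input resolution, and the four-category example are assumptions chosen for this sketch and are not specified in the present disclosure.

```python
import torch
import torch.nn as nn

class OralLesionCNN(nn.Module):
    """Minimal CNN corresponding to input layer 131, intermediate layer 132
    (convolution 132a / pooling 132b / fully connected 132c), and output layer 133."""
    def __init__(self, n_categories: int):
        super().__init__()
        # Alternating convolution and pooling layers extract image features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
        )
        # Fully connected layer combines the extracted features into feature variables.
        self.classifier = nn.Linear(32 * 56 * 56, n_categories)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)  # raw scores; softmax is applied at the output stage

# Example categories: oral malignant tumor, leukoplakia, lichen planus, normal.
model = OralLesionCNN(n_categories=4)
probs = torch.softmax(model(torch.randn(1, 3, 224, 224)), dim=1)  # probabilities X1..Xn
```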
 The control unit 12 of the estimation device 1 acquires, from the output layer 133 of the learning model 130, the probability for each lesion set as a category to be classified, and estimates the presence or absence of a lesion in the oral mucosa based on the acquired probabilities. For example, when only the probability X1 of an oral malignant tumor exceeds a threshold value (for example, 80%), the control unit 12 can estimate that a malignant tumor has developed in the oral cavity of the subject. The same applies when any one of the probabilities X2, X3, ..., Xn-1 exceeds the threshold value. On the other hand, when none of the probabilities X1, X2, ..., Xn-1 exceeds the threshold value, or when the probability Xn indicating a normal state exceeds the threshold value, the control unit 12 can estimate that no lesion exists in the oral cavity of the subject.
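 The threshold-based estimation described above can be sketched as follows; the single shared 80% threshold and the convention that the final category is the normal state are assumptions for illustration.

```python
def estimate_lesion(probs, labels, threshold=0.80):
    """probs: per-category probabilities from output layer 133, ordered as in
    labels, with the final entry corresponding to the normal state (Xn)."""
    for p, label in zip(probs[:-1], labels[:-1]):
        if p > threshold:
            return f"suspected {label}"  # e.g., only X1 exceeds the threshold
    # No lesion probability exceeds the threshold, or the normal state does.
    return "no lesion estimated"

print(estimate_lesion([0.85, 0.05, 0.04, 0.06],
                      ["oral malignant tumor", "leukoplakia", "lichen planus", "normal"]))
```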
 Although FIG. 3 shows a CNN-based learning model 130, the machine learning model used to construct the learning model 130 can be set arbitrarily. For example, instead of a CNN, a learning model based on R-CNN (Region-based CNN), YOLO (You Only Look Once), SSD (Single Shot Detector), or the like may be used.
 FIG. 4 is a flowchart illustrating the procedure of processing executed by the estimation device 1 according to the first embodiment. The control unit 12 of the estimation device 1 performs the following estimation processing by executing the estimation processing program P1 stored in the storage unit 13.
 The control unit 12 acquires an oral cavity image through the input unit 11 (step S101), and causes the learning model 130 to perform its computation by giving the acquired oral cavity image to the input layer 131 (step S102). The image data of the oral cavity image given to the input layer 131 is sent to the intermediate layer 132, where operations using activation functions including inter-node weights and biases are executed. The convolutional layer 132a and the pooling layer 132b of the intermediate layer 132 extract the image features. The feature data extracted by the convolutional layer 132a and the pooling layer 132b is combined at the nodes constituting the fully connected layer 132c and converted into feature variables by an activation function. The converted feature variables are output to the output layer 133 through the fully connected layer 132c. Based on these feature variables, the output layer 133 converts them into probabilities using a softmax function and outputs the probability of belonging to each category from each node.
 The control unit 12 acquires the computation result from the learning model 130 and estimates the presence or absence of a lesion in the oral mucosa based on the acquired result (step S103). As described above, each node constituting the output layer 133 of the learning model 130 outputs the probability for the corresponding lesion set as a category to be classified. The control unit 12 can estimate the presence or absence of a lesion based on the probabilities output from the respective nodes of the output layer 133.
 The control unit 12 outputs the estimation result through the output unit 14 (step S104). Specifically, the control unit 12 generates display data for displaying the estimation result on the display device 140 and outputs the generated display data to the display device 140, thereby causing the display device 140 to display the estimation result. The display mode of the estimation result can be set arbitrarily. For example, the control unit 12 may generate display data including characters or graphics representing the presence or absence of a specific lesion (for example, an oral malignant tumor) and output it to the display device 140, so that the display device 140 displays the presence or absence of the specific lesion by characters or graphics. Alternatively, the control unit 12 may generate display data including the probability value for each lesion and output it to the display device 140, so that the display device 140 displays the probability value for each lesion as numerical information.
 FIG. 5 is a schematic diagram showing an output example of the estimation device 1. The example of FIG. 5 shows a state in which a subject ID identifying the subject, the subject's name, the oral cavity image used in the estimation processing, the probability for each category, and character information indicating the estimation result are displayed on the display device 140.
 As described above, in the present embodiment, the presence or absence of a lesion in the oral mucosa is estimated using the learning model 130 based on machine learning including deep learning, and the estimation result is output. By using the estimation result for diagnosis support, the possibility that a lesion is overlooked can therefore be reduced.
 In the present embodiment, the estimation device 1 has been described as a computer device installed in a facility such as a hospital, but the estimation device 1 may instead be a server device accessible by communication from a computer device installed in such a facility. In this case, the estimation device 1 acquires, by communication from the computer device at the hospital or the like, an oral cavity image obtained by imaging the inside of the oral cavity of the subject, and estimates the presence or absence of a lesion in the oral mucosa based on the acquired image. The estimation device 1 transmits the estimation result by communication to the computer device, which displays the received estimation result on a display device to support diagnosis by a doctor or the like.
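 A minimal sketch of such a server-side deployment is shown below, assuming a hypothetical HTTP endpoint built with Flask; the route name, file names, and payload format are illustrative and not part of the present disclosure.

```python
from flask import Flask, request, jsonify
from PIL import Image
import torch
from torchvision import transforms

app = Flask(__name__)
model = torch.load("learning_model_130.pt")  # hypothetical file name for the trained model
model.eval()
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@app.route("/estimate", methods=["POST"])
def estimate():
    # The hospital-side computer transmits the oral cavity image by communication.
    image = Image.open(request.files["oral_image"].stream).convert("RGB")
    with torch.no_grad():
        probs = torch.softmax(model(preprocess(image).unsqueeze(0)), dim=1)[0]
    # The per-category probabilities are returned as the estimation result.
    return jsonify({"probabilities": probs.tolist()})
```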
(Embodiment 2)
 In the second embodiment, a configuration will be described in which the estimation device 1 extracts a region corresponding to the oral mucosa from the oral cavity image and estimates the presence or absence of a lesion in the oral mucosa from the oral cavity image of the extracted region.
 FIG. 6 is a block diagram illustrating the configuration of the estimation device 1 according to the second embodiment. The estimation device 1 includes an input unit 11, a control unit 12, a storage unit 13, an output unit 14, a communication unit 15, and an operation unit 16. Since these components are the same as in the first embodiment, their detailed description is omitted.
 In addition to the learning model 130 and the estimation processing program P1 described above, the storage unit 13 stores a region extraction program P2. The region extraction program P2 is a computer program for causing the estimation device 1 to execute processing for extracting a region corresponding to the oral mucosa of the subject from the oral cavity image. A known region extraction algorithm is used for the region extraction program P2.
 An example of such a region extraction algorithm is the GrabCut algorithm. The GrabCut algorithm learns the distributions of pixel values in the foreground and background regions using Gaussian Mixture Models (GMMs), and, for pixels set as an unknown region, computes foreground and background likelihoods from the statistics of the pixel values and the relationship between the foreground and background regions, thereby separating and extracting the foreground region from the background region.
 The estimation device 1 according to the present embodiment extracts, from the entire image area of the oral cavity image, the region corresponding to the oral mucosa as the foreground region, and extracts the remaining area as the background region. Although the subject's teeth, the photographer's fingers, various instruments, and the like may appear in the background region, the foreground region can be separated from the background region by using a region extraction algorithm such as GrabCut.
 In the present embodiment, the region corresponding to the oral mucosa (foreground region) is separated from the other regions (background region); however, a region corresponding to a specific part of the oral mucosa (for example, the tongue) may instead be extracted as the foreground region, with the remaining regions, including other oral mucosa, extracted as the background region.
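 For illustration, OpenCV's implementation of GrabCut can be used for this extraction; the sketch below assumes an initial bounding rectangle roughly enclosing the oral mucosa is available (for example, from a user operation), which the present disclosure does not specify.

```python
import cv2
import numpy as np

def extract_mucosa(image_bgr: np.ndarray, rect: tuple) -> np.ndarray:
    """Separate the oral mucosa (foreground) from teeth, fingers, and other
    structures (background) with GrabCut, given an initial rectangle."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # GMM parameters for the background
    fgd_model = np.zeros((1, 65), np.float64)  # GMM parameters for the foreground
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_RECT)
    # Keep pixels labeled as definite or probable foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    return image_bgr * fg[:, :, None]

image = cv2.imread("oral_image.jpg")  # hypothetical file name
mucosa_only = extract_mucosa(image, rect=(50, 50, 400, 300))
```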
 FIG. 7 is a schematic diagram showing an extraction example. The example of FIG. 7 shows the result of separating the entire image area of the oral cavity image into the region corresponding to the oral mucosa of the subject (foreground region) and the remaining region (background region). In FIG. 7, the background region is shown hatched; it can be seen that it includes a region corresponding to the subject's teeth and a region outside the oral cavity. The estimation device 1 estimates the presence or absence of a lesion in the oral mucosa by passing the oral cavity image of the region corresponding to the oral mucosa (foreground region) to the learning model 130.
 FIG. 8 is a flowchart illustrating the procedure of processing executed by the estimation device 1 according to the second embodiment. The control unit 12 of the estimation device 1 performs the following estimation processing by executing the estimation processing program P1 and the region extraction program P2 stored in the storage unit 13.
 The control unit 12 acquires an oral cavity image through the input unit 11 (step S201) and extracts the region corresponding to the oral mucosa from the entire image area of the acquired image (step S202). Through the processing of step S202, the portions corresponding to the subject's teeth, the imager's fingers, and various instruments are removed from the oral cavity image.
 Next, the control unit 12 causes the learning model 130 to perform its computation by giving the oral cavity image from which the region corresponding to the oral mucosa has been extracted (the image with the portions corresponding to the subject's teeth and the like removed) to the input layer 131 (step S203). The data of the oral cavity image given to the input layer 131 is sent to the intermediate layer 132, where operations using activation functions including inter-node weights and biases are executed. The convolutional layer 132a and the pooling layer 132b of the intermediate layer 132 extract the image features. The feature data extracted by the convolutional layer 132a and the pooling layer 132b is combined at the nodes constituting the fully connected layer 132c and converted into feature variables by an activation function. The converted feature variables are output to the output layer 133 through the fully connected layer 132c. Based on these feature variables, the output layer 133 converts them into probabilities using a softmax function and outputs the probability of belonging to each category from each node.
 The control unit 12 acquires the computation result from the learning model 130 and estimates the presence or absence of a lesion in the oral mucosa based on the acquired result (step S204). As described above, each node constituting the output layer 133 of the learning model 130 outputs the probability for the corresponding lesion set as a category to be classified, and the control unit 12 can estimate the presence or absence of a lesion based on these probabilities.
 The control unit 12 outputs the estimation result through the output unit 14 (step S205). Specifically, the control unit 12 generates display data for displaying the estimation result on the display device 140 and outputs the generated display data to the display device 140, thereby causing the display device 140 to display the estimation result. The display mode of the estimation result can be set arbitrarily. For example, the control unit 12 may generate display data including characters or graphics representing the presence or absence of a specific lesion (for example, an oral malignant tumor), so that the display device 140 displays the presence or absence of the specific lesion by characters or graphics, or may generate display data including the probability value for each lesion, so that the display device 140 displays the probability value for each lesion as numerical information.
 As described above, in the second embodiment, the estimation processing can be executed after removing the portions unnecessary for lesion estimation, so that the estimation accuracy can be improved.
(Embodiment 3)
 In the third embodiment, a method of generating the learning model will be described.
 The learning model 130 used in the estimation device 1 is generated, for example, in a server device 2 communicably connected to the estimation device 1.
 FIG. 9 is a block diagram illustrating the configuration of the server device 2. The server device 2 includes a control unit 21, a storage unit 22, an input unit 23, a communication unit 24, an operation unit 25, and a display unit 26.
 The control unit 21 includes, for example, a CPU, a ROM, and a RAM. The ROM of the control unit 21 stores control programs and the like for controlling the operation of each hardware unit of the server device 2. The CPU in the control unit 21 executes the control programs stored in the ROM and the various programs stored in the storage unit 22 to control the operation of each hardware unit.
 The control unit 21 is not limited to the above configuration including a CPU, a ROM, and a RAM. It may be, for example, one or more control circuits or arithmetic circuits including a GPU, an FPGA, a DSP, volatile or nonvolatile memory, or the like. The control unit 21 may also have functions such as a clock that outputs date and time information, a timer that measures the elapsed time from a measurement start instruction to a measurement end instruction, and a counter that counts numbers.
 The storage unit 22 includes a storage device such as a hard disk drive. The storage unit 22 stores various computer programs executed by the control unit 21, various data used by those programs, data acquired from the outside, and the like. An example of a computer program stored in the storage unit 22 is the model generation program P3 for generating the learning model. The storage unit 22 also includes an oral cavity image database (oral cavity image DB) 220 that stores oral cavity images in association with their annotations.
 The input unit 23 includes an input interface for acquiring data and programs from a recording medium on which various data or programs are recorded. The various data and programs input through the input unit 23 are stored in the storage unit 22.
 The communication unit 24 includes a communication interface connected to a communication network N. The communication network N is, for example, the Internet, or a LAN or WAN (Wide Area Network) for a specific purpose. The communication unit 24 transmits data addressed to the estimation device 1 via the communication network N, and receives via the communication network N data transmitted from the estimation device 1 to the server device 2.
 The operation unit 25 includes input interfaces such as a keyboard and a mouse, and accepts various kinds of operation information and setting information. The control unit 21 performs appropriate control based on the operation information input from the operation unit 25, and stores the setting information in the storage unit 22 as necessary.
 The display unit 26 includes a display device such as a liquid crystal display panel or an organic EL display panel, and displays information to be notified to the administrator or the like of the server device 2 based on control signals output from the control unit 21.
 Although the server device 2 includes the operation unit 25 and the display unit 26 in the present embodiment, they are not essential; the server device 2 may instead accept operations through an externally connected computer and output the information to be notified to that external computer.
 FIG. 10 is a conceptual diagram showing an example of the oral cavity image database 220. The oral cavity image database 220 stores oral cavity images in association with their annotations. The oral cavity images include, for example, images of oral cavities in which a malignant tumor has developed and images of oral cavities exhibiting morphologies specific to oral mucosal diseases (for example, ulcers, erosions, and bulges). Each annotation includes a doctor's diagnosis result. The diagnosis result includes a pathological diagnosis result or a definitive diagnosis result, and in the present embodiment is used as label data indicating either that the associated oral cavity image is normal or which lesion it shows. The annotation may further include information such as the subject ID and the subject's name.
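 For illustration, such an image–annotation database might be exposed to a training pipeline as follows; the CSV index layout, column names, and file paths are assumptions for this sketch and are not specified in the present disclosure.

```python
import csv
from PIL import Image
from torch.utils.data import Dataset

class OralImageDB(Dataset):
    """Pairs each oral cavity image with the label taken from its annotation
    (the doctor's pathological or definitive diagnosis), as in database 220."""
    def __init__(self, index_csv: str, labels: list, transform=None):
        with open(index_csv, newline="") as f:
            self.rows = list(csv.DictReader(f))  # assumed columns: image_path, diagnosis
        self.label_to_idx = {name: i for i, name in enumerate(labels)}
        self.transform = transform

    def __len__(self) -> int:
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        image = Image.open(row["image_path"]).convert("RGB")
        if self.transform:
            image = self.transform(image)
        return image, self.label_to_idx[row["diagnosis"]]
```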
 The procedure for generating the learning model in the server device 2 will now be described.
 FIG. 11 is a flowchart explaining the procedure for generating the learning model. The control unit 21 of the server device 2 accesses the oral cavity image database 220 of the storage unit 22 and acquires the teacher data used for generating the learning model (step S301). The teacher data includes, for example, oral cavity images and their annotations. In the initial stage of generating the learning model, teacher data prepared by the operator of the server device 2 or the like is set. As learning progresses, the estimation results of the learning model 130 and the oral cavity images used in the estimation processing may be acquired from the estimation device 1 and set as teacher data.
 Next, the control unit 21 inputs the image data included in the teacher data into the learning model being trained (step S302) and acquires the computation result from the learning model (step S303). Before learning starts, initial values are assumed to be given to the definition information describing the learning model. The computation by this learning model is the same as that of the learning model 130 in the estimation processing.
 Next, the control unit 21 evaluates the computation result obtained in step S303 (step S304) and determines whether learning is complete (step S305). Specifically, the control unit 21 can evaluate the computation result using an error function (also called an objective function, loss function, or cost function) based on the computation result and the teacher data. In the course of optimizing (minimizing or maximizing) the error function by a gradient descent method such as steepest descent, the control unit 21 determines that learning is complete when the error function falls to or below a threshold value (or rises to or above a threshold value). To avoid the problem of overfitting, techniques such as cross-validation and early stopping may be adopted so that learning is terminated at an appropriate point.
 When determining that learning is not complete (S305: NO), the control unit 21 updates the inter-node weights and biases of the learning model (step S306) and returns the processing to step S301. The control unit 21 can update the weights and biases between the nodes using the error backpropagation method, which updates them sequentially from the output layer of the learning model toward the input layer.
 When determining that learning is complete (S305: YES), the control unit 21 stores the model in the storage unit 22 as a trained learning model (step S307) and ends the processing of this flowchart.
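 A minimal sketch of the training procedure of steps S301 to S307 follows; the cross-entropy error function, the SGD optimizer, the loss threshold, and the batch size are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, dataset, loss_threshold=0.05, max_epochs=100):
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    criterion = nn.CrossEntropyLoss()  # error function used for evaluation (step S304)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient descent

    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for images, labels in loader:        # teacher data (step S301)
            logits = model(images)           # forward computation (steps S302, S303)
            loss = criterion(logits, labels) # evaluate the result (step S304)
            optimizer.zero_grad()
            loss.backward()                  # error backpropagation (step S306)
            optimizer.step()                 # update weights and biases
            epoch_loss += loss.item() * images.size(0)
        epoch_loss /= len(dataset)
        if epoch_loss <= loss_threshold:     # learning complete (S305: YES)
            break
    torch.save(model, "learned_model.pt")    # store the trained model (step S307)
```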
 As described above, in the present embodiment, the learning model 130 used in the estimation device 1 can be generated in the server device 2. The server device 2 transmits the generated learning model to the estimation device 1 in response to a request from the estimation device 1. The estimation device 1 receives the learning model from the server device 2, stores it in the storage unit 13, and can then perform the lesion estimation processing by executing the estimation processing program P1.
 Further, the server device 2 may be configured to newly collect oral cavity images and their annotations at appropriate times after learning is complete, and to retrain the learning model using these data. The oral cavity images may be images captured so as to include at least part of the oral mucosa (see FIG. 2), or images obtained by extracting the region corresponding to the oral mucosa (see FIG. 7). When displaying an estimation result, the estimation device 1 may also accept a selection of whether the estimation result is correct (a diagnosis result) and transmit the accepted diagnosis result to the server device 2 as an annotation. The retraining procedure is exactly the same as the generation procedure: the oral cavity images included in the teacher data are input to the learning model, and retraining is performed by evaluating the error between the computation results obtained as the model's output and the annotations included in the teacher data.
 The embodiments disclosed here are to be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated not by the foregoing description but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
DESCRIPTION OF SYMBOLS
 1 estimation device
 2 server device
 11 input unit
 12 control unit
 13 storage unit
 14 output unit
 15 communication unit
 16 operation unit
 21 control unit
 22 storage unit
 23 input unit
 24 communication unit
 25 operation unit
 26 display unit
 130 learning model
 220 oral cavity image database
 P1 estimation processing program
 P2 region extraction program
 P3 model generation program

Claims (8)

  1.  An estimation device comprising:
      an acquisition unit that acquires an oral cavity image obtained by imaging the inside of the oral cavity of a subject;
      an estimation unit that estimates the presence or absence of a lesion in the oral mucosa of the subject from the oral cavity image acquired by the acquisition unit, using a learning model configured to output information on lesions in the oral mucosa in response to the input of an oral cavity image; and
      an output unit that outputs an estimation result of the estimation unit.
  2.  The estimation device according to claim 1, further comprising a region extraction unit that extracts a region corresponding to the oral mucosa of the subject from the oral cavity image,
      wherein the estimation unit estimates the presence or absence of a lesion in the oral mucosa of the subject from the oral cavity image of the region extracted by the region extraction unit.
  3.  The estimation device according to claim 1 or 2, wherein the estimation unit estimates the presence or absence of at least one lesion belonging to oral malignant tumors, precancerous lesions, benign tumors, traumatic ulcers, inflammatory diseases, viral diseases, fungal infections, autoimmune diseases, stomatitis, angular cheilitis, decubitus ulcers, organic changes of the tongue surface mucosa, or graft-versus-host disease.
  4.  The estimation device according to any one of claims 1 to 3, wherein the learning model is a learning model that has learned the relationship between oral cavity images and the information on lesions, using oral cavity images and annotations for the oral cavity images as teacher data.
  5.  The estimation device according to any one of claims 1 to 4, wherein the learning model is a learning model trained using a convolutional neural network.
  6.  A learning model comprising:
      an input layer into which an oral cavity image obtained by imaging the inside of the oral cavity of a subject is input;
      an output layer that outputs information on lesions in the oral mucosa; and
      an intermediate layer that has learned the relationship between oral cavity images input to the input layer and the information output by the output layer, using oral cavity images and annotations for the oral cavity images as teacher data,
      the learning model causing a computer to function so that, when an oral cavity image is input to the input layer, computation is performed in the intermediate layer and information on lesions in the oral mucosa is output from the output layer.
  7.  A learning model generation method comprising, using a computer:
      acquiring teacher data including an oral cavity image obtained by imaging the inside of the oral cavity of a subject and an annotation for the oral cavity image; and
      generating, based on the acquired teacher data, a learning model that outputs information on lesions in the oral mucosa in response to the input of an oral cavity image.
  8.  A computer program for causing a computer to execute processing of:
      acquiring an oral cavity image obtained by imaging the inside of the oral cavity of a subject;
      estimating the presence or absence of a lesion in the oral mucosa from the acquired oral cavity image, using a learning model configured to output information on lesions in the oral mucosa in response to the input of an oral cavity image; and
      outputting an estimation result.
PCT/JP2019/002196 2019-01-24 2019-01-24 Deduction device, learning model, learning model generation method, and computer program WO2020152815A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2019/002196 WO2020152815A1 (en) 2019-01-24 2019-01-24 Deduction device, learning model, learning model generation method, and computer program
PCT/JP2020/002491 WO2020153471A1 (en) 2019-01-24 2020-01-24 Deduction device, learning model, learning model generation method, and computer program
JP2020567715A JPWO2020153471A1 (en) 2019-01-24 2020-01-24 Estimator, learning model, learning model generation method, and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/002196 WO2020152815A1 (en) 2019-01-24 2019-01-24 Deduction device, learning model, learning model generation method, and computer program

Publications (1)

Publication Number Publication Date
WO2020152815A1 (en) 2020-07-30

Family

ID=71736237

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2019/002196 WO2020152815A1 (en) 2019-01-24 2019-01-24 Deduction device, learning model, learning model generation method, and computer program
PCT/JP2020/002491 WO2020153471A1 (en) 2019-01-24 2020-01-24 Deduction device, learning model, learning model generation method, and computer program

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/002491 WO2020153471A1 (en) 2019-01-24 2020-01-24 Deduction device, learning model, learning model generation method, and computer program

Country Status (2)

Country Link
JP (1) JPWO2020153471A1 (en)
WO (2) WO2020152815A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7180799B1 (en) 2022-01-06 2022-11-30 三菱マテリアル株式会社 Dental information processing device, dental information processing system, program, and dental information processing method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102490077B1 (en) * 2021-01-28 2023-01-18 주식회사 피씨티 Method and system for predicting high risk adenoma related information based on plural of machine-leaned model
KR102577294B1 (en) * 2021-01-28 2023-09-13 주식회사 피씨티 Method and system for predicting adenoma related information based on machine-leaned model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017175282A1 (en) * 2016-04-04 2017-10-12 オリンパス株式会社 Learning method, image recognition device, and program
JP2018532441A (en) * 2015-08-04 2018-11-08 シーメンス アクティエンゲゼルシャフト Visual expression learning to classify brain tumors

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019118807A (en) * 2017-12-27 2019-07-22 Hoya株式会社 Image processing apparatus, computer program and image processing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018532441A (en) * 2015-08-04 2018-11-08 シーメンス アクティエンゲゼルシャフト Visual expression learning to classify brain tumors
WO2017175282A1 (en) * 2016-04-04 2017-10-12 オリンパス株式会社 Learning method, image recognition device, and program

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANANTHARAMAN, RAJARAM; VELAZQUEZ, MATTHEW; LEE, YUGYUNG: "Utilizing Mask R-CNN for Detection and Segmentation of Oral Diseases", 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 3 December 2018 (2018-12-03), pages 2197-2204, XP033507331 *
NOZAKI, KAZUNORI ET AL.: "Efforts for Utilization of AI in Dental Diagnosis", The Nippon Dental Review, vol. 910, 11 August 2018 (2018-08-11), pages 21-23 *
YAMADA, MASAYOSHI: "[Special Project] Medical Applications of Diversifying AI", New Medicine in Japan, vol. 45, March 2018 (2018-03-01), pages 130-133 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7180799B1 (en) 2022-01-06 2022-11-30 三菱マテリアル株式会社 Dental information processing device, dental information processing system, program, and dental information processing method
JP2023100476A (en) * 2022-01-06 2023-07-19 三菱マテリアル株式会社 Dental information processing apparatus, dental information processing system, program, and dental information processing method
JP7313603B1 (en) 2022-01-06 2023-07-25 三菱マテリアル株式会社 Dental information processing device, dental information processing system, program, and dental information processing method
JP2023107199A (en) * 2022-01-06 2023-08-02 三菱マテリアル株式会社 Dental information processing apparatus, dental information processing system, program and dental information processing method

Also Published As

Publication number Publication date
JPWO2020153471A1 (en) 2021-12-02
WO2020153471A1 (en) 2020-07-30

Similar Documents

Publication Publication Date Title
WO2020152815A1 (en) Deduction device, learning model, learning model generation method, and computer program
Shan et al. Lung infection quantification of COVID-19 in CT images with deep learning
JP5220705B2 (en) Image processing apparatus, image processing program, and image processing method
JP6906347B2 (en) Medical image classifiers, methods and programs
US20080139966A1 (en) Automatic tongue diagnosis based on chromatic and textural features classification using bayesian belief networks
WO2021115084A1 (en) Structural magnetic resonance image-based brain age deep learning prediction system
JP6877486B2 (en) Information processing equipment, endoscope processors, information processing methods and programs
JP5576711B2 (en) Image processing apparatus, image processing method, and image processing program
JP5830295B2 (en) Image processing apparatus, operation method of image processing apparatus, and image processing program
JP5576775B2 (en) Image processing apparatus, image processing method, and image processing program
JP6768620B2 (en) Learning support device, operation method of learning support device, learning support program, learning support system, terminal device and program
CN103945755B (en) Image processing apparatus
CN115036002B (en) Treatment effect prediction method based on multi-mode fusion model and terminal equipment
ALbahbah et al. Detection of caries in panoramic dental X-ray images using back-propagation neural network
CN111798445A (en) Tooth image caries identification method and system based on convolutional neural network
CN109310292B (en) Image processing device, learning device, image processing method, recognition criterion generation method, learning method, and computer-readable recording medium containing program
KR102290799B1 (en) Method for providing tooth leison information and apparatus using the same
CN114445784A (en) Method and system for acquiring CRRT screen parameters in real time
KR102186709B1 (en) Method for providing tooth leison information and apparatus using the same
CN107832695A (en) The optic disk recognition methods based on textural characteristics and device in retinal images
Jaiswal et al. An intelligent deep network for dental medical image processing system
US8027939B2 (en) Automatic labeler assignment using a model built from multi-labeler data
CN114613498B (en) Machine learning-based MDT (minimization drive test) clinical decision making assisting method, system and equipment
WO2022071158A1 (en) Diagnosis assistance device, method for operating diagnosis assistance device, program for operating diagnosis assistance device, dementia diagnosis assistance method, and learned model for deriving dementia findings
TWI770591B (en) Computer-implemented method and computing device for predicting cancer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911468

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19911468

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP