WO2019088121A1 - Image diagnosis support apparatus, data collection method, image diagnosis support method, and image diagnosis support program - Google Patents
Image diagnosis support apparatus, data collection method, image diagnosis support method, and image diagnosis support program
- Publication number
- WO2019088121A1 (PCT/JP2018/040381)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- convolutional neural
- lesion
- neural network
- endoscopic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000094—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, extracting biological structures
- A61B1/000096—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, using artificial intelligence
- A61B1/00045—Operational features of endoscopes provided with output arrangements: display arrangement
- A61B1/045—Endoscopes combined with photographic or television appliances: control thereof
- A61B1/063—Endoscopes with illuminating arrangements for monochromatic or narrow-band illumination
- A61B1/273—Endoscopes for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
- A61B1/31—Endoscopes for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
- G06T7/0012—Image analysis: biomedical image inspection
- G06V20/80—Recognising image objects characterised by unique random patterns
- G16H30/40—ICT specially adapted for processing medical images, e.g. editing
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G06T2207/10016—Image acquisition modality: video; image sequence
- G06T2207/10024—Image acquisition modality: color image
- G06T2207/10068—Image acquisition modality: endoscopic image
- G06T2207/20081—Special algorithmic details: training; learning
- G06T2207/20084—Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30028—Subject of image: colon; small intestine
- G06T2207/30096—Subject of image: tumor; lesion
Definitions
- the present invention relates to an image diagnosis support apparatus, a data collection method, an image diagnosis support method, and an image diagnosis support program.
- Cancer is a leading cause of death worldwide; according to World Health Organization (WHO) statistics, an estimated 8.8 million people died of cancer in 2015, with cancers of the digestive system, including the stomach and large intestine, among the most common.
- Gastric cancer is the fifth most common malignancy and the third most common cause of cancer-related death worldwide, with approximately 1 million new cases and about 700,000 deaths each year.
- The prognosis of gastric cancer patients depends on the stage (degree of progression) of the cancer at the time of diagnosis. Advanced gastric cancer has a poor prognosis, whereas the 5-year survival rate for early gastric cancer is 90% or more; many gastric cancers can therefore be cured if early lesions are detected promptly and resected.
- Endoscopic detection of early gastric cancer, followed by endoscopic mucosal resection (EMR) or endoscopic submucosal dissection (ESD), is the most effective means of reducing gastric cancer mortality.
- Gastrointestinal endoscopy, in particular upper gastrointestinal endoscopy (EGD), is the standard procedure for diagnosing gastric cancer, but the false negative rate when detecting gastric cancer by EGD observation is said to be as high as 26% (see Non-Patent Document 1).
- Most gastric cancers arise from atrophic mucosa, and some early gastric cancers show only minor morphological changes that are difficult to distinguish from the background mucosa with atrophic changes, so inexperienced endoscopists tend to miss gastric cancer. Detecting gastric cancer properly therefore requires special training and experience; it is said that training an endoscopist to an adequate level of proficiency requires about 10,000 diagnostic image readings and a period of about 10 years.
- The image recognition ability of AI is now comparable to that of human specialists; however, in routine endoscopy of the digestive organs, diagnostic support technology that exploits the endoscopic image diagnosis capability of AI has not yet been introduced into medical practice, and its practical application is awaited.
- An object of the present invention is to provide an image diagnosis support apparatus, a data collection method, an image diagnosis support method, and an image diagnosis support program capable of supporting the diagnosis of endoscopic images by an endoscopist.
- The image diagnosis support apparatus comprises: a lesion estimation unit that uses a convolutional neural network to estimate the name and position of a lesion present in a digestive tract endoscopic image of a subject captured by a digestive tract endoscopic imaging device, together with information on the accuracy of that estimation; and a display control unit that generates an analysis result image displaying the lesion name, lesion position, and accuracy, and performs control to display the analysis result image on the digestive tract endoscopic image.
- The convolutional neural network has undergone learning processing based on the lesion names and lesion positions of lesions present in a plurality of digestive organ tumor endoscopic images, determined in advance by feature extraction of atrophy, intestinal metaplasia, mucosal elevation or depression, and the condition of the mucosal color tone.
- In the data collection method, the display results of the display control unit are collected as data on lesions in the digestive tract of the subject using the above-described image diagnosis support apparatus.
- The image diagnosis support method uses an apparatus comprising: a lesion estimation unit that uses a convolutional neural network to estimate the name and position of a lesion present in a digestive tract endoscopic image of a subject captured by a digestive tract endoscopic imaging device, together with information on the accuracy of that estimation; and a display control unit that generates an analysis result image displaying the lesion name, lesion position, and accuracy, and displays the analysis result image on the digestive tract endoscopic image.
- The convolutional neural network has undergone learning processing based on the lesion names and lesion positions of lesions present in a plurality of digestive organ tumor endoscopic images, determined in advance by feature extraction of atrophy, intestinal metaplasia, mucosal elevation or depression, and the condition of the mucosal color tone.
- The image diagnosis support program causes a computer to execute: a process of estimating, by means of a convolutional neural network, the name and position of a lesion present in a digestive tract endoscopic image of a subject captured by a digestive tract endoscopic imaging device, together with information on the accuracy of that estimation; and a process of generating an analysis result image displaying the lesion name, lesion position, and accuracy, and performing control to display the analysis result image on the endoscopic image.
- The convolutional neural network has undergone learning processing based on the lesion names and lesion positions of lesions present in a plurality of digestive organ tumor endoscopic images, determined in advance by feature extraction of atrophy, intestinal metaplasia, mucosal elevation or depression, and the condition of the mucosal color tone.
- The criteria used in the present invention for determining a lesion site by feature extraction (atrophy, intestinal metaplasia, mucosal elevation or depression, and the condition of the mucosal color tone) can be set with high accuracy by an experienced endoscopist; they are described in detail, for example, in a book by one of the present inventors ("Pick-up and diagnosis of early gastric cancer by routine endoscopic observation", Toshiaki Hirasawa / Hiroshi Kawachi, edited by Junko Fujisaki, Nippon Medical Center, 2016).
- FIG. 1 is a block diagram showing the overall configuration of the image diagnosis support apparatus in the present embodiment.
- FIG. 2 is a diagram showing the hardware configuration of the image diagnosis support apparatus in the present embodiment.
- FIG. 3 is a diagram showing the structure of the convolutional neural network in the present embodiment.
- FIG. 4 is a diagram showing an example in which an analysis result image is displayed on the endoscopic image in the present embodiment.
- FIG. 5 is a diagram showing patient and lesion characteristics for the endoscopic images used in the evaluation test data set.
- FIGS. 6A and 6B are diagrams explaining the case where the same cancer is present in a plurality of endoscopic images.
- FIG. 7 is a diagram explaining the difference between the lesion position (range) diagnosed by the doctor and the lesion position (range) diagnosed by the convolutional neural network.
- FIGS. 8A and 8B are diagrams showing an example of an endoscopic image and an analysis result image.
- FIG. 9 is a diagram showing the change in sensitivity according to differences in tumor depth and tumor size.
- FIG. 10 is a diagram showing details of the lesions missed by the convolutional neural network.
- FIGS. 11A, 11B, 11C, 11D, 11E, and 11F are diagrams showing endoscopic images containing lesions missed by the convolutional neural network.
- FIG. 12 is a diagram showing details of the non-cancerous lesions detected as gastric cancer by the convolutional neural network.
- FIGS. 13A, 13B, and 13C are diagrams showing analysis result images including non-cancerous lesions detected as gastric cancer by the convolutional neural network.
- FIGS. 14A, 14B, 14C, 14D, 14E, and 14F are endoscopic images of the large intestine including adenomas, hyperplastic polyps, or SSAPs.
- FIGS. 15A, 15B, 15C, 15D, 15E, and 15F are endoscopic images of the large intestine including rare types of colon polyps.
- FIG. 16 is a diagram showing characteristics, such as those of the colon polyps, for the endoscopic images used in the learning data set.
- FIG. 17 is a diagram showing characteristics, such as those of the colon polyps, for the endoscopic images used in the evaluation test data set.
- FIG. 18 is a diagram showing the classification results for false positive and false negative images.
- FIGS. 19A and 19B are diagrams showing the degree of agreement between the CNN classification and the tissue classification.
- FIG. 20 is a diagram showing the degree of agreement between the CNN classification and the tissue classification for colon polyps of 5 mm or less.
- FIGS. 21A, 21B, 21C, 21D, 21E, and 21F are diagrams showing examples of endoscopic images and analysis result images in the second evaluation test.
- FIGS. 22A, 22B, 22C, 22D, 22E, 22F, 22G, and 22H are diagrams showing examples of endoscopic images and analysis result images in the second evaluation test.
- FIG. 23 is a diagram showing patient and lesion characteristics for the endoscopic images used in the evaluation test data set of the third evaluation test.
- FIGS. 24A, 24B, 24C, and 24D are diagrams showing examples of endoscopic images and analysis result images in the third evaluation test.
- Further figures for the third evaluation test show the white-light sensitivity, the NBI narrow-band light sensitivity, and the overall sensitivity; the detection results for esophageal cancer / non-esophageal cancer by the convolutional neural network and by biopsy; the white-light sensitivity and the NBI narrow-band light sensitivity; and the degree of agreement between the CNN classification and the depth of invasion.
- FIGS. 31A, 31B, 31C, 31D, 31E, and 31F are diagrams showing endoscopic images and analysis result images erroneously detected and classified by the convolutional neural network, as false positive images.
- FIGS. 32A, 32B, 32C, 32D, and 32E are diagrams showing endoscopic images including esophageal cancers not detected by the convolutional neural network, as false negative images.
- FIG. 1 is a block diagram showing the overall configuration of the image diagnosis support apparatus 100.
- FIG. 2 is a diagram showing an example of a hardware configuration of the image diagnosis support device 100 in the present embodiment.
- The image diagnosis support apparatus 100 uses the endoscopic image diagnosis capability of a convolutional neural network (CNN) to assist a doctor (for example, an endoscopist) in diagnosing endoscopic images during endoscopy of the digestive organs (for example, the esophagus, stomach, duodenum, and large intestine).
- An endoscope imaging device 200 (corresponding to the “digestive endoscope imaging device” of the present invention) and a display device 300 are connected to the image diagnosis assisting device 100.
- The endoscope imaging device 200 is, for example, an electronic endoscope (also referred to as a videoscope) incorporating an imaging unit, or a camera-equipped endoscope in which a camera head with a built-in imaging unit is mounted on an optical endoscope.
- the endoscope imaging apparatus 200 is inserted into, for example, the digestive tract from the mouth or nose of a subject, and images a diagnostic target site in the digestive tract.
- The endoscope imaging device 200 outputs endoscopic image data D1 (still images) representing the endoscopic images (corresponding to the "digestive tract endoscopic image" of the present invention) obtained by imaging the diagnosis target region in the digestive tract to the image diagnosis support apparatus 100.
- the endoscope moving image may be used instead of the endoscope image data D1.
- the display device 300 is, for example, a liquid crystal display, and displays the analysis result image output from the image diagnosis support device 100 in a distinguishable manner to the doctor.
- The image diagnosis support apparatus 100 is a computer including, as main components, a central processing unit (CPU) 101, a read-only memory (ROM) 102, a random access memory (RAM) 103, an external storage device (for example, a flash memory) 104, a communication interface 105, and a GPU (Graphics Processing Unit) 106.
- Each function of the image diagnosis support apparatus 100 is realized by, for example, the CPU 101 referring to a control program (for example, the image diagnosis support program) and various data (for example, endoscopic image data, teacher data, and model data of the convolutional neural network such as structure data and learned weight parameters) stored in the ROM 102, the RAM 103, the external storage device 104, or the like.
- the RAM 103 functions as, for example, a work area or a temporary save area of data.
- Instead of or in addition to processing by software, part or all of each function may be realized by processing by a dedicated hardware circuit (for example, a DSP: Digital Signal Processor).
- the diagnostic imaging support apparatus 100 includes an endoscopic image acquisition unit 10, a lesion estimation unit 20, and a display control unit 30.
- the learning device 40 has a function of generating model data (structure data, learned weight parameters, etc.) of the convolutional neural network used in the image diagnosis support device 100.
- The endoscopic image acquisition unit 10 acquires the endoscopic image data D1 output from the endoscope imaging device 200 and outputs it to the lesion estimation unit 20. The endoscopic image acquisition unit 10 may acquire the endoscopic image data D1 directly from the endoscope imaging device 200, from the external storage device 104 or the like, or via an internet connection or the like.
- The lesion estimation unit 20 uses the convolutional neural network to estimate the lesion name and lesion position of a lesion present in the endoscopic image represented by the endoscopic image data D1 output from the endoscopic image acquisition unit 10, together with the accuracy of the estimated lesion name and lesion position. The lesion estimation unit 20 then outputs, to the display control unit 30, the endoscopic image data D1 and estimation result data D2 representing the estimated lesion name, lesion position, and accuracy.
- the lesion estimation unit 20 estimates a probability score as an index indicating the accuracy of the lesion name and the lesion position.
- the probability score is represented by a value greater than 0 and less than or equal to 1. The higher the probability score, the more accurate the lesion name and lesion position.
- The probability score is merely one example of an index indicating the accuracy of the lesion name and lesion position; an index of any other form may be used.
- For example, the probability score may be expressed as a value from 0% to 100%, or as one of several discrete levels; a sketch of one way to compute such a score follows.
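As a concrete illustration of how a probability score can be produced, the following minimal sketch (not taken from the patent; the class labels and function name are hypothetical) converts raw class scores (logits) into a lesion name and a probability score in (0, 1] using a softmax:

```python
import numpy as np

LESION_CLASSES = ["early_gastric_cancer", "advanced_gastric_cancer"]  # assumed labels

def to_probability_score(logits: np.ndarray) -> tuple[str, float]:
    """Convert raw class logits into (lesion name, probability score in (0, 1])."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()              # softmax over the lesion classes
    best = int(probs.argmax())
    return LESION_CLASSES[best], float(probs[best])

name, score = to_probability_score(np.array([2.1, 0.3]))
print(f"{name}: {score:.2f}")  # e.g. "early_gastric_cancer: 0.86"
```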
- A convolutional neural network is a type of feedforward neural network based on knowledge of the structure of the visual cortex of the brain. Basically, it has a structure in which convolutional layers responsible for extracting local features of an image alternate with pooling layers (subsampling layers) that aggregate those features locality by locality. Each layer of the convolutional neural network has a plurality of neurons, with individual neurons arranged in a manner corresponding to the visual cortex. The basic function of each neuron is the input and output of signals.
- However, the inputs are not output unchanged: a connection weight is set for each input, and when the sum of the weighted inputs exceeds the threshold set for the neuron, a signal is output to the neurons of the next layer.
- The connection weights between these neurons are calculated from the learning data, which makes it possible to estimate output values for real-time input data.
- The algorithm constituting the convolutional neural network is not particularly limited.
- FIG. 3 is a diagram showing the configuration of a convolutional neural network according to the present embodiment.
- the model data (structure data, learned weight parameters, etc.) of the convolutional neural network is stored in the external storage device 104 together with the image diagnosis support program.
- the convolutional neural network has, for example, a feature extraction unit Na and an identification unit Nb.
- the feature extraction unit Na performs a process of extracting an image feature from the input image (endoscope image data D1).
- the identification unit Nb outputs an estimation result of an image from the image feature extracted by the feature extraction unit Na.
- The feature extraction unit Na is configured by hierarchically connecting a plurality of feature quantity extraction layers Na1, Na2, and so on.
- Each feature quantity extraction layer Na1, Na2, ... comprises a convolution layer, an activation layer, and a pooling layer.
- the first feature amount extraction layer Na1 scans the input image for each predetermined size by raster scan. Then, the feature amount extraction layer Na1 extracts the feature amounts included in the input image by performing the feature amount extraction process on the scanned data by the convolution layer, the activation layer, and the pooling layer.
- the first feature amount extraction layer Na1 extracts, for example, relatively simple single feature amounts such as a linear feature amount extending in the horizontal direction and a linear feature amount extending in the oblique direction.
- The second feature quantity extraction layer Na2 scans the image (also referred to as a feature map) input from the first layer Na1, for example by raster scan at a predetermined size, and likewise extracts the feature quantities contained in the input by applying the convolution, activation, and pooling layers to the scanned data. The second layer extracts higher-dimensional composite feature quantities by combining the plurality of feature quantities extracted by the first layer Na1 with reference to their positional relationships.
- The third and subsequent feature quantity extraction layers execute the same processing as the second layer Na2. The output of the final feature quantity extraction layer (each value in the plurality of feature maps) is then input to the identification unit Nb.
- The identification unit Nb is configured, for example, as a multilayer perceptron in which a plurality of fully connected layers are hierarchically connected.
- The fully connected layer on the input side of the identification unit Nb is fully connected to each value in the plurality of feature maps acquired from the feature extraction unit Na, performs a product-sum operation while varying the weighting coefficient for each value, and outputs the result.
- Each subsequent fully connected layer of the identification unit Nb is fully connected to the values output by the elements of the preceding fully connected layer, and performs a product-sum operation while varying the weighting coefficient for each value.
- At the final stage of the identification unit Nb, a layer (for example, a softmax function) is provided that outputs the lesion name and lesion position of the lesion present in the endoscopic image, together with the probability score (accuracy) of the lesion name and lesion position.
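To make the Na/Nb structure concrete, here is a PyTorch sketch of a network organized as described: stacked convolution/activation/pooling blocks as the feature extraction unit Na, followed by fully connected layers with a softmax output as the identification unit Nb. The layer sizes are assumptions for illustration, not the patent's actual network, and for brevity the sketch covers only lesion-name classification; the described apparatus additionally estimates the lesion position.

```python
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Sketch of the described CNN: feature extraction unit Na + identification unit Nb."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Feature extraction unit Na: convolution -> activation -> pooling, repeated
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Identification unit Nb: fully connected layers over the flattened feature maps
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 256), nn.ReLU(),  # assumes a 224x224 input image
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)  # probability score per lesion class

model = LesionClassifier()
probs = model(torch.randn(1, 3, 224, 224))  # dummy endoscopic image batch
```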
- The convolutional neural network undergoes learning processing using reference data (hereinafter "teacher data") marked in advance by experienced endoscopists, so that it acquires an estimation function capable of outputting the desired estimation results (here, the lesion name, lesion position, and probability score) from an input endoscopic image.
- The convolutional neural network in the present embodiment is configured to receive the endoscopic image data D1 (the input in FIG. 3) and to output, as the estimation result data D2 (the output in FIG. 3), the lesion name, lesion position, and probability score corresponding to the image features of the endoscopic image represented by the endoscopic image data D1.
- More preferably, the convolutional neural network may be configured so that, in addition to the endoscopic image data D1, information on the subject's age, sex, region, or medical history can be input (for example, supplied as an input element of the identification unit Nb).
- The importance of real-world data in clinical practice is widely recognized, and adding such patient attribute information allows the system to be developed into one that is more useful in clinical practice. That is, the features of an endoscopic image correlate with the patient's age, sex, region, and medical history, so supplying patient attributes such as age to the convolutional neural network in addition to the endoscopic image data D1 makes it possible to estimate the lesion name and lesion position with higher accuracy (one possible realization is sketched below). This approach is particularly worth adopting when the present invention is used internationally, since disease characteristics may differ between regions and ethnic groups.
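One plausible realization of "supplied as an input element of the identification unit Nb" is to concatenate an encoded patient-attribute vector with the image features before the fully connected layers. The sketch below illustrates this under assumed dimensions and attribute encodings; it is not the patent's verified design.

```python
import torch
import torch.nn as nn

class LesionClassifierWithAttributes(nn.Module):
    """Image features plus patient attributes (age, sex, region, history) fused in Nb."""
    def __init__(self, image_feat_dim: int = 128, attr_dim: int = 4, num_classes: int = 2):
        super().__init__()
        self.image_encoder = nn.Sequential(            # stand-in for the Na feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(2),
            nn.Flatten(), nn.Linear(32 * 4, image_feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Sequential(               # identification unit Nb
            nn.Linear(image_feat_dim + attr_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        feats = self.image_encoder(image)
        fused = torch.cat([feats, attrs], dim=1)       # attributes enter the Nb input
        return torch.softmax(self.classifier(fused), dim=1)

model = LesionClassifierWithAttributes()
image = torch.randn(1, 3, 224, 224)
attrs = torch.tensor([[0.65, 1.0, 0.0, 1.0]])          # e.g. normalized age, sex, region, history
probs = model(image, attrs)
```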
- Before the estimation by the convolutional neural network, the lesion estimation unit 20 may perform preprocessing on the endoscopic image data D1, such as conversion of the image size and aspect ratio, color separation, color conversion, color extraction, and luminance gradient extraction; a sketch of such preprocessing follows.
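For illustration, a minimal version of the size/aspect-ratio conversion and color handling might look as follows (a sketch using torchvision; the target resolution, normalization constants, and file name are assumptions, not values from the patent):

```python
from PIL import Image
import torchvision.transforms as T

# Resize/aspect-ratio conversion plus tensor conversion and channel-wise normalization.
preprocess = T.Compose([
    T.Resize((224, 224)),                    # fix size and aspect ratio for the CNN input
    T.ToTensor(),                            # HWC uint8 -> CHW float in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics, a common default
                std=[0.229, 0.224, 0.225]),
])

image = Image.open("endoscopic_frame.png").convert("RGB")  # hypothetical file name
tensor = preprocess(image).unsqueeze(0)      # add batch dimension: (1, 3, 224, 224)
```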
- The display control unit 30 generates an analysis result image that displays, on the endoscopic image represented by the endoscopic image data D1 output from the lesion estimation unit 20, the lesion name, lesion position, and probability score represented by the estimation result data D2 output from the lesion estimation unit 20. The display control unit 30 then outputs the endoscopic image data D1 and analysis result image data D3 representing the generated analysis result image to the display device 300.
- A digital image processing system may also be connected so as to apply processing such as structure enhancement, color enhancement, contrast enhancement, or high-definition processing to the lesion in the endoscopic image, assisting the viewer's understanding and judgment.
- the display device 300 causes the analysis result image represented by the analysis result image data D3 to be displayed on the endoscope image represented by the endoscope image data D1 output from the display control unit 30.
- the displayed endoscopic image and analysis result image are used, for example, for double check operation of the endoscopic image.
- Because the time required to display each endoscopic image and its analysis result image is very short, the apparatus can be used not only for double-checking endoscopic images but also as a real-time diagnostic aid for the doctor on endoscopic moving images.
- FIG. 4 is a diagram showing an example in which an analysis result image is displayed on the endoscopic image in the present embodiment.
- In the analysis result image, a rectangular frame 50 indicating the lesion position (range) estimated by the lesion estimation unit 20 is displayed together with the lesion name (early gastric cancer) and the probability score (0.8).
- The rectangular frame indicating the estimated lesion position is displayed in yellow.
- The display control unit 30 may change the display mode of the lesion position identification information (in the present embodiment, the rectangular frame) that identifies the lesion position in the analysis result image.
- For reference, the rectangular frame 52 indicates the position (range) of the lesion diagnosed as gastric cancer by a doctor; it is not displayed in the actual analysis result image, but shows that the estimation result matches the judgment of an experienced endoscopist.
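As an illustration of the overlay the display control unit produces, the following sketch (a hypothetical helper using the Pillow library; the coordinates, color, and file names are assumptions) draws a yellow rectangular frame with the lesion name and probability score on an endoscopic image:

```python
from PIL import Image, ImageDraw

def draw_analysis_result(image: Image.Image, box: tuple, name: str, score: float) -> Image.Image:
    """Overlay a lesion bounding box with its name and probability score."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    draw.rectangle(box, outline="yellow", width=3)           # lesion position (range)
    x1, y1 = box[0], box[1]
    draw.text((x1, max(0, y1 - 14)), f"{name} {score:.2f}", fill="yellow")
    return annotated

frame = Image.open("endoscopic_frame.png").convert("RGB")    # hypothetical file name
result = draw_analysis_result(frame, (120, 80, 260, 200), "early gastric cancer", 0.80)
result.save("analysis_result.png")
```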
- The learning device 40 uses teacher data D4 stored in an external storage device (not shown) to perform learning processing on the convolutional neural network, so that the convolutional neural network of the lesion estimation unit 20 can estimate the lesion name, lesion position, and probability score from the endoscopic image data D1.
- As the teacher data D4, the learning device 40 uses endoscopic images (corresponding to the "digestive organ tumor endoscopic images" of the present invention) of the digestive tracts of subjects captured by the endoscope imaging device 200, together with the lesion name and lesion position of each lesion present in those images, determined in advance by a doctor through feature extraction of atrophy, intestinal metaplasia, mucosal elevation or depression, and the condition of the mucosal color tone.
- Specifically, the learning device 40 performs the learning processing of the convolutional neural network so as to reduce the error (also referred to as loss) between the output data obtained when an endoscopic image is input to the convolutional neural network and the correct values (lesion name and lesion position).
- The endoscopic images used as the teacher data D4 include endoscopic images captured by irradiating the inside of the subject's digestive tract with white light, endoscopic images captured after spraying a dye (for example, indigo carmine or iodine solution) in the subject's digestive tract, and endoscopic images captured by irradiating the inside of the subject's digestive tract with narrow-band light (for example, NBI (Narrow Band Imaging) narrow-band light or BLI (Blue Laser Imaging) narrow-band light).
- The endoscopic images used as the teacher data D4 in the learning processing were drawn mainly from the database of a top-class Japanese cancer treatment hospital, and all of the images were carefully examined and sorted by Japanese specialists with extensive diagnostic and treatment experience, with the lesion positions marked by precise manual processing.
- The endoscopic images in the teacher data D4 may be raw pixel-value data, or data subjected to predetermined color conversion processing or the like. Data from which texture features, shape features, spread features, and the like have been extracted as preprocessing may also be used. In addition to the endoscopic image data, the teacher data D4 may be associated with information on age, sex, region, or past medical history for the learning processing.
- A known method may be used as the algorithm by which the learning device 40 performs the learning processing.
- The learning device 40 performs the learning processing on the convolutional neural network using, for example, the well-known backpropagation (error backpropagation) method, adjusting the network parameters (weighting coefficients, biases, etc.); a minimal training-loop sketch follows. The model data (structure data, learned weight parameters, and the like) of the convolutional neural network that has undergone the learning processing by the learning device 40 are stored in the external storage device 104 together with, for example, the image diagnosis support program.
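For concreteness, a minimal sketch of such a learning loop is shown below, assuming the classification-only model from the earlier sketch and a hypothetical labeled data loader; the optimizer choice and hyperparameters are illustrative, not from the patent:

```python
import torch
import torch.nn as nn

# model: a LesionClassifier-style network from the earlier sketch, minus the softmax
# (CrossEntropyLoss expects raw logits); loader yields (image batch, lesion-name labels).
def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4) -> None:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()        # loss between output and correct lesion name
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                  # backpropagation of the error
            optimizer.step()                 # adjust weighting coefficients and biases

# After training, persist the learned weight parameters alongside the structure data:
# torch.save(model.state_dict(), "cnn_model.pt")
```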
- As described above, the image diagnosis support apparatus 100 includes: a lesion estimation unit that uses a convolutional neural network to estimate the name and position of a lesion present in a digestive tract endoscopic image of a subject captured by a digestive tract endoscopic imaging device, together with information on the accuracy of that estimation; and a display control unit that generates an analysis result image displaying the lesion name, lesion position, and accuracy, and performs control to display the analysis result image on the digestive tract endoscopic image.
- The convolutional neural network has undergone learning processing based on the lesion names and lesion positions of lesions present in a plurality of digestive organ tumor endoscopic images, determined in advance by feature extraction of atrophy, intestinal metaplasia, mucosal elevation or depression, and the condition of the mucosal color tone.
- Because the convolutional neural network is trained on endoscopic images of the digestive organs obtained in advance for a plurality of subjects, together with the definitive diagnosis results (lesion name and lesion position) obtained in advance for each subject, it can estimate the lesion name and lesion position in the digestive tract of a new subject in a short time and with accuracy comparable to that of an experienced endoscopist. Therefore, in endoscopic examination of the digestive tract, the endoscopic image diagnosis capability of the convolutional neural network according to the present invention can be used to strongly support the diagnosis of endoscopic images by an endoscopist.
- The apparatus can be used directly by an endoscopist as a diagnostic support tool in a clinic, as a central diagnostic support service processing endoscopic images transmitted from a plurality of clinics, or as a remote diagnostic support service for distant sites operated via an internet connection.
- Endoscopic images from EGD examinations performed between April 2004 and December 2016 were prepared as the learning data set (teacher data) used for the learning of the convolutional neural network in the image diagnosis support apparatus.
- The EGD examinations were performed for screening or preoperative examination in daily clinical practice, and the endoscopic images were captured with standard endoscopes (GIF-H290Z, GIF-H290, GIF-XP290N, GIF-H260Z, GIF-Q260J, GIF-XP260, GIF-XP260NS, GIF-N260, etc.; Olympus Medical Systems, Tokyo) and standard endoscopic video systems (EVIS LUCERA CV-260/CLV-260, EVIS LUCERA ELITE CV-290/CLV-290SL; Olympus Medical Systems).
- The endoscopic images in the learning data set included endoscopic images captured by irradiating the inside of the subject's digestive tract with white light, endoscopic images captured after spraying a dye (for example, indigo carmine or iodine solution), and endoscopic images captured by irradiating the inside of the subject's digestive tract with narrow-band light (for example, NBI narrow-band light or BLI narrow-band light).
- Endoscopic images of poor quality were excluded from the learning data set, for example those degraded by poor gastric distension due to insufficient insufflation, bleeding after biopsy, halation, lens fogging, defocus, or mucus.
- For the evaluation test, EGD was carried out using a standard endoscope (GIF-H290Z; Olympus Medical Systems, Tokyo) and a standard endoscopic video system (EVIS LUCERA ELITE CV-290/CLV-290SL; Olympus Medical Systems).
- During EGD, the entire inside of the stomach was observed and endoscopic images were captured; the number of images was 18 to 69 per patient.
- FIG. 5 is a diagram showing patient and lesion characteristics regarding the endoscopic image used in the evaluation test data set.
- the median tumor size (diameter) was 24 mm, and the range of tumor sizes (diameter) was 3-170 mm.
- Superficial lesions (0-IIa, 0-IIb, 0-IIc, 0-IIa+IIc, 0-IIc+IIb, 0-IIc+III) were the most frequent, accounting for 55 lesions (71.4%).
- 52 lesions (67.5%) were early gastric cancer (T1) and 25 lesions (32.5%) were advanced gastric cancer (T2-T4).
- The evaluation test data set was input to the convolutional-neural-network-based image diagnosis support apparatus that had undergone learning processing with the learning data set, and it was evaluated whether gastric cancer could be detected correctly from each endoscopic image constituting the evaluation test data set. A case in which gastric cancer was detected correctly was counted as a correct answer.
- When it detected gastric cancer in an endoscopic image, the convolutional neural network output the lesion name (early gastric cancer or advanced gastric cancer), the lesion position, and a probability score.
- FIG. 6 is a view for explaining the case where the same cancer is present in a plurality of endoscopic images.
- rectangular frames 54 and 56 indicate lesion positions (ranges) of stomach cancer manually set by the doctor.
- a rectangular frame 58 indicates the gastric cancer lesion position (range) estimated by the convolutional neural network.
- FIG. 6A shows an endoscopic image obtained by imaging gastric cancer in a distant view
- FIG. 6B shows an endoscopic image obtained by imaging the gastric cancer in a near field.
- The convolutional neural network could not detect the gastric cancer in the distant view but did detect it in the near view; in this evaluation test, such a case was counted as a correct answer.
- FIG. 7 is a diagram for explaining the difference between the lesion position (range) diagnosed by the doctor and the lesion position (range) diagnosed by the convolutional neural network.
- a rectangular frame 60 indicates a lesion position (range) of stomach cancer manually set by the doctor.
- a rectangular frame 62 indicates the gastric cancer lesion position (range) estimated by the convolutional neural network.
- If the convolutional neural network detected at least a part of the gastric cancer, the result was counted as correct in this evaluation test.
- The sensitivity and positive predictive value (PPV) of the convolutional neural network's diagnostic ability for detecting gastric cancer were calculated using the following formulas (1) and (2):
- Sensitivity = (number of gastric cancers detected by the convolutional neural network) / (number of gastric cancers present in the endoscopic images constituting the evaluation test data set (77)) ... (1)
- Positive predictive value = (number of gastric cancers detected by the convolutional neural network) / (number of lesions diagnosed as gastric cancer by the convolutional neural network) ... (2)
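The evaluation logic can be sketched as follows (hypothetical helper functions; the rule that any partial overlap with the true lesion counts as a detection follows the criterion described above):

```python
def boxes_overlap(a: tuple, b: tuple) -> bool:
    """True if two (x1, y1, x2, y2) rectangles intersect at all; per the criterion
    above, detecting at least part of a lesion counts as a correct detection."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def sensitivity_and_ppv(true_lesions: list, cnn_detections: list) -> tuple[float, float]:
    """Formula (1): detected true lesions / all true lesions.
    Formula (2): detected true lesions / lesions the CNN diagnosed as cancer."""
    detected = sum(
        any(boxes_overlap(gt, det) for det in cnn_detections) for gt in true_lesions
    )
    return detected / len(true_lesions), detected / len(cnn_detections)

sens, ppv = sensitivity_and_ppv(
    [(10, 10, 50, 50)], [(30, 30, 80, 80), (100, 100, 140, 140)]
)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")  # sensitivity=1.00, PPV=0.50
```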
- The convolutional neural network completed the analysis of the 2,296 endoscopic images constituting the evaluation test data set in as little as 47 seconds.
- The convolutional neural network detected 71 of the 77 gastric cancers (lesions); that is, the sensitivity of its diagnostic ability was 92.2%.
- FIG. 8 is a view showing an example of an endoscopic image and an analysis result image.
- FIG. 8A is an endoscopic image containing a slightly reddish flat lesion in the middle gastric body. Because the gastric cancer resembles the atrophic background mucosa, detecting it in the endoscopic image of FIG. 8A appears difficult even for an endoscopist.
- FIG. 8B is an analysis result image showing that the convolutional neural network detected stomach cancer (0-IIc, 5 mm, tub1, T1a).
- a rectangular frame 64 indicates the gastric cancer lesion position (range) manually set by the doctor.
- a rectangular frame 66 indicates the gastric cancer lesion position (range) estimated by the convolutional neural network.
- FIG. 9 is a diagram showing the change in sensitivity according to the difference in tumor depth and tumor size in this evaluation test.
- The convolutional neural network detected 70 of the 71 gastric cancers (98.6%) with a tumor size (diameter) of 6 mm or more.
- The convolutional neural network also detected all invasive cancers (T1b, T2, T3, T4a).
- The convolutional neural network missed six gastric cancers. Five of the six were minute cancers (tumor size ≤ 5 mm), and all of the missed gastric cancers were differentiated-type cancers that are difficult to distinguish from gastritis even by endoscopy. Moreover, since the doubling time of gastric intramucosal cancer (the time for the tumor volume to double) is considered to be 2 to 3 years, even if such a small cancer were missed, it would likely still be detected as an intramucosal cancer at the next annual EGD; this does not impair the utility or clinical applicability of the convolutional neural network of the present invention.
- FIG. 10 is a diagram showing details of a lesion (gastric cancer) missed by the convolutional neural network.
- FIG. 11 is a diagram showing an endoscopic image (analysis result image) in which there is a lesion missed by the convolutional neural network.
- A rectangular frame 70 indicates the lesion position (range) of a gastric cancer (greater curvature of the antrum, 0-IIc, 3 mm, tub1, T1a) missed by the convolutional neural network.
- A rectangular frame 72 indicates the lesion position (range) of a gastric cancer (lesser curvature of the middle gastric body, 0-IIc, 4 mm, tub1, T1a) missed by the convolutional neural network.
- A rectangular frame 74 indicates the lesion position (range) of a gastric cancer (posterior wall of the antrum, 0-IIc, 4 mm, tub1, T1a) missed by the convolutional neural network.
- A rectangular frame 76 indicates the lesion position (range) of a gastric cancer (posterior wall of the antrum, 0-IIc, 5 mm, tub1, T1a) missed by the convolutional neural network.
- A rectangular frame 78 indicates the lesion position (range) of a gastric cancer (antrum, 0-IIc, 5 mm, tub1, T1a) missed by the convolutional neural network.
- A rectangular frame 80 indicates a non-cancerous lesion (the pyloric ring) erroneously estimated to be gastric cancer by the convolutional neural network.
- A rectangular frame 82 indicates the lesion position (range) of a gastric cancer (anterior wall of the lower gastric body, 0-IIc, 16 mm, tub1, T1a) missed by the convolutional neural network.
- FIG. 12 is a diagram showing details of the non-cancerous lesions detected as gastric cancer by the convolutional neural network. As shown in FIG. 12, approximately half of the non-cancerous lesions detected as gastric cancer were gastritis with changes in color tone or irregular changes of the mucosal surface. Such gastritis is often difficult even for an endoscopist to distinguish from gastric cancer; indeed, the positive predictive value (PPV) of gastric cancer diagnosis by gastric biopsy has been reported to be 3.2% to 5.6%.
- In light of this, the PPV of the convolutional neural network is considered clinically well tolerated.
- FIG. 13 is a diagram showing an analysis result image including non-cancerous lesions detected as stomach cancer by the convolutional neural network.
- A rectangular frame 84 indicates the lesion position (range) of gastritis (intestinal metaplasia with an irregular mucosal surface structure) detected as gastric cancer by the convolutional neural network.
- A rectangular frame 86 indicates the lesion position (range) of gastritis (whitish mucosa due to local atrophy) detected as gastric cancer by the convolutional neural network.
- A rectangular frame 88 indicates the lesion position (range) of gastritis (mucosal redness due to chronic gastritis) detected as gastric cancer by the convolutional neural network.
- The endoscopic images included adenocarcinomas, adenomas, hyperplastic polyps, SSAPs (sessile serrated adenomas/polyps), juvenile polyps, Peutz-Jeghers polyps, inflammatory polyps, lymphoid aggregates, and the like, histologically proven by qualified pathologists.
- The colonoscopies were performed for screening or preoperative examination in daily clinical practice, and the endoscopic images were obtained with standard colonoscopes and a standard endoscopic video system (EVIS LUCERA; CF TYPE H260AL/I, PCF TYPE Q260AI, Q260AZI, H290I, H290Z; Olympus Medical Systems).
- FIG. 14A shows an endoscopic image of the large intestine including a protruding adenoma.
- FIG. 14B shows an endoscopic image of the colon including flat type tumor (see dotted line 90).
- FIG. 14C shows an endoscopic image of the large intestine containing protruding hyperplastic polyps.
- FIG. 14D shows an endoscopic image of the colon including flat type hyperplastic polyps (see dotted line 92).
- FIG. 14E shows an endoscopic image of a large intestine including a projecting SSAP.
- FIG. 14F shows an endoscopic image of the colon including the flat SSAP (see dotted line 94).
- The endoscopic images in the learning data set included endoscopic images captured by irradiating the inside of the subject's large intestine with white light and endoscopic images captured by irradiating the inside of the subject's large intestine with narrow-band light (for example, NBI narrow-band light). Endoscopic images of poor quality due to fecal residue, halation, or post-biopsy bleeding were excluded from the learning data set.
- FIG. 15A shows an endoscopic image of a Peutz-Jeghers polyp in the subject's large intestine captured under white light.
- FIG. 15B shows an endoscopic image of a Peutz-Jeghers polyp in the subject's large intestine captured under narrow-band light (NBI narrow-band light).
- FIG. 15C shows an endoscopic image captured by irradiating white light on an inflammatory polyp in the colon of a subject.
- FIG. 15D shows an endoscopic image captured by irradiating narrow band light (NBI narrow band light) on an inflammatory polyp in the large intestine of a subject.
- FIG. 15E shows an endoscopic image obtained by imaging a nonneoplastic mucosa that looks like a polypoid region in the colon of a subject.
- FIG. 15F shows an endoscopic image of a lymphoid aggregate that looks like a polypoid region in the subject's large intestine.
- FIG. 16 shows characteristics, such as those of the colon polyps, for the endoscopic images used in the learning data set. In FIG. 16, when one endoscopic image contains a plurality of colon polyps, each polyp is counted as a separate endoscopic image.
- FIG. 17 shows characteristics, such as those of the colon polyps, for the endoscopic images used in the evaluation test data set. Here too, when one endoscopic image contains a plurality of colon polyps, each polyp is counted as a separate endoscopic image.
- The evaluation test data set was input to the convolutional-neural-network-based image diagnosis support apparatus that had undergone learning processing with the learning data set, and it was evaluated whether colon polyps could be detected correctly from each endoscopic image constituting the evaluation test data set. A case in which a colon polyp was detected correctly was counted as a correct answer.
- When the convolutional neural network detected a colon polyp in an endoscopic image, it output the lesion name (type), the lesion position, and a probability score.
- The sensitivity and positive predictive value (PPV) of the convolutional neural network's diagnostic ability for detecting colon polyps were calculated using the following formulas (1) and (2):
- Sensitivity = (number of colon polyps detected by the convolutional neural network) / (number of colon polyps present in the endoscopic images constituting the evaluation test data set) ... (1)
- Positive predictive value = (number of colon polyps detected by the convolutional neural network) / (number of lesions diagnosed as colon polyps by the convolutional neural network) ... (2)
- The convolutional neural network analyzed the endoscopic images constituting the evaluation test data set at a high speed of 48.7 images per second (that is, an analysis processing time of about 20 ms per endoscopic image).
- The convolutional neural network estimated lesion positions for 1,247 colon polyps in the endoscopic images constituting the evaluation test data set, and correctly detected 1,073 of the 1,172 true (histologically proven) colon polyps.
- the sensitivity and positive predictive value for the diagnostic ability of the convolutional neural network were 92% and 86%, respectively.
- For endoscopic images captured by irradiating the inside of the subject's large intestine with white light, the sensitivity and positive predictive value of the convolutional neural network's diagnostic ability were 90% and 82%, respectively.
- For the endoscopic images in the evaluation test data set containing true colon polyps smaller than 10 mm, the convolutional neural network estimated lesion positions for 1,143 colon polyps, of which 969 true colon polyps were correctly detected.
- the sensitivity and positive predictive value for the diagnostic ability of the convolutional neural network were 92% and 85%, respectively.
- FIG. 18 is a diagram showing classification results of false positive images and false negative images.
- Fifty-five false positive images (33%) were colon folds, the majority of them images with insufficient insufflation.
- Twelve false positive images (7%) were suspected to be genuine polyps but could not be finally confirmed.
- FIG. 19 is a diagram showing the degree of coincidence between the CNN classification and the tissue classification.
- The positive predictive value and negative predictive value for the diagnostic (classification) ability of the convolutional neural network in this case were 64% and 90%, respectively. Also, many colon polyps histologically proven to be SSAP were misclassified by the convolutional neural network as adenomas (26%) or hyperplastic polyps (52%).
- FIG. 20 is a diagram showing the degree of agreement between CNN classification and tissue classification for a colon polyp of 5 mm or less.
- the positive predictive value and the negative predictive value for the diagnostic ability (classification ability) of the convolutional neural network at that time were 77% and 88%, respectively.
- the positive predictive value and the negative predictive value for the diagnostic ability (classification ability) of the convolutional neural network at that time were 84% and 88%, respectively.
- These results show that the diagnostic (classification) ability of the convolutional neural network is equivalent regardless of the size of the colon polyps.
- It was thus found that the convolutional neural network detects colon polyps, even small ones, with considerable accuracy and at remarkable speed, and may help reduce missed colon polyps in colonoscopy. Furthermore, it was found that the convolutional neural network can correctly classify the detected colon polyps and strongly support the endoscopist's diagnosis of endoscopic images.
- FIGS. 21 and 22 are diagrams showing examples of endoscopic images and analysis result images in the second evaluation test.
- FIG. 21A shows an endoscopic image and an analysis result image including a colorectal polyp (adenoma) correctly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 110 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (adenoma: Adenoma) and the probability score (0.97); an illustrative sketch of how such an overlay could be rendered is given after the figure descriptions below.
- the rectangular frame 112 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (adenoma) and is not displayed in the actual analysis result image.
- FIG. 21B shows an endoscopic image and an analysis result image including a colorectal polyp (hyperplastic polyp) correctly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 114 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (hyperplastic polyp: Hyperplastic) and the probability score (0.83).
- the rectangular frame 116 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (hyperplastic polyp) and is not displayed in the actual analysis result image.
- FIG. 21C shows, as a false negative image, an endoscopic image including a colorectal polyp (adenoma) not detected, i.e., missed, by the convolutional neural network.
- the rectangular frame 118 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (adenoma) and is not displayed in the actual analysis result image.
- FIG. 21D shows, as a false positive image, an endoscopic image and an analysis result image including a normal colorectal fold incorrectly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 120 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (hyperplastic polyp: Hyperplastic) and the probability score (0.70).
- FIG. 21E shows an endoscopic image and an analysis result image including a colorectal polyp (adenoma) whose lesion position (range) was correctly detected by the convolutional neural network but which was incorrectly classified.
- in the analysis result image, a rectangular frame 122 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (hyperplastic polyp: Hyperplastic) and the probability score (0.54).
- the rectangular frame 124 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (adenoma) and is not displayed in the actual analysis result image.
- FIG. 21F shows an endoscopic image and an analysis result image including a colorectal polyp (hyperplastic polyp) whose lesion position (range) was correctly detected by the convolutional neural network but which was incorrectly classified.
- in the analysis result image, a rectangular frame 126 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (adenoma: Adenoma) and the probability score (0.62).
- the rectangular frame 128 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (hyperplastic polyp) and is not displayed in the actual analysis result image.
- FIG. 22A shows, as a false negative image, an endoscopic image including colorectal polyps (adenomas) not detected, i.e., missed, by the convolutional neural network.
- rectangular frames 130 and 132 indicate, for reference, the lesion positions (ranges) of the histologically proven colorectal polyps (adenomas) and are not displayed in the actual analysis result image.
- the colorectal polyps (adenomas) indicated by the rectangular frames 130 and 132 are considered to have been missed by the convolutional neural network because they are so small as to be difficult to recognize as polyps.
- FIG. 22B shows, as a false negative image, an endoscopic image including a colorectal polyp (adenoma) not detected, i.e., missed, by the convolutional neural network.
- the rectangular frame 134 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (adenoma) and is not displayed in the actual analysis result image.
- the colorectal polyp (adenoma) indicated by the rectangular frame 134 is considered to have been missed by the convolutional neural network because the image is dark and the polyp is difficult to recognize.
- FIG. 22C shows, as a false negative image, an endoscopic image including a colorectal polyp (adenoma) not detected, i.e., missed, by the convolutional neural network.
- the rectangular frame 136 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (adenoma) and is not displayed in the actual analysis result image. The colorectal polyp (adenoma) indicated by the rectangular frame 136 is considered to have been missed by the convolutional neural network because it was imaged from the side or only partially.
- FIG. 22D shows, as a false negative image, an endoscopic image including a colorectal polyp (adenoma) not detected, i.e., missed, by the convolutional neural network.
- the rectangular frame 138 indicates, for reference, the lesion position (range) of the histologically proven colorectal polyp (adenoma) and is not displayed in the actual analysis result image.
- the colorectal polyp (adenoma) indicated by the rectangular frame 138 is considered to have been missed by the convolutional neural network because it is so large that it is difficult to recognize as a polyp.
- FIG. 22E shows, as a false positive image, an endoscopic image and an analysis result image including an ileocecal valve (a normal structure) incorrectly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 140 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (others: The others) and the probability score (0.62).
- FIG. 22F shows, as a false positive image, an endoscopic image and an analysis result image including a normal colorectal fold incorrectly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 142 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (adenoma: Adenoma) and the probability score (0.32).
- FIG. 22G shows, as a false positive image, an endoscopic image and an analysis result image including halation (an imaging artifact) erroneously detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 144 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (adenoma: Adenoma) and the probability score (0.43).
- FIG. 22H shows, as a false positive image, an endoscopic image and an analysis result image including a polyp incorrectly detected and classified by the convolutional neural network.
- the polyp was suspected of being a genuine polyp but could not be definitively confirmed.
- in the analysis result image, a rectangular frame 146 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (hyperplastic polyp: Hyperplastic) and the probability score (0.48).
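- the analysis result images described for FIGS. 21 and 22 all follow the same pattern: an estimated rectangular frame plus a lesion name and a probability score drawn over the endoscopic image. As an illustrative sketch only (the patent does not prescribe an implementation; OpenCV, the function name, and the coordinates below are our assumptions), such an overlay could be rendered as follows:

```python
import cv2  # OpenCV

def draw_analysis_overlay(image, box, lesion_name, probability):
    """Draw an estimated lesion frame with a "name (score)" caption.

    image: BGR endoscopic image (numpy array); box: (x1, y1, x2, y2).
    """
    x1, y1, x2, y2 = box
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 255), 2)
    caption = f"{lesion_name} ({probability:.2f})"
    cv2.putText(image, caption, (x1, max(y1 - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
    return image

# Hypothetical usage mirroring FIG. 21A (placeholder coordinates):
image = cv2.imread("endoscopic_image.png")  # assumes the file exists
image = draw_analysis_overlay(image, (120, 80, 260, 210), "Adenoma", 0.97)
cv2.imwrite("analysis_result_image.png", image)
```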
- for the learning of the convolutional neural network in the image diagnosis support apparatus, 8,428 endoscopic images of the esophagus (from 384 patients), collected between February 2016 and April 2017, were prepared as a training data set (teacher data).
- the endoscopic images include esophageal cancers, specifically squamous cell carcinoma (ESCC) or adenocarcinoma (EAC), histologically proven by certified pathologists.
- the endoscopies were performed for screening or preoperative examination in daily clinical practice, and the endoscopic images were captured using standard endoscopes (GIF-H290Z, GIF-H290, GIF-XP290N, GIF-H260Z, GIF-H260; Olympus Medical Systems, Tokyo) and standard endoscopic video systems (EVIS LUCERA CV-260/CLV-260, EVIS LUCERA ELITE CV-290/CLV-290SL; Olympus Medical Systems).
- the endoscopic images serving as the training data set include endoscopic images captured by irradiating the subject's esophagus with white light and endoscopic images captured by irradiating the subject's esophagus with narrowband light for NBI (narrow band imaging). Endoscopic images of poor quality due to halation, lens fogging, defocus, mucus, or insufficient insufflation were excluded from the training data set.
- the endoscopic images included 397 squamous cell carcinoma lesions, consisting of 332 superficial esophageal cancer lesions and 65 advanced cancer lesions, and 32 adenocarcinoma lesions, consisting of 19 superficial esophageal cancer lesions and 13 advanced cancer lesions.
- experienced endoscopists, each having performed more than 2,000 upper endoscopies, manually and precisely set the lesion name (superficial esophageal cancer or advanced esophageal cancer) and the lesion position of every esophageal cancer (squamous cell carcinoma or adenocarcinoma) lesion as feature extraction markings.
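- in short, each training image carries a lesion name plus a manually marked lesion position. A minimal sketch of what one such teacher-data record could look like is given below; the JSON layout and field names are illustrative assumptions, not the patent's actual format:

```python
import json

# One hypothetical teacher-data record: an esophageal endoscopic image
# annotated with the lesion name and the manually marked lesion position.
record = {
    "image_file": "esophagus_0001.png",  # placeholder file name
    "light_mode": "white_light",         # or "nbi_narrowband"
    "lesions": [
        {
            "name": "superficial_esophageal_cancer",  # or advanced
            "histology": "squamous_cell_carcinoma",   # or adenocarcinoma
            "bbox": [145, 72, 318, 240],  # x1, y1, x2, y2 feature marking
        }
    ],
}
print(json.dumps(record, indent=2))
```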
- the endoscopic images serving as the evaluation test data set, like the training data set, included endoscopic images captured by irradiating the subject's esophagus with white light and endoscopic images captured by irradiating the subject's esophagus with narrowband light for NBI.
- the median tumor size (diameter) was 20 mm, and tumor sizes (diameter) ranged from 5 to 700 mm.
- 43 lesions were of superficial type (0-I, 0-IIa, 0-IIb, 0-IIc) and 6 lesions were of advanced type.
- in terms of tumor depth, there were 42 superficial esophageal cancer lesions (mucosal cancer: T1a, submucosal cancer: T1b) and 7 advanced cancer lesions (T2-T4). Histopathologically, 41 lesions were squamous cell carcinoma and 8 lesions were adenocarcinoma.
- the evaluation test data set was input to the convolutional-neural-network-based image diagnosis support apparatus that had undergone learning processing using the training data set, and it was evaluated whether esophageal cancer could be correctly detected from each endoscopic image constituting the evaluation test data set. When the esophageal cancer was correctly detected, the result was regarded as "correct".
- when the convolutional neural network detects an esophageal cancer in an endoscopic image, it outputs the lesion name (superficial esophageal cancer or advanced esophageal cancer), the lesion position, and a probability score.
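- the text regards a detection as "correct" without spelling out the matching criterion between the estimated lesion position and the proven one. Purely as an illustration, a common choice is an intersection-over-union (IoU) test, sketched below; the function names and the threshold value are assumptions:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detection_correct(estimated_box, proven_box, threshold=0.5):
    # Assumed criterion: the CNN's estimated lesion position counts as
    # "correct" when it sufficiently overlaps the proven lesion range.
    return iou(estimated_box, proven_box) >= threshold
```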
- the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) representing the diagnostic ability of the convolutional neural network to detect esophageal cancer in each endoscopic image were calculated using the following equations (1) to (4).
- Sensitivity = (number of endoscopic images in which the convolutional neural network correctly detected esophageal cancer) / (number of endoscopic images constituting the evaluation test data set in which esophageal cancer is present) ... (1)
- Specificity = (number of endoscopic images in which the convolutional neural network correctly detected the absence of esophageal cancer) / (number of endoscopic images constituting the evaluation test data set in which esophageal cancer is absent) ... (2)
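- expressed over per-image counts, equations (1) and (2), together with the standard definitions presumably behind (3) and (4), reduce to the usual confusion-matrix ratios. A minimal sketch, with counts derived from the per-image results reported below (1,118 images in total, 168 containing cancer, 125 of them detected, 188 false positives):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-image diagnostic metrics for esophageal cancer detection.

    tp: cancer images in which the CNN detected the cancer   (numerator of (1))
    fn: cancer images in which the CNN missed the cancer
    tn: non-cancer images in which the CNN reported nothing  (numerator of (2))
    fp: non-cancer images in which the CNN reported a lesion
    """
    return {
        "sensitivity": tp / (tp + fn),  # equation (1)
        "specificity": tn / (tn + fp),  # equation (2)
        "ppv": tp / (tp + fp),          # (3), assumed standard definition
        "npv": tn / (tn + fn),          # (4), assumed standard definition
    }

print(diagnostic_metrics(tp=125, fp=188, tn=1118 - 168 - 188, fn=168 - 125))
# -> sensitivity ~0.74, specificity ~0.80, ppv ~0.40, npv ~0.95
```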
- the convolutional neural network completed the analysis of the 1,118 endoscopic images constituting the evaluation test data set in 27 seconds, i.e., roughly 41 images per second.
- the convolutional neural network correctly detected all seven esophageal cancers with tumor sizes of less than 10 mm.
- the positive predictive value of the convolutional neural network's diagnostic ability was 40%, with misdiagnoses caused by shadows and normal structures, but the negative predictive value was 95%.
- the convolutional neural network correctly classified the detected esophageal cancers (superficial esophageal cancer or advanced esophageal cancer) with 98% accuracy.
- FIG. 24 is a diagram showing examples of endoscopic images and analysis result images in the third evaluation test.
- FIG. 24A shows an endoscopic image (an endoscopic image captured by irradiating the subject's esophagus with white light) and an analysis result image including an esophageal cancer correctly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 150 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.91).
- the rectangular frame 152 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 24B corresponds to FIG. 24A and shows an endoscopic image captured by irradiating the subject's esophagus with narrowband light for NBI, together with an analysis result image, including the esophageal cancer correctly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 154 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.97).
- the rectangular frame 156 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 24C shows, as a false negative image, an endoscopic image (an endoscopic image captured by irradiating the subject's esophagus with white light) including an esophageal cancer not detected, i.e., missed, by the convolutional neural network.
- the rectangular frame 158 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 24D corresponds to FIG. 24C and shows an endoscopic image captured by irradiating the subject's esophagus with narrowband light for NBI, together with an analysis result image, including the esophageal cancer correctly detected and classified by the convolutional neural network.
- in the analysis result image, a rectangular frame 160 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.98).
- the rectangular frame 162 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 25 is a diagram showing the detection results of esophageal cancer/non-esophageal cancer by the convolutional neural network and the detection results of esophageal cancer/non-esophageal cancer by biopsy for 47 cases with esophageal cancer and 50 cases without esophageal cancer.
- in this evaluation test, when the convolutional neural network correctly detected esophageal cancer/non-esophageal cancer in at least one of the endoscopic images of a case, the case was regarded as correctly diagnosed (comprehensive diagnosis). As shown in FIG. 25, with comprehensive diagnosis, the convolutional neural network correctly detected esophageal cancer in 98% (46/47) of the cases with esophageal cancer. Although not shown, the convolutional neural network also correctly detected all esophageal cancers with tumor sizes of less than 10 mm.
- FIG. 26 is a diagram showing, for the cases shown in FIG. 25, the sensitivity in endoscopic images captured by irradiating white light (hereinafter, white light sensitivity), the sensitivity in endoscopic images captured by irradiating narrowband light for NBI (hereinafter, NBI narrowband light sensitivity), and the sensitivity in endoscopic images captured by irradiating at least one of white light and narrowband light for NBI (hereinafter, comprehensive sensitivity).
- for squamous cell carcinoma, the white light sensitivity, the NBI narrowband light sensitivity, and the comprehensive sensitivity were 79%, 89%, and 97%, respectively.
- for adenocarcinoma, the white light sensitivity, the NBI narrowband light sensitivity, and the comprehensive sensitivity were 88%, 88%, and 100%, respectively.
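- comprehensive sensitivity, as defined for FIG. 26, counts a case as detected when the cancer is found under at least one illumination. A minimal sketch of that per-case aggregation (the case data here are hypothetical):

```python
# Per-case detection flags: did the CNN find the cancer in any white-light
# image and in any NBI image of the case? (hypothetical data)
cases = [
    {"white_light": True,  "nbi": True},
    {"white_light": False, "nbi": True},   # rescued by NBI
    {"white_light": False, "nbi": False},  # missed under both lights
]

n = len(cases)
white_light_sensitivity = sum(c["white_light"] for c in cases) / n
nbi_sensitivity = sum(c["nbi"] for c in cases) / n
# Comprehensive sensitivity: detected under at least one illumination.
comprehensive_sensitivity = sum(c["white_light"] or c["nbi"] for c in cases) / n

print(white_light_sensitivity, nbi_sensitivity, comprehensive_sensitivity)
# -> 0.333..., 0.666..., 0.666...
```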
- FIG. 27 is a diagram showing the detection results of esophageal cancer/non-esophageal cancer by the convolutional neural network and the detection results of esophageal cancer/non-esophageal cancer by biopsy for each endoscopic image captured by irradiating white light or narrowband light for NBI.
- FIG. 28 is a diagram showing the white light sensitivity and the NBI narrowband light sensitivity for the endoscopic images shown in FIG. 27.
- the convolutional neural network correctly detected esophageal cancer in 74% (125/168) of the endoscopic images diagnosed as containing esophageal cancer as a result of biopsy. The per-image sensitivity, specificity, positive predictive value, and negative predictive value of the convolutional neural network's diagnostic ability were 74%, 80%, 40%, and 95%, respectively.
- the NBI narrowband light sensitivity (81%) was higher than the white light sensitivity (69%).
- for squamous cell carcinoma, the white light sensitivity and the NBI narrowband light sensitivity were 72% and 84%, respectively.
- for adenocarcinoma, the white light sensitivity and the NBI narrowband light sensitivity were 55% and 67%, respectively.
- FIG. 29 is a diagram showing the degree of agreement between the CNN classification and the depth of invasion.
- 100% (89/89) of all the esophageal cancers were correctly classified by the convolutional neural network. That is, 100% (75/75) of the esophageal cancers histologically proven to be superficial esophageal cancers were correctly classified as superficial esophageal cancers, and 100% (14/14) of the esophageal cancers histologically proven to be advanced esophageal cancers were correctly classified as advanced esophageal cancers.
- thus, the classification accuracy of the convolutional neural network is very high.
- the classification accuracies of the convolutional neural network for squamous cell carcinoma and adenocarcinoma were 99% (146/147) and 90% (19/21), respectively.
- FIG. 30 is a diagram showing the classification results of the false positive and false negative images. As shown in FIG. 30, 95 of the 188 false positive images (50%) contained shadows. In addition, 61 false positive images (32%) contained normal structures that are easy to distinguish from esophageal cancer, most of them the esophagogastric junction (EGJ) and the left main bronchus. Furthermore, 32 false positive images (17%) contained benign lesions that could be misdiagnosed as esophageal cancer, the majority being postoperative scarring, focal atrophy, Barrett's esophagus, and inflammation.
- FIG. 31A is a diagram showing, as a false positive image, an endoscopic image and an analysis result image erroneously detected and classified by the convolutional neural network because of accompanying shadows.
- in the analysis result image, a rectangular frame 170 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.70).
- FIG. 31B is a diagram showing, as a false positive image, an endoscopic image and an analysis result image erroneously detected and classified by the convolutional neural network because a normal structure (the esophagogastric junction) that is easy to distinguish from esophageal cancer was included.
- as shown in FIG. 31B, in the analysis result image, a rectangular frame 172 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.57).
- FIG. 31C is a diagram showing, as a false positive image, an endoscopic image and an analysis result image erroneously detected and classified by the convolutional neural network because a normal structure (the left main bronchus) that is easy to distinguish from esophageal cancer was included.
- in the analysis result image, a rectangular frame 174 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.60).
- FIG. 31D is a diagram showing, as a false positive image, an endoscopic image and an analysis result image erroneously detected and classified by the convolutional neural network because a normal structure (a vertebral body) that is easy to distinguish from esophageal cancer was included.
- in the analysis result image, a rectangular frame 176 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.80).
- FIG. 31E is a diagram showing, as a false positive image, an endoscopic image and an analysis result image erroneously detected and classified by the convolutional neural network because a benign lesion (a postoperative scar) that could be misdiagnosed as esophageal cancer was included. As shown in FIG. 31E, in the analysis result image, a rectangular frame 178 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.88).
- FIG. 31F is a diagram showing, as a false positive image, an endoscopic image and an analysis result image erroneously detected and classified by the convolutional neural network because a benign lesion (focal atrophy) that could be misdiagnosed as esophageal cancer was included.
- as shown in FIG. 31F, in the analysis result image, a rectangular frame 180 indicating the lesion position (range) estimated by the convolutional neural network is displayed together with the lesion name (superficial esophageal cancer) and the probability score (0.83).
- FIG. 32A is a diagram showing, as a false negative image, an endoscopic image including an esophageal cancer not detected by the convolutional neural network because the lesion was at a distant position in the endoscopic image and difficult to diagnose.
- the rectangular frame 182 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 32B is a diagram showing, as a false negative image, an endoscopic image including an esophageal cancer not detected by the convolutional neural network because only a part of the lesion appeared in the endoscopic image, making diagnosis difficult.
- the rectangular frame 184 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 32C is a diagram showing, as a false negative image, an endoscopic image including an esophageal cancer not detected by the convolutional neural network as a result of being misdiagnosed as inflammation owing to the background mucosa.
- the rectangular frame 186 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 32D is a diagram showing, as a false negative image, an endoscopic image including an esophageal cancer not detected by the convolutional neural network because the squamous cell carcinoma, imaged under narrowband light for NBI, appears unclearly.
- the rectangular frame 188 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- FIG. 32E is a diagram showing, as a false negative image, an endoscopic image including an esophageal cancer (Barrett's esophageal adenocarcinoma) not detected by the convolutional neural network, presumably because the training data contained too few examples of Barrett's esophageal adenocarcinoma.
- the rectangular frame 190 indicates, for reference, the histologically proven lesion position (range) of the esophageal cancer and is not displayed in the actual analysis result image.
- from the above, it has been found that the convolutional neural network effectively detects esophageal cancers with considerable accuracy and at a remarkable speed, and may therefore help reduce missed esophageal cancers in endoscopy of the esophagus. Furthermore, it has been found that the convolutional neural network can accurately classify the detected esophageal cancers and strongly support the endoscopist's diagnosis of the endoscopic image. It is believed that, with further learning processing, the convolutional neural network will achieve even higher diagnostic accuracy.
- the present invention is useful as an image diagnosis support apparatus, a data collection method, an image diagnosis support method, and an image diagnosis support program capable of supporting an endoscopist's diagnosis of endoscopic images.
- endoscope image acquisition unit; 20 lesion estimation unit; 30 display control unit; 40 learning device; 100 image diagnosis support device; 101 CPU; 102 ROM; 103 RAM; 104 external storage device; 105 communication interface; 200 endoscope imaging device; 300 display device; D1 endoscope image data; D2 estimation result data; D3 analysis result image data; D4 teacher data
Priority Applications (7)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019520910A JP6657480B2 (ja) | 2017-10-30 | 2018-10-30 | 画像診断支援装置、画像診断支援装置の作動方法および画像診断支援プログラム |
| EP18873255.6A EP3705025A4 (en) | 2017-10-30 | 2018-10-30 | IMAGE DIAGNOSIS ASSISTANT APPARATUS, DATA COLLECTION PROCESS, IMAGE DIAGNOSIS ASSISTANCE PROCESS AND IMAGE DIAGNOSIS ASSISTANCE PROGRAM |
| BR112020008774-2A BR112020008774A2 (pt) | 2017-10-30 | 2018-10-30 | Aparelho para auxiliar no diagnóstico por imagem, método para a coleta de dados, método para auxiliar no diagnóstico por imagem e programa para auxiliar no diagnóstico por imagem |
| US16/760,458 US11633084B2 (en) | 2017-10-30 | 2018-10-30 | Image diagnosis assistance apparatus, data collection method, image diagnosis assistance method, and image diagnosis assistance program |
| CN201880071367.9A CN111655116A (zh) | 2017-10-30 | 2018-10-30 | 图像诊断辅助装置、资料收集方法、图像诊断辅助方法及图像诊断辅助程序 |
| SG11202003973VA SG11202003973VA (en) | 2017-10-30 | 2018-10-30 | Image diagnosis assistance apparatus, data collection method, image diagnosis assistance method, and image diagnosis assistance program |
| KR1020207015517A KR20200106028A (ko) | 2017-10-30 | 2018-10-30 | 화상 진단 지원 장치, 자료 수집 방법, 화상 진단 지원 방법 및 화상 진단 지원 프로그램 |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017-209232 | 2017-10-30 | ||
| JP2017209232 | 2017-10-30 | ||
| JP2018007967 | 2018-01-22 | ||
| JP2018-007967 | 2018-01-22 | ||
| JP2018038828 | 2018-03-05 | ||
| JP2018-038828 | 2018-03-05 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019088121A1 true WO2019088121A1 (ja) | 2019-05-09 |
Family
ID=66333530
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/040381 Ceased WO2019088121A1 (ja) | 2017-10-30 | 2018-10-30 | 画像診断支援装置、資料収集方法、画像診断支援方法および画像診断支援プログラム |
Country Status (9)
| Country | Link |
|---|---|
| US (1) | US11633084B2 (en) |
| EP (1) | EP3705025A4 (en) |
| JP (2) | JP6657480B2 (en) |
| KR (1) | KR20200106028A (en) |
| CN (1) | CN111655116A (en) |
| BR (1) | BR112020008774A2 (en) |
| SG (1) | SG11202003973VA (en) |
| TW (1) | TW201922174A (en) |
| WO (1) | WO2019088121A1 (en) |
Cited By (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020008834A1 (ja) * | 2018-07-05 | 2020-01-09 | 富士フイルム株式会社 | 画像処理装置、方法及び内視鏡システム |
| WO2020174747A1 (ja) * | 2019-02-26 | 2020-09-03 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、内視鏡システム、医用画像処理方法、及びプログラム |
| JP2021002339A (ja) * | 2019-06-21 | 2021-01-07 | ストラックスコープ ピーティワイ リミテッドStraxcorp Pty Ltd | 画像内での構造又は物質セグメンテーションに基づいた機械学習分類のための方法及びシステム |
| WO2021005856A1 (ja) * | 2019-07-08 | 2021-01-14 | 株式会社日立製作所 | 破面解析装置及び破面解析方法 |
| CN112330686A (zh) * | 2019-08-05 | 2021-02-05 | 罗雄彪 | 肺部支气管的分割及标定方法 |
| WO2021033303A1 (ja) * | 2019-08-22 | 2021-02-25 | Hoya株式会社 | 訓練データ生成方法、学習済みモデル及び情報処理装置 |
| JP2021058464A (ja) * | 2019-10-08 | 2021-04-15 | 公立大学法人会津大学 | 大腸内視鏡検査補助装置、大腸内視鏡検査補助方法及び大腸内視鏡検査補助プログラム |
| WO2021095446A1 (ja) * | 2019-11-11 | 2021-05-20 | 富士フイルム株式会社 | 情報表示システムおよび情報表示方法 |
| JP2021079565A (ja) * | 2019-11-15 | 2021-05-27 | オーアイ・イノベーション株式会社 | 髄位置推定装置および製材システム |
| JP2021100555A (ja) * | 2019-12-24 | 2021-07-08 | 富士フイルム株式会社 | 医療画像処理装置、内視鏡システム、診断支援方法及びプログラム |
| WO2021054477A3 (ja) * | 2019-09-20 | 2021-07-22 | 株式会社Aiメディカルサービス | 消化器官の内視鏡画像による疾患の診断支援方法、診断支援システム、診断支援プログラム及びこの診断支援プログラムを記憶したコンピュータ読み取り可能な記録媒体 |
| WO2021199294A1 (ja) * | 2020-03-31 | 2021-10-07 | 日本電気株式会社 | 情報処理装置、表示方法、及びプログラムが格納された非一時的なコンピュータ可読媒体 |
| WO2021205777A1 (ja) * | 2020-04-08 | 2021-10-14 | 富士フイルム株式会社 | プロセッサ装置及びその作動方法 |
| WO2021220279A1 (en) * | 2020-05-01 | 2021-11-04 | Given Imaging Ltd. | Systems and methods for selecting images of event indicators |
| WO2021220822A1 (ja) * | 2020-04-27 | 2021-11-04 | 公益財団法人がん研究会 | 画像診断装置、画像診断方法、画像診断プログラムおよび学習済みモデル |
| JP2022502150A (ja) * | 2018-10-02 | 2022-01-11 | インダストリー アカデミック コオペレーション ファウンデーション、ハルリム ユニヴァーシティ | 胃内視鏡イメージのディープラーニングを利用して胃病変を診断する装置及び方法 |
| JP2022507002A (ja) * | 2018-10-02 | 2022-01-18 | インダストリー アカデミック コオペレーション ファウンデーション、ハルリム ユニヴァーシティ | リアルタイムに取得される胃内視鏡イメージに基づいて胃病変を診断する内視鏡装置及び方法 |
| WO2022185651A1 (ja) * | 2021-03-04 | 2022-09-09 | Hoya株式会社 | プログラム、情報処理方法及び情報処理装置 |
| WO2023042546A1 (ja) * | 2021-09-17 | 2023-03-23 | Hoya株式会社 | コンピュータプログラム、情報処理方法及び内視鏡 |
| JP2023519472A (ja) * | 2020-03-21 | 2023-05-11 | スマート・メディカル・システムズ・リミテッド | 機械的に強化されたトポグラフィのための人工知能検出システム |
| US20230218175A1 (en) * | 2020-07-31 | 2023-07-13 | Tokyo University Of Science Foundation | Image Processing Device, Image Processing Method, Image Processing Program, Endoscope Device, and Endoscope Image Processing System |
| JP2023106327A (ja) * | 2022-01-20 | 2023-08-01 | 株式会社Aiメディカルサービス | 検査支援装置、検査支援方法および検査支援プログラム |
| CN116681681A (zh) * | 2023-06-13 | 2023-09-01 | 富士胶片(中国)投资有限公司 | 内窥镜图像的处理方法、装置、用户设备及介质 |
| WO2024009631A1 (ja) * | 2022-07-06 | 2024-01-11 | 富士フイルム株式会社 | 画像処理装置及び画像処理装置の作動方法 |
| JP2024050897A (ja) * | 2019-12-13 | 2024-04-10 | Hoya株式会社 | 機械学習モデルの出力を使用して視覚的証拠に基づいて映像信号内の被写体を検出するための装置、方法、およびコンピュータ可読記憶媒体 |
| US12048413B2 (en) | 2018-06-22 | 2024-07-30 | Ai Medical Service Inc. | Diagnostic assistance method, diagnostic assistance system, diagnostic assistance program, and computer-readable recording medium storing therein diagnostic assistance program for disease based on endoscopic image of digestive organ |
| US12169963B2 (en) | 2020-03-11 | 2024-12-17 | Olympus Corporation | Processing system, image processing method, learning method, and processing device |
| WO2025027815A1 (ja) * | 2023-08-02 | 2025-02-06 | オリンパスメディカルシステムズ株式会社 | 内視鏡診断支援方法、推論モデル、内視鏡画像処理装置、内視鏡画像処理システムおよび内視鏡画像処理プログラム |
| WO2025084278A1 (ja) * | 2023-10-20 | 2025-04-24 | オリンパス株式会社 | 画像診断装置および画像診断方法 |
Families Citing this family (47)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6818424B2 (ja) * | 2016-04-13 | 2021-01-20 | キヤノン株式会社 | 診断支援装置、情報処理方法、診断支援システム及びプログラム |
| US10810460B2 (en) | 2018-06-13 | 2020-10-20 | Cosmo Artificial Intelligence—AI Limited | Systems and methods for training generative adversarial networks and use of trained generative adversarial networks |
| US11100633B2 (en) | 2018-06-13 | 2021-08-24 | Cosmo Artificial Intelligence—Al Limited | Systems and methods for processing real-time video from a medical image device and detecting objects in the video |
| WO2020026341A1 (ja) * | 2018-07-31 | 2020-02-06 | オリンパス株式会社 | 画像解析装置および画像解析方法 |
| US11510561B2 (en) * | 2018-08-21 | 2022-11-29 | Verily Life Sciences Llc | Endoscope defogging |
| CN109523522B (zh) * | 2018-10-30 | 2023-05-09 | 腾讯医疗健康(深圳)有限公司 | 内窥镜图像的处理方法、装置、系统及存储介质 |
| US20210407077A1 (en) * | 2018-12-04 | 2021-12-30 | Hoya Corporation | Information processing device and model generation method |
| CN112566540B (zh) * | 2019-03-27 | 2023-12-19 | Hoya株式会社 | 内窥镜用处理器、信息处理装置、内窥镜系统、程序以及信息处理方法 |
| US12380555B2 (en) * | 2019-07-19 | 2025-08-05 | The Jackson Laboratory | Convolutional neural networks for classification of cancer histological images |
| CN110517745B (zh) * | 2019-08-15 | 2023-06-27 | 中山大学肿瘤防治中心 | 医疗检查结果的展示方法、装置、电子设备及存储介质 |
| TWI726459B (zh) * | 2019-10-25 | 2021-05-01 | 中國醫藥大學附設醫院 | 遷移學習輔助預測系統、方法及電腦程式產品 |
| EP3846477B1 (en) * | 2020-01-05 | 2023-05-03 | Isize Limited | Preprocessing image data |
| TWI725716B (zh) * | 2020-01-21 | 2021-04-21 | 雲象科技股份有限公司 | 內視鏡檢測系統及其方法 |
| CN111291755B (zh) * | 2020-02-13 | 2022-11-15 | 腾讯科技(深圳)有限公司 | 对象检测模型训练及对象检测方法、装置、计算机设备和存储介质 |
| WO2021176664A1 (ja) * | 2020-03-05 | 2021-09-10 | オリンパス株式会社 | 検査支援システム、検査支援方法、及び、プログラム |
| JP7529021B2 (ja) * | 2020-05-26 | 2024-08-06 | 日本電気株式会社 | 画像処理装置、制御方法及びプログラム |
| KR102417531B1 (ko) * | 2020-07-08 | 2022-07-06 | 주식회사 메가젠임플란트 | 학습 데이터 생성장치 및 그 장치의 구동방법, 그리고 컴퓨터 판독가능 기록매체 |
| CN116133572A (zh) * | 2020-07-14 | 2023-05-16 | 富士胶片株式会社 | 图像分析处理装置、内窥镜系统、图像分析处理装置的工作方法及图像分析处理装置用程序 |
| KR102222547B1 (ko) * | 2020-07-15 | 2021-03-04 | 주식회사 웨이센 | 인공지능 기반의 대장내시경 영상 분석 방법 |
| KR102255311B1 (ko) | 2020-08-10 | 2021-05-24 | 주식회사 웨이센 | 인공지능 기반의 위내시경 영상 분석 방법 |
| US11730491B2 (en) | 2020-08-10 | 2023-08-22 | Kunnskap Medical, LLC | Endoscopic image analysis and control component of an endoscopic system |
| WO2022054400A1 (ja) * | 2020-09-11 | 2022-03-17 | 富士フイルム株式会社 | 画像処理システム、プロセッサ装置、内視鏡システム、画像処理方法及びプログラム |
| KR102375786B1 (ko) * | 2020-09-14 | 2022-03-17 | 주식회사 뷰노 | 의료 영상에서 이상 소견 탐지 및 판독문 생성 방법 |
| KR102262684B1 (ko) * | 2020-11-13 | 2021-06-09 | 주식회사 웨이센 | 영상 수신 장치의 인공지능 기반의 영상 처리 방법 |
| JP7124041B2 (ja) * | 2020-11-25 | 2022-08-23 | 株式会社朋 | ハンナ病変の指摘のためのプログラム |
| WO2022124315A1 (ja) | 2020-12-08 | 2022-06-16 | 国立研究開発法人産業技術総合研究所 | 内視鏡診断支援方法及び内視鏡診断支援システム |
| KR102505791B1 (ko) * | 2021-01-11 | 2023-03-03 | 한림대학교 산학협력단 | 실시간 영상을 통해 획득되는 병변 판단 시스템의 제어 방법, 장치 및 프로그램 |
| CN112426119B (zh) * | 2021-01-26 | 2021-04-13 | 上海孚慈医疗科技有限公司 | 一种内窥镜筛查处理方法和装置 |
| JP7647864B2 (ja) * | 2021-03-01 | 2025-03-18 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
| WO2022208615A1 (ja) * | 2021-03-29 | 2022-10-06 | 日本電気株式会社 | 画像処理装置、画像処理方法及び記憶媒体 |
| TWI797585B (zh) * | 2021-03-31 | 2023-04-01 | 艾陽科技股份有限公司 | 雷達感測心律方法及其系統 |
| CN113456002A (zh) * | 2021-07-15 | 2021-10-01 | 显微智能科技(湖南)有限公司 | 一种体内癌症细胞定位装置及方法 |
| TWI762388B (zh) * | 2021-07-16 | 2022-04-21 | 國立中正大學 | 以超頻譜檢測物件影像之方法 |
| KR102616961B1 (ko) * | 2021-08-31 | 2023-12-27 | 동국대학교 산학협력단 | 이종 캡슐내시경 간의 도메인 적응에 의한 병증정보 제공 방법 |
| JP2023045168A (ja) * | 2021-09-21 | 2023-04-03 | 学校法人帝京大学 | 医用画像診断支援装置、医用画像診断支援方法およびプログラム |
| TWI789932B (zh) * | 2021-10-01 | 2023-01-11 | 國泰醫療財團法人國泰綜合醫院 | 大腸瘜肉影像偵測方法、裝置及其系統 |
| CN114569043B (zh) * | 2022-01-29 | 2025-09-05 | 重庆天如生物科技有限公司 | 一种基于人工智能的内窥镜辅助检查方法及装置 |
| TWI796156B (zh) * | 2022-03-04 | 2023-03-11 | 國立中正大學 | 以波段用於超頻譜檢測物件影像之方法 |
| WO2024117838A1 (ko) * | 2022-12-01 | 2024-06-06 | 프리베노틱스 주식회사 | 내시경 영상을 분석하여 복수의 병변들에 대한 정보를 제공하기 위한 전자 장치 및 그러한 전자 장치를 포함하는 내시경 검사 시스템 |
| CN115998225A (zh) * | 2022-12-12 | 2023-04-25 | 珠海泰科医疗技术有限公司 | 一种内窥镜病灶区域曝光方法、内窥镜和存储介质 |
| US20240296550A1 (en) * | 2023-03-02 | 2024-09-05 | Bh2 Innovations Inc. | High Speed Detection of Anomalies in Medical Scopes and the Like Using Image Segmentation |
| CN116486367B (zh) * | 2023-03-15 | 2025-08-29 | 卡斯柯信号有限公司 | 一种轨道交通障碍物检测方法、设备及介质 |
| WO2025028737A1 (ko) * | 2023-08-03 | 2025-02-06 | 주식회사 웨이센 | 내시경 영상 분석 결과 시각화 및 대표 영상 선별 시스템과 그 방법 |
| KR102722605B1 (ko) * | 2023-11-21 | 2024-10-28 | 주식회사 메디인테크 | 내시경 영상에서 병변을 탐지하는 방법 및 이를 수행하는 인공신경망 모델을 학습시키는 방법 및 컴퓨팅 장치 |
| US20250279199A1 (en) * | 2024-02-29 | 2025-09-04 | Hong Kong Applied Science and Technology Research Institute Company Limited | Portable Edge AI-Assisted Diagnosis and Quality Control System for Gastrointestinal Endoscopy |
| EP4625315A1 (en) * | 2024-03-28 | 2025-10-01 | Odin Medical Ltd. | Systems and methods for polyp classification |
| CN120727265A (zh) * | 2025-09-03 | 2025-09-30 | 温州市人民医院 | 基于深度学习技术的消化道早期癌症智能辅助诊断系统 |
Family Cites Families (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE602007007340D1 (de) * | 2006-08-21 | 2010-08-05 | Sti Medical Systems Llc | Computergestützte analyse mit hilfe von videodaten aus endoskopen |
| CN101584571A (zh) * | 2009-06-15 | 2009-11-25 | 无锡骏聿科技有限公司 | 一种胶囊内镜辅助读片方法 |
| JP5455550B2 (ja) * | 2009-10-23 | 2014-03-26 | Hoya株式会社 | 電子内視鏡用プロセッサ |
| US20110301447A1 (en) * | 2010-06-07 | 2011-12-08 | Sti Medical Systems, Llc | Versatile video interpretation, visualization, and management system |
| JP5926728B2 (ja) * | 2010-07-26 | 2016-05-25 | ケイジャヤ、エルエルシー | 内科医が直接用いるのに適応したビジュアライゼーション |
| JP5800595B2 (ja) * | 2010-08-27 | 2015-10-28 | キヤノン株式会社 | 医療診断支援装置、医療診断支援システム、医療診断支援の制御方法、及びプログラム |
| JP5670695B2 (ja) * | 2010-10-18 | 2015-02-18 | ソニー株式会社 | 情報処理装置及び方法、並びにプログラム |
| US20170140528A1 (en) * | 2014-01-25 | 2017-05-18 | Amir Aharon Handzel | Automated histological diagnosis of bacterial infection using image analysis |
| KR20160049897A (ko) * | 2014-10-28 | 2016-05-10 | 삼성전자주식회사 | 연속적인 의료 영상을 이용한 컴퓨터 보조 진단 장치 및 방법 |
| US9672596B2 (en) * | 2015-03-31 | 2017-06-06 | Olympus Corporation | Image processing apparatus to generate a reduced image of an endoscopic image |
| WO2017017722A1 (ja) * | 2015-07-24 | 2017-02-02 | オリンパス株式会社 | 処理装置、処理方法及びプログラム |
| WO2017042812A2 (en) * | 2015-09-10 | 2017-03-16 | Magentiq Eye Ltd. | A system and method for detection of suspicious tissue regions in an endoscopic procedure |
| CN105574871A (zh) * | 2015-12-16 | 2016-05-11 | 深圳市智影医疗科技有限公司 | 在放射图像中检测肺部局部性病变的分割分类方法和系统 |
| JPWO2017104192A1 (ja) * | 2015-12-17 | 2017-12-14 | オリンパス株式会社 | 医用観察システム |
| CN106097335B (zh) | 2016-06-08 | 2019-01-25 | 安翰光电技术(武汉)有限公司 | 消化道病灶图像识别系统及识别方法 |
| US9589374B1 (en) * | 2016-08-01 | 2017-03-07 | 12 Sigma Technologies | Computer-aided diagnosis system for medical images using deep convolutional neural networks |
| CN106934799B (zh) * | 2017-02-24 | 2019-09-03 | 安翰科技(武汉)股份有限公司 | 胶囊内窥镜图像辅助阅片系统及方法 |
| TW201902411A (zh) * | 2017-06-09 | 2019-01-16 | 多田智裕 | 藉由消化器官之內視鏡影像之疾病的診斷支援方法、診斷支援系統、診斷支援程式及記憶此診斷支援程式之電腦可讀取之記錄媒體 |
-
2018
- 2018-10-30 TW TW107138465A patent/TW201922174A/zh unknown
- 2018-10-30 JP JP2019520910A patent/JP6657480B2/ja active Active
- 2018-10-30 WO PCT/JP2018/040381 patent/WO2019088121A1/ja not_active Ceased
- 2018-10-30 CN CN201880071367.9A patent/CN111655116A/zh active Pending
- 2018-10-30 KR KR1020207015517A patent/KR20200106028A/ko not_active Abandoned
- 2018-10-30 BR BR112020008774-2A patent/BR112020008774A2/pt not_active IP Right Cessation
- 2018-10-30 SG SG11202003973VA patent/SG11202003973VA/en unknown
- 2018-10-30 US US16/760,458 patent/US11633084B2/en active Active
- 2018-10-30 EP EP18873255.6A patent/EP3705025A4/en not_active Withdrawn
-
2020
- 2020-02-05 JP JP2020018003A patent/JP7335552B2/ja active Active
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002165757A (ja) * | 2000-11-30 | 2002-06-11 | Olympus Optical Co Ltd | 診断支援装置 |
| JP2007280229A (ja) * | 2006-04-11 | 2007-10-25 | Fujifilm Corp | 類似症例検索装置、類似症例検索方法およびそのプログラム |
| JP2018038828A (ja) | 2006-11-15 | 2018-03-15 | シーエフピーエイチ, エル.エル.シー. | ゲームサーバと通信しているゲーム機を決定する装置および方法 |
| WO2012165505A1 (ja) * | 2011-06-02 | 2012-12-06 | オリンパス株式会社 | 蛍光観察装置 |
| WO2016185617A1 (ja) * | 2015-05-21 | 2016-11-24 | オリンパス株式会社 | 画像処理装置、画像処理方法、及び画像処理プログラム |
| JP2017045341A (ja) | 2015-08-28 | 2017-03-02 | カシオ計算機株式会社 | 診断装置、及び診断装置における学習処理方法、並びにプログラム |
| JP2017067489A (ja) | 2015-09-28 | 2017-04-06 | 富士フイルムRiファーマ株式会社 | 診断支援装置、方法及びコンピュータプログラム |
| WO2017170233A1 (ja) * | 2016-03-29 | 2017-10-05 | 富士フイルム株式会社 | 画像処理装置、画像処理装置の作動方法、および画像処理プログラム |
| WO2017175282A1 (ja) * | 2016-04-04 | 2017-10-12 | オリンパス株式会社 | 学習方法、画像認識装置およびプログラム |
| JP2017209232A (ja) | 2016-05-24 | 2017-11-30 | 株式会社三共 | 遊技機 |
| JP2018007967A (ja) | 2016-07-15 | 2018-01-18 | 株式会社三共 | 遊技機 |
Non-Patent Citations (4)
| Title |
|---|
| "Dermatologist-level Classification of Skin Cancer with Deep Neural Networks", NATURE, February 2017 (2017-02-01), Retrieved from the Internet <URL:http://www.natureasia.com/ja-jp/nature/highlights/82762> |
| HOSOKAWA O ET AL., HEPATOGASTROENTEROLOGY, vol. 54, no. 74, 2007, pages 442 - 4 |
| TOSHIAKI HIRASAWA; HIROSHI KAWACHI: "Detection and Diagnosis of Early Gastric Cancer -- Using Conventional Endoscopy", 2016, NIHON MEDICAL CENTER |
| YUICHI MORI: "Novel computer-aided diagnostic system for colorectal lesions by using endocytoscopy", PRESENTED AT DIGESTIVE DISEASE WEEK, 3 May 2014 (2014-05-03), Retrieved from the Internet <URL:http://www.giejournal.org/article/S0016-5107(14)02171-3/fulltext> |
Cited By (59)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12048413B2 (en) | 2018-06-22 | 2024-07-30 | Ai Medical Service Inc. | Diagnostic assistance method, diagnostic assistance system, diagnostic assistance program, and computer-readable recording medium storing therein diagnostic assistance program for disease based on endoscopic image of digestive organ |
| WO2020008834A1 (ja) * | 2018-07-05 | 2020-01-09 | 富士フイルム株式会社 | 画像処理装置、方法及び内視鏡システム |
| JP7218432B2 (ja) | 2018-10-02 | 2023-02-06 | インダストリー アカデミック コオペレーション ファウンデーション、ハルリム ユニヴァーシティ | リアルタイムに取得される胃内視鏡イメージに基づいて胃病変を診断する内視鏡装置及び方法 |
| JP2022507002A (ja) * | 2018-10-02 | 2022-01-18 | インダストリー アカデミック コオペレーション ファウンデーション、ハルリム ユニヴァーシティ | リアルタイムに取得される胃内視鏡イメージに基づいて胃病変を診断する内視鏡装置及び方法 |
| JP2022502150A (ja) * | 2018-10-02 | 2022-01-11 | インダストリー アカデミック コオペレーション ファウンデーション、ハルリム ユニヴァーシティ | 胃内視鏡イメージのディープラーニングを利用して胃病変を診断する装置及び方法 |
| JPWO2020174747A1 (ja) * | 2019-02-26 | 2021-12-02 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、内視鏡システム、医用画像処理方法、及びプログラム |
| WO2020174747A1 (ja) * | 2019-02-26 | 2020-09-03 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、内視鏡システム、医用画像処理方法、及びプログラム |
| JP7143504B2 (ja) | 2019-02-26 | 2022-09-28 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、内視鏡システム、医用画像処理装置の作動方法及びプログラム |
| US12106394B2 (en) | 2019-02-26 | 2024-10-01 | Fujifilm Corporation | Medical image processing apparatus, processor device, endoscope system, medical image processing method, and program |
| JP7623795B2 (ja) | 2019-06-21 | 2025-01-29 | カーブビーム エーアイ リミテッド | 画像内での構造又は物質セグメンテーションに基づいた機械学習分類のための方法及びシステム |
| JP2021002339A (ja) * | 2019-06-21 | 2021-01-07 | ストラックスコープ ピーティワイ リミテッドStraxcorp Pty Ltd | 画像内での構造又は物質セグメンテーションに基づいた機械学習分類のための方法及びシステム |
| WO2021005856A1 (ja) * | 2019-07-08 | 2021-01-14 | 株式会社日立製作所 | 破面解析装置及び破面解析方法 |
| JP2021012570A (ja) * | 2019-07-08 | 2021-02-04 | 株式会社日立製作所 | 破面解析装置及び破面解析方法 |
| US12198321B2 (en) | 2019-07-08 | 2025-01-14 | Hitachi, Ltd. | Fracture surface analysis apparatus and fracture surface analysis method |
| CN112330686A (zh) * | 2019-08-05 | 2021-02-05 | 罗雄彪 | 肺部支气管的分割及标定方法 |
| JPWO2021033303A1 (ja) * | 2019-08-22 | 2021-12-02 | Hoya株式会社 | 訓練データ生成方法、学習済みモデル及び情報処理装置 |
| WO2021033303A1 (ja) * | 2019-08-22 | 2021-02-25 | Hoya株式会社 | 訓練データ生成方法、学習済みモデル及び情報処理装置 |
| WO2021054477A3 (ja) * | 2019-09-20 | 2021-07-22 | 株式会社Aiメディカルサービス | 消化器官の内視鏡画像による疾患の診断支援方法、診断支援システム、診断支援プログラム及びこの診断支援プログラムを記憶したコンピュータ読み取り可能な記録媒体 |
| JP7315809B2 (ja) | 2019-10-08 | 2023-07-27 | 公立大学法人会津大学 | 大腸内視鏡検査補助装置、大腸内視鏡検査補助方法及び大腸内視鏡検査補助プログラム |
| JP2021058464A (ja) * | 2019-10-08 | 2021-04-15 | 公立大学法人会津大学 | 大腸内視鏡検査補助装置、大腸内視鏡検査補助方法及び大腸内視鏡検査補助プログラム |
| WO2021095446A1 (ja) * | 2019-11-11 | 2021-05-20 | 富士フイルム株式会社 | 情報表示システムおよび情報表示方法 |
| JPWO2021095446A1 (en) | | | |
| JP7257544B2 (ja) | 2019-11-11 | 2023-04-13 | 富士フイルム株式会社 | 情報表示システムおよび情報表示方法 |
| JP2021079565A (ja) * | 2019-11-15 | 2021-05-27 | オーアイ・イノベーション株式会社 | 髄位置推定装置および製材システム |
| JP7320260B2 (ja) | 2019-11-15 | 2023-08-03 | オーアイ・イノベーション株式会社 | 髄位置推定装置および製材システム |
| JP2024050897A (ja) * | 2019-12-13 | 2024-04-10 | Hoya株式会社 | 機械学習モデルの出力を使用して視覚的証拠に基づいて映像信号内の被写体を検出するための装置、方法、およびコンピュータ可読記憶媒体 |
| JP7696031B2 (ja) | 2019-12-13 | 2025-06-19 | Hoya株式会社 | 機械学習モデルの出力を使用して視覚的証拠に基づいて映像信号内の被写体を検出するための装置、方法、およびコンピュータ可読記憶媒体 |
| US12340511B2 (en) | 2019-12-13 | 2025-06-24 | Hoya Corporation | Apparatus, method and computer-readable storage medium for detecting objects in a video signal based on visual evidence using an output of a machine learning model |
| JP2021100555A (ja) * | 2019-12-24 | 2021-07-08 | 富士フイルム株式会社 | 医療画像処理装置、内視鏡システム、診断支援方法及びプログラム |
| JP7346285B2 (ja) | 2019-12-24 | 2023-09-19 | 富士フイルム株式会社 | 医療画像処理装置、内視鏡システム、医療画像処理装置の作動方法及びプログラム |
| US12169963B2 (en) | 2020-03-11 | 2024-12-17 | Olympus Corporation | Processing system, image processing method, learning method, and processing device |
| JP2023519472A (ja) * | 2020-03-21 | 2023-05-11 | スマート・メディカル・システムズ・リミテッド | 機械的に強化されたトポグラフィのための人工知能検出システム |
| JP7448923B2 (ja) | 2020-03-31 | 2024-03-13 | 日本電気株式会社 | 情報処理装置、情報処理装置の作動方法、及びプログラム |
| WO2021199294A1 (ja) * | 2020-03-31 | 2021-10-07 | 日本電気株式会社 | 情報処理装置、表示方法、及びプログラムが格納された非一時的なコンピュータ可読媒体 |
| JPWO2021199294A1 (en) | | | |
| US12419492B2 (en) | 2020-03-31 | 2025-09-23 | Nec Corporation | Information processing device, display method, and non-transitory computer-readable medium for storing a program for lesion detection processing supporting decision making using machine learning |
| CN115397303A (zh) * | 2020-04-08 | 2022-11-25 | 富士胶片株式会社 | 处理器装置及其工作方法 |
| WO2021205777A1 (ja) * | 2020-04-08 | 2021-10-14 | 富士フイルム株式会社 | プロセッサ装置及びその作動方法 |
| JPWO2021205777A1 (en) | | | |
| JP7447243B2 (ja) | 2020-04-08 | 2024-03-11 | 富士フイルム株式会社 | プロセッサ装置及びその作動方法 |
| JP7550409B2 (ja) | 2020-04-27 | 2024-09-13 | 公益財団法人がん研究会 | 画像診断装置、画像診断方法、および画像診断プログラム |
| WO2021220822A1 (ja) * | 2020-04-27 | 2021-11-04 | 公益財団法人がん研究会 | 画像診断装置、画像診断方法、画像診断プログラムおよび学習済みモデル |
| JPWO2021220822A1 (en) | | | |
| CN115460968A (zh) * | 2020-04-27 | 2022-12-09 | 公益财团法人癌研究会 | 图像诊断装置、图像诊断方法、图像诊断程序和学习完毕模型 |
| WO2021220279A1 (en) * | 2020-05-01 | 2021-11-04 | Given Imaging Ltd. | Systems and methods for selecting images of event indicators |
| US20230218175A1 (en) * | 2020-07-31 | 2023-07-13 | Tokyo University Of Science Foundation | Image Processing Device, Image Processing Method, Image Processing Program, Endoscope Device, and Endoscope Image Processing System |
| JP2022135013A (ja) * | 2021-03-04 | 2022-09-15 | Hoya株式会社 | プログラム、情報処理方法及び情報処理装置 |
| JP7565824B2 (ja) | 2021-03-04 | 2024-10-11 | Hoya株式会社 | プログラム、情報処理方法及び情報処理装置 |
| EP4233684A4 (en) * | 2021-03-04 | 2024-10-30 | Hoya Corporation | PROGRAM, INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING DEVICE |
| WO2022185651A1 (ja) * | 2021-03-04 | 2022-09-09 | Hoya株式会社 | プログラム、情報処理方法及び情報処理装置 |
| JP7603560B2 (ja) | 2021-09-17 | 2024-12-20 | Hoya株式会社 | コンピュータプログラム、情報処理方法及び内視鏡 |
| WO2023042546A1 (ja) * | 2021-09-17 | 2023-03-23 | Hoya株式会社 | コンピュータプログラム、情報処理方法及び内視鏡 |
| JP2023044308A (ja) * | 2021-09-17 | 2023-03-30 | Hoya株式会社 | コンピュータプログラム、情報処理方法及び内視鏡 |
| JP2023106327A (ja) * | 2022-01-20 | 2023-08-01 | 株式会社Aiメディカルサービス | 検査支援装置、検査支援方法および検査支援プログラム |
| WO2024009631A1 (ja) * | 2022-07-06 | 2024-01-11 | 富士フイルム株式会社 | 画像処理装置及び画像処理装置の作動方法 |
| CN116681681B (zh) * | 2023-06-13 | 2024-04-02 | 富士胶片(中国)投资有限公司 | 内窥镜图像的处理方法、装置、用户设备及介质 |
| CN116681681A (zh) * | 2023-06-13 | 2023-09-01 | 富士胶片(中国)投资有限公司 | 内窥镜图像的处理方法、装置、用户设备及介质 |
| WO2025027815A1 (ja) * | 2023-08-02 | 2025-02-06 | オリンパスメディカルシステムズ株式会社 | 内視鏡診断支援方法、推論モデル、内視鏡画像処理装置、内視鏡画像処理システムおよび内視鏡画像処理プログラム |
| WO2025084278A1 (ja) * | 2023-10-20 | 2025-04-24 | オリンパス株式会社 | 画像診断装置および画像診断方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20200337537A1 (en) | 2020-10-29 |
| JP6657480B2 (ja) | 2020-03-04 |
| KR20200106028A (ko) | 2020-09-10 |
| JP7335552B2 (ja) | 2023-08-30 |
| JP2020073081A (ja) | 2020-05-14 |
| EP3705025A4 (en) | 2021-09-08 |
| JPWO2019088121A1 (ja) | 2019-11-14 |
| TW201922174A (zh) | 2019-06-16 |
| US11633084B2 (en) | 2023-04-25 |
| CN111655116A (zh) | 2020-09-11 |
| SG11202003973VA (en) | 2020-05-28 |
| BR112020008774A2 (pt) | 2020-12-22 |
| EP3705025A1 (en) | 2020-09-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7335552B2 (ja) | 画像診断支援装置、学習済みモデル、画像診断支援装置の作動方法および画像診断支援プログラム | |
| JP7216376B2 (ja) | 消化器官の内視鏡画像による疾患の診断支援方法、診断支援システム、診断支援プログラム及びこの診断支援プログラムを記憶したコンピュータ読み取り可能な記録媒体 | |
| JP7550409B2 (ja) | 画像診断装置、画像診断方法、および画像診断プログラム | |
| Cai et al. | Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video) | |
| Igarashi et al. | Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet | |
| CN113573654B (zh) | 用于检测并测定病灶尺寸的ai系统、方法和存储介质 | |
| Shimamoto et al. | Real-time assessment of video images for esophageal squamous cell carcinoma invasion depth using artificial intelligence | |
| US20250143543A1 (en) | Method for Real-Time Detection of Objects, Structures or Patterns in a Video, an Associated System and an Associated Computer Readable Medium | |
| Pogorelov et al. | Deep learning and hand-crafted feature based approaches for polyp detection in medical videos | |
| JP7218432B2 (ja) | リアルタイムに取得される胃内視鏡イメージに基づいて胃病変を診断する内視鏡装置及び方法 | |
| CN107730489A (zh) | 无线胶囊内窥镜小肠病变计算机辅助检测系统及检测方法 | |
| CN113164010A (zh) | 利用消化器官的内窥镜影像的疾病的诊断支持方法、诊断支持系统、诊断支持程序及存储了该诊断支持程序的计算机可读记录介质 | |
| CN109635871B (zh) | 一种基于多特征融合的胶囊内窥镜图像分类方法 | |
| WO2021054477A2 (ja) | 消化器官の内視鏡画像による疾患の診断支援方法、診断支援システム、診断支援プログラム及びこの診断支援プログラムを記憶したコンピュータ読み取り可能な記録媒体 | |
| CN116745861B (zh) | 通过实时影像获得的病变判断系统的控制方法、装置及记录媒介 | |
| CN115018767B (zh) | 基于本征表示学习的跨模态内镜图像转换及病灶分割方法 | |
| WO2020224153A1 (zh) | 一种基于深度学习和图像增强的nbi图像处理方法及其应用 | |
| CN114372951A (zh) | 基于图像分割卷积神经网络的鼻咽癌定位分割方法和系统 | |
| TWI820624B (zh) | 基於語義分割於影像辨識之方法 | |
| US20230162356A1 (en) | Diagnostic imaging device, diagnostic imaging method, diagnostic imaging program, and learned model | |
| Katayama et al. | Development of Computer-Aided Diagnosis System Using Single FCN Capable for Indicating Detailed Inference Results in Colon NBI Endoscopy | |
| Yan | Intelligent diagnosis of precancerous lesions in gastrointestinal endoscopy based on advanced deep learning techniques and limited data | |
| Rodriguez | Integration of Multimodal, Multiscale Imaging and Biomarker Data for Squamous Precancer Detection and Diagnosis | |
| Andrade | A Portable System for Screening of Cervical Cancer |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| ENP | Entry into the national phase |
Ref document number: 2019520910 Country of ref document: JP Kind code of ref document: A |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18873255 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2018873255 Country of ref document: EP Effective date: 20200602 |
|
| REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112020008774 Country of ref document: BR |
|
| REG | Reference to national code |
Ref country code: BR Ref legal event code: B01E Ref document number: 112020008774 Country of ref document: BR Free format text: CLARIFY, WITHIN 60 (SIXTY) DAYS, BY SUBMITTING SUPPORTING DOCUMENTATION, THE EXCLUSION OF THE APPLICANT AI MEDICAL SERVICE INC., WHICH APPEARS IN INTERNATIONAL PUBLICATION WO/2019/088121 OF 09/05/2019, FROM THE LIST OF APPLICANTS GIVEN IN THE FORM OF PETITION NO. 870200094553 OF 29/07/2020.
|
| ENP | Entry into the national phase |
Ref document number: 112020008774 Country of ref document: BR Kind code of ref document: A2 Effective date: 20200430 |