WO2022267981A1 - Endoscope image recognition method, electronic device and storage medium - Google Patents
Endoscope image recognition method, electronic device and storage medium
- Publication number
- WO2022267981A1 (PCT application no. PCT/CN2022/099318)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- disease
- neural network
- network model
- test sample
- image
- Prior art date
Classifications
- A61B1/00004 — Operational features of endoscopes characterised by electronic signal processing
- A61B1/000094 — Electronic signal processing of image signals during use of the endoscope, extracting biological structures
- A61B1/000096 — Electronic signal processing of image signals during use of the endoscope, using artificial intelligence
- A61B1/00165 — Optical arrangements with light-conductive means, e.g. fibre optics
- A61B1/041 — Capsule endoscopes for imaging
- G06F18/213 — Feature extraction, e.g. by transforming the feature space
- G06F18/2415 — Classification techniques based on parametric or probabilistic models
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Combinations of networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- G06T7/00 — Image analysis
- G06T7/0012 — Biomedical image inspection
- G06V10/764 — Image or video recognition using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/806 — Fusion of extracted features
- G06V10/82 — Image or video recognition using neural networks
- G06V2201/032 — Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.
- G16H30/40 — ICT for processing medical images, e.g. editing
- G16H40/63 — ICT for the operation of medical equipment or devices, for local operation
- G16H40/67 — ICT for the operation of medical equipment or devices, for remote operation
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/70 — ICT for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the present invention relates to the field of medical imaging equipment, and more specifically to a deep-learning-based endoscope image recognition method, an electronic device, and a storage medium.
- A capsule endoscope is an effective diagnosis and treatment tool for examining patients' digestive tract diseases; it integrates devices such as a camera, LED lights, and a wireless communication module.
- the patient swallows the capsule endoscope, which takes images while traveling through the digestive tract and transmits them outside the patient's body; the acquired images are then analyzed to identify lesions in the digestive tract.
- the advantage of capsule endoscopy is that it causes less pain to patients and can inspect the entire digestive tract. As a revolutionary technological breakthrough, it has been widely used.
- the capsule endoscope collects a large number of images (for example, tens of thousands) during an examination, making the image reading work arduous and time-consuming.
- lesion identification using image processing and computer vision techniques has gained widespread attention.
- lesion recognition is performed on each image collected by the capsule endoscope through a convolutional neural network, and a diagnosis result is obtained. Even if the per-image accuracy of such an endoscopic image recognition method is as high as 90%, with the large number of images collected from a patient's digestive tract, a single wrong lesion recognition result on any image will produce a wrong case diagnosis result.
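The failure mode described above can be made concrete with a short calculation: if each image is classified independently with accuracy p, the chance that no image in a case of N images is misclassified is p^N. A minimal sketch (illustrative figures only, assuming independence):

```python
def case_level_accuracy(p: float, n_images: int) -> float:
    """Probability that all n_images are classified correctly, assuming
    independent per-image accuracy p (a simplifying assumption)."""
    return p ** n_images

# even 90% per-image accuracy collapses over a full examination:
print(round(case_level_accuracy(0.90, 10), 4))   # 0.3487
print(case_level_accuracy(0.90, 100))            # on the order of 1e-5
```

This is why the method below aggregates evidence over sets of images rather than trusting each image in isolation.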
- the object of the present invention is to provide an endoscope image recognition method, an electronic device, and a storage medium in which disease categories are first predicted for multiple original images on a single-image basis, and disease identification is then performed on the multiple image features of test sample sets built from those prediction results, improving the accuracy of disease identification.
- a method for endoscopic image recognition comprises: using a first neural network model to perform disease category prediction on multiple original images; establishing, based on the per-image disease category prediction results, a test sample set for each of multiple disease categories, each test sample set including the image features of a predetermined number of original images; using a second neural network model to perform disease identification on the test sample sets of the multiple disease categories respectively; and superimposing the disease identification results of the multiple disease categories to obtain a case diagnosis result, wherein the multiple image features in a test sample set are weighted and combined to obtain the disease identification result.
- the first neural network model is a convolutional neural network model
- the convolutional neural network model takes a single one of the multiple original images as input and outputs its image features and the classification probabilities of the multiple disease categories.
- the second neural network model is a recurrent neural network model
- the recurrent neural network model takes the multiple image features in a test sample set as input and outputs the disease identification result corresponding to that test sample set.
- the second neural network model includes: a first fully connected layer, which performs dimensionality reduction on the multiple image features in the test sample set; a bidirectional long short-term memory layer, which predicts hidden states for the image features in the forward and backward directions respectively; and an attention mechanism, which weights and combines the hidden states of the multiple image features into a final feature, the second neural network model obtaining the disease identification result based on the final feature.
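The layered structure just described (dimensionality reduction, forward and backward hidden-state prediction, weighted combination into a final feature) can be sketched as a toy data flow in plain Python. The recurrences below are illustrative stand-ins, not the patent's actual LSTM equations, and the uniform weights stand in for the learned attention:

```python
def reduce_dim(feature):
    # stand-in for the first fully connected layer: one scalar per feature
    return sum(feature) / len(feature)

def bidirectional_states(xs, decay=0.5):
    # toy recurrences over the sequence; a real model would use LSTM cells
    fwd, h = [], 0.0
    for x in xs:                        # forward direction: earlier steps
        h = decay * h + x
        fwd.append(h)
    bwd, h = [], 0.0
    for x in reversed(xs):              # backward direction: later steps
        h = decay * h + x
        bwd.append(h)
    bwd.reverse()
    return [f + b for f, b in zip(fwd, bwd)]  # combine both directions

def final_feature(features):
    xs = [reduce_dim(f) for f in features]         # dimensionality reduction
    states = bidirectional_states(xs)              # hidden state per step
    weights = [1.0 / len(states)] * len(states)    # attention stub: uniform
    return sum(w * s for w, s in zip(weights, states))
```

The point of the sketch is the shape of the pipeline: every image feature contributes a hidden state informed by both earlier and later images, and the final feature is a weighted mix of all of them.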
- the first fully connected layer includes a plurality of fully connected units, and each of the plurality of fully connected units performs dimensionality reduction processing on a corresponding image feature.
- the bidirectional long short-term memory layer includes multiple forward long short-term memory units and multiple backward long short-term memory units; the forward units each perform forward prediction on a corresponding image feature,
- and the backward units each perform backward prediction on a corresponding image feature.
- the weighted combination includes a weighted summation of the hidden states of the multiple image features, and the weight coefficient of each image feature represents its influence on identifying the corresponding disease category.
- the weight coefficients of the multiple image features are computed as follows (the original formula images are not reproduced; this is the standard additive-attention form implied by the variables defined below):

  e_t = W_e · tanh(W_u · H_t + b_u)

  a_t = exp(e_t) / Σ_k exp(e_k)

  where W_u and W_e represent weight matrices, b_u represents the bias term, H_t represents the hidden state obtained by the bidirectional long short-term memory layer at step t, e_t represents the influence value, and a_t represents the weight coefficient.
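A scalar sketch of this weighting, assuming the standard additive-attention form (a tanh projection scored by W_e, then a softmax over steps) — an assumption, since the patent's formula images are not reproduced here:

```python
import math

def attention_weights(hidden_states, w_u, w_e, b_u):
    """Scalar sketch: e_t = w_e * tanh(w_u * H_t + b_u), a = softmax(e)."""
    e = [w_e * math.tanh(w_u * h + b_u) for h in hidden_states]
    m = max(e)                              # subtract max for stability
    exp_e = [math.exp(x - m) for x in e]
    z = sum(exp_e)
    return [x / z for x in exp_e]

def weighted_combine(hidden_states, weights):
    # the final feature is the weighted sum of the hidden states
    return sum(a * h for a, h in zip(weights, hidden_states))
```

In a real model H_t is a vector and W_u, W_e are matrices, but the structure is the same: each step gets a scalar influence value, the softmax normalizes influences into coefficients that sum to 1, and the coefficients weight the hidden states.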
- the step of establishing the test sample sets of the multiple disease categories includes: for each of the multiple disease categories, selecting from the multiple original images the image features of a predetermined number of original images with the highest classification probability for that category to form a test sample set.
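This selection step can be sketched in a few lines; `predictions`, the tuple layout, and `s` are hypothetical names for illustration, not identifiers from the patent:

```python
def build_test_sample_sets(predictions, s):
    """predictions: list of (image_features, {category: probability}) pairs,
    one per original image. Returns, per category, the features of the s
    images with the highest classification probability for that category."""
    if not predictions:
        return {}
    sample_sets = {}
    for cat in predictions[0][1]:
        ranked = sorted(predictions, key=lambda p: p[1][cat], reverse=True)
        sample_sets[cat] = [feats for feats, _ in ranked[:s]]
    return sample_sets
```

Each category thus gets its own test sample set drawn from the same pool of original images, so one image's features may appear in several sets.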
- the predetermined number is any integer within the range of 2-128.
- the multiple original images are collected by any one of the following endoscopes: a fiber-optic endoscope, an active capsule endoscope, or a passive capsule endoscope.
- an electronic device includes a memory and a processor; the memory stores a computer program that can run on the processor, and the processor, when executing the program, implements the steps of the above deep-learning-based endoscope image recognition method.
- a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above deep learning-based endoscope image recognition method are implemented.
- the first neural network model is used for disease prediction
- the second neural network model is used for disease identification.
- the multiple image features in the test sample set are weighted and combined to obtain the disease identification result, which can improve the accuracy of disease identification.
- multiple disease identification results are obtained based on multiple test sample sets corresponding to multiple disease categories, and the identification results of multiple disease categories are superimposed to obtain a case diagnosis result.
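Superimposing the per-category results can be as simple as collecting every category whose test sample set was identified as positive; a minimal sketch with hypothetical names:

```python
def case_diagnosis(identification_results):
    """identification_results: {disease_category: bool} — one disease
    identification result per test sample set. The case diagnosis is the
    sorted list of categories identified as present."""
    return sorted(cat for cat, positive in identification_results.items()
                  if positive)
```

For example, `case_diagnosis({"erosion": True, "polyp": False, "ulcer": True})` yields the case-level finding that erosion and ulcer are present.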
- the second neural network model includes a bidirectional long short-term memory layer, which predicts the hidden states of the multiple image features in the forward and backward directions respectively and thereby combines image features from earlier and later moments for disease identification, so that the accuracy of disease identification can be further improved.
- each test sample set includes the image features of a predetermined number of original images, for example 2 to 128, so that both the accuracy of disease identification and the computation time can be balanced.
- Fig. 1 shows a schematic structural diagram of a capsule endoscope system.
- Fig. 2 shows a schematic sectional view of an example of a capsule endoscope.
- Fig. 3 shows a flowchart of an endoscope image recognition method according to an embodiment of the present invention.
- Fig. 4 shows a schematic block diagram of an endoscope image recognition method according to an embodiment of the present invention.
- Fig. 5 shows a schematic block diagram of a first neural network model in an endoscope image recognition method according to an embodiment of the present invention.
- Fig. 6 shows a schematic block diagram of a second neural network model in the endoscope image recognition method according to an embodiment of the present invention.
- Fig. 1 shows a schematic structural diagram of a capsule endoscope system.
- the capsule endoscope system includes, for example, a host 104 , a magnetic ball 105 , a three-axis displacement base 106 , a magnetic ball bracket 107 , and a wireless receiving device 108 .
- the magnetic ball bracket 107 includes a first end connected to the three-axis displacement base 106 and a second end connected to the magnetic ball 105 .
- the three-axis displacement base 106 can, for example, translate along three mutually perpendicular coordinate axes.
- the magnetic ball bracket 107 translates with the three-axis displacement base 106 and allows the magnetic ball 105 to rotate relative to the bracket in the horizontal and vertical planes.
- a motor and a lead screw are used to drive the translation of the three-axis displacement base 106
- a motor and a belt are used to drive the rotation of the magnetic ball 105 .
- the magnetic ball 105 is composed of, for example, a permanent magnet, and includes N poles and S poles facing each other. When the attitude of the magnetic ball 105 changes, an external magnetic field whose position and orientation change accordingly is generated.
- the patient 101 swallows the capsule endoscope 10 and lies flat on the bed 102 , for example.
- the capsule endoscope 10 travels along the digestive tract.
- the interior of the capsule endoscope 10 includes permanent magnets.
- the host computer 104 sends operation commands to the three-axis displacement base 106 and the magnetic ball bracket 107 to control the attitude change of the magnetic ball 105 .
- the external magnetic field generated by the magnetic ball 105 acts on the permanent magnet, so the position and orientation of the capsule endoscope 10 in the patient's digestive tract can be controlled.
- the capsule endoscope 10 takes images while traveling in the digestive tract, and transmits the images to a wireless receiving device 108 outside the patient's body.
- the host computer 104 is connected with the wireless receiving device 108, and is used for acquiring images collected by the capsule endoscope 10, so as to analyze the images and identify lesions in the digestive tract.
- Fig. 2 shows a schematic sectional view of an example of a capsule endoscope.
- the capsule endoscope 10 includes a housing 11 and circuit components located in the housing 11 .
- the casing 11 is made of polymer materials such as plastic, and includes a transparent end for providing an illumination light path and a shooting light path.
- the circuit assembly includes an image sensor 12 , a first circuit board 21 , a permanent magnet 15 , a battery 16 , a second circuit board 22 , and a wireless transmitter 17 arranged in sequence along the main axis of the casing 11 .
- the image sensor 12 is opposite to the transparent end of the casing 11 , for example, installed in the middle of the first circuit board 21 .
- a plurality of LEDs 13 surrounding the image sensor 12 are also mounted on the first circuit board 21.
- the wireless transmitter 17 is installed on the second circuit board 22 .
- the first circuit board 21 and the second circuit board 22 are connected via a flexible circuit board 23 , and the permanent magnet 15 and the battery 16 are sandwiched between them.
- the positive and negative contacts of the battery 16 are provided using a flexible circuit board 23 or an additional circuit board.
- the circuit assembly may also include a limiting block 18 fixedly connected to the second circuit board 22 for engaging the flexible circuit board 23 or the housing 11 .
- a plurality of LEDs 13 are lit to provide irradiation light through the end of the housing, and the image sensor 12 acquires an image of the patient's digestive tract through the end of the housing.
- the image data is transmitted to the wireless transmitter 17 via the flexible circuit board 23 and sent to the wireless receiving device 108 outside the patient's body, so that the host computer 104 can acquire images for lesion analysis.
- Figs. 3 and 4 respectively show a flowchart and a schematic block diagram of an endoscope image recognition method according to an embodiment of the present invention.
- the magnetic ball is used to control the position and orientation of the capsule endoscope, and the capsule endoscope collects a large number of original images of the patient's digestive tract at different positions and orientations; the endoscope image recognition method is then applied to these original images to obtain a case diagnosis result.
- the above-mentioned capsule endoscope system, which includes an active capsule endoscope for collecting images of the digestive tract, is only one way to obtain the original images.
- the original images can also be images of the digestive tract acquired through a fiber-optic endoscope, images collected through a passive capsule endoscope, etc.
- the first neural network model is used to predict the disease category of each single original image, so as to obtain that image's features and the classification probabilities of the disease categories, where a classification probability is the probability that the single image is recognized as corresponding to a given disease category.
- the first neural network model is, for example, a convolutional neural network (abbreviated as CNN) model.
- the first neural network model includes, for example, multiple convolutional layers, at least one pooling layer, at least one fully connected layer, and at least one normalized exponential layer (e.g., a softmax layer).
- the convolution operation can extract different features of the image.
- Multiple convolutional layers can sequentially extract low-level image features and high-level image features.
- the pooling layer down-samples image features (i.e., low-level image features and high-level image features), thereby compressing the data and parameters of image features while maintaining the invariance of image features.
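Downsampling by pooling can be illustrated with a one-dimensional max pooling over a feature vector (a sketch; real pooling layers operate on 2-D feature maps):

```python
def max_pool_1d(values, window):
    """Keep the maximum of each non-overlapping window (stride == window),
    compressing the data while preserving the strongest responses."""
    return [max(values[i:i + window]) for i in range(0, len(values), window)]

# e.g. max_pool_1d([1, 5, 2, 2, 9, 0], 2) -> [5, 2, 9]
```

Halving the length this way is what makes the extracted features cheaper to store and compare while remaining largely invariant to small shifts.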
- Each node of the fully connected layer is connected to all the nodes of the previous layer, and is used to combine the final features extracted by the previous layer (that is, the downsampled image features) for classification.
- the normalized index layer is used to map the output of the previous layer (for example, the fully connected layer) to a probability value in the interval (0, 1), so as to obtain the classification probability of the corresponding disease.
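The normalized exponential mapping can be sketched directly; the max-subtraction is a standard numerical-stability detail not mentioned in the text:

```python
import math

def softmax(logits):
    """Map arbitrary scores to probabilities in (0, 1) that sum to 1."""
    m = max(logits)                        # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Applied to the fully connected layer's outputs, one score per disease category, this yields the classification probabilities used to rank images in the next step.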
- the first neural network model can be obtained through training with labeled training sample sets. Each single original image collected during an examination is used as the input of the first neural network model; the image features are extracted from the pooling layer, and the classification probabilities are computed by the normalized exponential layer.
- the endoscope image recognition method of the present invention is not limited to a specific convolutional neural network (CNN) model; commonly used network models such as ResNet, DenseNet, and MobileNet can be used.
- the applicant disclosed a convolutional neural network model that can be applied to this step in Chinese patent application 202110010379.4.
- the input of the first neural network model is a single image from at least a part of the original images, and each single image is processed to obtain the corresponding disease categories and classification probabilities.
- the disease category includes at least one of erosion, hemorrhage, ulcer, polyp, protuberance, telangiectasia, vascular malformation, diverticulum, and parasite. In this embodiment, a total of 9 disease categories are listed. It can be understood that the number of disease categories that can be identified by the first neural network model is related to the training sample set, and the present invention is not limited to a specific number of disease categories.
- in step S02, for each of the different disease categories, the image features of the images with the highest disease classification probability are selected from the original images to form a test sample set.
- the original images for which the disease category has been predicted are sorted by classification probability, and the image features of the original images with the highest classification probability for the corresponding disease category are selected to form the respective test sample sets.
- the image features in the test sample set are preferably the image features output by the pooling layer.
- the number S of images in the test sample set for each disease category can be a predetermined number, for example any integer in the range of 2 to 128, so as to balance the accuracy of disease identification against its computation time.
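The per-category top-S selection described above can be sketched as follows (the data structures are hypothetical; the patent does not prescribe an implementation):

```python
def build_test_sample_sets(predictions, num_categories, s):
    """predictions: list of (image_id, features, probs), where probs is the
    per-category classification probability list from the first model.
    Returns, per category, the features of the S highest-probability images."""
    sample_sets = {}
    for category in range(num_categories):
        ranked = sorted(predictions,
                        key=lambda p: p[2][category], reverse=True)
        sample_sets[category] = [features for _, features, _ in ranked[:s]]
    return sample_sets

# Tiny worked example with 2 categories and S = 2.
preds = [
    ("img1", "f1", [0.9, 0.1]),
    ("img2", "f2", [0.4, 0.6]),
    ("img3", "f3", [0.7, 0.3]),
]
sets_ = build_test_sample_sets(preds, num_categories=2, s=2)
# Category 0 keeps the features of img1 and img3.
```

In the patent's pipeline the `features` entries would be the pooling-layer outputs of the first model, so the second model never re-extracts features from raw images.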
- the number of disease categories and the test sample set for each disease category can be adjusted according to actual needs.
- the collected images are input into the first neural network model (i.e., the convolutional neural network model) for disease prediction.
- the first neural network model processes each collected image and, from its image features, obtains the probability that the image belongs to each disease category. In this way, the images classified into category 1 (Image 1, Image 2, Image 3, ..., Image M) are obtained, and image samples are selected in descending order of classification probability, e.g., Image 3, Image M, Image 2, ..., up to S samples. The other categories are processed in the same way as category 1 and are not repeated here.
- based on the selected image samples, the first neural network model outputs the image features corresponding to those samples, which form the test sample set.
- the second neural network model is used to perform disease identification on the test sample sets of multiple diseases.
- the second neural network model is, for example, a recurrent neural network (abbreviated as RNN) model.
- for each test sample set of the multiple disease categories, the second neural network model performs disease identification on the test sample set of image features extracted from multiple original images, i.e., on the test sample set output by the first neural network model, so as to improve the accuracy of disease identification.
- for example, the S images with the highest suspected-erosion probability are selected by the first neural network model, and the category-1 (e.g., erosion) image features extracted from each of these S images form the test sample set
- the test sample set is input to the second neural network model
- the second neural network model can then confirm whether the patient really suffers from a category-1 disease (e.g., an erosive disease); other disease categories are handled in the same way.
- in step S04, the identification results of the multiple disease categories are superimposed to obtain a case diagnosis result.
- the large volume of original images collected during the patient's examination can thus be processed into identification results for multiple disease categories, which are superimposed to obtain the case diagnosis result.
- the case diagnosis result states which of the 9 disease categories the patient's lesions include. For example, if the recognition results for the two categories of hemorrhage and polyp indicate lesions while the other categories indicate no lesions, the superimposed case diagnosis result is that the patient has lesions of the hemorrhage and polyp categories.
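The superposition step above can be sketched as follows (the category names and per-category results are illustrative):

```python
def superimpose(results):
    """results: mapping disease category -> bool (lesion confirmed by the
    second model). The case diagnosis is the set of positive categories."""
    return sorted(c for c, positive in results.items() if positive)

recognition = {
    "erosion": False, "hemorrhage": True, "ulcer": False,
    "polyp": True, "protuberance": False, "telangiectasia": False,
    "vascular malformation": False, "diverticulum": False, "parasite": False,
}
diagnosis = superimpose(recognition)
# -> ["hemorrhage", "polyp"]
```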
- the second neural network model in the endoscopic image recognition method according to the embodiment of the present invention will be described in detail below with reference to FIG. 6 .
- the second neural network model is a recurrent neural network model (RNN).
- the recurrent neural network model is a recurrent neural network with sequence data as input.
- the second neural network model includes, for example, at least one first fully connected layer, at least one bidirectional long short-term memory (LSTM) layer, an attention mechanism, at least one second fully connected layer, and at least one normalized exponential layer (e.g., a softmax layer).
- the test sample set of a single disease type obtained from the disease type prediction of the first neural network model is used as the input of the second neural network model.
- the test sample set includes multiple image features obtained from multiple original images.
- the first fully connected layer includes multiple fully connected units, each of which performs dimensionality reduction on a corresponding image feature; that is, the fully connected units map multiple high-dimensional image features to low-dimensional ones.
- the bidirectional long short-term memory layer includes multiple forward long short-term memory units and multiple backward long short-term memory units, and predicts hidden states for the multiple image features in the forward and backward directions, respectively
- each forward long short-term memory unit performs forward prediction on a corresponding image feature
- each backward long short-term memory unit performs backward prediction on a corresponding image feature.
- the inventors of the present invention have noticed that when doctors diagnose from digestive tract images (especially continuously captured ones), they refer not only to the image taken at the preceding moment but also to the image taken at the following moment, combining the images from both moments for diagnosis.
- the recurrent neural network model in existing capsule endoscope image processing methods uses a unidirectional long short-term memory layer, so it can only predict the output at the next moment from the input at the previous moment and cannot obtain accurate disease identification results from the collected images. Unlike the existing recurrent neural network model, the recurrent neural network model of the present invention adopts a bidirectional long short-term memory layer and identifies diseases by combining image features from both the preceding and following moments.
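To illustrate how a bidirectional layer sees both past and future context, here is a toy NumPy sketch that replaces each LSTM unit with a simple tanh recurrence (an assumption for brevity; a real implementation would use LSTM cells with gates):

```python
import numpy as np

def simple_rnn(xs, w_x, w_h):
    # One tanh recurrence step per input; stands in for an LSTM unit.
    h = np.zeros(w_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(w_x @ x + w_h @ h)
        states.append(h)
    return states

rng = np.random.default_rng(0)
features = [rng.normal(size=4) for _ in range(5)]  # 5 dimension-reduced image features
w_x, w_h = rng.normal(size=(3, 4)), rng.normal(size=(3, 3))

forward = simple_rnn(features, w_x, w_h)               # computed front to back
backward = simple_rnn(features[::-1], w_x, w_h)[::-1]  # computed back to front
# H_t concatenates the forward and backward hidden states at step t,
# so each step carries context from both earlier and later images.
H = [np.concatenate([f, b]) for f, b in zip(forward, backward)]
```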
- the input of each forward long-short-term memory unit is a corresponding image feature that has been dimensionally reduced, and the output is a corresponding hidden state.
- the forward long short-term memory unit calculates the input image features from front to back according to the input order.
- the input of each backward long short-term memory unit is a corresponding image feature that has been dimensionally reduced, and the output is a corresponding hidden state.
- the backward long short-term memory unit calculates the input image features from back to front of the input order. The calculation is as follows:

  h_t = LSTM(x_t, h_{t-1})

  h′_t = LSTM(x_t, h′_{t+1})

- x_t represents the dimension-reduced image feature input at step t
- h_t represents the hidden state of the forward long short-term memory unit at step t
- h′_t represents the hidden state of the backward long short-term memory unit at step t.
- the bidirectional LSTM layer can obtain multiple hidden states corresponding to multiple image features.
- the attention mechanism of the second neural network model is used to weight the hidden states of multiple image features into the final feature.
- the weight coefficient of each image feature represents its influence on disease identification, as given by the following formulas:

  e_t = W_e · tanh(W_u · H_t + b_u)

  a_t = softmax(e_t)

- W_u and W_e represent weight matrices
- b_u represents the bias term
- H_t represents the hidden state obtained by the bidirectional long short-term memory layer at step t
- e_t represents the influence value
- a_t represents the weight coefficient
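The attention weighting can be sketched in NumPy as follows (the matrix dimensions are hypothetical; only the formulas e_t = W_e·tanh(W_u·H_t + b_u) and a_t = softmax(e_t) come from the patent):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_combine(H, W_u, W_e, b_u):
    """H: (T, d) hidden states from the bidirectional LSTM layer.
    Returns the weight coefficients a_t and the final feature."""
    # e_t = W_e * tanh(W_u * H_t + b_u), computed for every step t.
    e = np.array([W_e @ np.tanh(W_u @ h + b_u) for h in H]).ravel()
    a = softmax(e)                        # a_t = softmax(e_t)
    final = (a[:, None] * H).sum(axis=0)  # weighted sum of the hidden states
    return a, final

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 6))     # 4 steps, hidden dimension 6
W_u = rng.normal(size=(5, 6))
W_e = rng.normal(size=(1, 5))
b_u = rng.normal(size=5)
a, final = attention_combine(H, W_u, W_e, b_u)
```

The weighted sum `final` is the final feature T that the second fully connected layer then classifies.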
- the second fully connected layer combines the final feature T extracted by the previous layer for classification.
- the normalized exponential layer maps the output of the previous layer (i.e., the second fully connected layer) to probability values in the (0, 1) interval, giving the probability that each final feature T belongs to each disease category, i.e., the suspected probability of the disease category; the case diagnosis result is then obtained from these suspected probabilities and output.
- the second neural network model identifies the disease based on the test sample set of image features of the multiple original images, so as to confirm whether the original images with the highest suspected probability for a disease category really contain lesions.
- an embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program that can run on the processor; when the processor executes the program, the steps of the above deep-learning-based endoscope image recognition method are implemented.
- an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above deep-learning-based endoscope image recognition method are implemented.
- in the deep-learning-based endoscope image recognition method, electronic device, and storage medium of the present invention, after the disease category of each single original image is predicted, multiple images are selected and combined by weighting based on the disease prediction results, improving the accuracy of disease identification, and the identification results of multiple disease categories are superimposed to obtain the case diagnosis result.
- the device implementations described above are only illustrative; the modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution, which those skilled in the art can understand and implement without creative effort.
Abstract
Description
Claims (13)
- 1. An endoscope image recognition method, comprising: using a first neural network model to perform disease prediction for multiple disease categories on each of multiple original images; based on the disease prediction results of the multiple original images, building test sample sets for the multiple disease categories, each test sample set including image features of a predetermined number of original images; using a second neural network model to perform disease identification on the test sample sets of the multiple disease categories respectively; and superimposing the disease identification results of the multiple diseases to obtain a case diagnosis result; wherein the second neural network model performs a weighted combination of the multiple image features in the test sample set to obtain the disease identification result.
- 2. The endoscope image recognition method according to claim 1, wherein the first neural network model is a convolutional neural network model that takes a single image of the multiple original images as input and outputs image features and classification probabilities for the multiple disease categories.
- 3. The endoscope image recognition method according to claim 2, wherein the second neural network model is a recurrent neural network model that takes the multiple image features in the test sample set as input and outputs the disease identification result corresponding to the test sample set.
- 4. The endoscope image recognition method according to claim 1, wherein the second neural network model comprises: a first fully connected layer that performs dimensionality reduction on each of the multiple image features in the test sample set; a bidirectional long short-term memory layer that predicts hidden states in the forward and backward directions for the dimension-reduced image features; and an attention mechanism that combines the hidden states of the multiple image features into a final feature by weighting, wherein the second neural network model obtains the disease identification result based on the final feature.
- 5. The endoscope image recognition method according to claim 4, wherein the first fully connected layer comprises multiple fully connected units, each of which performs dimensionality reduction on a corresponding image feature.
- 6. The endoscope image recognition method according to claim 4, wherein the bidirectional long short-term memory layer comprises multiple forward long short-term memory units and multiple backward long short-term memory units, the forward units each performing forward prediction on a corresponding image feature and the backward units each performing backward prediction on a corresponding image feature.
- 7. The endoscope image recognition method according to claim 4, wherein the weighted combination comprises a weighted sum of the hidden states of the multiple image features, the weight coefficient of each image feature representing its influence on disease identification for the corresponding disease category.
- 8. The endoscope image recognition method according to claim 7, wherein the weight coefficients of the multiple image features are given by: e_t = W_e tanh(W_u H_t + b_u), a_t = softmax(e_t), where W_u and W_e represent weight matrices, b_u represents the bias term, H_t represents the hidden state obtained by the bidirectional long short-term memory layer at step t, e_t represents the influence value, and a_t represents the weight coefficient.
- 9. The endoscope image recognition method according to claim 2, wherein building the test sample sets of the multiple disease categories comprises: for each of the different disease categories, selecting from the multiple original images the image features of the predetermined number of original images with the highest classification probability to form a test sample set.
- 10. The endoscope image recognition method according to claim 9, wherein the predetermined number is any integer in the range of 2 to 128.
- 11. The endoscope image recognition method according to claim 1, wherein the multiple original images are acquired with any one of the following endoscopes: a fiber-optic endoscope, an active capsule endoscope, or a passive capsule endoscope.
- 12. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of a deep-learning-based endoscope image recognition method, the method comprising: using a first neural network model to perform disease prediction for multiple disease categories on each of multiple original images; based on the disease prediction results of the multiple original images, building test sample sets for the multiple disease categories, each test sample set including image features of a predetermined number of original images; using a second neural network model to perform disease identification on the test sample sets of the multiple disease categories respectively; and superimposing the disease identification results of the multiple diseases to obtain a case diagnosis result; wherein the second neural network model performs a weighted combination of the multiple image features in the test sample set to obtain the disease identification result.
- 13. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of a deep-learning-based endoscope image recognition method, the method comprising: using a first neural network model to perform disease prediction for multiple disease categories on each of multiple original images; based on the disease prediction results of the multiple original images, building test sample sets for the multiple disease categories, each test sample set including image features of a predetermined number of original images; using a second neural network model to perform disease identification on the test sample sets of the multiple disease categories respectively; and superimposing the disease identification results of the multiple diseases to obtain a case diagnosis result; wherein the second neural network model performs a weighted combination of the multiple image features in the test sample set to obtain the disease identification result.
Priority Applications (3)
- EP22827476.7A / EP4361882A1 (EN) — priority 2021-06-23, filed 2022-06-17 — Endoscopic image recognition method, electronic device, and storage medium
- KR1020237045348A / KR20240015109A (KO) — priority 2021-06-23, filed 2022-06-17 — Endoscope image recognition method, electronic device, and storage medium
- JP2023579477A / JP2024528490A (JA) — priority 2021-06-23, filed 2022-06-17 — Endoscope image recognition method, electronic device, and storage medium
Applications Claiming Priority (2)
- CN202110695472.3 — priority 2021-06-23
- CN202110695472.3A / CN113159238B (ZH) — priority 2021-06-23, filed 2021-06-23 — Endoscope image recognition method, electronic device, and storage medium
Publications (1)
- WO2022267981A1 (ZH) — published 2022-12-29
Family
ID=76876029
Family Applications (1)
- PCT/CN2022/099318 / WO2022267981A1 (ZH) — priority 2021-06-23, filed 2022-06-17
Country Status (5)
- EP: EP4361882A1
- JP: JP2024528490A
- KR: KR20240015109A
- CN: CN113159238B
- WO: WO2022267981A1
Cited By (1)
- CN118037714A — priority 2024-03-29, published 2024-05-14 — 华伦医疗用品(深圳)有限公司 — GPU-based medical endoscope image processing method, system, and medium
Families Citing this family (3)
- CN113159238B — priority 2021-06-23, published 2021-10-26 — 安翰科技(武汉)股份有限公司 — Endoscope image recognition method, electronic device, and storage medium
- CN114022718B — priority 2022-01-07, published 2022-03-22 — 安翰科技(武汉)股份有限公司 — Digestive system pathological image recognition method, system, and computer storage medium
- CN116051486B — priority 2022-12-29, published 2024-07-02 — 抖音视界有限公司 — Training method for an endoscopic image recognition model, image recognition method, and apparatus
Citations (5)
- CN111275118A — priority 2020-01-22, published 2020-06-12 — 复旦大学 — Multi-label chest X-ray classification method based on a self-correcting label generation network
- CN111539930A — priority 2020-04-21, published 2020-08-14 — 浙江德尚韵兴医疗科技有限公司 — Deep-learning-based method for real-time segmentation and recognition of dynamic ultrasound breast nodules
- US20200381083A1 — priority 2019-05-31, published 2020-12-03 — 410 Ai, Llc — Estimating predisposition for disease based on classification of artificial image objects created from omics data
- CN112348125A — priority 2021-01-06, published 2021-02-09 — 安翰科技(武汉)股份有限公司 — Deep-learning-based capsule endoscope image recognition method, device, and medium
- CN113159238A — priority 2021-06-23, published 2021-07-23 — 安翰科技(武汉)股份有限公司 — Endoscope image recognition method, electronic device, and storage medium
Family Cites Families (4)
- US10573003B2 — priority 2017-02-13, published 2020-02-25 — Amit Sethi — Systems and methods for computational pathology using points-of-interest
- CN109948733B — priority 2019-04-01, published 2023-04-07 — 深圳大学 — Multi-classification method, classification device, and storage medium for digestive tract endoscope images
- CN111653365B — priority 2020-07-23, published 2023-06-23 — 中山大学附属第一医院 — Construction of an auxiliary diagnosis model for nasopharyngeal carcinoma and an auxiliary diagnosis method and system
- AU2020103613A4 — priority 2020-11-23, published 2021-02-04 — Agricultural Information and Rural Economic Research Institute of Sichuan Academy of Agricultural Sciences — CNN and transfer learning based disease intelligent identification method and system

Timeline
- 2021-06-23: CN application CN202110695472.3A filed (patent CN113159238B, active)
- 2022-06-17: PCT application PCT/CN2022/099318 filed (WO2022267981A1, application filing)
- 2022-06-17: EP application EP22827476.7A filed (EP4361882A1, pending)
- 2022-06-17: JP application JP2023579477A filed (JP2024528490A, pending)
- 2022-06-17: KR application KR1020237045348A filed (KR20240015109A)
Also Published As
- CN113159238A — published 2021-07-23
- JP2024528490A — published 2024-07-30
- CN113159238B — published 2021-10-26
- EP4361882A1 — published 2024-05-01
- KR20240015109A — published 2024-02-02
Legal Events
- 121: The EPO has been informed by WIPO that EP was designated in this application (ref. document 22827476, country EP, kind code A1)
- ENP: Entry into the national phase (ref. document 2023579477, country JP, kind code A)
- ENP: Entry into the national phase (ref. document 20237045348, country KR, kind code A)
- WWE: WIPO information, entry into national phase (ref. document 2022827476, country EP)
- NENP: Non-entry into the national phase (country DE)
- ENP: Entry into the national phase (ref. document 2022827476, country EP, effective date 2024-01-23)