CN106934799A - Capsule endoscope image aided diagnosis system and method - Google Patents

Capsule endoscope image aided diagnosis system and method

Info

Publication number
CN106934799A
CN106934799A (application CN201710104172.7A)
Authority
CN
China
Prior art keywords
image
module
model
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710104172.7A
Other languages
Chinese (zh)
Other versions
CN106934799B (en)
Inventor
张行
张皓
袁文金
王新宏
段晓东
肖国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ankon Technologies Co Ltd
Original Assignee
ANKON PHOTOELECTRIC TECHNOLOGY (WUHAN) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANKON PHOTOELECTRIC TECHNOLOGY (WUHAN) Co Ltd filed Critical ANKON PHOTOELECTRIC TECHNOLOGY (WUHAN) Co Ltd
Priority to CN201710104172.7A priority Critical patent/CN106934799B/en
Publication of CN106934799A publication Critical patent/CN106934799A/en
Application granted granted Critical
Publication of CN106934799B publication Critical patent/CN106934799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Endoscopes (AREA)

Abstract

The invention discloses a capsule endoscope image-assisted interpretation system. Its data acquisition module acquires the capsule endoscope image data of an examinee; an image position classification module uses a first convolutional neural network (CNN) model to classify the capsule endoscope images according to their shooting location, obtaining image sequences of different shooting locations; an image sequence description module uses a second convolutional neural network (CNN) model to extract image features from the image sequences of the different shooting locations, obtaining feature vector sequences of the image sequences of different digestive tract locations; the image sequence description module further uses a recurrent neural network (RNN) model to convert the image features in the feature vector sequences into descriptive text, thereby forming an auxiliary diagnosis report. The invention reduces the workload of doctors in reviewing digestive tract images and improves their diagnostic efficiency.

Description

Capsule endoscope image-assisted interpretation system and method
Technical Field
The invention relates to the field of medical equipment, and in particular to a capsule endoscope image-assisted interpretation system and method.
Background
Digestive tract diseases such as gastric cancer, intestinal cancer, acute and chronic gastritis and ulcers are common, frequently occurring diseases that pose a major threat to human health. Data from the 2015 national tumor registry survey showed that digestive tract cancers accounted for 43% of all cancers. A traditional fiber-optic endoscope must be inserted into the patient's body for observation, which is inconvenient and painful for the patient. The capsule endoscope can inspect the entire digestive tract painlessly and noninvasively, a revolutionary technical breakthrough. However, a capsule endoscope collects about 50,000 images during an examination, and this large amount of image data makes doctors' image reading work difficult and time-consuming.
Image recognition and image/video description based on deep learning have become very active research fields in recent years, both in China and internationally. With the breakthroughs of deep learning in image classification and localization (the ImageNet dataset), image semantic understanding (the COCO dataset) and related tasks, deep learning is increasingly applied to computer-aided medical diagnosis. It has already been applied to the assisted detection of skin cancer, brain tumors, lung cancer and the like, whereas research applying deep learning to the assisted diagnosis of digestive tract images remains uncommon.
The Chinese patent with publication No. CN103984957A discloses an automatic early-warning system for suspicious lesion areas in capsule endoscope images, which uses an image enhancement module to adaptively enhance images, then detects the texture features of flat lesions with a texture feature extraction module, and finally classifies the images with a classification early-warning module, thereby detecting and giving early warning of flat lesions of the small intestine.
That scheme can only give early warning of suspicious lesion areas; it cannot classify or identify lesions, serves a single purpose, and cannot provide the location of a disease, which is unfavorable to a doctor's accurate judgment of the lesion.
Disclosure of Invention
The invention aims to provide a capsule endoscope image-assisted interpretation system and a corresponding interpretation method.
To achieve this aim, the capsule endoscope image-assisted interpretation system comprises a data acquisition module, an image position classification module and an image sequence description module, wherein the signal output end of the data acquisition module is connected to the signal input end of the image position classification module, and the signal output end of the image position classification module is connected to the signal input end of the image sequence description module;
the data acquisition module is used for acquiring capsule endoscope image data of an examinee; the image position classification module is used for dividing the capsule endoscope image data into different image sequences according to the digestive tract location at which they were captured; the image sequence description module is used for identifying lesions in the image sequences of the different digestive tract locations and generating descriptive text for the image sequences, thereby forming a diagnosis report.
The capsule endoscope image-assisted interpretation method using this system comprises the following steps:
Step 1: the data acquisition module acquires capsule endoscope image data of an examinee;
Step 2: the data acquisition module inputs the acquired capsule endoscope image data of the examinee into a first convolutional neural network (CNN) model in the image position classification module, which classifies the images according to their shooting location, yielding image sequences of different digestive tract locations;
Step 3: the image position classification module sends the image sequences of the different digestive tract locations to a second convolutional neural network (CNN) model of the image sequence description module to obtain feature vector sequences of those image sequences; each image in an image sequence corresponds to one feature vector, and all the feature vectors form the feature vector sequence;
Step 4: the feature vector sequences are input into a recurrent neural network (RNN) model of the image sequence description module to obtain descriptive text for the image sequences of the different digestive tract locations, and the image sequence description module generates an auxiliary diagnosis report from this descriptive text.
The invention adopts deep learning models that, through training, automatically learn to classify images according to the digestive tract location at which they were captured, to identify lesions in the images, and to generate descriptive text, thereby helping doctors to process large numbers of digestive tract images and assisting them in making correct judgments and effective decisions. The invention can greatly reduce doctors' workload and working pressure and improve their working efficiency.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a block diagram of the structure of the model training module incorporated in the present invention;
FIG. 3 is a network structure diagram of the image sequence description method of the present invention;
FIG. 4 is a block diagram of an LSTM network in the present invention;
the system comprises a data acquisition module, an image position classification module, an image sequence description module, a human-computer interaction module, a model training module and a storage module, wherein the data acquisition module is 1, the image position classification module is 2, the image sequence description module is 3, and the model training module is 5.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples:
The capsule endoscope image-assisted interpretation system shown in FIG. 1 and FIG. 2 comprises a data acquisition module 1, an image position classification module 2 and an image sequence description module 3, wherein the signal output end of the data acquisition module 1 is connected to the signal input end of the image position classification module 2, and the signal output end of the image position classification module 2 is connected to the signal input end of the image sequence description module 3;
the data acquisition module 1 is used for acquiring capsule endoscope image data of an examinee; the image position classification module 2 is used for dividing the capsule endoscope image data into different image sequences according to the digestive tract location at which they were captured; the image sequence description module 3 is used for identifying lesions in the image sequences of the different digestive tract locations and generating descriptive text for the image sequences, thereby forming a diagnosis report.
In the above technical solution, the image position classification module 2 is configured to classify the capsule endoscope images according to their shooting locations using the first convolutional neural network (CNN) model, dividing them into image sequences of locations including the esophagus, cardia, gastric fundus, gastric body, gastric angle, gastric antrum, pylorus, duodenum, jejunum, and ileum.
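As a concrete illustration only, a minimal sketch of such a location classifier is given below; the PyTorch framework, the ResNet-18 backbone, the 224x224 input size and the ten location labels are assumptions for the sketch, not requirements of the patent.

```python
# Minimal sketch of the first CNN location classifier (assumptions: PyTorch,
# a ResNet-18 backbone, 224x224 inputs and the ten location labels below).
import torch
import torchvision

GI_LOCATIONS = ["esophagus", "cardia", "gastric fundus", "gastric body",
                "gastric angle", "gastric antrum", "pylorus",
                "duodenum", "jejunum", "ileum"]

def build_location_classifier(num_classes: int = len(GI_LOCATIONS)) -> torch.nn.Module:
    model = torchvision.models.resnet18(weights=None)   # backbone choice is an assumption
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model

def classify_frames(model: torch.nn.Module, frames: torch.Tensor) -> list:
    """frames: (N, 3, 224, 224) batch of capsule images, already normalized."""
    model.eval()
    with torch.no_grad():
        preds = model(frames).argmax(dim=1)
    return [GI_LOCATIONS[i] for i in preds.tolist()]

if __name__ == "__main__":
    dummy = torch.randn(4, 3, 224, 224)                  # stand-in for real capsule frames
    print(classify_frames(build_location_classifier(), dummy))
```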
In the above technical solution, the image sequence description module 3 is configured to perform image feature extraction on the image sequences of the different shooting locations using a second convolutional neural network (CNN) model to obtain feature vector sequences of the image sequences of different digestive tract locations; the image sequence description module 3 is further configured to convert the image features in the feature vector sequences into descriptive text using a recurrent neural network (RNN) model, so as to form an auxiliary diagnosis report.
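The feature extraction step can be illustrated with the following sketch, in which the final fully-connected classification layer of a standard backbone is removed so that each image yields one feature vector; PyTorch and a ResNet-18 backbone (512-dimensional vectors) are assumptions made only for the example.

```python
# Sketch of feature extraction with the second CNN: the final fully-connected
# classification layer is removed so each image yields one feature vector
# (assumptions: PyTorch and a ResNet-18 backbone, giving 512-dimensional vectors).
import torch
import torchvision

def build_feature_extractor() -> torch.nn.Module:
    backbone = torchvision.models.resnet18(weights=None)
    # Keep everything up to global average pooling; drop the classification layer.
    return torch.nn.Sequential(*list(backbone.children())[:-1])

def extract_feature_sequence(extractor: torch.nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, 224, 224) image sequence of one digestive tract location.
    Returns a (T, 512) feature vector sequence, one vector per image."""
    extractor.eval()
    with torch.no_grad():
        feats = extractor(frames)        # (T, 512, 1, 1)
    return feats.flatten(start_dim=1)    # (T, 512)

if __name__ == "__main__":
    seq = torch.randn(6, 3, 224, 224)    # stand-in for a six-frame sequence
    print(extract_feature_sequence(build_feature_extractor(), seq).shape)  # torch.Size([6, 512])
```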
In the above technical solution, the system further comprises a human-computer interaction module 4, wherein the signal output end of the image sequence description module 3 is connected to the signal input end of the human-computer interaction module 4;
the human-computer interaction module 4 is used for presenting the generated diagnosis report to a doctor, so that the doctor can diagnose and analyze diseases based on the machine recognition results.
In the above technical solution, the system further includes a model training module 5, wherein a first data communication end of the model training module 5 is connected to the training data communication end of the image position classification module 2, and a second data communication end of the model training module 5 is connected to the training data communication end of the image sequence description module 3;
the model training module 5 is used for training the first convolutional neural network (CNN) model used in the image position classification module 2 by stochastic gradient descent; the trained classification model can classify input capsule endoscope images according to their shooting location;
the model training module 5 is further used for training the second convolutional neural network (CNN) model used in the image sequence description module 3 by stochastic gradient descent; the trained model can extract the features of the image sequences of the different digestive tract locations to obtain their feature vector sequences;
the model training module 5 is further used for training the recurrent neural network (RNN) model used in the image sequence description module 3 by stochastic gradient descent; the trained RNN model can produce descriptive text for the image sequences of the different digestive tract locations from their feature vector sequences.
In the above technical solution, the recurrent neural network (RNN) model uses a long short-term memory (LSTM) network, which can learn long-term dependency relationships and map a variable-length input to a variable-length output.
The capsule endoscope image-assisted interpretation method using the system comprises the following steps:
Step 1: the data acquisition module 1 acquires capsule endoscope image data of an examinee;
Step 2: the data acquisition module 1 inputs the acquired capsule endoscope image data of the examinee into the first convolutional neural network (CNN) model in the image position classification module 2, which classifies the images according to their shooting location, yielding image sequences of different digestive tract locations;
Step 3: the image position classification module 2 sends the image sequences of the different digestive tract locations to the second convolutional neural network (CNN) model of the image sequence description module 3 to obtain feature vector sequences of those image sequences (a vector representing the image features is obtained directly by removing the last fully-connected classification layer of a conventional classification network); each image in an image sequence corresponds to one feature vector, and all the feature vectors form the feature vector sequence;
Step 4: the feature vector sequences are input into the recurrent neural network (RNN) model of the image sequence description module 3 to obtain descriptive text (mainly lesion information in the images) for the image sequences of the different digestive tract locations, and the image sequence description module 3 generates an auxiliary diagnosis report from this descriptive text, as sketched below.
In step 4 of the above technical solution, the recurrent neural network (RNN) model of the image sequence description module 3 is composed of two layers of LSTM (Long Short-Term Memory, a type of recurrent neural network) networks: the output of the convolutional neural network (CNN) model of the image sequence description module 3 serves as the input of the first LSTM layer, the hidden states of the first LSTM layer serve as the input of the second LSTM layer, and the LSTM network acts as the decoder of the feature vector sequence to generate the corresponding auxiliary diagnostic descriptive text.
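A minimal sketch of such a two-layer LSTM decoder follows; PyTorch is assumed, and the feature dimension, hidden size, vocabulary size and linear vocabulary head are illustrative choices rather than values fixed by the patent.

```python
# Minimal sketch of the two-layer LSTM decoder (assumptions: PyTorch; the feature
# dimension, hidden size, vocabulary size and linear vocabulary head are
# illustrative choices, not values fixed by the patent).
import torch
import torch.nn as nn

class TwoLayerLSTMDecoder(nn.Module):
    def __init__(self, feature_dim=512, hidden_dim=256, vocab_size=1000):
        super().__init__()
        self.lstm1 = nn.LSTM(feature_dim, hidden_dim, batch_first=True)  # fed by CNN features
        self.lstm2 = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)   # fed by lstm1's hidden states
        self.vocab_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feature_seq):
        """feature_seq: (B, T, feature_dim) feature vector sequence from the second CNN."""
        h1, _ = self.lstm1(feature_seq)   # hidden states of the first LSTM layer
        h2, _ = self.lstm2(h1)            # second layer consumes the first layer's hidden states
        return self.vocab_head(h2)        # (B, T, vocab_size) word scores

if __name__ == "__main__":
    decoder = TwoLayerLSTMDecoder()
    feats = torch.randn(1, 6, 512)        # one six-frame feature vector sequence
    print(decoder(feats).shape)           # torch.Size([1, 6, 1000])
```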
In step 2 of the above technical solution, the model training module 5 trains the first convolutional neural network (CNN) model of the image position classification module 2. The training process includes two stages, pre-training and model optimization: in the pre-training stage, the first CNN model of the image position classification module 2 is trained on the ImageNet dataset by stochastic gradient descent, yielding a pre-trained first CNN model; in the optimization stage, the parameters of the pre-trained first CNN model are adjusted using manually annotated digestive tract segment samples (fine-tuning: the network is retrained on the application's own data).
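A brief sketch of this pre-training and fine-tuning scheme is given below; it assumes PyTorch with a recent torchvision whose ImageNet weights stand in for the pre-training stage, and the ResNet-18 backbone, ten location classes and the two learning rates are illustrative.

```python
# Sketch of the pre-train / fine-tune scheme (assumptions: PyTorch with a recent
# torchvision whose ImageNet weights stand in for the pre-training stage; the
# ResNet-18 backbone, ten location classes and the two learning rates are illustrative).
import torch
import torchvision

def build_pretrained_location_cnn(num_locations: int = 10) -> torch.nn.Module:
    # Pre-training stage: start from a model already trained on ImageNet.
    weights = torchvision.models.ResNet18_Weights.IMAGENET1K_V1
    model = torchvision.models.resnet18(weights=weights)
    # Optimization stage: replace the head, then fine-tune on manually annotated
    # digestive tract segment samples (training loop as in the SGD sketch above).
    model.fc = torch.nn.Linear(model.fc.in_features, num_locations)
    return model

def finetune_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Smaller learning rate for the pre-trained backbone, larger for the new head.
    backbone_params = [p for n, p in model.named_parameters() if not n.startswith("fc.")]
    return torch.optim.SGD([
        {"params": backbone_params, "lr": 1e-3},
        {"params": model.fc.parameters(), "lr": 1e-2},
    ], momentum=0.9)
```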
In step 3 of the above technical solution, the model training module 5 trains the second convolutional neural network (CNN) model of the image sequence description module 3. The training process likewise includes pre-training and model optimization: in the pre-training stage, the second CNN model of the image sequence description module 3 is trained on the ImageNet dataset by stochastic gradient descent, yielding a pre-trained second CNN model; in the optimization stage, the parameters of the pre-trained second CNN model are adjusted using manually annotated image data samples of different types of digestive tract lesions (such as bleeding, polyps and ulcers).
In step 3 of the above technical solution, the model training module 5 inputs the feature vector sequences, together with manually annotated samples of the correspondence between image features and image sequence descriptive text, into the recurrent neural network (RNN) model for training, so that the model learns to produce the descriptive text corresponding to an input image feature sequence.
In the above technical solution, the LSTM network is chosen for the recurrent neural network (RNN) model because diagnosing a given digestive tract location requires integrating all of the image information of that location. To produce a diagnosis report, a model with a long memory of past inputs is needed so that all image information can be considered comprehensively; the LSTM model can memorize information over long spans and map a variable-length input to a variable-length output, i.e., the LSTM network can process image sequence information well and obtain a comprehensive description of an image sequence.
In the above technical solution, the first and second convolutional neural network (CNN) models may be AlexNet, VGG, GoogLeNet, ResNet, etc.; this embodiment preferably uses a deep residual network (ResNet), which has a deeper network hierarchy and a lower recognition error rate than the other models, and therefore achieves a better classification and recognition effect.
FIG. 3 is the network architecture diagram of the image sequence description method. The image sequence is input into the convolutional neural network (CNN) model, which generates image feature vectors; an LSTM network then acts as the decoder of the image feature vectors to generate the corresponding auxiliary diagnostic descriptive text. Because the LSTM model can learn dependency relationships across long feature sequences, this property is used to learn the temporal relationships within an image sequence, and the diagnostic information of the digestive tract image sequence is generated from the learned language model.
FIG. 4 is a diagram of the LSTM network architecture. The LSTM is divided into an encoding process and a decoding process.
In the above technical solution, the LSTM network is divided into an encoding process and a decoding process:
The encoding process of the LSTM network takes the image feature x_t and the hidden state h_(t-1) as inputs and computes the memory cell state c_t through the LSTM cells in the network, where the subscript t denotes the t-th recursion step. The complete LSTM encoding process is computed as follows:
i_t = σ(W_xi·x_t + W_hi·h_(t-1) + b_i)
f_t = σ(W_xf·x_t + W_hf·h_(t-1) + b_f)
o_t = σ(W_xo·x_t + W_ho·h_(t-1) + b_o)
g_t = φ(W_xg·x_t + W_hg·h_(t-1) + b_g)
c_t = f_t·c_(t-1) + i_t·g_t
h_t = o_t·φ(c_t)
where φ denotes the hyperbolic tangent function, σ the sigmoid function, and · the element-wise (point) product; i, f, o, g denote the four gates inside the LSTM unit; x_t and h_(t-1) denote the image feature and the hidden state, respectively, and c_t the memory cell, with the subscript t denoting the t-th recursion step and t-1 the (t-1)-th; W_xi, W_xf, W_xo, W_xg are the weights applied to the image feature x; W_hi, W_hf, W_ho, W_hg are the weights applied to the hidden state h; b_i, b_f, b_o, b_g are bias values. The weights of the image feature x, the weights of the hidden state h and the bias values are all determined by the machine through training;
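The encoding formulas can be transcribed directly into code; the following NumPy sketch is only a didactic restatement of the equations above, with toy dimensions and random weights as placeholders (a real model learns W and b by training).

```python
# Direct NumPy transcription of the LSTM encoding formulas above (a sketch:
# tiny dimensions and random weights are placeholders for learned parameters).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_x, W_h, b):
    """One LSTM encoding step. W_x holds W_xi, W_xf, W_xo, W_xg; likewise W_h and b."""
    i_t = sigmoid(W_x["i"] @ x_t + W_h["i"] @ h_prev + b["i"])
    f_t = sigmoid(W_x["f"] @ x_t + W_h["f"] @ h_prev + b["f"])
    o_t = sigmoid(W_x["o"] @ x_t + W_h["o"] @ h_prev + b["o"])
    g_t = np.tanh(W_x["g"] @ x_t + W_h["g"] @ h_prev + b["g"])
    c_t = f_t * c_prev + i_t * g_t          # memory cell update
    h_t = o_t * np.tanh(c_t)                # new hidden state
    return h_t, c_t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_x, d_h = 4, 3                          # toy feature and hidden sizes
    W_x = {k: rng.standard_normal((d_h, d_x)) for k in "ifog"}
    W_h = {k: rng.standard_normal((d_h, d_h)) for k in "ifog"}
    b = {k: np.zeros(d_h) for k in "ifog"}
    h, c = np.zeros(d_h), np.zeros(d_h)
    for x in rng.standard_normal((5, d_x)):  # a five-step feature sequence
        h, c = lstm_step(x, h, c, W_x, W_h, b)
    print(h)
```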
the auxiliary diagnostic information generated by the LSTM network decoding process needs to be evaluated by the conditional probability of the input image sequence, the formula is as follows:
wherein (x)1,…,xn) A sequence of images representing different parts of the digestive tract input, n representing the number of frames, (y)1,…,ym) The character sequence of the output is shown, and m represents the number of characters; p (y)t|hn+t) Is the probability, h, calculated from the softmax function of the vocabularyn+tIs through hn+t-1,yt-1Calculating according to a formula of an encoding process, wherein a subscript t represents the t recursion, n represents the nth frame image, and p represents the probability;
The training process of the LSTM network solves the maximum likelihood estimation of the decoding-phase probability p(y_1, …, y_m | x_1, …, x_n), formulated as follows:
θ* = argmax_θ ∑_{t=1}^{m} log p(y_t | h_(n+t-1), y_(t-1); θ)
where argmax_θ is the operator that maximizes over θ; θ denotes the parameters to be trained in the LSTM model and θ* their maximum likelihood estimate; log denotes the natural logarithm; p(y_t | h_(n+t-1), y_(t-1)) is the probability computed by the softmax function over the vocabulary, with the subscript t denoting the t-th recursion and n the n-th frame image.
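In practice, this maximum likelihood objective can be optimized as a per-word cross-entropy loss with teacher forcing, since minimizing the cross-entropy equals maximizing the summed log-probabilities above. The sketch below assumes PyTorch, a decoder such as the two-layer LSTM sketched earlier, and illustrative shapes.

```python
# Sketch of the maximum likelihood training objective: with teacher forcing, the
# per-word cross-entropy over the vocabulary is the negative of the summed
# log p(y_t | h_(n+t-1), y_(t-1)) terms above (assumptions: PyTorch; shapes are illustrative).
import torch
import torch.nn.functional as F

def sequence_nll(word_logits: torch.Tensor, target_words: torch.Tensor) -> torch.Tensor:
    """word_logits: (B, m, vocab) scores for each generated word position;
    target_words: (B, m) indices of the ground-truth report words.
    Minimizing this loss maximizes sum_t log p(y_t | ...)."""
    B, m, vocab = word_logits.shape
    return F.cross_entropy(word_logits.reshape(B * m, vocab),
                           target_words.reshape(B * m))

if __name__ == "__main__":
    logits = torch.randn(2, 7, 1000, requires_grad=True)   # stand-in decoder outputs
    targets = torch.randint(0, 1000, (2, 7))               # stand-in annotated word sequences
    loss = sequence_nll(logits, targets)
    loss.backward()                                         # gradients for SGD updates
    print(float(loss))
```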
Details not described in this specification are well known to those skilled in the art.

Claims (8)

1. A capsule endoscope image-assisted interpretation system, characterized by comprising a data acquisition module (1), an image position classification module (2) and an image sequence description module (3), wherein the signal output end of the data acquisition module (1) is connected to the signal input end of the image position classification module (2), and the signal output end of the image position classification module (2) is connected to the signal input end of the image sequence description module (3);
the data acquisition module (1) is used for acquiring capsule endoscope image data of an examinee; the image position classification module (2) is used for dividing the capsule endoscope image data into different image sequences according to different shot alimentary canal parts; the image sequence description module (3) is used for identifying focuses in the image sequences of different parts of the digestive tract and generating descriptive texts for the image sequences so as to form a diagnosis report.
2. The capsule endoscope image-assisted interpretation system of claim 1, wherein: the image position classification module (2) is used for classifying the capsule endoscope images according to different shooting parts by using a first Convolutional Neural Network (CNN) model.
3. The capsule endoscope image-assisted interpretation system of claim 1, wherein: the image sequence description module (3) is used for extracting image features of image sequences of different shooting parts by utilizing a second Convolutional Neural Network (CNN) model to obtain feature vector sequences of image sequences of different digestive tract parts; the image sequence description module (3) is also used for converting the image features in the feature vector sequence into descriptive words by using a Recurrent Neural Network (RNN) model so as to form an auxiliary diagnosis report.
4. The capsule endoscope image-assisted interpretation system of claim 1, wherein: the system further comprises a human-computer interaction module (4), and the signal output end of the image sequence description module (3) is connected to the signal input end of the human-computer interaction module (4);
and the human-computer interaction module (4) is used for presenting the generated diagnosis report to a doctor for the doctor to carry out disease diagnosis and analysis according to the result of machine identification.
5. The capsule endoscope image-assisted interpretation system of claim 1, wherein: the system further comprises a model training module (5), wherein a first data communication end of the model training module (5) is connected to the training data communication end of the image position classification module (2), and a second data communication end of the model training module (5) is connected to the training data communication end of the image sequence description module (3);
the model training module (5) is used for training the first convolutional neural network (CNN) model used in the image position classification module (2) by stochastic gradient descent, and the trained classification model can classify input capsule endoscope images according to their shooting location;
the model training module (5) is further used for training the second convolutional neural network (CNN) model used in the image sequence description module (3) by stochastic gradient descent, and the trained model can extract the features of the image sequences of different digestive tract locations to obtain their feature vector sequences;
the model training module (5) is further used for training the recurrent neural network (RNN) model used in the image sequence description module (3) by stochastic gradient descent, and the trained RNN model can produce descriptive text for the image sequences of different digestive tract locations from the feature vector sequences.
6. A method for assisting interpretation of images from a capsule endoscope using the system of claim 1, comprising the steps of:
Step 1: the data acquisition module (1) acquires capsule endoscope image data of an examinee;
Step 2: the data acquisition module (1) inputs the acquired capsule endoscope image data of the examinee into a first convolutional neural network (CNN) model in the image position classification module (2), which classifies the images according to their shooting location, yielding image sequences of different digestive tract locations;
Step 3: the image position classification module (2) sends the image sequences of the different digestive tract locations to a second convolutional neural network (CNN) model of the image sequence description module (3) to obtain feature vector sequences of those image sequences, each image in an image sequence corresponding to one feature vector and all the feature vectors forming the feature vector sequence;
Step 4: the feature vector sequences are input into a recurrent neural network (RNN) model of the image sequence description module (3) to obtain descriptive text for the image sequences of the different digestive tract locations, and the image sequence description module (3) generates an auxiliary diagnosis report from this descriptive text.
7. The capsule endoscope image-assisted interpretation method according to claim 6, characterized in that: in step 4, the recurrent neural network (RNN) model of the image sequence description module (3) is composed of two layers of LSTM networks; the output of the convolutional neural network (CNN) model of the image sequence description module (3) is used as the input of the first LSTM layer, the hidden states of the first LSTM layer are used as the input of the second LSTM layer, and the LSTM network serves as the decoder of the feature vector sequence to generate the corresponding auxiliary diagnostic descriptive text.
8. The capsule endoscope image-assisted interpretation method according to claim 6, characterized in that: in step 2, the model training module (5) trains the first convolutional neural network (CNN) model of the image position classification module (2), the training process including two stages of pre-training and model optimization: in the pre-training stage, the first CNN model of the image position classification module (2) is trained on the ImageNet dataset by stochastic gradient descent to obtain a pre-trained first CNN model; in the optimization stage, the parameters of the pre-trained first CNN model are adjusted using manually annotated digestive tract segment samples;
in step 3, the model training module (5) trains the second convolutional neural network (CNN) model of the image sequence description module (3), the training process including two stages of pre-training and model optimization: in the pre-training stage, the second CNN model of the image sequence description module (3) is trained on the ImageNet dataset by stochastic gradient descent to obtain a pre-trained second CNN model; in the optimization stage, the parameters of the pre-trained second CNN model are adjusted using manually annotated image data samples of different types of digestive tract lesions;
in step 3, the model training module (5) inputs the feature vector sequences, together with manually annotated samples of the correspondence between image features and image sequence descriptive text, into the recurrent neural network (RNN) model for training, so as to obtain the descriptive text corresponding to an input image feature sequence.
CN201710104172.7A 2017-02-24 2017-02-24 Capsule endoscope image aided diagnosis system and method Active CN106934799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710104172.7A CN106934799B (en) 2017-02-24 2017-02-24 Capsule endoscope image aided diagnosis system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710104172.7A CN106934799B (en) 2017-02-24 2017-02-24 Capsule endoscope image aided diagnosis system and method

Publications (2)

Publication Number Publication Date
CN106934799A true CN106934799A (en) 2017-07-07
CN106934799B CN106934799B (en) 2019-09-03

Family

ID=59423088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710104172.7A Active CN106934799B (en) 2017-02-24 2017-02-24 Capsule endoscope image aided diagnosis system and method

Country Status (1)

Country Link
CN (1) CN106934799B (en)

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516075A (en) * 2017-08-03 2017-12-26 安徽华米信息科技有限公司 Detection method, device and the electronic equipment of electrocardiosignal
CN107705852A (en) * 2017-12-06 2018-02-16 北京华信佳音医疗科技发展有限责任公司 Real-time the lesion intelligent identification Method and device of a kind of medical electronic endoscope
CN107730489A (en) * 2017-10-09 2018-02-23 杭州电子科技大学 Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method
CN108229463A (en) * 2018-02-07 2018-06-29 众安信息技术服务有限公司 Character recognition method based on image
CN108354578A (en) * 2018-03-14 2018-08-03 重庆金山医疗器械有限公司 A kind of capsule endoscope positioning system
CN108461152A (en) * 2018-01-12 2018-08-28 平安科技(深圳)有限公司 Medical model training method, medical recognition methods, device, equipment and medium
CN108877915A (en) * 2018-06-07 2018-11-23 合肥工业大学 The intelligent edge calculations system of minimally invasive video processing
CN109035339A (en) * 2017-10-27 2018-12-18 重庆金山医疗器械有限公司 The location recognition method of capsule endoscope system and its operation area detection picture
CN109102491A (en) * 2018-06-28 2018-12-28 武汉大学人民医院(湖北省人民医院) A kind of gastroscope image automated collection systems and method
CN109146884A (en) * 2018-11-16 2019-01-04 青岛美迪康数字工程有限公司 Endoscopy monitoring method and device
CN109272483A (en) * 2018-08-01 2019-01-25 安翰光电技术(武汉)有限公司 Capsule endoscope diagosis and the system and control method of quality control
CN109447973A (en) * 2018-10-31 2019-03-08 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus and system of polyp of colon image
CN109447985A (en) * 2018-11-16 2019-03-08 青岛美迪康数字工程有限公司 Colonoscopic images analysis method, device and readable storage medium storing program for executing
CN110110750A (en) * 2019-03-29 2019-08-09 广州思德医疗科技有限公司 A kind of classification method and device of original image
CN110232413A (en) * 2019-05-31 2019-09-13 华北电力大学(保定) Insulator image, semantic based on GRU network describes method, system, device
CN110367913A (en) * 2019-07-29 2019-10-25 杭州电子科技大学 Wireless capsule endoscope image pylorus and ileocaecal sphineter localization method
US10537720B2 (en) 2018-04-09 2020-01-21 Vibrant Ltd. Method of enhancing absorption of ingested medicaments for treatment of parkinsonism
CN111026799A (en) * 2019-12-06 2020-04-17 安翰科技(武汉)股份有限公司 Capsule endoscopy report text structuring method, apparatus and medium
WO2020078252A1 (en) * 2018-10-16 2020-04-23 The Chinese University Of Hong Kong Method, apparatus and system for automatic diagnosis
CN111275041A (en) * 2020-01-20 2020-06-12 腾讯科技(深圳)有限公司 Endoscope image display method and device, computer equipment and storage medium
CN111340094A (en) * 2020-02-21 2020-06-26 湘潭大学 Capsule endoscope image auxiliary classification system and classification method based on deep learning
CN111612027A (en) * 2019-02-26 2020-09-01 沛智生医科技股份有限公司 Cell classification method, system and medical analysis platform
CN111655116A (en) * 2017-10-30 2020-09-11 公益财团法人癌研究会 Image diagnosis support device, data collection method, image diagnosis support method, and image diagnosis support program
CN111798408A (en) * 2020-05-18 2020-10-20 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Endoscope interference image detection and grading system and method
US10814113B2 (en) 2019-01-03 2020-10-27 Vibrant Ltd. Device and method for delivering an ingestible medicament into the gastrointestinal tract of a user
CN112200773A (en) * 2020-09-17 2021-01-08 苏州慧维智能医疗科技有限公司 Large intestine polyp detection method based on encoder and decoder of cavity convolution
CN112200250A (en) * 2020-10-14 2021-01-08 重庆金山医疗器械有限公司 Digestive tract segmentation identification method, device and equipment of capsule endoscope image
US10888277B1 (en) 2017-01-30 2021-01-12 Vibrant Ltd Method for treating diarrhea and reducing Bristol stool scores using a vibrating ingestible capsule
US10905378B1 (en) 2017-01-30 2021-02-02 Vibrant Ltd Method for treating gastroparesis using a vibrating ingestible capsule
CN112686899A (en) * 2021-03-22 2021-04-20 深圳科亚医疗科技有限公司 Medical image analysis method and apparatus, computer device, and storage medium
US20210134442A1 (en) * 2019-11-05 2021-05-06 Infinitt Healthcare Co., Ltd. Medical image diagnosis assistance apparatus and method using plurality of medical image diagnosis algorithms for endoscopic images
CN112784652A (en) * 2019-11-11 2021-05-11 中强光电股份有限公司 Image recognition method and device
CN112837275A (en) * 2021-01-14 2021-05-25 长春大学 Capsule endoscope image organ classification method, device, equipment and storage medium
US11020018B2 (en) 2019-01-21 2021-06-01 Vibrant Ltd. Device and method for delivering a flowable ingestible medicament into the gastrointestinal tract of a user
US11052018B2 (en) 2019-02-04 2021-07-06 Vibrant Ltd. Temperature activated vibrating capsule for gastrointestinal treatment, and a method of use thereof
CN113470792A (en) * 2017-11-06 2021-10-01 科亚医疗科技股份有限公司 System, method, and medium for generating reports based on medical images of patients
WO2022194126A1 (en) * 2021-03-19 2022-09-22 安翰科技(武汉)股份有限公司 Method for building image reading model based on capsule endoscope, device, and medium
US11478401B2 (en) 2016-09-21 2022-10-25 Vibrant Ltd. Methods and systems for adaptive treatment of disorders in the gastrointestinal tract
US11504024B2 (en) 2018-03-30 2022-11-22 Vibrant Ltd. Gastrointestinal treatment system including a vibrating capsule, and method of use thereof
US11510590B1 (en) 2018-05-07 2022-11-29 Vibrant Ltd. Methods and systems for treating gastrointestinal disorders
US11638678B1 (en) 2018-04-09 2023-05-02 Vibrant Ltd. Vibrating capsule system and treatment method
US12083303B2 (en) 2019-01-21 2024-09-10 Vibrant Ltd. Device and method for delivering a flowable ingestible medicament into the gastrointestinal tract of a user

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101584571A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Capsule endoscopy auxiliary film reading method
CN102722735A (en) * 2012-05-24 2012-10-10 西南交通大学 Endoscopic image lesion detection method based on fusion of global and local features
CN105979847A (en) * 2014-02-07 2016-09-28 国立大学法人广岛大学 Endoscopic image diagnosis support system
CN106097335A (en) * 2016-06-08 2016-11-09 安翰光电技术(武汉)有限公司 Digestive tract focus image identification system and recognition methods

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101584571A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Capsule endoscopy auxiliary film reading method
CN102722735A (en) * 2012-05-24 2012-10-10 西南交通大学 Endoscopic image lesion detection method based on fusion of global and local features
CN105979847A (en) * 2014-02-07 2016-09-28 国立大学法人广岛大学 Endoscopic image diagnosis support system
CN106097335A (en) * 2016-06-08 2016-11-09 安翰光电技术(武汉)有限公司 Digestive tract focus image identification system and recognition methods

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11478401B2 (en) 2016-09-21 2022-10-25 Vibrant Ltd. Methods and systems for adaptive treatment of disorders in the gastrointestinal tract
US12090112B2 (en) 2016-09-21 2024-09-17 Vibrant Ltd. Methods and systems for adaptive treatment of disorders in the gastrointestinal tract
US10905378B1 (en) 2017-01-30 2021-02-02 Vibrant Ltd Method for treating gastroparesis using a vibrating ingestible capsule
US10888277B1 (en) 2017-01-30 2021-01-12 Vibrant Ltd Method for treating diarrhea and reducing Bristol stool scores using a vibrating ingestible capsule
US11534097B2 (en) 2017-08-03 2022-12-27 Anhui Huami Information Technology Co., Ltd. Detection of electrocardiographic signal
CN107516075A (en) * 2017-08-03 2017-12-26 安徽华米信息科技有限公司 Detection method, device and the electronic equipment of electrocardiosignal
CN107730489A (en) * 2017-10-09 2018-02-23 杭州电子科技大学 Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method
CN109035339A (en) * 2017-10-27 2018-12-18 重庆金山医疗器械有限公司 The location recognition method of capsule endoscope system and its operation area detection picture
CN109091098A (en) * 2017-10-27 2018-12-28 重庆金山医疗器械有限公司 Magnetic control capsule endoscopic diagnostic and examination system
CN111655116A (en) * 2017-10-30 2020-09-11 公益财团法人癌研究会 Image diagnosis support device, data collection method, image diagnosis support method, and image diagnosis support program
CN113470792A (en) * 2017-11-06 2021-10-01 科亚医疗科技股份有限公司 System, method, and medium for generating reports based on medical images of patients
CN107705852A (en) * 2017-12-06 2018-02-16 北京华信佳音医疗科技发展有限责任公司 Real-time the lesion intelligent identification Method and device of a kind of medical electronic endoscope
CN108461152A (en) * 2018-01-12 2018-08-28 平安科技(深圳)有限公司 Medical model training method, medical recognition methods, device, equipment and medium
CN108229463A (en) * 2018-02-07 2018-06-29 众安信息技术服务有限公司 Character recognition method based on image
CN108354578A (en) * 2018-03-14 2018-08-03 重庆金山医疗器械有限公司 A kind of capsule endoscope positioning system
US11504024B2 (en) 2018-03-30 2022-11-22 Vibrant Ltd. Gastrointestinal treatment system including a vibrating capsule, and method of use thereof
US10537720B2 (en) 2018-04-09 2020-01-21 Vibrant Ltd. Method of enhancing absorption of ingested medicaments for treatment of parkinsonism
US10543348B2 (en) 2018-04-09 2020-01-28 Vibrant Ltd. Method of enhancing absorption of ingested medicaments for treatment of an an ailment of the GI tract
US11638678B1 (en) 2018-04-09 2023-05-02 Vibrant Ltd. Vibrating capsule system and treatment method
US11510590B1 (en) 2018-05-07 2022-11-29 Vibrant Ltd. Methods and systems for treating gastrointestinal disorders
CN108877915A (en) * 2018-06-07 2018-11-23 合肥工业大学 The intelligent edge calculations system of minimally invasive video processing
CN109102491A (en) * 2018-06-28 2018-12-28 武汉大学人民医院(湖北省人民医院) A kind of gastroscope image automated collection systems and method
CN109102491B (en) * 2018-06-28 2021-12-28 武汉楚精灵医疗科技有限公司 Gastroscope image automatic acquisition system and method
CN109272483A (en) * 2018-08-01 2019-01-25 安翰光电技术(武汉)有限公司 Capsule endoscope diagosis and the system and control method of quality control
WO2020078252A1 (en) * 2018-10-16 2020-04-23 The Chinese University Of Hong Kong Method, apparatus and system for automatic diagnosis
US11468563B2 (en) 2018-10-31 2022-10-11 Tencent Technology (Shenzhen) Company Limited Colon polyp image processing method and apparatus, and system
US11748883B2 (en) 2018-10-31 2023-09-05 Tencent Technology (Shenzhen) Company Limited Colon polyp image processing method and apparatus, and system
CN109447973A (en) * 2018-10-31 2019-03-08 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus and system of polyp of colon image
CN109447985A (en) * 2018-11-16 2019-03-08 青岛美迪康数字工程有限公司 Colonoscopic images analysis method, device and readable storage medium storing program for executing
CN109146884A (en) * 2018-11-16 2019-01-04 青岛美迪康数字工程有限公司 Endoscopy monitoring method and device
CN109146884B (en) * 2018-11-16 2020-07-03 青岛美迪康数字工程有限公司 Endoscopic examination monitoring method and device
US10814113B2 (en) 2019-01-03 2020-10-27 Vibrant Ltd. Device and method for delivering an ingestible medicament into the gastrointestinal tract of a user
US12083303B2 (en) 2019-01-21 2024-09-10 Vibrant Ltd. Device and method for delivering a flowable ingestible medicament into the gastrointestinal tract of a user
US11020018B2 (en) 2019-01-21 2021-06-01 Vibrant Ltd. Device and method for delivering a flowable ingestible medicament into the gastrointestinal tract of a user
US11052018B2 (en) 2019-02-04 2021-07-06 Vibrant Ltd. Temperature activated vibrating capsule for gastrointestinal treatment, and a method of use thereof
CN111612027A (en) * 2019-02-26 2020-09-01 沛智生医科技股份有限公司 Cell classification method, system and medical analysis platform
CN110110750A (en) * 2019-03-29 2019-08-09 广州思德医疗科技有限公司 A kind of classification method and device of original image
CN110232413A (en) * 2019-05-31 2019-09-13 华北电力大学(保定) Insulator image, semantic based on GRU network describes method, system, device
CN110367913B (en) * 2019-07-29 2021-09-28 杭州电子科技大学 Wireless capsule endoscope image pylorus and ileocecal valve positioning method
CN110367913A (en) * 2019-07-29 2019-10-25 杭州电子科技大学 Wireless capsule endoscope image pylorus and ileocaecal sphineter localization method
US20210134442A1 (en) * 2019-11-05 2021-05-06 Infinitt Healthcare Co., Ltd. Medical image diagnosis assistance apparatus and method using plurality of medical image diagnosis algorithms for endoscopic images
US11742072B2 (en) * 2019-11-05 2023-08-29 Infinitt Healthcare Co., Ltd. Medical image diagnosis assistance apparatus and method using plurality of medical image diagnosis algorithms for endoscopic images
CN112784652A (en) * 2019-11-11 2021-05-11 中强光电股份有限公司 Image recognition method and device
CN112784652B (en) * 2019-11-11 2024-08-13 中强光电股份有限公司 Image recognition method and device
US11676017B2 (en) 2019-11-11 2023-06-13 Coretronic Corporation Image recognition method and device
CN111026799A (en) * 2019-12-06 2020-04-17 安翰科技(武汉)股份有限公司 Capsule endoscopy report text structuring method, apparatus and medium
CN111026799B (en) * 2019-12-06 2023-07-18 安翰科技(武汉)股份有限公司 Method, equipment and medium for structuring text of capsule endoscopy report
CN111275041A (en) * 2020-01-20 2020-06-12 腾讯科技(深圳)有限公司 Endoscope image display method and device, computer equipment and storage medium
CN111340094A (en) * 2020-02-21 2020-06-26 湘潭大学 Capsule endoscope image auxiliary classification system and classification method based on deep learning
CN111798408A (en) * 2020-05-18 2020-10-20 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Endoscope interference image detection and grading system and method
CN112200773A (en) * 2020-09-17 2021-01-08 苏州慧维智能医疗科技有限公司 Large intestine polyp detection method based on encoder and decoder of cavity convolution
CN112200250A (en) * 2020-10-14 2021-01-08 重庆金山医疗器械有限公司 Digestive tract segmentation identification method, device and equipment of capsule endoscope image
CN112837275A (en) * 2021-01-14 2021-05-25 长春大学 Capsule endoscope image organ classification method, device, equipment and storage medium
CN112837275B (en) * 2021-01-14 2023-10-24 长春大学 Capsule endoscope image organ classification method, device, equipment and storage medium
WO2022194126A1 (en) * 2021-03-19 2022-09-22 安翰科技(武汉)股份有限公司 Method for building image reading model based on capsule endoscope, device, and medium
US11494908B2 (en) 2021-03-22 2022-11-08 Shenzhen Keya Medical Technology Corporation Medical image analysis using navigation processing
CN112686899A (en) * 2021-03-22 2021-04-20 深圳科亚医疗科技有限公司 Medical image analysis method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN106934799B (en) 2019-09-03

Similar Documents

Publication Publication Date Title
CN106934799B (en) Capsule endoscope image aided diagnosis system and method
Du et al. Review on the applications of deep learning in the analysis of gastrointestinal endoscopy images
EP3876190B1 (en) Endoscopic image processing method and system and computer device
He et al. Hookworm detection in wireless capsule endoscopy images with deep learning
Pogorelov et al. Deep learning and hand-crafted feature based approaches for polyp detection in medical videos
Sharmay et al. HistoTransfer: understanding transfer learning for histopathology
CN110367913B (en) Wireless capsule endoscope image pylorus and ileocecal valve positioning method
CN108765392B (en) Digestive tract endoscope lesion detection and identification method based on sliding window
CN112686856A (en) Real-time enteroscopy polyp detection device based on deep learning
CN113496489A (en) Training method of endoscope image classification model, image classification method and device
CN111667453A (en) Gastrointestinal endoscope image anomaly detection method based on local feature and class mark embedded constraint dictionary learning
CN117274270B (en) Digestive endoscope real-time auxiliary system and method based on artificial intelligence
CN115082448B (en) Intestinal tract cleanliness scoring method and device and computer equipment
CN115564712B (en) Capsule endoscope video image redundant frame removing method based on twin network
CN113129287A (en) Automatic lesion mapping method for upper gastrointestinal endoscope image
CN114399465B (en) Benign and malignant ulcer identification method and system
CN110427994A (en) Digestive endoscope image processing method, device, storage medium, equipment and system
CN111462082A (en) Focus picture recognition device, method and equipment and readable storage medium
CN115311737A (en) Method for recognizing hand motion of non-aware stroke patient based on deep learning
Chatterjee et al. A survey on techniques used in medical imaging processing
CN111784669B (en) Multi-range detection method for capsule endoscopic images
CN117218127A (en) Ultrasonic endoscope auxiliary monitoring system and method
Garcia-Peraza-Herrera et al. Interpretable fully convolutional classification of intrapapillary capillary loops for real-time detection of early squamous neoplasia
CN115994999A (en) Goblet cell semantic segmentation method and system based on boundary gradient attention network
Yang et al. CL-TransFER: Collaborative learning based transformer for facial expression recognition with masked reconstruction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 430075 666 new high tech Avenue, East Lake New Technology Development Zone, Wuhan, Hubei

Applicant after: Anhan Science and Technology (Wuhan) Co., Ltd.

Address before: 430075 666 new high tech Avenue, East Lake New Technology Development Zone, Wuhan, Hubei

Applicant before: Ankon Photoelectric Technology (Wuhan) Co., Ltd.

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200731

Address after: Room b218, 2 / F, building 2, Zhongguancun innovation center, Xixia District, Yinchuan City, Ningxia Hui Autonomous Region, 750000

Patentee after: Yinchuan Anhan Internet hospital Co., Ltd

Address before: 430075 666 hi tech Avenue, East Lake New Technology Development Zone, Wuhan, Hubei

Patentee before: Anhan Technology (Wuhan) Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210114

Address after: 430000 No. 666 High-tech Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Patentee after: Anhan Technology (Wuhan) Co.,Ltd.

Address before: B218, 2nd floor, building 2, Zhongguancun innovation center, Xixia District, Yinchuan City, Ningxia Hui Autonomous Region

Patentee before: Yinchuan Anhan Internet hospital Co., Ltd