CN106934799B - Capsule endoscope image-assisted diagnosis system and method - Google Patents

Capsule endoscope image-assisted diagnosis system and method

Info

Publication number
CN106934799B
CN106934799B (application CN201710104172.7A)
Authority
CN
China
Prior art keywords
module
image
training
model
image sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710104172.7A
Other languages
Chinese (zh)
Other versions
CN106934799A (en)
Inventor
张行
张皓
袁文金
王新宏
段晓东
肖国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ankon Technologies Co Ltd
Original Assignee
Anhan Science and Technology (Wuhan) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhan Science and Technology (Wuhan) Co., Ltd.
Priority to CN201710104172.7A
Publication of CN106934799A
Application granted
Publication of CN106934799B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

The invention discloses a capsule endoscope image-assisted diagnosis system. Its data acquisition module obtains the capsule endoscope image data of a subject. An image position classification module uses a first convolutional neural network (CNN) model to classify the capsule endoscope images according to the digestive tract site at which they were captured, producing an image sequence for each site. An image sequence description module uses a second CNN model to extract image features from the image sequence of each site, yielding a feature vector sequence for each digestive tract site; the image sequence description module also uses a recurrent neural network (RNN) model to convert the image features in the feature vector sequence into descriptive text, from which an auxiliary diagnosis report is formed. The invention reduces the workload of physicians reviewing digestive tract images and improves their diagnostic efficiency.

Description

Capsule endoscope image-assisted diagnosis system and method
Technical field
The present invention relates to the field of medical devices, and in particular to a capsule endoscope image-assisted diagnosis system and method.
Background art
Digestive tract diseases such as gastric cancer, intestinal cancer, acute and chronic gastritis, and ulcers are common and frequently occurring diseases that pose a serious threat to human health. The 2015 national tumor registry survey showed that digestive tract cancers accounted for 43% of all cancer incidence. A traditional fiber-optic endoscope must be inserted into the patient's body for observation, which is inconvenient and painful for the patient. A capsule endoscope can examine the entire digestive tract in a painless, non-invasive manner and is a revolutionary technical breakthrough. However, a capsule endoscope collects roughly 50,000 images during an examination, and this large volume of image data makes the physician's reading work arduous and time-consuming.
Deep-learning-based image recognition and image/video description have recently become very active research areas both in China and abroad. With the breakthrough progress of deep learning methods in image classification and localization (the ImageNet dataset) and in image semantic understanding (the COCO dataset), deep learning techniques are increasingly applied to computer-aided medical diagnosis. Deep learning has been applied to the assisted detection of skin cancer, brain tumors, lung cancer, and so on, but research applying deep learning to image-assisted diagnosis of the digestive tract is still rare.
The Chinese patent with publication number CN103984957A discloses an automatic early-warning system for suspicious lesion regions in capsule endoscope images. The system adaptively enhances the image with an image enhancement module, then detects the texture features of flat lesions with a texture feature extraction module, and finally classifies them with a classification and warning module, realizing detection and warning of flat lesions of the small intestine.
The above scheme can only give an early warning for suspicious lesion regions; it cannot classify and identify the lesions, its effect is limited, and it cannot provide the location of the disease, which is unfavorable for the physician's accurate judgment of the lesion.
Summary of the invention
The purpose of the present invention is to provide a capsule endoscope image-assisted diagnosis system and method. The invention uses machine learning techniques to first classify digestive tract images by site, obtaining site-classified digestive tract data, and is able to detect images that contain suspicious lesions. It further generates a diagnosis report from the detected suspicious lesion images and presents these images and the report to the physician, supporting further diagnostic analysis of the suspicious lesions in the images (such as bleeding, polyps, and ulcers). This greatly reduces the workload of physicians reviewing digestive tract images and improves their diagnostic efficiency.
To achieve this, the capsule endoscope image-assisted diagnosis system designed by the present invention comprises a data acquisition module, an image position classification module, and an image sequence description module, wherein the signal output of the data acquisition module is connected to the signal input of the image position classification module, and the signal output of the image position classification module is connected to the signal input of the image sequence description module.
The data acquisition module obtains the capsule endoscope image data of the subject; the image position classification module divides the capsule endoscope image data into different image sequences according to the digestive tract site at which the images were captured; the image sequence description module identifies lesions in the image sequence of each digestive tract site and generates descriptive text for the image sequence, forming a diagnosis report.
A capsule endoscope image-assisted diagnosis method using the above system comprises the following steps:
Step 1: the data acquisition module acquires the capsule endoscope image data of the subject;
Step 2: the data acquisition module inputs the acquired capsule endoscope image data into the first convolutional neural network (CNN) model of the image position classification module, which classifies the images according to the site at which they were captured, yielding an image sequence for each digestive tract site;
Step 3: the image position classification module sends the image sequences of the different digestive tract sites to the second convolutional neural network (CNN) model of the image sequence description module, which produces the feature vector sequence of each site's image sequence; each image in a site's image sequence corresponds to one feature vector, and all the feature vectors form the feature vector sequence;
Step 4: the feature vector sequence is input into the recurrent neural network (RNN) model of the image sequence description module to obtain the descriptive text of each site's image sequence, and the image sequence description module generates an auxiliary diagnosis report from the descriptive text of the image sequences of the different digestive tract sites (a minimal code sketch of this pipeline follows these steps).
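As an illustration of how these four steps fit together, here is a minimal Python sketch of the pipeline, assuming PyTorch-style models. All names (assisted_reading, site_cnn, feature_cnn, report_rnn, report_rnn.generate, site_names) are hypothetical placeholders introduced for this sketch; the patent only specifies that a first CNN classifies frames by digestive tract site, a second CNN turns each frame into a feature vector, and an RNN turns the feature vector sequence into descriptive text.

```python
# Hypothetical end-to-end sketch of the claimed pipeline (not the patented code).
import torch

def assisted_reading(images, site_cnn, feature_cnn, report_rnn, site_names):
    """images: tensor [N, 3, H, W] of capsule endoscope frames."""
    with torch.no_grad():
        # Step 2: the first CNN classifies every frame by digestive tract site.
        site_ids = site_cnn(images).argmax(dim=1)              # [N]

        report = {}
        for site in site_ids.unique():
            frames = images[site_ids == site]                  # image sequence of one site
            # Step 3: the second CNN maps each frame to one feature vector.
            features = feature_cnn(frames)                     # [T, D] feature vector sequence
            # Step 4: the RNN decodes the feature vector sequence into descriptive text.
            report[site_names[site.item()]] = report_rnn.generate(features)
    return report                                              # auxiliary diagnosis report per site
```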
The present invention uses deep learning models that, through training, automatically learn how to classify images by the digestive tract site at which they were captured, identify lesions in the images, and generate descriptive text, thereby helping the physician handle a large number of digestive tract images and ultimately assisting the physician in making accurate judgments and effective decisions. The invention can substantially reduce the physician's workload and working pressure and improve working efficiency.
Detailed description of the invention
Fig. 1 is a structural block diagram of the invention;
Fig. 2 is a structural block diagram of the invention with the model training module added;
Fig. 3 is the network structure of the image sequence description method in the invention;
Fig. 4 is the structure diagram of the LSTM network in the invention;
In the figures: 1 - data acquisition module, 2 - image position classification module, 3 - image sequence description module, 4 - human-computer interaction module, 5 - model training module.
Specific embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments:
As shown in Fig. 1 and Fig. 2, the capsule endoscope image-assisted diagnosis system includes a data acquisition module 1, an image position classification module 2, and an image sequence description module 3, wherein the signal output of the data acquisition module 1 is connected to the signal input of the image position classification module 2, and the signal output of the image position classification module 2 is connected to the signal input of the image sequence description module 3.
The data acquisition module 1 obtains the capsule endoscope image data of the subject; the image position classification module 2 divides the capsule endoscope image data into different image sequences according to the digestive tract site at which the images were captured; the image sequence description module 3 identifies lesions in the image sequence of each digestive tract site and generates descriptive text for the image sequence, forming a diagnosis report.
In the above technical solution, the image position classification module 2 uses the first convolutional neural network (CNN) model to classify the capsule endoscope images according to the site at which they were captured, dividing them into image sequences of sites such as the esophagus, cardia, gastric fundus, gastric body, gastric angle, gastric antrum, pylorus, duodenum, jejunum, and ileum.
In the above technical solution, the image sequence description module 3 uses the second convolutional neural network (CNN) model to extract image features from the image sequence of each site, obtaining the feature vector sequence of each digestive tract site's image sequence; the image sequence description module 3 also uses a recurrent neural network (RNN) model to convert the image features in the feature vector sequence into descriptive text, forming an auxiliary diagnosis report.
In the above technical solution, the system further includes a human-computer interaction module 4; the signal output of the image sequence description module 3 is connected to the signal input of the human-computer interaction module 4.
The human-computer interaction module 4 presents the generated diagnosis report to the physician, so that the physician can carry out diagnostic reasoning and analysis based on the machine recognition results.
In the above technical solution, the system further includes a model training module 5; the first data communication terminal of the model training module 5 is connected to the training data communication terminal of the image position classification module 2, and the second communication terminal of the model training module 5 is connected to the training data communication terminal of the image sequence description module 3.
The model training module 5 uses stochastic gradient descent to train the first convolutional neural network (CNN) model used in the image position classification module 2; the trained classification model can classify the input capsule endoscope images according to the site at which they were captured.
The model training module 5 also uses stochastic gradient descent to train the second convolutional neural network (CNN) model used in the image sequence description module 3; the trained model can extract the features of each digestive tract site's image sequence and obtain the feature vector sequence.
The model training module 5 also uses stochastic gradient descent to train the recurrent neural network (RNN) model used in the image sequence description module 3; the trained RNN model can obtain the descriptive text of each digestive tract site's image sequence from its feature vector sequence.
In the above technical solution, the recurrent neural network (RNN) model uses a long short-term memory (LSTM) network, which can learn long-term dependencies and map a variable-length input to a variable-length output.
A capsule endoscope image-assisted diagnosis method using the above system comprises the following steps:
Step 1: the data acquisition module 1 acquires the capsule endoscope image data of the subject;
Step 2: the data acquisition module 1 inputs the acquired capsule endoscope image data into the first convolutional neural network (CNN) model of the image position classification module 2, which classifies the images according to the site at which they were captured, yielding an image sequence for each digestive tract site;
Step 3: the image position classification module 2 sends the image sequences of the different digestive tract sites to the second convolutional neural network (CNN) model of the image sequence description module 3, which produces the feature vector sequence of each site's image sequence (by removing the last fully connected classification layer of a conventional neural network, the vector characterizing the image features is obtained directly); each image in a site's image sequence corresponds to one feature vector, and all the feature vectors form the feature vector sequence;
Step 4: the feature vector sequence is input into the recurrent neural network (RNN) model of the image sequence description module 3 to obtain the descriptive text of each site's image sequence (mainly the lesion information in the images), and the image sequence description module 3 generates an auxiliary diagnosis report from the descriptive text of the image sequences of the different digestive tract sites.
In step 4 of the above technical solution, the recurrent neural network (RNN) model of the image sequence description module 3 is composed of two layers of LSTM (Long Short-Term Memory, a type of recurrent neural network): the output of the convolutional neural network (CNN) model of the image sequence description module 3 is used as the input of the first LSTM layer, the hidden layer of the first LSTM layer is used as the input of the second LSTM layer, and the LSTM network serves as the decoder of the feature vector sequence, generating the corresponding auxiliary diagnosis descriptive text.
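Below is a minimal PyTorch sketch of a two-layer LSTM describer in the spirit of this arrangement. The dimensions, vocabulary size, and teacher-forcing interface are assumptions, and where the patent states that the hidden layer of the first LSTM is the input of the second, this sketch simplifies by feeding the second layer the first layer's final hidden state as a sequence summary concatenated with the word embeddings.

```python
import torch
import torch.nn as nn

class TwoLayerLSTMDescriber(nn.Module):
    """Sketch of a two-layer LSTM decoder: the CNN feature sequence feeds the
    first LSTM layer, whose states feed the second LSTM layer, and a softmax
    over the vocabulary produces the descriptive text word by word."""

    def __init__(self, feat_dim=2048, hidden_dim=512, embed_dim=512, vocab_size=3000):
        super().__init__()
        self.lstm1 = nn.LSTM(feat_dim, hidden_dim, batch_first=True)            # first layer
        self.lstm2 = nn.LSTM(hidden_dim + embed_dim, hidden_dim, batch_first=True)  # second layer
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.classifier = nn.Linear(hidden_dim, vocab_size)                     # softmax logits

    def forward(self, features, words):
        # features: [B, T, feat_dim] per-frame CNN feature vectors (encoding input)
        # words:    [B, M] token ids of the target descriptive text (teacher forcing)
        h1_seq, _ = self.lstm1(features)                     # first-layer hidden states [B, T, H]
        summary = h1_seq[:, -1:, :]                          # last hidden state as sequence summary
        emb = self.embed(words)                              # [B, M, embed_dim]
        dec_in = torch.cat([emb, summary.expand(-1, emb.size(1), -1)], dim=-1)
        h2_seq, _ = self.lstm2(dec_in)                       # second-layer hidden states [B, M, H]
        return self.classifier(h2_seq)                       # logits over the vocabulary
```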
In step 2 of the above technical solution, the model training module 5 trains the first convolutional neural network (CNN) model of the image position classification module 2. The training process includes two stages, pre-training and model optimization. In the pre-training stage, stochastic gradient descent is used to train the first CNN model of the image position classification module 2 on the ImageNet dataset, yielding the pre-trained first CNN model. In the optimization stage, manually labeled digestive tract image samples are used to tune the parameters of the pre-trained first CNN model (that is, the network is trained again with one's own data; in English, fine-tuning).
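A minimal sketch of this two-stage scheme, assuming PyTorch/torchvision: the ImageNet pre-training stage is replaced here by loading ImageNet-pretrained weights, the classification head is adapted to the digestive tract sites, and the network is fine-tuned with stochastic gradient descent on manually labeled samples. The ten-class head, the learning rate, and the data loader interface are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Stage 1 (pre-training) is approximated by loading ImageNet-pretrained weights.
model = models.resnet50(pretrained=True)

# Adapt the classifier head to the digestive tract sites (10 classes assumed:
# esophagus, cardia, fundus, body, angle, antrum, pylorus, duodenum, jejunum, ileum).
model.fc = nn.Linear(model.fc.in_features, 10)

# Stage 2 (model optimization / fine-tuning) with stochastic gradient descent.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=5):
    model.train()
    for _ in range(epochs):
        for images, site_labels in loader:      # manually labeled digestive tract samples
            optimizer.zero_grad()
            loss = criterion(model(images), site_labels)
            loss.backward()
            optimizer.step()
```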
In step 3 of the above technical solution, the model training module 5 trains the second convolutional neural network (CNN) model of the image sequence description module 3. The training process includes two stages, pre-training and model optimization. In the pre-training stage, stochastic gradient descent is used to train the second CNN model of the image sequence description module 3 on the ImageNet dataset, yielding the pre-trained second CNN model. In the optimization stage, manually labeled image samples of different types of digestive tract lesions (such as bleeding, polyps, and ulcers) are used to tune the parameters of the pre-trained second CNN model.
In step 3 of the above technical solution, the model training module 5 inputs the feature vector sequence together with manually labeled samples of the correspondence between image features and image sequence descriptive text into the recurrent neural network (RNN) model for training, obtaining the descriptive text corresponding to the input image feature sequence.
In the above technical solution, the recurrent neural network (RNN) model uses an LSTM network because the diagnosis of a digestive tract site must integrate all the image information of that site. To obtain a diagnosis report, a model capable of long-term memory is needed so that all the image information can be considered comprehensively. The LSTM model can remember information over long time spans and map a variable-length input to a variable-length output; that is, the LSTM network handles image sequence information well and obtains a comprehensive description of the image sequence.
In the above technical solution, the first and second convolutional neural network (CNN) models may be AlexNet, VGG, GoogLeNet, ResNet, and so on. In this embodiment the deep residual network ResNet (Deep Residual Network) is preferred: compared with the other models it has more network layers and a lower recognition error rate, and therefore achieves better classification and recognition results.
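The feature extraction in step 3 (removing the last fully connected classification layer so that the backbone outputs one feature vector per image) can be illustrated as follows, assuming torchvision's ResNet-50; the 2048-dimensional output is a property of that particular backbone and is not fixed by the patent.

```python
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet50(pretrained=True)
backbone.fc = nn.Identity()          # drop the last fully connected classification layer
backbone.eval()

def extract_feature_sequence(frames):
    """frames: [T, 3, H, W] images of one digestive tract site.
    Returns the feature vector sequence [T, 2048], one vector per image."""
    with torch.no_grad():
        return backbone(frames)
```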
Fig. 3 shows the network structure of an image sequence description method. The image sequence is taken as the input of the convolutional neural network (CNN) model, which generates the image feature vectors; an LSTM network is then used as the decoder of the image feature vectors to generate the corresponding auxiliary diagnosis descriptive text. Because the LSTM model can learn dependencies within long feature sequences, it is used to learn the temporal relationships of the image sequence, and the learned language model generates the diagnostic information for the digestive tract image sequence.
Fig. 4 shows an LSTM network structure. The LSTM is divided into an encoding process and a decoding process.
In the above technical solution, the LSTM network is divided into an encoding process and a decoding process:
The encoding process of the LSTM network takes the image feature x_t and the hidden state h_{t-1} as input, and the LSTM unit of the LSTM network computes the memory cell state c_t, where the subscript t denotes the t-th recurrence step. The complete LSTM encoding computation is as follows:

$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$$
$$g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t$$
$$h_t = o_t \odot \tanh(c_t)$$

where \tanh denotes the hyperbolic tangent function, \sigma denotes the sigmoid function, and \odot denotes element-wise multiplication; i, f, o, g denote the four internal gates of the LSTM unit; x_t and h_{t-1} denote the image feature and the hidden state, c_t denotes the memory cell, and the subscripts t and t-1 denote the t-th and (t-1)-th recurrence steps; W_{xi}, W_{xf}, W_{xo}, W_{xg} denote the weights applied to the image feature x; W_{hi}, W_{hf}, W_{ho}, W_{hg} denote the weights applied to the hidden state h; b_i, b_f, b_o, b_g denote the bias values. The weights of the image feature x, the weights of the hidden state h, and the bias values are all determined automatically by the machine through training.
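The gate equations above translate directly into code. The following NumPy sketch performs one encoding recurrence exactly as written (input, forget, and output gates, candidate memory g, memory cell c, hidden state h); the weight and bias containers are illustrative, since in practice all of these parameters are learned by training as stated above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM encoding step.
    x_t: image feature (d_x,); h_prev, c_prev: previous hidden/memory state (d_h,);
    W: dict of weight matrices W['xi'], W['hi'], ...; b: dict of bias vectors."""
    i_t = sigmoid(W['xi'] @ x_t + W['hi'] @ h_prev + b['i'])   # input gate
    f_t = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + b['f'])   # forget gate
    o_t = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + b['o'])   # output gate
    g_t = np.tanh(W['xg'] @ x_t + W['hg'] @ h_prev + b['g'])   # candidate memory
    c_t = f_t * c_prev + i_t * g_t                             # memory cell update
    h_t = o_t * np.tanh(c_t)                                   # new hidden state
    return h_t, c_t
```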
The auxiliary diagnostic information generated by the LSTM decoding process is evaluated through the conditional probability of the output word sequence given the input image sequence:

$$p(y_1, \ldots, y_m \mid x_1, \ldots, x_n) = \prod_{t=1}^{m} p(y_t \mid h_{n+t})$$

where (x_1, ..., x_n) is the input image sequence of a digestive tract site and n is the number of image frames; (y_1, ..., y_m) is the output word sequence and m is the number of words; p(y_t | h_{n+t}) is the probability computed by the softmax function over the vocabulary, and h_{n+t} is computed from h_{n+t-1} and y_{t-1} according to the encoding formulas; the subscript t denotes the t-th recurrence, n denotes the n-th image frame, and p denotes probability.
The training process of the LSTM network seeks the maximum likelihood estimate of the decoding-stage probability p(y_1, ..., y_m | x_1, ..., x_n); the maximum likelihood formula is as follows:

$$\theta^{*} = \arg\max_{\theta} \sum_{t=1}^{m} \log p(y_t \mid h_{n+t-1}, y_{t-1}; \theta)$$

where \arg\max_{\theta} denotes the operator that finds the value of \theta maximizing the expression; \theta denotes the parameters of the LSTM model to be obtained by training, and \theta^{*} denotes their maximum likelihood estimate; log denotes the natural logarithm; p(y_t | h_{n+t-1}, y_{t-1}) is the probability computed by the softmax function over the vocabulary; the subscript t denotes the t-th recurrence and n denotes the n-th image frame.
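In implementation terms, maximizing this log-likelihood is equivalent to minimizing the summed cross-entropy of the softmax outputs against the target words. A brief PyTorch sketch of that objective is given below; the tensor names and shapes are assumptions for illustration.

```python
import torch.nn.functional as F

def sequence_nll(logits, target_words):
    """Negative log-likelihood of the word sequence; minimizing it performs
    the maximum likelihood estimation of the LSTM parameters theta.
    logits: [B, M, vocab_size] decoder outputs; target_words: [B, M] token ids."""
    B, M, V = logits.shape
    return F.cross_entropy(logits.reshape(B * M, V), target_words.reshape(B * M))
```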
The contents not described in detail in this specification belong to the prior art well known to those skilled in the art.

Claims (7)

1. A capsule endoscope image-assisted diagnosis system, characterized in that it comprises a data acquisition module (1), an image position classification module (2), and an image sequence description module (3), wherein the signal output of the data acquisition module (1) is connected to the signal input of the image position classification module (2), and the signal output of the image position classification module (2) is connected to the signal input of the image sequence description module (3);
the data acquisition module (1) obtains the capsule endoscope image data of the subject; the image position classification module (2) divides the capsule endoscope image data into different image sequences according to the digestive tract site at which the images were captured; the image sequence description module (3) identifies lesions in the image sequence of each digestive tract site and generates descriptive text for the image sequence, forming a diagnosis report;
the image sequence description module (3) uses a second convolutional neural network (CNN) model to extract image features from the image sequence of each site, obtaining the feature vector sequence of each digestive tract site's image sequence; the image sequence description module (3) also uses a recurrent neural network (RNN) model to convert the image features in the feature vector sequence into descriptive text, forming an auxiliary diagnosis report;
the recurrent neural network (RNN) model of the image sequence description module (3) is composed of two LSTM layers: the output of the convolutional neural network (CNN) model of the image sequence description module (3) is used as the input of the first LSTM layer, the hidden layer of the first LSTM layer is used as the input of the second LSTM layer, and the LSTM network serves as the decoder of the feature vector sequence to generate the corresponding auxiliary diagnosis descriptive text.
2. The capsule endoscope image-assisted diagnosis system according to claim 1, characterized in that the image position classification module (2) uses a first convolutional neural network (CNN) model to classify the capsule endoscope images according to the site at which they were captured.
3. The capsule endoscope image-assisted diagnosis system according to claim 1, characterized in that it further comprises a human-computer interaction module (4), and the signal output of the image sequence description module (3) is connected to the signal input of the human-computer interaction module (4);
the human-computer interaction module (4) presents the generated diagnosis report to the physician, so that the physician can carry out diagnostic reasoning and analysis based on the machine recognition results.
4. The capsule endoscope image-assisted diagnosis system according to claim 1, characterized in that it further comprises a model training module (5), the first data communication terminal of the model training module (5) is connected to the training data communication terminal of the image position classification module (2), and the second communication terminal of the model training module (5) is connected to the training data communication terminal of the image sequence description module (3);
the model training module (5) uses stochastic gradient descent to train the first convolutional neural network (CNN) model used in the image position classification module (2), and the trained classification model can classify the input capsule endoscope images according to the site at which they were captured;
the model training module (5) also uses stochastic gradient descent to train the second convolutional neural network (CNN) model used in the image sequence description module (3), and the trained model can extract the features of each digestive tract site's image sequence and obtain the feature vector sequence;
the model training module (5) also uses stochastic gradient descent to train the recurrent neural network (RNN) model used in the image sequence description module (3), and the trained RNN model can obtain the descriptive text of each digestive tract site's image sequence from its feature vector sequence.
5. A capsule endoscope image-assisted diagnosis method using the system according to claim 1, characterized in that it comprises the following steps:
Step 1: the data acquisition module (1) acquires the capsule endoscope image data of the subject;
Step 2: the data acquisition module (1) inputs the acquired capsule endoscope image data into the first convolutional neural network (CNN) model of the image position classification module (2), which classifies the images according to the site at which they were captured, yielding an image sequence for each digestive tract site;
Step 3: the image position classification module (2) sends the image sequences of the different digestive tract sites to the second convolutional neural network (CNN) model of the image sequence description module (3), which produces the feature vector sequence of each site's image sequence; each image in a site's image sequence corresponds to one feature vector, and all the feature vectors form the feature vector sequence;
Step 4: the feature vector sequence is input into the recurrent neural network (RNN) model of the image sequence description module (3) to obtain the descriptive text of each site's image sequence, and the image sequence description module (3) generates an auxiliary diagnosis report from the descriptive text of the image sequences of the different digestive tract sites.
6. The capsule endoscope image-assisted diagnosis method according to claim 5, characterized in that in step 4 the recurrent neural network (RNN) model of the image sequence description module (3) is composed of two LSTM layers: the output of the convolutional neural network (CNN) model of the image sequence description module (3) is used as the input of the first LSTM layer, the hidden layer of the first LSTM layer is used as the input of the second LSTM layer, and the LSTM network serves as the decoder of the feature vector sequence to generate the corresponding auxiliary diagnosis descriptive text.
7. The capsule endoscope image-assisted diagnosis method according to claim 5, characterized in that in step 2 the model training module (5) trains the first convolutional neural network (CNN) model of the image position classification module (2), the training process comprising a pre-training stage and a model optimization stage; in the pre-training stage, stochastic gradient descent is used to train the first convolutional neural network (CNN) model of the image position classification module (2) on the ImageNet dataset, yielding the pre-trained first CNN model; in the optimization stage, manually labeled digestive tract image samples are used to tune the parameters of the pre-trained first CNN model;
in step 3, the model training module (5) trains the second convolutional neural network (CNN) model of the image sequence description module (3), the training process comprising a pre-training stage and a model optimization stage; in the pre-training stage, stochastic gradient descent is used to train the second convolutional neural network (CNN) model of the image sequence description module (3) on the ImageNet dataset, yielding the pre-trained second CNN model; in the optimization stage, manually labeled image samples of different types of digestive tract lesions are used to tune the parameters of the pre-trained second CNN model;
in step 3, the model training module (5) inputs the feature vector sequence and manually labeled samples of the correspondence between image features and image sequence descriptive text into the recurrent neural network (RNN) model for training, obtaining the descriptive text corresponding to the input image feature sequence.
CN201710104172.7A 2017-02-24 2017-02-24 Capsule endoscope image-assisted diagnosis system and method Active CN106934799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710104172.7A CN106934799B (en) 2017-02-24 2017-02-24 Capsule endoscope image-assisted diagnosis system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710104172.7A CN106934799B (en) 2017-02-24 2017-02-24 Capsule endoscope image-assisted diagnosis system and method

Publications (2)

Publication Number Publication Date
CN106934799A CN106934799A (en) 2017-07-07
CN106934799B 2019-09-03

Family

ID=59423088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710104172.7A Active CN106934799B (en) 2017-02-24 2017-02-24 Capsule endoscope image-assisted diagnosis system and method

Country Status (1)

Country Link
CN (1) CN106934799B (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2554354B (en) 2016-09-21 2021-06-02 Vibrant Ltd Systems for adaptive treatment of disorders in the gastrointestinal tract
US10905378B1 (en) 2017-01-30 2021-02-02 Vibrant Ltd Method for treating gastroparesis using a vibrating ingestible capsule
US10888277B1 (en) 2017-01-30 2021-01-12 Vibrant Ltd Method for treating diarrhea and reducing Bristol stool scores using a vibrating ingestible capsule
CN107516075B (en) 2017-08-03 2020-10-09 安徽华米智能科技有限公司 Electrocardiosignal detection method and device and electronic equipment
CN107730489A (en) * 2017-10-09 2018-02-23 杭州电子科技大学 Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method
CN107886503A (en) * 2017-10-27 2018-04-06 重庆金山医疗器械有限公司 A kind of alimentary canal anatomical position recognition methods and device
JP6657480B2 (en) * 2017-10-30 2020-03-04 公益財団法人がん研究会 Image diagnosis support apparatus, operation method of image diagnosis support apparatus, and image diagnosis support program
CN107705852A (en) * 2017-12-06 2018-02-16 北京华信佳音医疗科技发展有限责任公司 Real-time the lesion intelligent identification Method and device of a kind of medical electronic endoscope
CN108461152A (en) * 2018-01-12 2018-08-28 平安科技(深圳)有限公司 Medical model training method, medical recognition methods, device, equipment and medium
CN108229463A (en) * 2018-02-07 2018-06-29 众安信息技术服务有限公司 Character recognition method based on image
CN108354578B (en) * 2018-03-14 2020-10-30 重庆金山医疗器械有限公司 Capsule endoscope positioning system
US11504024B2 (en) 2018-03-30 2022-11-22 Vibrant Ltd. Gastrointestinal treatment system including a vibrating capsule, and method of use thereof
US10537720B2 (en) 2018-04-09 2020-01-21 Vibrant Ltd. Method of enhancing absorption of ingested medicaments for treatment of parkinsonism
US11638678B1 (en) 2018-04-09 2023-05-02 Vibrant Ltd. Vibrating capsule system and treatment method
US11510590B1 (en) 2018-05-07 2022-11-29 Vibrant Ltd. Methods and systems for treating gastrointestinal disorders
CN108877915A (en) * 2018-06-07 2018-11-23 合肥工业大学 The intelligent edge calculations system of minimally invasive video processing
CN109102491B (en) * 2018-06-28 2021-12-28 武汉楚精灵医疗科技有限公司 Gastroscope image automatic acquisition system and method
CN109272483B (en) * 2018-08-01 2021-03-30 安翰科技(武汉)股份有限公司 Capsule endoscopy and quality control system and control method
CN113302649A (en) * 2018-10-16 2021-08-24 香港中文大学 Method, device and system for automatic diagnosis
CN109447973B (en) 2018-10-31 2021-11-26 腾讯医疗健康(深圳)有限公司 Method, device and system for processing colon polyp image
CN109447985B (en) * 2018-11-16 2020-09-11 青岛美迪康数字工程有限公司 Colonoscope image analysis method and device and readable storage medium
CN109146884B (en) * 2018-11-16 2020-07-03 青岛美迪康数字工程有限公司 Endoscopic examination monitoring method and device
BR112021012849A2 (en) 2019-01-03 2021-09-21 Vibrant Ltd. DEVICE AND METHOD FOR DELIVERING AN INGESBLE DRUG IN THE GASTROINTESTINAL TRACT OF A USER
GB201900780D0 (en) 2019-01-21 2019-03-06 Vibrant Ltd Device and method for delivering a flowable ingestible medicament into the gastrointestinal tract of a user
GB201901470D0 (en) 2019-02-04 2019-03-27 Vibrant Ltd Vibrating capsule for gastrointestinal treatment, and method of use thereof
TW202032574A (en) * 2019-02-26 2020-09-01 沛智生醫科技股份有限公司 Method and system for classifying cells and medical analysis platform
CN110110750B (en) * 2019-03-29 2021-03-05 广州思德医疗科技有限公司 Original picture classification method and device
CN110232413A (en) * 2019-05-31 2019-09-13 华北电力大学(保定) Insulator image, semantic based on GRU network describes method, system, device
CN110367913B (en) * 2019-07-29 2021-09-28 杭州电子科技大学 Wireless capsule endoscope image pylorus and ileocecal valve positioning method
KR102360615B1 (en) * 2019-11-05 2022-02-09 주식회사 인피니트헬스케어 Medical image diagnosis assistance apparatus and method using a plurality of medical image diagnosis algorithm for endoscope images
CN112784652A (en) * 2019-11-11 2021-05-11 中强光电股份有限公司 Image recognition method and device
CN111026799B (en) * 2019-12-06 2023-07-18 安翰科技(武汉)股份有限公司 Method, equipment and medium for structuring text of capsule endoscopy report
CN111275041B (en) * 2020-01-20 2022-12-13 腾讯科技(深圳)有限公司 Endoscope image display method and device, computer equipment and storage medium
CN111340094A (en) * 2020-02-21 2020-06-26 湘潭大学 Capsule endoscope image auxiliary classification system and classification method based on deep learning
CN111798408B (en) * 2020-05-18 2023-07-21 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Endoscope interference image detection and classification system and method
CN112200773A (en) * 2020-09-17 2021-01-08 苏州慧维智能医疗科技有限公司 Large intestine polyp detection method based on encoder and decoder of cavity convolution
CN112200250A (en) * 2020-10-14 2021-01-08 重庆金山医疗器械有限公司 Digestive tract segmentation identification method, device and equipment of capsule endoscope image
CN112837275B (en) * 2021-01-14 2023-10-24 长春大学 Capsule endoscope image organ classification method, device, equipment and storage medium
CN113052956B (en) * 2021-03-19 2023-03-10 安翰科技(武汉)股份有限公司 Method, device and medium for constructing film reading model based on capsule endoscope
CN112686899B (en) * 2021-03-22 2021-06-18 深圳科亚医疗科技有限公司 Medical image analysis method and apparatus, computer device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101584571A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Capsule endoscopy auxiliary film reading method
CN102722735A (en) * 2012-05-24 2012-10-10 西南交通大学 Endoscopic image lesion detection method based on fusion of global and local features
CN105979847A (en) * 2014-02-07 2016-09-28 国立大学法人广岛大学 Endoscopic image diagnosis support system
CN106097335A (en) * 2016-06-08 2016-11-09 安翰光电技术(武汉)有限公司 Digestive tract focus image identification system and recognition methods

Also Published As

Publication number Publication date
CN106934799A (en) 2017-07-07

Similar Documents

Publication Publication Date Title
CN106934799B (en) Capsule endoscope image-assisted diagnosis system and method
CN107730489A (en) Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method
He et al. Hookworm detection in wireless capsule endoscopy images with deep learning
CN111655116A (en) Image diagnosis support device, data collection method, image diagnosis support method, and image diagnosis support program
Pogorelov et al. Deep learning and hand-crafted feature based approaches for polyp detection in medical videos
CN110367913B (en) Wireless capsule endoscope image pylorus and ileocecal valve positioning method
CN107292347A (en) A kind of capsule endoscope image-recognizing method
CN113496489B (en) Training method of endoscope image classification model, image classification method and device
CN107886503A (en) A kind of alimentary canal anatomical position recognition methods and device
CN109102491A (en) A kind of gastroscope image automated collection systems and method
CN106682616A (en) Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning
CN110600122A (en) Digestive tract image processing method and device and medical system
CN109102899A (en) Chinese medicine intelligent assistance system and method based on machine learning and big data
CN111341437B (en) Digestive tract disease judgment auxiliary system based on tongue image
CN109166619A (en) Chinese medicine intelligent diagnostics auxiliary system and method based on neural network algorithm
CN109635871A (en) A kind of capsule endoscope image classification method based on multi-feature fusion
CN111340094A (en) Capsule endoscope image auxiliary classification system and classification method based on deep learning
CN108877923A (en) A method of the tongue fur based on deep learning generates prescriptions of traditional Chinese medicine
CN110321827A (en) A kind of pain level appraisal procedure based on face pain expression video
Sun et al. A novel gastric ulcer differentiation system using convolutional neural networks
Yang et al. DRR-Net: A dense-connected residual recurrent convolutional network for surgical instrument segmentation from endoscopic images
CN115170385A (en) Method and system for coloring black-and-white mode video of laser scalpel operation
CN111462082A (en) Focus picture recognition device, method and equipment and readable storage medium
CN113177940A (en) Gastroscope video part identification network structure based on Transformer
CN112419246B (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 430075 666 new high tech Avenue, East Lake New Technology Development Zone, Wuhan, Hubei

Applicant after: Anhan Science and Technology (Wuhan) Co., Ltd.

Address before: 430075 666 new high tech Avenue, East Lake New Technology Development Zone, Wuhan, Hubei

Applicant before: Ankon Photoelectric Technology (Wuhan) Co., Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200731

Address after: Room b218, 2 / F, building 2, Zhongguancun innovation center, Xixia District, Yinchuan City, Ningxia Hui Autonomous Region, 750000

Patentee after: Yinchuan Anhan Internet hospital Co., Ltd

Address before: 430075 666 hi tech Avenue, East Lake New Technology Development Zone, Wuhan, Hubei

Patentee before: Anhan Technology (Wuhan) Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20210114

Address after: 430000 No. 666 High-tech Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Patentee after: Anhan Technology (Wuhan) Co.,Ltd.

Address before: B218, 2nd floor, building 2, Zhongguancun innovation center, Xixia District, Yinchuan City, Ningxia Hui Autonomous Region

Patentee before: Yinchuan Anhan Internet hospital Co., Ltd