CN109102491A - Gastroscope image automatic acquisition system and method - Google Patents
Gastroscope image automatic acquisition system and method
- Publication number
- CN109102491A (application number CN201810690051.XA)
- Authority
- CN
- China
- Legal status: Granted (the status listed is an assumption by Google, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/273—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
- A61B1/2736—Gastroscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30092—Stomach; Gastric
Abstract
The present invention provides a gastroscope image automatic acquisition system comprising: a video recognition module with a convolutional neural network (CNN) model and a long short-term memory (LSTM) network model, both trained with the back-propagation algorithm, in which the CNN model classifies each preprocessed image by anatomical site and lesion feature and the LSTM model fits the classification results over the image sequence to obtain a recognition result; an image display module, which presents the recognition results of the video recognition module graphically and textually; and a result output module, which records each recognition result, ranks the images according to the recognition results of the video recognition module, and outputs the top-ranked image for each site. By combining a CNN model and an LSTM model to filter the acquired gastroscope video in real time, the invention avoids missing target features in the images and improves feature classification.
Description
Technical field
The invention belongs to the field of medical endoscope image recognition, and in particular relates to a gastroscope image automatic acquisition system and method.
Background technique
As deep learning algorithms have continued to develop and mature, they have gradually been applied to medical image analysis. Endoscopic images are important evidence for physicians analyzing digestive-tract disease, and in recent years a variety of screening and diagnostic methods based on deep convolutional neural network models have been developed; gastroscopy-assisted diagnosis systems are therefore of great clinical significance.
During an endoscopic examination, the operating physician watches the endoscope video and, by pressing a dedicated foot pedal, captures images containing key anatomical sites or suspicious lesion regions and saves them to the endoscope reporting system; the diagnosing physician then writes the diagnosis report from these captured images. A gastroscopy usually lasts only 5-7 minutes, and, limited by the operating physician's workload and experience, key images are easily missed, which prevents the diagnosing physician from making a comprehensive and accurate assessment.
Most gastroscopy-assisted systems and methods disclosed to date use deep convolutional neural networks. Such methods achieve good accuracy when classifying static images, but because of the complexity of the gastroscopy procedure their accuracy in real-time video analysis is poor.
Summary of the invention
The technical problem to be solved by the present invention is to provide a gastroscope image automatic acquisition system and method that avoid missing target features in the images and improve the classification of those features.
The technical solution adopted by the invention to solve the above problem is a gastroscope image automatic acquisition system, characterized in that it comprises:
a video reception module, which connects to the endoscope host through a video capture card, receives the video stream acquired by the endoscope host, and preprocesses the video stream;
a video recognition module, comprising a convolutional neural network (CNN) model and a long short-term memory (LSTM) network model, both trained with the back-propagation algorithm; the CNN model classifies each preprocessed image by anatomical site and lesion feature, and the LSTM model fits the classification results over the sequence to obtain a recognition result; the recognition result includes the site class of the picture, the site-classification confidence, and the image-clarity confidence;
an image display module, which presents the recognition results of the video recognition module graphically and textually;
a result output module, which records each recognition result, ranks the images by a weighted sum of the site-classification confidence and the image-clarity confidence in the recognition results of the video recognition module, and outputs the top-ranked image for each site.
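The result-output rule described above (weight the site-classification confidence against the image-clarity confidence, then keep the best image per site) can be sketched as follows. This is an illustrative sketch, not the patented implementation: the equal 0.5/0.5 weights, the record field names, and the `best_frame_per_site` helper are assumptions.

```python
# Sketch of the result-output module: score each captured frame by a weighted
# sum of site-classification confidence and image-clarity confidence, then
# keep the highest-scoring frame for every site. Equal weights are assumed.

def best_frame_per_site(records, w_site=0.5, w_clarity=0.5):
    """records: dicts with 'site', 'site_conf', 'clarity_conf', 'frame_id'."""
    best = {}
    for r in records:
        score = w_site * r["site_conf"] + w_clarity * r["clarity_conf"]
        if r["site"] not in best or score > best[r["site"]][0]:
            best[r["site"]] = (score, r["frame_id"])
    return {site: frame for site, (score, frame) in best.items()}

records = [
    {"site": "cardia", "site_conf": 0.9, "clarity_conf": 0.6, "frame_id": 1},
    {"site": "cardia", "site_conf": 0.8, "clarity_conf": 0.9, "frame_id": 2},
    {"site": "antrum", "site_conf": 0.7, "clarity_conf": 0.7, "frame_id": 3},
]
print(best_frame_per_site(records))  # frame 2 wins for cardia (0.85 > 0.75)
```

A real system would run this incrementally as frames arrive rather than over a batch, but the scoring rule is the same.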
In the above system, the CNN model comprises an image-site discrimination CNN model and a lesion-feature discrimination CNN model, and the LSTM network model comprises an image-site discrimination LSTM model and a lesion-feature discrimination LSTM model, wherein:
the image-site discrimination CNN model identifies, according to weights corresponding to 26 typical gastric sites, the typical site present in each input image, thereby assigning a site class to that image; the site-classified images of N consecutive frames are then fed to the site discrimination LSTM model, which weighs the CNN model's per-frame outputs and emits the site class of the last image in the N-frame sequence;
the lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature; the lesion-feature discrimination LSTM model takes the CNN model's outputs for N consecutive frames and emits the probability that the last image contains a lesion feature.
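The N-consecutive-frame sequence fitting described above can be approximated, purely for illustration, by a sliding window over per-frame class probabilities. A simple moving average stands in here for the patent's LSTM model; the `SequenceSmoother` class, the window length, and the class count are assumptions, not taken from the disclosure.

```python
from collections import deque

class SequenceSmoother:
    """Keep the last N per-frame class-probability vectors and emit the label
    for the newest frame from their average. The moving average is a stand-in
    for the patent's LSTM sequence model (assumption for illustration)."""

    def __init__(self, n_frames=8, n_classes=26):
        self.window = deque(maxlen=n_frames)  # old frames fall off the back
        self.n_classes = n_classes

    def push(self, probs):
        """probs: per-class probabilities for the newest frame."""
        assert len(probs) == self.n_classes
        self.window.append(probs)
        avg = [sum(col) / len(self.window) for col in zip(*self.window)]
        return max(range(self.n_classes), key=lambda k: avg[k])

sm = SequenceSmoother(n_frames=3, n_classes=3)
print(sm.push([0.2, 0.7, 0.1]))  # → 1 (single frame in the window)
```

Unlike this average, an LSTM learns which past frames to weigh, but the interface (N recent per-frame outputs in, one label for the newest frame out) matches the description.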
A gastroscope image automatic acquisition method, characterized in that it comprises the following steps:
S1: receive the gastroscope video stream from the endoscope host through a video capture card, preprocess the video stream, and forward it to the video recognition module at a rate of 1 frame per second;
S2: the video recognition module classifies each preprocessed image by site and lesion feature with the CNN model, then fits the classification results over the sequence with the LSTM model to obtain a recognition result; the images are ranked by a weighted sum of the site-classification confidence and the image-clarity confidence in the recognition result; both the CNN model and the LSTM network model are trained with the back-propagation algorithm;
S3: present the recognition results of the video recognition module graphically and textually;
S4: record each recognition result, rank the images by the weighted sum of the site-classification confidence and the image-clarity confidence in the recognition results, and output the top-ranked image for each site.
According to the above method, in S2 the CNN model comprises an image-site discrimination CNN model and a lesion-feature discrimination CNN model, and the LSTM network model comprises an image-site discrimination LSTM model and a lesion-feature discrimination LSTM model, wherein: the image-site discrimination CNN model identifies, according to weights corresponding to 26 typical gastric sites, the typical site present in each input image, thereby assigning a site class to it; the site-classified images of N consecutive frames are then fed to the site discrimination LSTM model, which weighs the per-frame CNN outputs and emits the site class of the last image in the sequence; the lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature; and the lesion-feature discrimination LSTM model takes the CNN outputs for N consecutive frames and emits the probability that the last image contains a lesion feature.
The invention has the following beneficial effects: by combining a CNN model and an LSTM model to filter the acquired gastroscope video in real time, it solves the poor real-time accuracy of current gastroscope video image acquisition, avoids missing target features in the images, and improves the classification of those features.
Description of the drawings
Fig. 1 is the hardware block diagram of one embodiment of the invention.
Detailed description of the embodiments
The present invention is further described below with reference to a specific example and the accompanying drawing.
The present invention provides a gastroscope image automatic acquisition system comprising:
a video reception module, which connects to the BNC interface of the endoscope host through a video capture card, receives the video stream acquired by the endoscope host, and preprocesses the video stream;
a video recognition module, comprising a CNN model and an LSTM network model, both trained with the back-propagation algorithm; the CNN model classifies each preprocessed image by site and lesion feature, and the LSTM model fits the classification results over the sequence to obtain a recognition result, which includes the site class of the picture, the site-classification confidence, and the image-clarity confidence.
The CNN model comprises an image-site discrimination CNN model and a lesion-feature discrimination CNN model; the LSTM network model comprises an image-site discrimination LSTM model and a lesion-feature discrimination LSTM model. The image-site discrimination CNN model identifies, according to weights corresponding to 26 typical gastric sites, the typical site present in each input image, thereby assigning a site class to it; the site-classified images of N consecutive frames are then fed to the site discrimination LSTM model, which weighs the CNN model's per-frame outputs and emits the site class of the last image in the sequence. The lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature; the lesion-feature discrimination LSTM model takes the CNN outputs for N consecutive frames and emits the probability that the last image contains a lesion feature.
The 26 typical sites are: the esophagus; the cardia; the greater curvature, posterior wall, anterior wall, and lesser curvature of the antrum; the duodenal bulb; the descending duodenum; the greater curvature, posterior wall, anterior wall, and lesser curvature of the lower gastric body in forward view; the greater curvature, posterior wall, anterior wall, and lesser curvature of the mid-upper gastric body in forward view; the greater curvature, posterior wall, anterior wall, and lesser curvature of the fundus in retroflexed view; the posterior wall, anterior wall, and lesser curvature of the mid-upper gastric body in retroflexed view; and the posterior wall, anterior wall, and lesser curvature of the gastric angulus in retroflexed view.
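For reference, the 26 landmark labels can be collected in a machine-readable list. The English renderings below are our translations of the machine-translated terms, and the ordering is illustrative, not prescribed by the patent.

```python
# The 26 gastroscopic landmark labels enumerated above, as a Python list.
# Label wording and index order are illustrative translations (assumptions).
GASTRIC_SITES = [
    "esophagus", "cardia",
    "antrum greater curvature", "antrum posterior wall",
    "antrum anterior wall", "antrum lesser curvature",
    "duodenal bulb", "descending duodenum",
    "forward-view lower body greater curvature", "forward-view lower body posterior wall",
    "forward-view lower body anterior wall", "forward-view lower body lesser curvature",
    "forward-view mid-upper body greater curvature", "forward-view mid-upper body posterior wall",
    "forward-view mid-upper body anterior wall", "forward-view mid-upper body lesser curvature",
    "retroflex fundus greater curvature", "retroflex fundus posterior wall",
    "retroflex fundus anterior wall", "retroflex fundus lesser curvature",
    "retroflex mid-upper body posterior wall", "retroflex mid-upper body anterior wall",
    "retroflex mid-upper body lesser curvature",
    "retroflex angulus posterior wall", "retroflex angulus anterior wall",
    "retroflex angulus lesser curvature",
]
print(len(GASTRIC_SITES))  # → 26
```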
In this embodiment, the image-site discrimination CNN model outputs, for each input image, the site-classification confidence and the corresponding class activation map (CAM). A CAM is a method for visualizing the regions of an image "attended to" by the last convolutional layer of a CNN. CAM visualization applies to architectures that have a global average pooling layer before the final fully connected layer, in which the spatially averaged feature map of each unit of the last convolutional layer is output. The CAM of a class reflects, for each feature map, its importance in assigning the image to that class. The CAM of the CNN model is computed as:
Mp(x, y) = Σk wk · Fk(x, y)
where Mp is the final class activation map, Fk is the k-th feature map of the input image in the last convolutional layer of the CNN, and wk is the weight between the k-th feature map and the fully connected layer.
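The CAM formula above can be computed directly from the last convolutional layer's activations. A minimal NumPy sketch, assuming feature maps of shape (K, H, W) and one fully connected weight per feature map:

```python
import numpy as np

def class_activation_map(feature_maps, weights):
    """Compute Mp(x, y) = sum_k wk * Fk(x, y).

    feature_maps: (K, H, W) activations of the last convolutional layer.
    weights: (K,) fully connected weights for the target class.
    Returns an (H, W) heat map.
    """
    return np.tensordot(weights, feature_maps, axes=1)

F = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)  # toy feature maps
w = np.array([0.25, 0.75])
cam = class_activation_map(F, w)
print(cam.shape)  # → (3, 3)
```

In practice the heat map is upsampled to the input-image resolution before display; that step is omitted here.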
The CAMs output by the CNN for N consecutive frames are then fed to the image-site discrimination LSTM model, which is composed of N long short-term memory units. Here h denotes the LSTM output state and x the CAM of the current input. The previous state ht-1 and the current input xt are processed by a memory unit, which outputs the site judgment after fitting the current image. The memory unit is given by:
zt = σ(Wz · [ht-1, xt])
rt = σ(Wr · [ht-1, xt])
h̃t = tanh(W · [rt ∗ ht-1, xt])
ht = (1 − zt) ∗ ht-1 + zt ∗ h̃t
where zt determines the information to forget, σ is the logistic sigmoid function, Wz is the weight of the forget gate, rt determines the information to remember, Wr is the weight of the memory gate, h̃t is the updated candidate state, tanh is the hyperbolic tangent function, W is the weight of the output feature, and ht is the final output.
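A minimal NumPy implementation of the memory-unit equations above, written as a single recurrent step. The weight shapes, the concatenation layout, and the omission of bias terms are assumptions made for brevity; they are not specified in the patent.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def memory_unit_step(h_prev, x, Wz, Wr, W):
    """One recurrent step of the memory unit defined above.

    h_prev: (H,) previous output state; x: (D,) current input
    (e.g. a flattened CAM). Bias terms are omitted for brevity.
    """
    hx = np.concatenate([h_prev, x])
    z = sigmoid(Wz @ hx)                                    # zt: forget gate
    r = sigmoid(Wr @ hx)                                    # rt: memory gate
    h_tilde = np.tanh(W @ np.concatenate([r * h_prev, x]))  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde                 # ht: new state

rng = np.random.default_rng(0)
H, D = 3, 4
Wz = rng.standard_normal((H, H + D))
Wr = rng.standard_normal((H, H + D))
W = rng.standard_normal((H, H + D))
h = memory_unit_step(np.zeros(H), rng.standard_normal(D), Wz, Wr, W)
```

Iterating `memory_unit_step` over the N per-frame CAMs and reading off the final `h` reproduces the sequence-fitting interface described in the text.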
The lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature, together with the corresponding class activation map (CAM); the lesion-feature discrimination LSTM model takes the CAMs of N consecutive frames and outputs the lesion judgment after fitting the current image.
The image display module presents the recognition results of the video recognition module graphically and textually. In this embodiment, a virtual image of the stomach is constructed, initially shown in grey; each detected site is lit up, and the number of sites detected so far is displayed.
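The display module's bookkeeping (all sites initially grey, each detected site lit up, a running count shown) amounts to tracking a set of seen sites. A minimal sketch; the `CoverageTracker` class and the site names are illustrative, and the actual rendering of the virtual stomach image is omitted:

```python
# Sketch of the display module's state: start with every site "grey"
# (unseen), light a site up when it is detected, and report coverage.

class CoverageTracker:
    def __init__(self, sites):
        self.sites = list(sites)  # all landmark sites, initially grey
        self.seen = set()         # sites that have been lit up

    def detect(self, site):
        if site in self.sites:
            self.seen.add(site)

    def status(self):
        return {
            "detected": len(self.seen),
            "total": len(self.sites),
            "missing": [s for s in self.sites if s not in self.seen],
        }

tracker = CoverageTracker(["esophagus", "cardia", "antrum"])
tracker.detect("cardia")
print(tracker.status())
# → {'detected': 1, 'total': 3, 'missing': ['esophagus', 'antrum']}
```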
The result output module records each recognition result, ranks the images by the weighted sum of the site-classification confidence and the image-clarity confidence in the recognition results of the video recognition module, and outputs the top-ranked image for each site; the top-ranked image is the one most likely to contain a lesion feature. In this embodiment the output comprises the images of the 26 sites and n images containing suspicious lesion features.
Fig. 1 shows the hardware block diagram of the invention: the video acquisition module connects to the video recognition module over USB, and the video recognition module connects to the image display, which is mounted on a movable support frame and connects to the physician workstation over the network to send reports. The video recognition module, image display module, and result output module can be implemented on an intelligent terminal such as a computer or tablet.
A gastroscope image automatic acquisition method comprises the following steps:
S1: receive the gastroscope video stream from the endoscope host through a video capture card, preprocess the video stream, and forward it to the video recognition module at a rate of 1 frame per second.
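The 1-frame-per-second downsampling in step S1 amounts to keeping every `src_fps`-th frame of the source stream. The sketch below covers the index arithmetic only; the actual capture-card I/O (e.g. through OpenCV's `VideoCapture`) is omitted, and `one_fps_indices` is an illustrative helper, not part of the disclosure.

```python
# Downsample a video stream to 1 frame per second, as in step S1:
# from a source running at src_fps frames per second, keep every
# src_fps-th frame.

def one_fps_indices(n_frames, src_fps):
    """Indices of the frames forwarded to the recognition module."""
    return list(range(0, n_frames, src_fps))

# A 5-second clip at 25 fps yields 5 forwarded frames.
print(one_fps_indices(125, 25))  # → [0, 25, 50, 75, 100]
```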
S2: the video recognition module classifies each preprocessed image by site and lesion feature with the CNN model, then fits the classification results over the sequence with the LSTM model to obtain a recognition result, which includes the site class of the picture, the site-classification confidence, and the image-clarity confidence. Both the CNN model and the LSTM network model are trained with the back-propagation algorithm. The CNN model comprises an image-site discrimination CNN model and a lesion-feature discrimination CNN model; the LSTM network model comprises an image-site discrimination LSTM model and a lesion-feature discrimination LSTM model. The image-site discrimination CNN model identifies, according to weights corresponding to the 26 typical gastric sites, the typical site present in each input image, thereby assigning a site class to it; the site-classified images of N consecutive frames are then fed to the site discrimination LSTM model, which weighs the per-frame CNN outputs and emits the site class of the last image in the sequence. The lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature; the lesion-feature discrimination LSTM model takes the CNN outputs for N consecutive frames and emits the probability that the last image contains a lesion feature.
S3: present the recognition results of the video recognition module graphically and textually.
S4: record each recognition result, rank the images by the weighted sum of the site-classification confidence and the image-clarity confidence in the recognition results, and output the top-ranked image for each site.
The invention discloses a gastroscope image automatic acquisition system and method. The method builds a gastroscope recognition system from a long short-term memory (LSTM) network and a convolutional neural network (CNN): image sequences are the system input, the LSTM and CNN are trained with the back-propagation algorithm to optimize the network parameters, and the trained network model classifies newly input image sequences by site. The LSTM network model can jointly consider the CNN recognition results and the video context, and accurately extracts from the video the site images a gastroscopy requires.
It should be emphasized that the lesion features referred to in the present invention are only target features in the images, not diagnoses of disease.
The above embodiment merely illustrates the design concept and features of the invention so that those skilled in the art can understand and implement it; the scope of protection of the invention is not limited to this embodiment, and all equivalent variations or modifications made according to the disclosed principles and design ideas fall within the scope of the invention.
Claims (4)
1. A gastroscope image automatic acquisition system, characterized in that it comprises:
a video reception module, which connects to the endoscope host through a video capture card, receives the video stream acquired by the endoscope host, and preprocesses the video stream;
a video recognition module, comprising a convolutional neural network (CNN) model and a long short-term memory (LSTM) network model, both trained with the back-propagation algorithm; the CNN model classifies each preprocessed image by site and lesion feature, and the LSTM model fits the classification results over the sequence to obtain a recognition result, which includes the site class of the picture, the site-classification confidence, and the image-clarity confidence;
an image display module, which presents the recognition results of the video recognition module graphically and textually;
a result output module, which records each recognition result, ranks the images by a weighted sum of the site-classification confidence and the image-clarity confidence in the recognition results of the video recognition module, and outputs the top-ranked image for each site.
2. The gastroscope image automatic acquisition system according to claim 1, characterized in that the CNN model comprises an image-site discrimination CNN model and a lesion-feature discrimination CNN model, and the LSTM network model comprises an image-site discrimination LSTM model and a lesion-feature discrimination LSTM model, wherein:
the image-site discrimination CNN model identifies, according to weights corresponding to 26 typical gastric sites, the typical site present in each input image, thereby assigning a site class to it; the site-classified images of N consecutive frames are then fed to the site discrimination LSTM model, which weighs the per-frame CNN outputs and emits the site class of the last image in the sequence;
the lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature; the lesion-feature discrimination LSTM model takes the CNN outputs for N consecutive frames and emits the probability that the last image contains a lesion feature.
3. A gastroscope image automatic acquisition method, characterized in that it comprises the following steps:
S1: receive the gastroscope video stream from the endoscope host through a video capture card, preprocess the video stream, and forward it to the video recognition module at a rate of 1 frame per second;
S2: the video recognition module classifies each preprocessed image by site and lesion feature with the CNN model, then fits the classification results over the sequence with the LSTM model to obtain a recognition result, which includes the site class of the picture, the site-classification confidence, and the image-clarity confidence; both the CNN model and the LSTM network model are trained with the back-propagation algorithm;
S3: present the recognition results of the video recognition module graphically and textually;
S4: record each recognition result, rank the images by a weighted sum of the site-classification confidence and the image-clarity confidence in the recognition results, and output the top-ranked image for each site.
4. The gastroscope image automatic acquisition method according to claim 3, characterized in that in S2 the CNN model comprises an image-site discrimination CNN model and a lesion-feature discrimination CNN model, and the LSTM network model comprises an image-site discrimination LSTM model and a lesion-feature discrimination LSTM model, wherein: the image-site discrimination CNN model identifies, according to weights corresponding to 26 typical gastric sites, the typical site present in each input image, thereby assigning a site class to it; the site-classified images of N consecutive frames are then fed to the site discrimination LSTM model, which weighs the per-frame CNN outputs and emits the site class of the last image in the sequence; the lesion-feature discrimination CNN model judges, for each site-classified image, the probability that it contains a lesion feature; and the lesion-feature discrimination LSTM model takes the CNN outputs for N consecutive frames and emits the probability that the last image contains a lesion feature.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810690051.XA CN109102491B (en) | 2018-06-28 | 2018-06-28 | Gastroscope image automatic acquisition system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810690051.XA CN109102491B (en) | 2018-06-28 | 2018-06-28 | Gastroscope image automatic acquisition system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109102491A true CN109102491A (en) | 2018-12-28 |
CN109102491B CN109102491B (en) | 2021-12-28 |
Family
ID=64845366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810690051.XA Active CN109102491B (en) | 2018-06-28 | 2018-06-28 | Gastroscope image automatic acquisition system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109102491B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110110746A (en) * | 2019-03-29 | 2019-08-09 | 广州思德医疗科技有限公司 | A kind of method and device of determining tag along sort |
CN110176295A (en) * | 2019-06-13 | 2019-08-27 | 上海孚慈医疗科技有限公司 | A kind of real-time detecting method and its detection device of Gastrointestinal Endoscopes lower portion and lesion |
CN110334582A (en) * | 2019-05-09 | 2019-10-15 | 河南萱闱堂医疗信息科技有限公司 | The method that intelligent recognition and record Endoscopic submucosal dissection extract polyp video |
CN110491502A (en) * | 2019-03-08 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Microscope video stream processing method, system, computer equipment and storage medium |
CN110929806A (en) * | 2019-12-06 | 2020-03-27 | 腾讯科技(北京)有限公司 | Picture processing method and device based on artificial intelligence and electronic equipment |
CN111563523A (en) * | 2019-02-14 | 2020-08-21 | 西门子医疗有限公司 | COPD classification using machine trained anomaly detection |
CN111588342A (en) * | 2020-06-03 | 2020-08-28 | 电子科技大学 | Intelligent auxiliary system for bronchofiberscope intubation |
WO2020215672A1 (en) * | 2019-08-05 | 2020-10-29 | 平安科技(深圳)有限公司 | Method, apparatus, and device for detecting and locating lesion in medical image, and storage medium |
CN112200250A (en) * | 2020-10-14 | 2021-01-08 | 重庆金山医疗器械有限公司 | Digestive tract segmentation identification method, device and equipment of capsule endoscope image |
WO2021070108A1 (en) * | 2019-10-11 | 2021-04-15 | International Business Machines Corporation | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory |
CN113129287A (en) * | 2021-04-22 | 2021-07-16 | 武汉楚精灵医疗科技有限公司 | Automatic lesion mapping method for upper gastrointestinal endoscope image |
CN113177940A (en) * | 2021-05-26 | 2021-07-27 | 复旦大学附属中山医院 | Gastroscope video part identification network structure based on Transformer |
CN113269230A (en) * | 2021-04-23 | 2021-08-17 | 复旦大学 | Multi-pneumonia CT classification method and device based on time sequence high-dimensional feature extraction |
CN113284110A (en) * | 2021-05-26 | 2021-08-20 | 复旦大学附属中山医院 | Gastroscope video position identification network structure based on double-flow method |
CN113435248A (en) * | 2021-05-18 | 2021-09-24 | 武汉天喻信息产业股份有限公司 | Mask face recognition base enhancement method, device, equipment and readable storage medium |
CN113610847A (en) * | 2021-10-08 | 2021-11-05 | 武汉楚精灵医疗科技有限公司 | Method and system for evaluating stomach markers in white light mode |
CN113679327A (en) * | 2021-10-26 | 2021-11-23 | 青岛美迪康数字工程有限公司 | Endoscopic image acquisition method and device |
CN113743384A (en) * | 2021-11-05 | 2021-12-03 | 广州思德医疗科技有限公司 | Stomach picture identification method and device |
CN114283192A (en) * | 2021-12-10 | 2022-04-05 | 厦门影诺医疗科技有限公司 | Gastroscopy blind area monitoring method, system and application based on scene recognition |
US11417424B2 (en) | 2019-10-11 | 2022-08-16 | International Business Machines Corporation | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory and multiple instance learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106934799A (en) * | 2017-02-24 | 2017-07-07 | 安翰光电技术(武汉)有限公司 | Capsule endoscope image-assisted diagnosis system and method |
CN107423756A (en) * | 2017-07-05 | 2017-12-01 | 武汉科恩斯医疗科技有限公司 | MRI image sequence classification method based on a deep convolutional neural network combined with a long short-term memory model |
CN107967946A (en) * | 2017-12-21 | 2018-04-27 | 武汉大学 | Deep-learning-based real-time auxiliary system and method for gastroscopy |
- 2018-06-28: Application filed, CN CN201810690051.XA patent/CN109102491B/en, status Active
Non-Patent Citations (2)
Title |
---|
BOLEI ZHOU et al.: "Learning Deep Features for Discriminative Localization", arXiv * |
ORIOL VINYALS et al.: "Show and Tell: A Neural Image Caption Generator", arXiv * |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111563523B (en) * | 2019-02-14 | 2024-03-26 | 西门子医疗有限公司 | COPD classification using machine-trained anomaly detection |
CN111563523A (en) * | 2019-02-14 | 2020-08-21 | 西门子医疗有限公司 | COPD classification using machine trained anomaly detection |
US11908188B2 (en) | 2019-03-08 | 2024-02-20 | Tencent Technology (Shenzhen) Company Limited | Image analysis method, microscope video stream processing method, and related apparatus |
CN110491502A (en) * | 2019-03-08 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Microscope video stream processing method, system, computer equipment and storage medium |
WO2020182078A1 (en) * | 2019-03-08 | 2020-09-17 | 腾讯科技(深圳)有限公司 | Image analysis method, microscope video stream processing method, and related apparatus |
EP3937183A4 (en) * | 2019-03-08 | 2022-05-11 | Tencent Technology (Shenzhen) Company Limited | Image analysis method, microscope video stream processing method, and related apparatus |
CN110110746A (en) * | 2019-03-29 | 2019-08-09 | 广州思德医疗科技有限公司 | Method and device for determining classification labels |
CN110334582A (en) * | 2019-05-09 | 2019-10-15 | 河南萱闱堂医疗信息科技有限公司 | Method for intelligently identifying and recording polyp removing video of endoscopic submucosal dissection |
CN110334582B (en) * | 2019-05-09 | 2021-11-12 | 河南萱闱堂医疗信息科技有限公司 | Method for intelligently identifying and recording polyp removing video of endoscopic submucosal dissection |
CN110176295A (en) * | 2019-06-13 | 2019-08-27 | 上海孚慈医疗科技有限公司 | Real-time detection method and detection device for sites and lesions under gastrointestinal endoscopy |
WO2020215672A1 (en) * | 2019-08-05 | 2020-10-29 | 平安科技(深圳)有限公司 | Method, apparatus, and device for detecting and locating lesion in medical image, and storage medium |
US11961227B2 (en) | 2019-08-05 | 2024-04-16 | Ping An Technology (Shenzhen) Co., Ltd. | Method and device for detecting and locating lesion in medical image, equipment and storage medium |
GB2604503A (en) * | 2019-10-11 | 2022-09-07 | Ibm | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory |
GB2604503B (en) * | 2019-10-11 | 2023-12-20 | Merative Us L P | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory |
WO2021070108A1 (en) * | 2019-10-11 | 2021-04-15 | International Business Machines Corporation | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory |
US11195273B2 (en) | 2019-10-11 | 2021-12-07 | International Business Machines Corporation | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory |
US11417424B2 (en) | 2019-10-11 | 2022-08-16 | International Business Machines Corporation | Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory and multiple instance learning |
CN110929806A (en) * | 2019-12-06 | 2020-03-27 | 腾讯科技(北京)有限公司 | Picture processing method and device based on artificial intelligence and electronic equipment |
CN110929806B (en) * | 2019-12-06 | 2023-07-21 | 深圳市雅阅科技有限公司 | Picture processing method and device based on artificial intelligence and electronic equipment |
CN111588342A (en) * | 2020-06-03 | 2020-08-28 | 电子科技大学 | Intelligent auxiliary system for bronchofiberscope intubation |
CN112200250A (en) * | 2020-10-14 | 2021-01-08 | 重庆金山医疗器械有限公司 | Digestive tract segmentation identification method, device and equipment of capsule endoscope image |
CN113129287A (en) * | 2021-04-22 | 2021-07-16 | 武汉楚精灵医疗科技有限公司 | Automatic lesion mapping method for upper gastrointestinal endoscope image |
CN113269230A (en) * | 2021-04-23 | 2021-08-17 | 复旦大学 | Multi-pneumonia CT classification method and device based on time sequence high-dimensional feature extraction |
CN113435248A (en) * | 2021-05-18 | 2021-09-24 | 武汉天喻信息产业股份有限公司 | Mask face recognition base enhancement method, device, equipment and readable storage medium |
CN113284110A (en) * | 2021-05-26 | 2021-08-20 | 复旦大学附属中山医院 | Gastroscope video position identification network structure based on double-flow method |
CN113177940A (en) * | 2021-05-26 | 2021-07-27 | 复旦大学附属中山医院 | Gastroscope video part identification network structure based on Transformer |
CN113610847A (en) * | 2021-10-08 | 2021-11-05 | 武汉楚精灵医疗科技有限公司 | Method and system for evaluating stomach markers in white light mode |
CN113610847B (en) * | 2021-10-08 | 2022-01-04 | 武汉楚精灵医疗科技有限公司 | Method and system for evaluating stomach markers in white light mode |
CN113679327B (en) * | 2021-10-26 | 2022-02-18 | 青岛美迪康数字工程有限公司 | Endoscopic image acquisition method and device |
CN113679327A (en) * | 2021-10-26 | 2021-11-23 | 青岛美迪康数字工程有限公司 | Endoscopic image acquisition method and device |
CN113743384B (en) * | 2021-11-05 | 2022-04-05 | 广州思德医疗科技有限公司 | Stomach picture identification method and device |
CN113743384A (en) * | 2021-11-05 | 2021-12-03 | 广州思德医疗科技有限公司 | Stomach picture identification method and device |
CN114283192A (en) * | 2021-12-10 | 2022-04-05 | 厦门影诺医疗科技有限公司 | Gastroscopy blind area monitoring method, system and application based on scene recognition |
Also Published As
Publication number | Publication date |
---|---|
CN109102491B (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109102491A (en) | Gastroscope image automatic acquisition system and method | |
CN107564580B (en) | Gastroscope visual aid processing system and method based on ensemble learning | |
CN109858540B (en) | Medical image recognition system and method based on multi-mode fusion | |
CN110473186B (en) | Detection method based on medical image, model training method and device | |
CN111899229A (en) | Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology | |
CN111214255B (en) | Medical ultrasonic image computer-aided method | |
CN110390665B (en) | Knee joint disease ultrasonic diagnosis method based on deep learning multichannel and graph embedding method | |
CN112465772B (en) | Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium | |
CN111227864A (en) | Method and apparatus for lesion detection using ultrasound image using computer vision | |
CN108511055A (en) | Premature ventricular contraction identification system and method based on multi-classifier fusion and diagnostic rules | |
CN114782307A (en) | Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning | |
Bourbakis | Detecting abnormal patterns in WCE images | |
CN114398979A (en) | Ultrasonic image thyroid nodule classification method based on feature decoupling | |
CN110176295A (en) | Real-time detection method and detection device for sites and lesions under gastrointestinal endoscopy | |
CN109460717A (en) | Digestive tract laser scanning confocal microscopy lesion image recognition method and device | |
CN111862090A (en) | Method and system for esophageal cancer preoperative management based on artificial intelligence | |
CN113610118A (en) | Fundus image classification method, device, equipment and medium based on multitask course learning | |
Yang et al. | Unsupervised domain adaptation for cross-device OCT lesion detection via learning adaptive features | |
Kaushal et al. | An IoMT‐based smart remote monitoring system for healthcare | |
CN113946217B (en) | Intelligent auxiliary evaluation system for enteroscope operation skills | |
CN117322865B (en) | Temporal-mandibular joint disc shift MRI (magnetic resonance imaging) examination and diagnosis system based on deep learning | |
KahsayGebreslassie et al. | Automated gastrointestinal disease recognition for endoscopic images | |
CN112419246B (en) | Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution | |
WO2020263002A1 (en) | Blood vessel segmentation method | |
CN117557840A (en) | Fundus lesion grading method based on small sample learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2020-02-10
Address after: Room 001, Building D2, Building 10, Phase III, Huacheng Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province, 430223
Applicant after: Wuhan Chujingling Medical Technology Co., Ltd.
Address before: No. 238 Jiefang Road, Wuchang District, Wuhan, Hubei Province, 430060
Applicant before: People's Hospital of Wuhan University (People's Hospital of Hubei Province)
GR01 | Patent grant | ||