CN105913075A - Endoscopic image lesion identification method based on pulse coupled neural network - Google Patents

Endoscopic image lesion identification method based on pulse coupled neural network

Info

Publication number
CN105913075A
CN105913075A CN201610207950.0A
Authority
CN
China
Prior art keywords
input
focus
image
vector
icm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610207950.0A
Other languages
Chinese (zh)
Inventor
潘国兵
普帅帅
卢从成
陈金鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201610207950.0A priority Critical patent/CN105913075A/en
Publication of CN105913075A publication Critical patent/CN105913075A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211 Selection of the most significant subset of features
    • G06F 18/2111 Selection of the most significant subset of features by using evolutionary computational techniques, e.g. genetic algorithms

Abstract

The invention relates to an endoscopic image lesion identification method based on a pulse coupled neural network. The method comprises the steps of video framing, image preprocessing, conversion to a visual-perception-oriented color space, localization of suspected lesion regions, construction of feature vectors, pattern recognition with the pulse coupled neural network, transfer to the next region until all suspected lesions have been identified, classification and extraction of lesion images, and repetition of these steps until the endoscopic images of the whole video file have been identified. By locating lesion regions with a human visual attention mechanism and performing pattern recognition with a pulse coupled neural network, the method improves the accuracy of lesion pattern recognition and of lesion-pattern endoscopic image extraction, and reduces the workload of clinicians.

Description

Lesion recognition method for endoscopic images based on a pulse coupled neural network
Technical field
The present invention relates to the field of medical image pattern recognition and analysis, and in particular to a lesion-image pattern recognition method for endoscopic detection systems of the human gastrointestinal tract.
Background technology
Digestive tract diseases pose an increasingly serious threat to human health, and many other diseases can be caused, directly or indirectly, by disorders of the digestive system, so the examination and diagnosis of digestive tract diseases is of great importance to human health. The most effective way to detect digestive tract disease is to observe the gastrointestinal tract directly, which makes endoscopy the method of choice. However, traditional insertion-type endoscopes such as colonoscopes and gastroscopes cannot reach deep into the bowel because of the mechanics of insertion, leaving the small intestine as a blind spot; insertion endoscopes are also inconvenient to use, cause the patient pain, and carry a risk of intestinal perforation. Advances in semiconductor technology, sensing, LED illumination, wireless communication and microcontroller technology have laid the foundation for the emergence and spread of the wireless capsule endoscope. A wireless capsule endoscope consists of a miniature image sensor, an illumination module, a wireless transmission module, a power management module and so on. After the patient swallows it, peristalsis of the gastrointestinal tract moves the capsule down the digestive tract. During this motion the transparent dome at the front of the capsule spreads the bowel open and stays close to the intestinal wall, the illumination module lights the wall within the field of view, the image sensor captures images of the inner intestinal wall through a short-focus lens, and the image data are transmitted outside the body. The capsule continuously transmits gastrointestinal images until it is excreted naturally through the anus. The whole process requires no manual intervention, causes the patient no pain or discomfort, leaves no blind spots, and achieves painless, non-invasive examination of the entire digestive tract. Because of these advantages, the capsule endoscope is being applied more and more widely in clinical practice as a new digestive tract detection technique.
A capsule endoscope works in the human body for about 8 hours, and even longer in patients with gastrointestinal disease whose motility is slower, so a single examination produces at least 2 × 3600 × 8 = 57,600 image frames (2 frames per second over 8 hours). Finding lesions or pathological features in such an enormous volume of video is extremely time-consuming and laborious; even an experienced expert needs at least 2 hours. This not only wastes time but also leads to missed findings due to visual fatigue. Using image processing and pattern recognition to achieve intelligent, computer-based recognition of bleeding images is therefore an inevitable trend. Because endoscopic images are images of the human digestive tract, the scenes are extremely complex and lesion features are highly variable, so conventional digital image processing and pattern recognition algorithms have difficulty coping with complex endoscopic images and changeable lesions. The pulse coupled neural network originates from the working mechanism of the mammalian visual system; compared with traditional neural network models such as BP and RBF networks, it has inherent advantages in image processing, has demonstrated these advantages in certain applications, and has great potential for intelligent lesion recognition.
Existing technology for gastrointestinal endoscopic detection concentrates on the capsule endoscope hardware itself, while research on lesion-image processing and pattern recognition lags behind and has become a bottleneck of capsule endoscope detection systems. Moreover, existing lesion identification techniques concentrate on recognizing specific kinds of lesion, but lesions are highly variable, and even lesions of the same kind vary greatly in appearance. Conventional digital image processing and pattern recognition algorithms also struggle with endoscopic images of complex content, so the specificity and sensitivity of existing recognition methods are not high.
Summary of the invention
To overcome the difficulty that existing lesion patterns are hard to extract, the present invention provides a lesion recognition method for endoscopic images based on a pulse coupled neural network, using a localization method based on the pulse coupled neural network and a visual attention mechanism, without distinguishing between specific types of lesion. The method accurately classifies endoscopic images into a normal pattern and a lesion pattern and marks the identified lesion regions for further judgment by clinicians, thereby reducing the clinicians' workload.
The technical scheme adopted to solve the above technical problem is as follows:
A lesion recognition method for endoscopic images based on a pulse coupled neural network, the recognition method comprising the following steps:
A. Video framing
Input the endoscopic detection video file and split it into frames to obtain single endoscopic images in bitmap format;
B. Image preprocessing
Using the field-of-view parameters of the endoscope, smooth the black border at the edge of each bitmap image obtained in step a to obtain an endoscopic image with clear edges; then apply a high-pass filter (for example a Butterworth high-pass filter) for filtering and denoising, followed by a median filter for enhancement, removing noise from the image region to be processed while retaining the high-frequency part of the image;
C. Color space conversion oriented to visual perception
The bitmap image obtained in step b is in the device-oriented RGB color space; convert it to the perception-oriented Luv color space;
D. Suspected lesion region localization
Taking the u and v components of the Luv color-space image obtained in step c as input, compute the color-feature saliency map uv(c, s); taking the L component as input, compute the luminance-feature saliency map L(c, s); then apply a Laplacian mapping algorithm and a virtual-connection method to obtain the boundary regions of the salient content in the image, and compute the contour-feature saliency map O(c, s) and the texture-feature saliency map T(c, s). Normalize the obtained color-feature saliency map uv(c, s), luminance-feature saliency map L(c, s), contour-feature saliency map O(c, s) and texture-feature saliency map T(c, s) at multiple scales and fuse them to obtain the saliency map S of the image; then use an erosion algorithm to filter out salient regions of small area, and rank the remaining regions by area, which gives their degree of saliency, i.e. the suspected lesion regions;
E. Feature vector construction
Taking the saliency map S obtained in step d as input, construct within each suspected lesion region the pixel color feature vector V(uv) and the luminance feature vector V(L), and compute and construct the region contour feature vector V(O) and texture feature vector V(T);
F. Pattern recognition of lesion regions
Taking the feature vectors constructed in step e as input, use the pulse coupled neural network to perform pattern recognition and obtain the pattern of the suspected region to be identified, i.e. normal pattern or lesion pattern;
G. Region transfer
Identify the suspected lesion regions one by one in order of decreasing area; if other suspected lesion regions remain, repeat steps e and f until all suspected lesion regions have been recognized;
H. Lesion image classification and extraction
Apply a logical OR over the pattern recognition results of all suspected lesion regions in the endoscopic image to obtain the classification of the image, i.e. image normal pattern or image lesion pattern; if the image is in the lesion pattern, mark the lesion regions;
I. Repeat steps b, c, d, e, f, g and h until the endoscopic images of the whole video file have all been identified.
Further, in step e, within each salient region of the saliency map S of the endoscopic image, construct a preliminary pixel color feature vector V'(uv) and a preliminary luminance feature vector V'(L), map them to a high-dimensional space with a Sigmoid kernel function, and use principal component analysis (PCA) to extract the kernel principal component features of the data, obtaining the dimension-reduced color feature vector V(uv) and luminance feature vector V(L); at the same time, compute and construct within the region the contour feature vector V(O) and the texture feature vector V(T), build the feature matrix, and perform the lesion pattern recognition.
Further, in step f, the pulse coupled neural network consists of an input layer, an intersecting cortical model (ICM) neuron layer and a competition output layer;
The input layer has four input channels in total: a color feature input, a luminance feature input, a contour feature input and a texture feature input; the intersecting cortical model neuron layer uses four ICM neurons; and the competition output layer consists of a competition neuron weight matrix LW and a competition function C;
The color feature and luminance feature input channels of the input layer are connected to the input of ICM neuron No. 1 of the ICM neuron layer, the contour feature and texture feature input channels are connected to the input of ICM neuron No. 2, ICM neurons No. 1 and No. 2 are respectively interconnected with the inputs of ICM neurons No. 3 and No. 4, ICM neurons No. 3 and No. 4 are connected to the competition neuron weight matrix LW of the competition output layer, and the output of the competition neuron weight matrix LW is connected to the input of the competition function C of the competition layer neuron;
Further, each ICM neuron comprises two coupled oscillators, and the connection weighting coefficient matrix is set to W.
The competition neuron weight matrix LW is a 2-dimensional vector in which only one element is 1 at any time and the others are 0; the competition function of the competition layer neuron uses a Gaussian function and outputs a 2-dimensional vector in which the element corresponding to the pattern class with the largest likelihood probability is set to 1 and the others are 0; the position of the 1 indicates the class into which the input feature matrix is identified, i.e. normal pattern or lesion pattern.
Compared with the prior art, the beneficial effects of the invention are as follows:
1. The endoscopic-image lesion pattern recognition method of the present invention locates suspected lesion regions using a human visual attention mechanism; this localization greatly reduces the computational load of the artificial neural network and thereby improves the accuracy of lesion pattern recognition in endoscopic images.
2. The image processing of the present invention is performed in the visual-perception-based Luv color space, making the fullest use of the color information of the endoscopic image; since color is important information for diagnosing lesion regions, this improves the accuracy and specificity of lesion region determination.
3. The pattern recognition method of the present invention does not distinguish between specific types of lesion but classifies all regions into a lesion pattern or a normal pattern, and it uses a pulse coupled neural network; this greatly improves the accuracy and practicality of lesion identification in endoscopic images and reduces the workload of clinicians.
Brief description of the drawings
Fig. 1 is a flow chart of the lesion recognition method based on a pulse coupled neural network according to the present invention.
Fig. 2 is a structural diagram of the pulse coupled neural network.
Fig. 3 is a structural diagram of an ICM neuron.
Detailed description of the invention
Embodiments of the invention are described in detail below with reference to the accompanying drawings.
With reference to Figs. 1 to 3, a lesion recognition method for endoscopic images based on a pulse coupled neural network comprises the following steps:
A. Video framing
Take the detection video file of the capsule endoscope detection system of a given manufacturer, input the endoscopic examination video, and split it into frames to obtain single endoscopic images in bitmap format;
B. Image preprocessing
Using the field-of-view parameters of the endoscope, smooth the black border at the edge of each bitmap image obtained in step a to obtain an endoscopic image with clear edges; then apply a Butterworth high-pass filter followed by a median filter, removing noise from the image region to be processed while retaining the high-frequency part of the image;
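As an illustration of this preprocessing step, the following Python sketch applies a frequency-domain Butterworth high-pass filter and then a median filter, as described above. It is a minimal sketch rather than the patented implementation: smoothing the black border by inpainting, the field-of-view mask `fov_mask`, and the cutoff, order and kernel-size parameters are all assumptions chosen only for demonstration.

```python
import numpy as np
import cv2  # OpenCV for inpainting, color conversion and median filtering

def butterworth_highpass(gray, cutoff=30.0, order=2):
    """Frequency-domain Butterworth high-pass filter (illustrative parameters)."""
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    V, U = np.meshgrid(v, u)                       # frequency-plane coordinates
    d = np.sqrt(U ** 2 + V ** 2) + 1e-6            # distance from the spectrum centre
    h = 1.0 / (1.0 + (cutoff / d) ** (2 * order))  # Butterworth high-pass transfer function
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f * h)))
    return cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def preprocess(frame_bgr, fov_mask):
    """Smooth the black border outside the endoscope field of view, then filter (step b).
    fov_mask is assumed to be a boolean array that is True inside the field of view."""
    border = (~fov_mask).astype(np.uint8)          # pixels outside the field of view
    smoothed = cv2.inpaint(frame_bgr, border, 3, cv2.INPAINT_TELEA)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    highpass = butterworth_highpass(gray)
    return cv2.medianBlur(highpass, 5)             # median filter for denoising/enhancement
```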
C. Color space conversion oriented to visual perception
The bitmap image obtained in step b is in the device-oriented RGB color space; convert it to the perception-oriented Luv color space;
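One straightforward way to perform this conversion (a sketch, not mandated by the patent, which only specifies the target Luv color space) is OpenCV's built-in CIELUV conversion:

```python
import cv2
import numpy as np

def to_luv(frame_bgr):
    """Convert a device-oriented BGR frame to the perception-oriented CIELUV space
    and return the L, u and v planes as float arrays."""
    luv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2Luv)  # 8-bit scaled L, u, v channels
    L, u, v = cv2.split(luv)
    return L.astype(np.float32), u.astype(np.float32), v.astype(np.float32)
```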
D. Suspected lesion region localization
Taking the u and v components of the Luv color-space image obtained in step c as input, compute the color-feature saliency map uv(c, s); taking the L component as input, compute the luminance-feature saliency map L(c, s); then apply a Laplacian mapping algorithm and a virtual-connection method to obtain the boundary regions of the salient content in the image, and compute the contour-feature saliency map O(c, s) and the texture-feature saliency map T(c, s). Normalize the obtained color-feature saliency map uv(c, s), luminance-feature saliency map L(c, s), contour-feature saliency map O(c, s) and texture-feature saliency map T(c, s) at multiple scales and fuse them to obtain the saliency map S of the image; then use an erosion algorithm to filter out salient regions of small area, and rank the remaining regions by area, which gives their degree of saliency, i.e. the suspected lesion regions;
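The sketch below illustrates the center-surround saliency computation, normalization and fusion, erosion of small regions, and ranking by area. It follows a generic Itti-style scheme under the assumption that the contour map `O_map` and texture map `T_map` have already been produced (for example by the Laplacian mapping and virtual-connection step, which is not reproduced here); the scale pairs, Otsu threshold and `min_area` value are illustrative choices, not values given in the patent.

```python
import cv2
import numpy as np

def center_surround(channel, centers=(2, 3), deltas=(3, 4)):
    """Itti-style center-surround differences over a Gaussian pyramid."""
    pyr = [channel.astype(np.float32)]
    for _ in range(max(centers) + max(deltas)):
        pyr.append(cv2.pyrDown(pyr[-1]))
    h, w = channel.shape
    acc = np.zeros((h, w), np.float32)
    for c in centers:
        for d in deltas:
            center = cv2.resize(pyr[c], (w, h))
            surround = cv2.resize(pyr[c + d], (w, h))
            acc += np.abs(center - surround)                # feature map at scale pair (c, s)
    return cv2.normalize(acc, None, 0.0, 1.0, cv2.NORM_MINMAX)

def suspected_regions(L, u, v, O_map, T_map, min_area=200):
    """Fuse the feature saliency maps, erode small blobs and rank regions by area."""
    S = sum(center_surround(m) for m in (u, v, L)) + O_map + T_map
    S = cv2.normalize(S, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(S, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.erode(binary, np.ones((5, 5), np.uint8))   # drop small salient areas
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    regions = [(stats[i, cv2.CC_STAT_AREA], i) for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] >= min_area]
    regions.sort(reverse=True)                              # largest (most salient) first
    return S, labels, [i for _, i in regions]
```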
E. Feature vector construction
Taking the saliency map S obtained in step d as input, construct within each suspected lesion region the pixel color feature vector V(uv) and the luminance feature vector V(L), and compute and construct the region contour feature vector V(O) and texture feature vector V(T);
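The patent does not spell out the exact definitions of V(uv), V(L), V(O) and V(T), so the following sketch uses common, assumed stand-ins: a joint u-v histogram and a luminance histogram for the color and luminance vectors, Hu moments of the region contour for V(O), and gradient-magnitude statistics for V(T). It reuses the `labels` image from the previous sketch and assumes each region mask is non-empty.

```python
import cv2
import numpy as np

def region_feature_vectors(L, u, v, labels, region_id, bins=16):
    """Build illustrative per-region feature vectors V(uv), V(L), V(O), V(T)."""
    mask = (labels == region_id).astype(np.uint8)
    # V'(uv): joint u-v color histogram of the pixels inside the region
    V_uv = cv2.calcHist([u.astype(np.float32), v.astype(np.float32)], [0, 1], mask,
                        [bins, bins], [0, 256, 0, 256]).flatten()
    # V'(L): luminance histogram of the region
    V_L = cv2.calcHist([L.astype(np.float32)], [0], mask, [bins], [0, 256]).flatten()
    # V(O): contour descriptor from the largest contour of the region (Hu moments)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    V_O = cv2.HuMoments(cv2.moments(max(contours, key=cv2.contourArea))).flatten()
    # V(T): crude texture descriptor: gradient-magnitude statistics inside the region
    gx = cv2.Sobel(L.astype(np.float32), cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(L.astype(np.float32), cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)[mask > 0]
    V_T = np.array([mag.mean(), mag.std(), np.percentile(mag, 90)], np.float32)
    return V_uv, V_L, V_O, V_T
```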
F. Pattern recognition of lesion regions
Taking the feature vectors constructed in step e as input, use the pulse coupled neural network to perform pattern recognition and obtain the pattern of the suspected region to be identified, i.e. normal pattern or lesion pattern;
G. Region transfer
Identify the suspected lesion regions one by one in order of decreasing area; if other suspected lesion regions remain, repeat steps e and f until all suspected lesion regions have been recognized;
H. Lesion image classification and extraction
Apply a logical OR over the pattern recognition results of all suspected lesion regions in the endoscopic image to obtain the classification of the image, i.e. image normal pattern or image lesion pattern; if the image is in the lesion pattern, mark the lesion regions;
I. Repeat steps b, c, d, e, f, g and h until the endoscopic images of the whole video file have all been identified.
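Steps g through i amount to a per-frame loop with an OR-fusion of the region-level decisions. The fragment below is only a hypothetical driver; `video_to_frames`, `analyse_frame` and `mark_lesion_regions` are placeholder names for functionality the patent describes but does not name.

```python
def classify_frame(region_patterns):
    """Image-level decision of step h: logical OR over all region results
    (1 = lesion pattern, 0 = normal pattern)."""
    return int(any(p == 1 for p in region_patterns))

# Hypothetical per-video driver for steps b-i:
# for frame in video_to_frames("capsule_video.avi"):        # step a (name assumed)
#     region_patterns, regions = analyse_frame(frame)        # steps b-g (name assumed)
#     if classify_frame(region_patterns):
#         mark_lesion_regions(frame, regions)                # step h marking (name assumed)
```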
Further, in step e, within each salient region of the saliency map S of the endoscopic image, construct a preliminary pixel color feature vector V'(uv) and a preliminary luminance feature vector V'(L), map them to a high-dimensional space with a Sigmoid kernel function, and use principal component analysis (PCA) to extract the kernel principal component features of the data, obtaining the dimension-reduced color feature vector V(uv) and luminance feature vector V(L); at the same time, compute and construct within the region the contour feature vector V(O) and the texture feature vector V(T), build the feature matrix, and perform the lesion pattern recognition.
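For the Sigmoid-kernel mapping and kernel principal component extraction described here, scikit-learn's KernelPCA offers a compact illustration. This is an assumed implementation choice, not the patented one; the `gamma`, `coef0` and `n_components` values are placeholders, and the preliminary vectors of a set of regions are assumed to be stacked as the rows of the input matrix.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def reduce_color_luminance_features(V_uv_prime, V_L_prime, n_components=8):
    """Map preliminary V'(uv) and V'(L) vectors (one pair per region) to a
    high-dimensional space with a sigmoid kernel and keep the kernel principal
    components, giving the dimension-reduced color/luminance features of step e."""
    X = np.vstack([np.concatenate([uv, l]) for uv, l in zip(V_uv_prime, V_L_prime)])
    kpca = KernelPCA(n_components=n_components, kernel="sigmoid", gamma=1e-3, coef0=1.0)
    return kpca.fit_transform(X)   # rows are the reduced features; needs several samples
```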
Further, in step f, the pulse coupled neural network consists of an input layer, an intersecting cortical model (ICM) neuron layer and a competition output layer;
The input layer has four input channels in total: a color feature input, a luminance feature input, a contour feature input and a texture feature input; the ICM neuron layer uses four ICM neurons; and the competition output layer consists of a competition neuron weight matrix LW and a competition function C;
The color feature and luminance feature input channels of the input layer are connected to the input of ICM neuron No. 1 of the ICM neuron layer, the contour feature and texture feature input channels are connected to the input of ICM neuron No. 2, ICM neurons No. 1 and No. 2 are respectively interconnected with the inputs of ICM neurons No. 3 and No. 4, ICM neurons No. 3 and No. 4 are connected to the competition neuron weight matrix LW of the competition output layer, and the output of the competition neuron weight matrix LW is connected to the input of the competition function C of the competition layer neuron;
Further, each ICM neuron comprises two coupled oscillators, and the connection weighting coefficient matrix is set to W.
The competition neuron weight matrix LW is a 2-dimensional vector in which only one element is 1 at any time and the others are 0; the competition function of the competition layer neuron uses a Gaussian function and outputs a 2-dimensional vector in which the element corresponding to the pattern class with the largest likelihood probability is set to 1 and the others are 0; the position of the 1 indicates the class into which the input feature matrix is identified, i.e. normal pattern or lesion pattern.
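To make the network structure concrete, the sketch below shows a simplified ICM neuron (two coupled internal states, a feeding state and a dynamic threshold, linked by a weight matrix W) and a two-class Gaussian competition layer that emits a one-hot 2-dimensional output. The dynamics and parameters follow the standard intersecting cortical model from the literature rather than values from the patent, and the specific wiring and training of the four ICM neurons described above are not reproduced here.

```python
import numpy as np

class ICMNeuron:
    """Simplified intersecting cortical model neuron: a feeding state F and a dynamic
    threshold theta (the two coupled oscillators), linked by a weight matrix W."""
    def __init__(self, W, f=0.9, g=0.8, h=20.0):
        self.W, self.f, self.g, self.h = W, f, g, h

    def run(self, stimulus, steps=10):
        F = np.zeros_like(stimulus, dtype=float)
        theta = np.ones_like(stimulus, dtype=float)
        fire_count = np.zeros_like(stimulus, dtype=float)
        for _ in range(steps):
            Y = (F > theta).astype(float)            # pulse output of the neuron
            F = self.f * F + stimulus + self.W @ Y   # feeding state update
            theta = self.g * theta + self.h * Y      # dynamic threshold update
            fire_count += Y
        return fire_count                            # firing pattern summarizing the input

class CompetitionLayer:
    """Two-class competition output: Gaussian competitive function C plus the
    one-hot competition weight vector LW described in the text."""
    def __init__(self, prototypes, sigma=1.0):
        self.prototypes, self.sigma = prototypes, sigma   # one prototype per class

    def classify(self, feature):
        likeness = [np.exp(-np.linalg.norm(feature - p) ** 2 / (2 * self.sigma ** 2))
                    for p in self.prototypes]
        LW = np.zeros(2)
        LW[int(np.argmax(likeness))] = 1.0           # winner-take-all one-hot output
        return LW                                    # e.g. [1, 0] = normal, [0, 1] = lesion
```

In a full system, the firing patterns of ICM neurons No. 3 and No. 4 would be summarized (for example by their firing counts, as in `run`) and passed through LW and the Gaussian competitive function to yield the normal/lesion decision.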
Finally, it should be noted that the above is only a specific embodiment of the present invention. Obviously, the invention is not limited to the above example, and many variations are possible. All variations that a person of ordinary skill in the art can derive directly from, or associate with, the disclosure of the present invention are considered to fall within the scope of protection of the present invention.

Claims (5)

1. A lesion recognition method for endoscopic images based on a pulse coupled neural network, characterized in that the recognition method comprises the following steps:
A. Video framing
Input the endoscopic detection video file and split it into frames to obtain single endoscopic images in bitmap format;
B. Image preprocessing
Using the field-of-view parameters of the endoscope, smooth the black border at the edge of each bitmap image obtained in step a to obtain an endoscopic image with clear edges; then apply a high-pass filter for denoising, followed by a median filter for enhancement, removing noise from the image region to be processed while retaining the high-frequency part of the image;
C. Color space conversion oriented to visual perception
The bitmap image obtained in step b is in the device-oriented RGB color space; convert it to the perception-oriented Luv color space;
D. Suspected lesion region localization
Taking the u and v components of the Luv color-space image obtained in step c as input, compute the color-feature saliency map uv(c, s); taking the L component as input, compute the luminance-feature saliency map L(c, s); then apply a Laplacian mapping algorithm and a virtual-connection method to obtain the boundary regions of the salient content in the image, and compute the contour-feature saliency map O(c, s) and the texture-feature saliency map T(c, s); normalize the obtained color-feature saliency map uv(c, s), luminance-feature saliency map L(c, s), contour-feature saliency map O(c, s) and texture-feature saliency map T(c, s) at multiple scales and fuse them to obtain the saliency map S of the image; then use an erosion algorithm to filter out salient regions of small area, and rank the remaining regions by area, which gives their degree of saliency, i.e. the suspected lesion regions;
E. Feature vector construction
Taking the saliency map S obtained in step d as input, construct within each suspected lesion region the pixel color feature vector V(uv) and the luminance feature vector V(L), and compute and construct the region contour feature vector V(O) and texture feature vector V(T);
F. Pattern recognition of lesion regions
Taking the feature vectors constructed in step e as input, use the pulse coupled neural network to perform pattern recognition and obtain the pattern of the suspected region to be identified, i.e. normal pattern or lesion pattern;
G. Region transfer
Identify the suspected lesion regions one by one in order of decreasing area; if other suspected lesion regions remain, repeat steps e and f until all suspected lesion regions have been recognized;
H. Lesion image classification and extraction
Apply a logical OR over the pattern recognition results of all suspected lesion regions in the endoscopic image to obtain the classification of the image, i.e. image normal pattern or image lesion pattern; if the image is in the lesion pattern, mark the lesion regions;
I. Repeat steps b, c, d, e, f, g and h until the endoscopic images of the whole video file have all been identified.
2. The lesion recognition method for endoscopic images based on a pulse coupled neural network according to claim 1, characterized in that: in step e, within each salient region of the saliency map S of the endoscopic image, a preliminary pixel color feature vector V'(uv) and a preliminary luminance feature vector V'(L) are constructed and then mapped to a high-dimensional space with a Sigmoid kernel function, and principal component analysis is used to extract the kernel principal component features of the data, yielding the dimension-reduced color feature vector V(uv) and luminance feature vector V(L); at the same time, the region contour feature vector V(O) and texture feature vector V(T) are computed and constructed within the region, the feature matrix is built, and the lesion pattern recognition is performed.
3. The lesion recognition method for endoscopic images based on a pulse coupled neural network according to claim 1 or 2, characterized in that: in step f, the pulse coupled neural network consists of an input layer, an intersecting cortical model neuron layer and a competition output layer;
the input layer has four input channels in total: a color feature input, a luminance feature input, a contour feature input and a texture feature input; the intersecting cortical model neuron layer uses four ICM neurons; and the competition output layer consists of a competition neuron weight matrix LW and a competition function C;
the color feature and luminance feature input channels of the input layer are connected to the input of ICM neuron No. 1 of the neuron layer, the contour feature and texture feature input channels are connected to the input of ICM neuron No. 2, ICM neurons No. 1 and No. 2 are respectively interconnected with ICM neurons No. 3 and No. 4, ICM neurons No. 3 and No. 4 are connected to the input of the competition neuron weight matrix LW of the competition output layer, and the output of the competition neuron weight matrix LW is connected to the input of the competition function C of the competition layer neuron.
4. The lesion recognition method for endoscopic images based on a pulse coupled neural network according to claim 3, characterized in that: each ICM neuron comprises two coupled oscillators, and the connection weighting coefficient matrix is set to W.
5. The lesion recognition method for endoscopic images based on a pulse coupled neural network according to claim 3, characterized in that: the competition neuron weight matrix LW is a 2-dimensional vector in which only one element is 1 at any time and the others are 0; the competition function C of the competition layer neuron uses a Gaussian function and outputs a 2-dimensional vector in which the element corresponding to the pattern class with the largest likelihood probability is set to 1 and the others are 0; and the position of the 1 indicates the class into which the input feature matrix is identified, i.e. normal pattern or lesion pattern.
CN201610207950.0A 2016-04-05 2016-04-05 Endoscopic image lesion identification method based on pulse coupled neural network Pending CN105913075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610207950.0A CN105913075A (en) 2016-04-05 2016-04-05 Endoscopic image lesion identification method based on pulse coupled neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610207950.0A CN105913075A (en) 2016-04-05 2016-04-05 Endoscopic image lesion identification method based on pulse coupled neural network

Publications (1)

Publication Number Publication Date
CN105913075A true CN105913075A (en) 2016-08-31

Family

ID=56745336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610207950.0A Pending CN105913075A (en) 2016-04-05 2016-04-05 Endoscopic image lesion identification method based on pulse coupled neural network

Country Status (1)

Country Link
CN (1) CN105913075A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274428A (en) * 2017-08-03 2017-10-20 汕头市超声仪器研究所有限公司 Multi-target three-dimensional ultrasonic image partition method based on emulation and measured data
WO2019023819A1 (en) * 2017-08-03 2019-02-07 汕头市超声仪器研究所有限公司 Simulated and measured data-based multi-target three-dimensional ultrasound image segmentation method
US11282204B2 (en) 2017-08-03 2022-03-22 Shantou Institute Of Ultrasonic Instruments Co., Ltd. Simulated and measured data-based multi-target three-dimensional ultrasound image segmentation method
CN107705852A (en) * 2017-12-06 2018-02-16 北京华信佳音医疗科技发展有限责任公司 Real-time the lesion intelligent identification Method and device of a kind of medical electronic endoscope
CN109241898A (en) * 2018-08-29 2019-01-18 合肥工业大学 Object localization method and system, the storage medium of hysteroscope video
CN109241898B (en) * 2018-08-29 2020-09-22 合肥工业大学 Method and system for positioning target of endoscopic video and storage medium
CN109411084A (en) * 2018-11-28 2019-03-01 武汉大学人民医院(湖北省人民医院) A kind of intestinal tuberculosis assistant diagnosis system and method based on deep learning
CN111507454A (en) * 2019-01-30 2020-08-07 兰州交通大学 Improved cross cortical neural network model for remote sensing image fusion
CN111507454B (en) * 2019-01-30 2022-09-06 兰州交通大学 Improved cross cortical neural network model for remote sensing image fusion
CN110706220A (en) * 2019-09-27 2020-01-17 贵州大学 Capsule endoscope image processing and analyzing method
CN110705440A (en) * 2019-09-27 2020-01-17 贵州大学 Capsule endoscopy image recognition model based on neural network feature fusion
CN110706220B (en) * 2019-09-27 2023-04-18 贵州大学 Capsule endoscope image processing and analyzing method
CN110705440B (en) * 2019-09-27 2022-11-01 贵州大学 Capsule endoscopy image recognition model based on neural network feature fusion
CN110706225A (en) * 2019-10-14 2020-01-17 山东省肿瘤防治研究院(山东省肿瘤医院) Tumor identification system based on artificial intelligence
CN110910325A (en) * 2019-11-22 2020-03-24 深圳信息职业技术学院 Medical image processing method and device based on artificial butterfly optimization algorithm
CN111784683B (en) * 2020-07-10 2022-05-17 天津大学 Pathological section detection method and device, computer equipment and storage medium
CN111784683A (en) * 2020-07-10 2020-10-16 天津大学 Pathological section detection method and device, computer equipment and storage medium
CN112184837A (en) * 2020-09-30 2021-01-05 百度(中国)有限公司 Image detection method and device, electronic equipment and storage medium
CN113920042A (en) * 2021-09-24 2022-01-11 深圳市资福医疗技术有限公司 Image processing system and capsule endoscope
CN113920042B (en) * 2021-09-24 2023-04-18 深圳市资福医疗技术有限公司 Image processing system and capsule endoscope
CN114240941A (en) * 2022-02-25 2022-03-25 浙江华诺康科技有限公司 Endoscope image noise reduction method, device, electronic apparatus, and storage medium
CN114240941B (en) * 2022-02-25 2022-05-31 浙江华诺康科技有限公司 Endoscope image noise reduction method, device, electronic apparatus, and storage medium
CN115798725A (en) * 2022-10-27 2023-03-14 佛山读图科技有限公司 Method for making lesion-containing human body simulation image data for nuclear medicine
CN115798725B (en) * 2022-10-27 2024-03-26 佛山读图科技有限公司 Method for manufacturing human body simulation image data with lesion for nuclear medicine

Similar Documents

Publication Publication Date Title
CN105913075A (en) Endoscopic image lesion identification method based on pulse coupled neural network
CN106934799B (en) Capsule endoscope visual aids diagosis system and method
Shin et al. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification
Li et al. Computer-aided detection of bleeding regions for capsule endoscopy images
Li et al. Computer-based detection of bleeding and ulcer in wireless capsule endoscopy images by chromaticity moments
EP1997074B1 (en) Device, system and method for automatic detection of contractile activity in an image frame
Li et al. Computer-aided small bowel tumor detection for capsule endoscopy
CN108470359A (en) A kind of diabetic retinal eye fundus image lesion detection method
JP7270626B2 (en) Medical image processing apparatus, medical image processing system, operating method of medical image processing apparatus, program, and storage medium
JP7062068B2 (en) Image processing method and image processing device
Li et al. Comparison of several texture features for tumor detection in CE images
CN109241963B (en) Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image
Eskandari et al. Polyp detection in Wireless Capsule Endoscopy images by using region-based active contour model
Guan et al. Segmentation of thermal breast images using convolutional and deconvolutional neural networks
US20230377147A1 (en) Method and system for detecting fundus image based on dynamic weighted attention mechanism
Lei et al. Automated detection of retinopathy of prematurity by deep attention network
Xing et al. A saliency-aware hybrid dense network for bleeding detection in wireless capsule endoscopy images
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
Ay et al. Automated classification of nasal polyps in endoscopy video-frames using handcrafted and CNN features
Zhuang et al. APRNet: A 3D anisotropic pyramidal reversible network with multi-modal cross-dimension attention for brain tissue segmentation in MR images
AU2021425940A1 (en) System and method of using right and left eardrum otoscopy images for automated otoscopy image analysis to diagnose ear pathology
CN110969603B (en) Relative positioning method and device for lesion position and terminal equipment
CN112419246A (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
Liao et al. A case study on computer-aided diagnosis of nonerosive reflux disease using deep learning techniques
CN115994999A (en) Goblet cell semantic segmentation method and system based on boundary gradient attention network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160831

RJ01 Rejection of invention patent application after publication