CN108537773A - Intelligent auxiliary identification method for pancreatic cancer and pancreatic inflammatory disease - Google Patents
- Publication number
- CN108537773A (application CN201810141703.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- pancreas
- fusion
- feature
- classification network
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis)
- G06F18/2148 — Generating training patterns; bootstrap methods, e.g. boosting cascade
- G06N3/045 — Neural networks: combinations of networks
- G06N3/08 — Neural networks: learning methods
- G06T5/50 — Image enhancement or restoration using two or more images
- G16H30/20 — ICT for handling medical images, e.g. DICOM, HL7 or PACS
- G16H50/20 — ICT for computer-aided diagnosis
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30096 — Tumor; lesion
Abstract
The invention discloses an intelligent auxiliary identification method for distinguishing pancreatic cancer from pancreatic inflammatory disease. The method comprises: reading pancreatic medical image data and normalizing it to obtain normalized images; denoising, registering and fusing the normalized images to obtain a multi-modal fused image; selecting a region of interest on the image in which the pancreatic structure is shown most clearly, mapping it onto the other images, and saving the region of interest in a natural image format readable by the subsequent classification networks; extracting, classifying and fusing features of the multi-modal fused image according to the selected region of interest, and building base classification network models on the fused features; and discriminating among the classification results of the base classification networks to obtain the final classification result. The invention has strong generality: it is suitable for clinical application and can also be used for scientific research in the field of pancreatic cancer and pancreatitis.
Description
Technical field
The present invention relates to the technical field of intelligent auxiliary diagnosis, and in particular to an intelligent auxiliary identification method for distinguishing pancreatic cancer from pancreatic inflammatory disease.
Background technology
Pancreatic cancer (PC) is a common malignant tumour of the digestive system. Among malignant tumours in China its incidence ranks seventh and its mortality sixth, and the three-year survival rate is below 5%. Early symptoms of pancreatic cancer are usually inconspicuous; by the time abdominal pain, jaundice or marked weight loss appear, the disease is often already at an advanced stage. Diagnosis is complicated because the clinical presentation of pancreatic cancer closely resembles that of pancreatic inflammatory diseases such as chronic pancreatitis (CP), which also manifests as abdominal pain, indigestion, anorexia, nausea and vomiting, weight loss and obstructive jaundice, and because the two overlap considerably on conventional imaging. A definite preoperative diagnosis of pancreatic cancer is therefore difficult, and accurately distinguishing pancreatic cancer from pancreatic inflammatory disease is especially challenging.
It is well known that imaging examinations play a key role in the diagnosis of pancreatic disease, but they only provide the most intuitive pictures: how much information is extracted depends on the display quality of the examination and on the radiologist's own skill and experience. Because of the limits of human visual resolution and the possibility of human error, much of the information contained in the images cannot be fully exploited under traditional film reading; in particular, sub-visual low-level image features that can discriminate pathological tissue are easily missed. Physicians therefore need an advanced auxiliary technology that integrates the information from various examinations and processes the multi-modal images, so as to improve the detection rate of lesions such as tumours, calcification, inflammation and fibrosis. This is computer-aided diagnosis (CAD). CAD can recognize diagnostic information that the human eye cannot, serve as a second pair of eyes for the physician, improve the accuracy of pancreatic cancer diagnosis, and is playing an increasingly important role in the diagnosis of pancreatic cancer.
Imaging examinations include multi-detector-row computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, endoscopic ultrasound (EUS), PET and others, but each modality has limitations. CT is insensitive to small tumours under 2 cm in diameter and to isodense lesions, and falls short in the differential diagnosis of pancreatic head cancer versus chronic pancreatitis, because calcification, dilation of the pancreatic duct, localized masses, the double-duct sign, pancreatic duct obstruction, infiltration of peripancreatic fat and involvement of peripancreatic veins can all occur in both diseases. Patients with metallic foreign bodies in vivo, such as vascular stents, cannot undergo MRI, and the diagnostic value of MRI for pancreatic disease remains disputed. Ultrasound images the retroperitoneum poorly in patients with abundant intestinal gas and in obese patients. EUS is an invasive imaging technique that causes patient discomfort, and its performance in differentiating chronic pancreatitis from pancreatic cancer is unsatisfactory, especially in patients whose chronic pancreatitis coexists with pancreatic cancer: reportedly 22-36% of chronic pancreatitis cases have been misdiagnosed as pancreatic cancer. PET is essentially functional imaging reflecting specific metabolic processes, but inflammatory foci, especially autoimmune chronic pancreatitis, can show the same high 18F-FDG uptake as pancreatic cancer.
As described above, no single one of these examination modalities can yield an accurate judgement of pancreatic disease. An intelligent auxiliary identification system and method for pancreatic cancer and pancreatitis based on radiomics therefore has high value for clinical work and research. The present invention deeply mines the sub-visual, low-level image information of medical images of multiple modalities and, using the low-level image features that discriminate lesions, realizes the classification and identification of pancreatic cancer and pancreatic inflammatory disease from medical images. The invention can also be applied to scientific research on pancreatic cancer and pancreatic inflammatory disease.
At present, intelligent auxiliary diagnosis of pancreatic disease using image processing techniques, at home and abroad, concentrates mainly on the following work:
In 2001, Norton ID et al. proposed a self-learning artificial neural network to analyse EUS images and distinguish malignant tumours from pancreatitis. In 2008, Das A et al. performed texture analysis of pancreatic EUS images with image analysis software, reduced dimensionality by principal component analysis (PCA), and built a neural-network-based pancreatic cancer prediction model. In 2013, Zhu M et al. used image processing techniques to extract texture features from regions of interest in pancreatic EUS images, selected a preferred feature combination with a between-class distance algorithm and sequential forward search (SFS), and built a support vector machine (SVM) prediction model. Cai Zheyuan et al. had proposed a similar algorithm as early as 2008, first selecting features by between-class distance and then refining the selection by sequential forward search; they later improved the texture feature extraction by choosing multifractal dimension features based on M-band wavelet transforms, and the classification model built on these features outperformed the earlier methods in both running time and classification accuracy. In a master's thesis, Wu Yihao extended the whole classification system by combining the computer diagnosis results of fuzzy classification with a second method, so that it could not only distinguish pancreatic cancer from non-cancer but also further distinguish pancreatic cancer from pancreatitis. In 2015, Zhu J et al. introduced a new lesion descriptor, local ternary pattern variance, to improve the performance of the classification model.
In 2016, Hanania et al. used grey-level co-occurrence matrices to classify the malignancy grade of intraductal papillary mucinous neoplasms. Also in 2016, Chakraborty et al. used texture analysis of contrast-enhanced CT images to predict the survival of pancreatic ductal adenocarcinoma patients receiving neoadjuvant chemotherapy; they extracted 169 standard texture features from the lesion region, including grey-level co-occurrence matrices, run-length matrices, local binary patterns, fractal dimension and first-order statistics, and built a prediction model based on a naive Bayes classifier. In 2017, Gazit et al. classified intraductal papillary mucinous neoplasms and pancreatic cystic tumours on contrast-enhanced CT images; they hand-designed a new feature representing the solid component of the tumour and, combining it with 255 standard texture features, built a classification model based on AdaBoost.
In 1993, Du-Yih Tsai et al. proposed a method for detecting subtle abnormalities in CT pancreas images: a simple cascaded-filter detection scheme whose first step applies a grey-level logarithmic operator to enhance edges at low grey levels, whose second step shifts grey levels to suppress blurred regions, and whose final step enhances the contours of details with a logarithmic operation. In 2013, in a master's thesis based on pancreatic CT images, Zhao Chao et al. proposed a support vector machine classification method optimized by a quantum genetic algorithm for pancreatic cancer detection.
Analysis of the studies above shows that current intelligent auxiliary identification systems for pancreatic disease have the following shortcomings: (1) they require fine segmentation of the pancreas or the lesion region, which demands deep specialist background and rich clinical experience from the physician, is time-consuming and laborious, and inevitably introduces segmentation error; (2) features are extracted by hand-crafted design, so the extracted features have limited representational and generalization power, and researchers must study the problem domain in depth to design more adaptable features; (3) the studies above work on images of a single modality and ignore the performance gains that other modalities might bring.
Summary of the invention
To address the shortcomings of the prior art, the present invention, based on radiomics and deep learning, provides an intelligent auxiliary identification method for distinguishing pancreatic cancer from pancreatic inflammatory disease.

The method comprises the following steps:
1) read pancreatic medical image data and normalize it to obtain normalized images;

2) denoise, register and fuse the normalized images to obtain a multi-modal fused image;

3) select a region of interest on the image in which the pancreatic structure is shown most clearly, map it onto the other images, and save the region of interest in a natural image format readable by the subsequent classification networks;

4) according to the selected region of interest, extract, classify and fuse the features of the multi-modal images or the fused image, and build base classification network models on the fused features;

5) discriminate among the classification results of the base classification networks to obtain the final classification result.
Preferably, the pancreatic medical image data in step 1) comes from a PACS system and medical imaging devices.

Preferably, the image fusion in step 2) uses pixel-level image fusion techniques, including spatial-domain and transform-domain algorithms.

Preferably, the region of interest in step 3) is a rectangle containing all the pancreatic tissue of the affected area, and the natural image format is .png or .bmp.
Preferably, the feature extraction, classification and fusion in step 4), and the building of the base classification network models, proceed as follows:

1) construct a dedicated deep pyramid convolutional neural network for the multi-modal fused image; the network places a series of spatial pyramid pooling layers before the fully connected layers, so that the input image may be of arbitrary size;

2) feed the multi-modal fused image into the dedicated deep pyramid convolutional neural network and extract the features output by the fully connected layer to generate feature maps;
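The pyramid pooling idea can be shown in isolation. The following numpy sketch (an assumption about the exact layer, mirroring standard spatial pyramid pooling; the pyramid levels here are illustrative) max-pools a feature map of any spatial size into fixed grids, so every input yields a vector of the same length:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map into 1x1, 2x2 and 4x4 grids of
    bins, so any input size yields the same output length:
    C * (1 + 4 + 16) values."""
    C, H, W = fmap.shape
    out = []
    for n in levels:
        # integer bin edges for an n x n grid over the H x W plane
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                # guard against empty bins on very small inputs
                cell = fmap[:, hs[i]:max(hs[i + 1], hs[i] + 1),
                              ws[j]:max(ws[j + 1], ws[j] + 1)]
                out.append(cell.max(axis=(1, 2)))
    return np.concatenate(out)

# Two different input sizes give identical output lengths
v1 = spatial_pyramid_pool(np.random.rand(8, 33, 47))
v2 = spatial_pyramid_pool(np.random.rand(8, 60, 21))
print(v1.shape, v2.shape)  # (168,) (168,)
```

Because the output length depends only on the channel count and the pyramid levels, the fully connected layers that follow can accept images of arbitrary size, as the step above requires.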
3) fuse the above features with a bilinear fusion function: take the outer product of the elements at corresponding positions of the two feature maps and sum, obtaining a fused feature whose channel count is the square of the original channel count. This can be expressed as

y_bil = Σ_{h,w} x_a(h, w) x_b(h, w)^T,

where y_bil denotes the fused feature, x_a and x_b denote the feature maps, x_a, x_b ∈ R^{H×W×D}, and H, W and D denote the height, width and channel count of a feature map;
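A small numpy sketch of the bilinear fusion just described (a reading of the patent's description as the standard summed outer product; the shapes are illustrative): at each spatial position the outer product of the two D-dimensional channel vectors is taken, and the results are summed over all positions, giving D squared output values:

```python
import numpy as np

def bilinear_fuse(xa, xb):
    """Sum, over all H*W positions, of the outer product of the two
    D-dimensional channel vectors, flattened to a D*D feature vector."""
    H, W, D = xa.shape
    # einsum sums x_a(h,w) x_b(h,w)^T over every spatial position
    return np.einsum('hwi,hwj->ij', xa, xb).reshape(-1)

xa = np.random.rand(7, 7, 16)
xb = np.random.rand(7, 7, 16)
y = bilinear_fuse(xa, xb)
print(y.shape)  # (256,) -- the square of the 16 input channels
```

The quadratic growth in channel count (16 → 256 here) is what motivates the dimension-reduction step that follows.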
4) apply a convolutional fusion function to reduce the dimensionality of the fused feature: convolve the result of the bilinear fusion function with a filter f while introducing a bias b, so as to realize the dimension reduction, expressed as

y_conv = y_bil * f + b;

where y_conv is the output of the convolutional fusion function, f ∈ R^{1×1×D²×D} and b ∈ R^D;
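Since the filter has 1×1 spatial extent, this convolutional fusion is equivalent to a per-position linear projection. A minimal sketch, assuming D = 16 and random weights for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16

y_bil = rng.random(D * D)    # bilinear-fused feature with D^2 = 256 channels
f = rng.random((D * D, D))   # the 1x1 filter, acting as a D^2 -> D projection
b = rng.random(D)            # bias

# A 1x1 convolution at a single position is just a linear map plus bias:
y_conv = y_bil @ f + b
print(y_conv.shape)  # (16,)
```

The fused 256-channel feature is thus compressed back to the original channel count before being passed to the classifier.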
5) train a classification model on the dimension-reduced fused features, i.e. build a base classification network model, where the classification technique either combines weak classifiers into a strong classifier or trains a kernel-based support vector machine.
Preferably, the discrimination in step 5) proceeds as follows:

1) train each base classification network model on the training data and compute its classification error rate;

2) compute the coefficient of each base classification network model from its classification error rate;

3) unify the class labels of the base classification network models, obtain each model's predicted probability for each class label on the example under test, remove the outlier predictions, take a weighted vote over the remaining predicted probabilities, and obtain the final classification result.
Preferably, the classification error rate is computed as follows. Suppose there are M base classification network models, denoted C_m, m ∈ {1, 2, ..., M}, and a training data set T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} with y_i ∈ Y = {-1, +1}. The classification error rate e_m of the m-th model on the training set is

e_m = (1/N) Σ_{i=1}^{N} I(C_m(x_i) ≠ y_i),

and the coefficient of each base classification network model is

α_m = (1/2) ln((1 - e_m) / e_m).
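These two quantities follow the familiar AdaBoost-style weighting scheme: a model's coefficient grows as its error rate falls and reaches zero at e_m = 0.5 (a coin-flip model). A minimal sketch with toy labels:

```python
import math

def error_rate(preds, labels):
    """Fraction of training examples the base model misclassifies."""
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def model_coefficient(e_m):
    """AdaBoost-style weight: large when the error rate is small,
    zero at e_m = 0.5."""
    return 0.5 * math.log((1 - e_m) / e_m)

labels = [1, 1, -1, -1, 1]
preds  = [1, -1, -1, -1, 1]   # one mistake out of five
e = error_rate(preds, labels)
print(round(e, 3), round(model_coefficient(e), 3))  # 0.2 0.693
```

A model with a 20% error rate thus receives a positive weight of about 0.693, while weaker models count for correspondingly less in the vote.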
Preferably, the class labels of the base classification network models are unified to {-1, 1}; the unifying function A_m(x) rescales the output of model C_m into [-1, 1]. The predicted probabilities P_m are then computed, where Label is the class label, as

P_m(Label = 1) = (A_m(x) + 1)/2,
P_m(Label = -1) = 1 - (A_m(x) + 1)/2.
Preferably, the final classification result is obtained as follows. For each of the M base classification network models a prediction probability PL_m is computed, and the larger of the two class probabilities of each model is

P_mmax = max[P_m(Label = 1), P_m(Label = -1)].

The sums PL_m + P_mmax are computed and sorted, the base classification network models corresponding to the maximum and minimum values are removed, and a weighted vote over the remaining M-2 base classification models is realized by building the linear combination

f(x) = Σ_m α_m A_m(x) (summed over the remaining models),

giving the final classification result C(x) = sign(f(x)).
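The outlier-removal-then-weighted-vote scheme can be sketched compactly. This is a simplification, not the patent's exact formula: the per-model sum PL_m + P_mmax is collapsed here into a single illustrative confidence score, and all numbers are made up:

```python
def ensemble_discriminate(votes, alphas, confidences):
    """Drop the most and least confident base models (the outliers),
    then take a weighted-vote sign over the rest. votes[m] in {-1, +1}."""
    order = sorted(range(len(votes)), key=lambda m: confidences[m])
    keep = order[1:-1]  # remove min- and max-confidence models
    f = sum(alphas[m] * votes[m] for m in keep)
    return 1 if f >= 0 else -1

votes       = [1, 1, -1, 1]              # each base model's class vote
alphas      = [0.9, 0.7, 0.2, 0.5]       # coefficients from the error rates
confidences = [0.99, 0.80, 0.55, 0.70]   # per-model certainty on this case
print(ensemble_discriminate(votes, alphas, confidences))  # 1
```

Dropping the extremes means one overconfident or wildly uncertain model cannot dominate the final decision; the remaining models vote in proportion to their training-stage reliability.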
The advantages of the present invention are as follows. (1) The intelligent auxiliary identification method of the present invention selects the region of interest by manually drawing a rectangle. Unlike other organs such as the lung or the breast, a diseased region of the pancreas often changes the overall structure of the gland; pancreatic head carcinoma, for example, enlarges the pancreatic head while frequently also causing atrophy of the pancreatic tail. The present invention therefore differs from CAD systems for other diseases, which rely on fine segmentation of the lesion region: an experienced radiologist selects the region of interest on the single-modality or fused image in which the pancreatic structure is shown most clearly, and the region of interest is a rectangle containing all the pancreatic tissue involved by the lesion. Constructing such a region of interest is simpler than manual fine segmentation and avoids the uncertainty introduced by immature automatic pancreas segmentation techniques;
(2) in the present invention, the final identification result draws on features from three sources: the individual features of each modality, the features of the multi-modal fused image, and the top-level fused features of the multi-modal images. The individual features of each modality supply the different bodily characteristics captured by different imaging devices. The features of the multi-modal fused image arise from fusing the images of different modalities into a single image, so that the features of the different modalities can be trained jointly in the feature extraction and classification stage. The top-level fused features of the multi-modal images are obtained by fusing the extracted top-level features of the different modality images and feeding them into the classification model, jointly training one strong classifier on the top-level features of all modalities and thereby making better use of the top-level features of each modality;
(3) present invention proposes a kind of formula obtaining final identification result by classification results fusion, which first removes
Classification peels off as a result, being weighted voting to remaining classification results again obtains final identification result, the power of each sorter network result
Value takes into account it in the error in classification rate of training stage and to the certainty factor of Exemplary classes;
(4) in the present invention, pyramid pond is introduced into feature extraction network so that input picture need not be unified to identical
Size can be inputted sorter network in the form of arbitrary dimension, avoid the loss of useful information and the introducing of redundant information;
(5) present invention have very strong universality, both can under the selection of doctor to the medical image of multiple modalities into
Row combinatory analysis can also be analyzed just for the medical image of a certain mode, be suitable for clinical practice, it can also be used to pancreas
The scientific research of gland cancer and pancreas inflammatory disease areas.
Description of the drawings
Fig. 1 is a flow chart of the intelligent auxiliary identification method of the present invention;
wherein the dotted-line flow is optional, i.e. it is carried out only when images of two or more modalities are acquired; otherwise only the solid-line flow is carried out, i.e. classification is performed on a single-modality image;
Fig. 2 is an example of PET/CT image fusion;
Fig. 3 is an example of the construction of a deep pyramid-pooling convolutional neural network;
wherein DCNN denotes a deep convolutional neural network.
Specific embodiments
The intelligent auxiliary identification method for pancreatic cancer medical images is described in detail below with reference to the accompanying drawings, so that those skilled in the art can implement it by referring to the description.
Embodiment 1
The invention discloses a method for intelligent auxiliary identification of pancreatic cancer and pancreatic inflammatory diseases, with the following specific steps:
1) the multi-modality images are read in and grey-level normalization is performed;
2) image preprocessing: the normalized images obtained in step 1) are denoised and registered, yielding quality-enhanced multi-modality images with a unified sampling interval, which are then fused;
3) from the multi-modality images and the fused image obtained in step 2), an experienced radiologist draws a rectangle enclosing the region of interest in the single-modality or fused image in which the pancreatic structure is shown most clearly, i.e. selects a rectangular target area; this region of interest is mapped onto the images of the other modalities and saved in a natural image format recognizable by the subsequent classification networks, such as .png or .bmp;
4) deep pyramid-pooling convolutional neural networks are constructed to extract and classify the multi-modality and fused-image features from the regions of interest obtained in step 3); the multi-modality features extracted by these networks are simultaneously fused, and a classification model is built on the fused features;
5) the classification results of the basic classification networks of step 4) are adjudicated: combining their classification error rates on training-stage test data with their performance on the specific example, outlying classification results are removed and a weighted vote is taken over the remaining results, giving the final classification result and its certainty.
Step 1) acquires the image data and normalizes it. The sources of the image data include but are not limited to PACS systems and medical imaging devices; specific medical imaging devices include but are not limited to CT, PET/CT, SPECT, MRI, ultrasound, X-ray, angiography, fluorescence photography and microphotography. The collected data are then normalized; this normalization includes but is not limited to cutting and compressing the tonal range of the medical image to enhance its useful details.
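As a concrete illustration of step 1), the tonal-range cutting and compression can be sketched as a Hounsfield-unit windowing followed by rescaling. The (-100, 240) HU window below is an assumed soft-tissue window chosen for illustration, not a value given by the source:

```python
import numpy as np

def normalize_ct(image_hu, window=(-100, 240)):
    """Cut the tonal range to an HU window and compress it to [0, 1].

    The (-100, 240) HU soft-tissue window is an illustrative assumption;
    the method only requires that the grey range be cut and compressed
    to enhance the useful details of the image.
    """
    lo, hi = window
    clipped = np.clip(image_hu.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)
```

Values below the window floor saturate to 0 and values above the ceiling to 1, which is precisely the "cutting and compression" of the grey range the step describes.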
Step 2) obtains the denoised and registered images of the different modalities and fuses them, as follows:
S2-1, the normalized images obtained in step 1 are denoised; denoising methods include but are not limited to mean filtering, median filtering, adaptive median filtering, frequency-domain filtering, and combinations of the above filtering methods;
S2-2, image registration is performed by registering the image of lower spatial resolution onto the image of higher spatial resolution to obtain a unified sampling interval; for a PET/CT scan, for example, simple scaling registers the PET image to the CT image. Registration methods include but are not limited to feature-based techniques and correlation techniques based on mutual information;
S2-3, the registered images are fused. The pixel-level image fusion techniques used include but are not limited to spatial-domain algorithms based on logical filtering, grey-level moments and contrast modulation, and transform-domain algorithms based on pyramid-decomposition and wavelet-transform fusion methods. The number of channels can be adjusted according to actual demands such as the number of input modalities and the input-layer requirements of the deep convolutional neural network.
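The three sub-steps above can be sketched in plain NumPy under stated assumptions: median filtering stands in for S2-1, nearest-neighbour "simple scaling" for S2-2, and a weighted average as a minimal spatial-domain fusion for S2-3; the actual method permits any of the techniques listed:

```python
import numpy as np

def median_denoise(img, k=3):
    """k x k median filter, one of the denoising options listed in S2-1."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def register_by_scaling(low_res, target_shape):
    """Nearest-neighbour upsampling: the 'simple scaling' of S2-2 that maps
    a low-resolution (e.g. PET) slice onto the high-resolution (CT) grid."""
    rows = np.arange(target_shape[0]) * low_res.shape[0] // target_shape[0]
    cols = np.arange(target_shape[1]) * low_res.shape[1] // target_shape[1]
    return low_res[np.ix_(rows, cols)]

def fuse_average(a, b, w=0.5):
    """Pixel-level weighted-average fusion, a minimal spatial-domain stand-in
    for the fusion algorithms listed in S2-3."""
    return w * a + (1.0 - w) * b
```

After `register_by_scaling`, both modalities share one sampling grid, so the pixel-level fusion in S2-3 is a simple element-wise operation.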
Step 3) has the physician select the region of interest in the single-modality or fused image in which the pancreatic structure is shown most clearly, and maps it onto the images of the other modalities, as follows:
S3-1, an experienced radiologist examines the modality images and the fused image and selects the image in which the pancreatic structure is shown most clearly;
S3-2, the radiologist extracts the region of interest in the image selected in step S3-1. The region of interest is a rectangle covering all pancreatic tissue including the lesion; its construction is simpler than manual fine segmentation. The region of interest is then mapped onto the images of the other modalities, with the pancreas boundary kept 5-10 pixels away from the nearest boundary of the rectangle;
S3-3, the regions of interest in each modality and in the fused image are saved in natural image formats recognizable by the subsequent classification networks, such as .png or .bmp.
Step 4) constructs a dedicated deep pyramid-pooling convolutional neural network classification model for each modality and for the fused image, each accepting input of arbitrary size; the multi-modality features extracted by these classification models are then fused, and a classification model is built on the fused features. The specific steps are as follows:
S4-1, a dedicated deep pyramid-pooling convolutional neural network classification model is constructed for each modality image and for the fused image. The network structure uses a stack of pyramid-pooling layers before the fully connected layers, so that the input image can be of arbitrary size. The network may further include other structures, such as short (skip) connections, to accelerate training and raise performance. The optimization algorithm uses methods such as stochastic gradient descent, Adam, Nadam, Adagrad, Adadelta and RMSprop, and the classification layer uses softmax or a linear SVM as activation function. The depth and width of the network are tuned by experimental methods such as grid search so that each classification model reaches its highest accuracy, i.e. so that the network extracts the most discriminative features from the image;
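The pyramid-pooling idea of S4-1 can be sketched in plain NumPy: a C×H×W feature map is max-pooled over each cell of a 1×1, 2×2 and 4×4 grid and the results are concatenated, so the output length depends only on C and the pyramid levels, never on H and W. The levels (1, 2, 4) are an illustrative choice:

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map over each cell of an n x n grid for
    every pyramid level n, and concatenate into one fixed-length vector."""
    C, H, W = feat.shape
    pooled = []
    for n in levels:
        rows = np.linspace(0, H, n + 1).astype(int)
        cols = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                # guard against empty cells when H or W is small
                cell = feat[:, rows[i]:max(rows[i + 1], rows[i] + 1),
                               cols[j]:max(cols[j + 1], cols[j] + 1)]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)
```

Feature maps of any spatial size thus feed fully connected layers of one fixed width (here C·(1+4+16) values), which is what lets the network accept regions of interest of arbitrary size.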
S4-2, the data are fed again into each trained modality-specific deep pyramid-pooling convolutional neural network, and the features output by the fully connected layer immediately before the classification layer (the second-to-last layer) are extracted;
S4-3, the features of the modality images are first fused with a bilinear fusion function. Bilinear fusion combines the corresponding position elements of the two feature maps by outer product and summation, so that the channel count of the fused feature map is the square of the original channel count; it is denoted ybil = fbil(xa, xb),
where xa and xb denote the feature maps of different modality images and ybil the fused spatial feature map; xa, xb ∈ R^(H×W×D), with H, W and D the height, width and number of channels of the feature maps;
S4-4, the feature map so obtained has excessively high dimensionality, so the present invention further applies a convolution fusion function yconv = fconv(xa, xb) to reduce the dimensionality of the fused feature map: the result of the bilinear fusion function is convolved with a filter f, and a bias b is introduced to realize the dimensionality reduction, expressed as
yconv = ybil * f + b,
where f ∈ R^(1×1×2D×D) and b ∈ R^D. The top-level features of the multi-modality images are thereby better fused.
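Steps S4-3 and S4-4 can be sketched as follows, under assumptions: the bilinear step here takes the position-wise outer product of two H×W×D maps (keeping an H×W map with D² channels rather than summing over positions), and the 1×1 filter is shaped D²×D to match that output, whereas the text writes f ∈ R^(1×1×2D×D); both shape choices are illustrative:

```python
import numpy as np

def bilinear_fuse(xa, xb):
    """Position-wise outer product of two H x W x D feature maps; the fused
    map has D*D channels, the square of the original channel count (S4-3)."""
    H, W, D = xa.shape
    return np.einsum('hwi,hwj->hwij', xa, xb).reshape(H, W, D * D)

def conv_fuse(ybil, f, b):
    """1x1 convolution yconv = ybil * f + b for dimensionality reduction:
    every pixel's D*D fused vector is projected back to D channels (S4-4)."""
    return np.einsum('hwc,cd->hwd', ybil, f) + b
```

The 1×1 convolution is just a per-pixel linear map, so the quadratic channel blow-up of the bilinear step is undone while the learned filter keeps the most useful cross-modality interactions.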
S4-5, a classification model is trained on the fused features obtained in S4-4 until it reaches its highest accuracy. The selectable classification methods include but are not limited to training a strong classifier formed by combining weak classifiers, such as an Adaboost model or a random forest, or training a kernel method such as a support vector machine.
Step 5) adjudicates the classification results of the basic classification models of step 4). Combining their training-stage classification error rates with their performance on the specific example, outlying classification results are removed and a weighted vote is taken over the remaining results, as follows:
S5-1, the classification error rate of each basic classification model on the training-stage test data is calculated. Let there be M basic classification models, denoted Cm, m = {1, 2, ..., M}, with training data set T = {(x1, y1), (x2, y2), ..., (xN, yN)}, where yi ∈ Y = {-1, +1}; the classification error rate em of the m-th classification model on the training data set is the proportion of training examples it misclassifies;
S5-2, the coefficients αm of the M classification models are calculated from the classification error rates em;
S5-3, the class labels of the different classification models are unified and the prediction probability of each classification model for the test case on each class label is obtained; the two judgement deviation points are removed, and the weighted sum of the remaining models' prediction probabilities is calculated, giving the final diagnostic opinion and its certainty.
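Steps S5-1 and S5-2 can be sketched as below, under the assumption that the coefficients take the standard AdaBoost form αm = ½·ln((1−em)/em) and are then normalised so that they sum to 1, as the later description of f(x) requires; the source does not state the exact formula:

```python
import numpy as np

def model_weights(preds, labels):
    """Training-stage error rate em of each basic classification model and
    its coefficient alpha_m (assumed AdaBoost-style, then normalised)."""
    preds = np.asarray(preds, dtype=float)    # (M, N): row m = outputs of Cm
    labels = np.asarray(labels, dtype=float)  # (N,): true labels in {-1, +1}
    e = (preds != labels).mean(axis=1)        # em: fraction misclassified
    e = np.clip(e, 1e-6, 1 - 1e-6)            # keep the logarithm finite
    alpha = 0.5 * np.log((1.0 - e) / e)
    return e, alpha / alpha.sum()             # coefficients sum to 1
```

Under this weighting, a model with a lower training error rate receives a larger coefficient, matching the stated intent that the vote weight reflect training-stage reliability.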
In some implementations, after softmax prediction the class labels are unified to {-1, 1} and the prediction probability on each class label is calculated. For softmax, the output value is itself a prediction probability; for classification models such as SVM and Adaboost, the prediction probability for an example x is calculated by unifying the class labels of each basic classification network model to {-1, 1} with a unification function Am(x). The prediction probability Pm is then computed as follows, where Label is the class label:
Pm(Label=1) = (Am(x)+1)/2
Pm(Label=-1) = 1 - (Am(x)+1)/2.
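The label unification can be sketched with assumed mappings: a softmax P(Label=1) is rescaled to [-1, 1], while an SVM or Adaboost decision value is squashed with tanh; the exact unification function Am(x) of the method may differ:

```python
import numpy as np

def unify(raw, kind):
    """Map a model's raw output to a score Am(x) in [-1, 1] (assumed forms)."""
    if kind == "softmax":            # softmax already emits P(Label=1)
        return 2.0 * raw - 1.0
    return np.tanh(raw)              # squash an SVM/Adaboost decision value

def prediction_probs(a):
    """Pm(Label=1) = (Am(x)+1)/2 and Pm(Label=-1) = 1 - (Am(x)+1)/2."""
    p_pos = (a + 1.0) / 2.0
    return p_pos, 1.0 - p_pos
```

Whatever form Am(x) takes, the two probabilities always sum to 1, so heterogeneous models become directly comparable in the weighted vote.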
The final classification result is obtained as follows. The prediction probability PLm of each of the M basic classification network models is computed, and the maximum prediction probability of any one basic classification network model is expressed as
Pmmax = max[Pm(Label=1), Pm(Label=-1)].
The sums PLm + Pmmax are calculated and sorted, and the basic classification network models corresponding to the maximum and minimum values are removed. A linear combination f(x) is then built over the remaining M-2 basic classification network models, giving the final classification result C(x) = sign(f(x)). The linear combination f(x) implements the weighted vote of the M-2 basic classification models: the coefficient in front of Cm(x) indicates the importance of the m-th classification model Cm(x), and all coefficients sum to 1. The sign of f(x) decides the class of example x, and the absolute value of f(x) represents the certainty of the classification.
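The whole of step 5 can be put together as below. One assumption stands in for the unstated ranking quantity: each model's score is taken as its coefficient αm plus its certainty max[Pm(1), Pm(-1)] on the test case (playing the role of PLm + Pmmax), the highest- and lowest-scoring models are dropped as the two judgement deviation points, and the remaining M-2 vote with weights αm:

```python
import numpy as np

def final_decision(alphas, probs_pos):
    """Outlier-removal weighted vote of step 5 (a sketch).

    alphas:    (M,) training-stage coefficients of the basic models
    probs_pos: (M,) each model's Pm(Label=1) for the test case
    Returns the final class C(x) = sign(f(x)) and the certainty |f(x)|.
    """
    alphas = np.asarray(alphas, dtype=float)
    probs_pos = np.asarray(probs_pos, dtype=float)
    certainty = np.maximum(probs_pos, 1.0 - probs_pos)   # Pmmax
    order = np.argsort(alphas + certainty)
    keep = order[1:-1]                                   # drop min and max
    w = alphas[keep] / alphas[keep].sum()                # weights sum to 1
    votes = np.where(probs_pos[keep] >= 0.5, 1.0, -1.0)  # Cm(x)
    f_x = float(np.sum(w * votes))
    return int(np.sign(f_x)), abs(f_x)
```

Because the weights are renormalised after the two outliers are removed, |f(x)| stays in [0, 1] and can be read directly as the certainty of the final diagnosis.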
Embodiment 2
The images in Fig. 2, provided by Shanghai Changhai Hospital, illustrate the image-fusion process using PET/CT as an example. The registered PET image a) and CT image b) are first fused into a single pseudo-colour image c), which is converted into the grey-level image d) by a greyscale transform, so that the information of both modalities is fused for subsequent processing; subsequent processing may also be carried out directly on the pseudo-colour image. The pseudo-colour image is constructed by using the CT image at two different HU value ranges as two of its channels and the PET image as the third channel.
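The pseudo-image construction of Embodiment 2 can be sketched as follows: two HU windows of the registered CT serve as channels 1-2, the normalised PET uptake as channel 3, and an equal-weight average stands in for the greyscale transform. The two windows and the averaging weights are illustrative assumptions:

```python
import numpy as np

def pet_ct_pseudo(ct_hu, pet, windows=((-100, 240), (-1000, 400))):
    """Build the 3-channel pseudo image c) and its grey version d):
    channels 1-2 are the CT at two HU value ranges, channel 3 the PET."""
    def win(img, lo, hi):
        return (np.clip(img, lo, hi) - lo) / float(hi - lo)
    c1 = win(ct_hu, *windows[0])                     # narrow (soft-tissue) window
    c2 = win(ct_hu, *windows[1])                     # wide window
    c3 = (pet - pet.min()) / (np.ptp(pet) + 1e-8)    # normalised uptake
    pseudo = np.stack([c1, c2, c3], axis=-1)
    grey = pseudo.mean(axis=-1)                      # greyscale transform
    return pseudo, grey
```

Keeping the pseudo image as well as its grey version leaves both options open: the classification network can take the fused grey image or process the three-channel pseudo image directly, as the embodiment allows.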
Embodiment 3
Example construction of a deep pyramid-pooling convolutional neural network. Although the specific network structure differs between modality images, every such network comprises the following four parts:
1) input of images of arbitrary size; the input images can be processed by mean-centering, standardization, ZCA whitening and the like, so that training converges more easily and the training process is accelerated;
2) the deep convolutional neural network (DCNN), comprising convolutional layers, pooling layers, BN layers, short (skip) connections and the like; its depth, width, optimization algorithm, activation functions, learning rate and so on are tuned so that it has the best feature-extraction capability;
3) the pyramid-pooling layer, which unifies the differently sized feature maps produced by the convolutional network of 2) to the same size for the fully connected layer, thereby enabling the network to process input images of different sizes;
4) the classification layer, which classifies the features extracted from the input image, converting the disease-diagnosis problem into a feature-classification problem; its activation functions include but are not limited to softmax and a linear SVM.
Although the embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and the embodiments; the invention can be applied in all fields to which it is suited, and those skilled in the art can easily realize further modifications. Therefore, without departing from the general concept defined by the claims and their equivalents, the present invention is not limited to the specific details and embodiments shown and described here.
Claims (9)
1. A method for intelligent auxiliary identification of pancreatic cancer and pancreatic inflammatory diseases, characterized by comprising the following steps:
1) reading pancreatic medical image data and performing a normalization operation to obtain normalized images;
2) denoising, registering and fusing the normalized images to obtain a multi-modality fused image;
3) selecting a region of interest in the image in which the pancreatic structure is shown clearly, mapping it onto the other images, and saving the region of interest in a natural image format recognizable by the subsequent classification networks;
4) extracting, classifying and fusing the features of the multi-modality fused image according to the selected region of interest, and establishing basic classification network models on the fused features;
5) adjudicating the classification results of the basic classification networks to obtain the final classification result.
2. The method according to claim 1, characterized in that the pancreatic medical image data in step 1) come from PACS systems and medical imaging devices.
3. The method according to claim 1, characterized in that the image fusion in step 2) uses pixel-level image fusion techniques, including spatial-domain algorithms and transform-domain algorithms.
4. The method according to claim 1, characterized in that the region of interest in step 3) is a rectangle containing all pancreatic tissue of the affected area, and the natural image format is .png or .bmp.
5. The method according to claim 1, characterized in that the feature extraction, classification and fusion and the establishment of the basic classification network models in step 4) proceed as follows:
1) a dedicated deep pyramid-pooling convolutional neural network is constructed for the multi-modality fused image, its structure using a stack of pyramid-pooling layers before the fully connected layers so that the input image can be of arbitrary size;
2) data are fed into the multi-modality fused-image deep pyramid-pooling convolutional neural network, and the features output by the fully connected layer are extracted to generate feature maps;
3) the above features are fused with a bilinear fusion function, i.e. the corresponding position elements of the two feature maps are combined by outer product and summation to obtain the fused feature map, whose channel count is the square of the original channel count; it is denoted ybil = fbil(xa, xb),
where ybil denotes the fused feature map, xa and xb the feature maps, xa, xb ∈ R^(H×W×D), and H, W and D the height, width and number of channels of the feature maps;
4) a convolution fusion function is used to reduce the dimensionality of the fused feature map, i.e. the result of the bilinear fusion function is convolved with a filter f while a bias b is introduced to realize the dimensionality reduction, expressed as
yconv = ybil * f + b,
where yconv is the convolution fusion result, f ∈ R^(1×1×2D×D), and b ∈ R^D;
5) a classification model is trained on the dimensionally reduced fused feature map, i.e. a basic classification network model is established, the classification method used being a strong classifier formed by combining weak classifiers, or a trained kernel method such as a support vector machine.
6. The method according to claim 1, characterized in that the adjudication in step 5) proceeds as follows:
1) each established basic classification network model is trained on the training data, and its classification error rate is calculated;
2) the coefficient of each basic classification network model is calculated from its classification error rate;
3) the class labels of the basic classification network models are unified, the prediction probability of each basic classification network model for the example under test on each class label is obtained, a weighted vote is taken over the remaining prediction probabilities after the deviation points are removed, and the final classification result is obtained.
7. The method according to claim 6, characterized in that the classification error rate is computed as follows: let there be M basic classification network models, denoted Cm, m = {1, 2, ..., M}, with training data set T = {(x1, y1), (x2, y2), ..., (xN, yN)}, where yi ∈ Y = {-1, +1}; the classification error rate em of the m-th classification model on the training data set is the proportion of training examples it misclassifies, and the coefficient αm of each basic classification network model is calculated from em.
8. The method according to claim 7, characterized in that the class labels of the basic classification network models are unified to {-1, 1} by the unification function Am(x), and the prediction probability Pm is computed as
Pm(Label=1) = (Am(x)+1)/2
Pm(Label=-1) = 1 - (Am(x)+1)/2,
where Label is the class label.
9. The method according to claim 8, characterized in that the final classification result is obtained as follows: the prediction probability PLm of each of the M basic classification network models is computed, and the maximum prediction probability of any one basic classification network model is expressed as
Pmmax = max[Pm(Label=1), Pm(Label=-1)];
the sums PLm + Pmmax are calculated and sorted, and the basic classification network models corresponding to the maximum and minimum values are removed; a linear combination f(x) is built over the remaining M-2 basic classification network models to implement a weighted vote, giving the final classification result C(x) = sign(f(x)), wherein the sign of f(x) decides the class and the coefficient of each Cm(x) in f(x) reflects the importance of that model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810141703.4A CN108537773B (en) | 2018-02-11 | 2018-02-11 | Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537773A true CN108537773A (en) | 2018-09-14 |
CN108537773B CN108537773B (en) | 2022-06-17 |
Family
ID=63485999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810141703.4A Active CN108537773B (en) | 2018-02-11 | 2018-02-11 | Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537773B (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109273084A (en) * | 2018-11-06 | 2019-01-25 | 中山大学附属第一医院 | Method and system based on multi-mode ultrasound omics feature modeling
CN109544517A (en) * | 2018-11-06 | 2019-03-29 | 中山大学附属第一医院 | Multi-modal ultrasound omics analysis method and system based on deep learning
CN109544512A (en) * | 2018-10-26 | 2019-03-29 | 浙江大学 | It is a kind of based on multi-modal embryo's pregnancy outcome prediction meanss |
CN109559296A (en) * | 2018-10-08 | 2019-04-02 | 广州市本真网络科技有限公司 | Medical image registration method and system based on full convolutional neural networks and mutual information |
CN109815965A (en) * | 2019-02-13 | 2019-05-28 | 腾讯科技(深圳)有限公司 | A kind of image filtering method, device and storage medium |
CN109949288A (en) * | 2019-03-15 | 2019-06-28 | 上海联影智能医疗科技有限公司 | Tumor type determines system, method and storage medium |
CN109998599A (en) * | 2019-03-07 | 2019-07-12 | 华中科技大学 | A kind of light based on AI technology/sound double-mode imaging fundus oculi disease diagnostic system |
CN110188788A (en) * | 2019-04-15 | 2019-08-30 | 浙江工业大学 | The classification method of cystic Tumor of Pancreas CT image based on radiation group feature |
CN110349662A (en) * | 2019-05-23 | 2019-10-18 | 复旦大学 | The outliers across image collection that result is accidentally surveyed for filtering pulmonary masses find method and system |
CN110619639A (en) * | 2019-08-26 | 2019-12-27 | 苏州同调医学科技有限公司 | Method for segmenting radiotherapy image by combining deep neural network and probability map model |
CN110909672A (en) * | 2019-11-21 | 2020-03-24 | 江苏德劭信息科技有限公司 | Smoking action recognition method based on double-current convolutional neural network and SVM |
CN110909755A (en) * | 2018-09-17 | 2020-03-24 | 阿里巴巴集团控股有限公司 | Object feature processing method and device |
CN111243711A (en) * | 2018-11-29 | 2020-06-05 | 皇家飞利浦有限公司 | Feature identification in medical imaging |
CN111667486A (en) * | 2020-04-29 | 2020-09-15 | 杭州深睿博联科技有限公司 | Multi-mode fusion pancreas segmentation method and system based on deep learning |
CN111680687A (en) * | 2020-06-09 | 2020-09-18 | 江西理工大学 | Depth fusion model applied to mammary X-ray image anomaly identification and classification method thereof |
CN111798410A (en) * | 2020-06-01 | 2020-10-20 | 深圳市第二人民医院(深圳市转化医学研究院) | Cancer cell pathological grading method, device, equipment and medium based on deep learning model |
CN111833332A (en) * | 2020-07-15 | 2020-10-27 | 中国医学科学院肿瘤医院深圳医院 | Generation method and identification method of energy spectrum CT identification model of bone metastasis tumor and bone island |
CN112070809A (en) * | 2020-07-22 | 2020-12-11 | 中国科学院苏州生物医学工程技术研究所 | Accurate diagnosis system of pancreatic cancer based on two time formation of image of PET/CT |
CN112381798A (en) * | 2020-11-16 | 2021-02-19 | 广东电网有限责任公司肇庆供电局 | Transmission line defect identification method and terminal |
CN112419306A (en) * | 2020-12-11 | 2021-02-26 | 长春工业大学 | Lung nodule detection method based on NAS-FPN |
CN112951426A (en) * | 2021-03-15 | 2021-06-11 | 山东大学齐鲁医院 | Construction method and evaluation system of pancreatic ductal adenoma inflammatory infiltration degree judgment model |
CN113066110A (en) * | 2021-05-06 | 2021-07-02 | 北京爱康宜诚医疗器材有限公司 | Method and device for selecting marking points in pelvis registration |
CN113449770A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Image detection method, electronic device and storage device |
CN113610751A (en) * | 2021-06-03 | 2021-11-05 | 迈格生命科技(深圳)有限公司 | Image processing method, image processing apparatus, and computer-readable storage medium |
CN114332040A (en) * | 2021-12-30 | 2022-04-12 | 华中科技大学协和深圳医院 | Multi-mode-based thyroid tumor image classification method and terminal equipment |
CN114387220A (en) * | 2021-12-20 | 2022-04-22 | 复旦大学 | Brain MR image standardization system based on deep learning |
CN115240854A (en) * | 2022-07-29 | 2022-10-25 | 中国医学科学院北京协和医院 | Method and system for processing pancreatitis prognosis data |
CN118212228A (en) * | 2024-04-23 | 2024-06-18 | 中国人民解放军海军军医大学第一附属医院 | Deep learning model for assisting pancreatic lesion diagnosis through multi-mode images |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976436A (en) * | 2010-10-14 | 2011-02-16 | 西北工业大学 | Pixel-level multi-focus image fusion method based on correction of differential image |
CN105956532A (en) * | 2016-04-25 | 2016-09-21 | 大连理工大学 | Traffic scene classification method based on multi-scale convolution neural network |
CN106682435A (en) * | 2016-12-31 | 2017-05-17 | 西安百利信息科技有限公司 | System and method for automatically detecting lesions in medical image through multi-model fusion |
CN107291822A (en) * | 2017-05-24 | 2017-10-24 | 北京邮电大学 | The problem of based on deep learning disaggregated model training method, sorting technique and device |
CN107403201A (en) * | 2017-08-11 | 2017-11-28 | 强深智能医疗科技(昆山)有限公司 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
CN107492097A (en) * | 2017-08-07 | 2017-12-19 | 北京深睿博联科技有限责任公司 | A kind of method and device for identifying MRI image area-of-interest |
US20180032846A1 (en) * | 2016-08-01 | 2018-02-01 | Nvidia Corporation | Fusing multilayer and multimodal deep neural networks for video classification |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110909755B (en) * | 2018-09-17 | 2023-05-30 | 阿里巴巴集团控股有限公司 | Object feature processing method and device |
CN110909755A (en) * | 2018-09-17 | 2020-03-24 | 阿里巴巴集团控股有限公司 | Object feature processing method and device |
CN109559296A (en) * | 2018-10-08 | 2019-04-02 | 广州市本真网络科技有限公司 | Medical image registration method and system based on full convolutional neural networks and mutual information |
CN109544512A (en) * | 2018-10-26 | 2019-03-29 | 浙江大学 | It is a kind of based on multi-modal embryo's pregnancy outcome prediction meanss |
CN109544512B (en) * | 2018-10-26 | 2020-09-18 | 浙江大学 | Multi-mode-based embryo pregnancy result prediction device |
CN109273084B (en) * | 2018-11-06 | 2021-06-22 | 中山大学附属第一医院 | Method and system based on multi-mode ultrasound omics feature modeling |
CN109544517A (en) * | 2018-11-06 | 2019-03-29 | 中山大学附属第医院 | Multi-modal ultrasound omics analysis method and system based on deep learning |
CN109273084A (en) * | 2018-11-06 | 2019-01-25 | 中山大学附属第医院 | Method and system based on multi-mode ultrasound omics feature modeling |
CN111243711B (en) * | 2018-11-29 | 2024-02-20 | 皇家飞利浦有限公司 | Feature recognition in medical imaging |
CN111243711A (en) * | 2018-11-29 | 2020-06-05 | 皇家飞利浦有限公司 | Feature identification in medical imaging |
CN109815965B (en) * | 2019-02-13 | 2021-07-06 | 腾讯科技(深圳)有限公司 | Image filtering method and device and storage medium |
CN109815965A (en) * | 2019-02-13 | 2019-05-28 | 腾讯科技(深圳)有限公司 | A kind of image filtering method, device and storage medium |
CN109998599A (en) * | 2019-03-07 | 2019-07-12 | 华中科技大学 | A kind of light based on AI technology/sound double-mode imaging fundus oculi disease diagnostic system |
CN109949288A (en) * | 2019-03-15 | 2019-06-28 | 上海联影智能医疗科技有限公司 | Tumor type determines system, method and storage medium |
CN110188788A (en) * | 2019-04-15 | 2019-08-30 | 浙江工业大学 | The classification method of cystic Tumor of Pancreas CT image based on radiation group feature |
CN110349662B (en) * | 2019-05-23 | 2023-01-13 | 复旦大学 | Cross-image set outlier sample discovery method and system for filtering lung mass misdetection results |
CN110349662A (en) * | 2019-05-23 | 2019-10-18 | 复旦大学 | Cross-image-set outlier sample discovery method and system for filtering lung mass misdetection results |
CN110619639A (en) * | 2019-08-26 | 2019-12-27 | 苏州同调医学科技有限公司 | Method for segmenting radiotherapy image by combining deep neural network and probability map model |
CN110909672A (en) * | 2019-11-21 | 2020-03-24 | 江苏德劭信息科技有限公司 | Smoking action recognition method based on double-current convolutional neural network and SVM |
CN111667486A (en) * | 2020-04-29 | 2020-09-15 | 杭州深睿博联科技有限公司 | Multi-mode fusion pancreas segmentation method and system based on deep learning |
CN111667486B (en) * | 2020-04-29 | 2023-11-17 | 杭州深睿博联科技有限公司 | Multi-modal fusion pancreas segmentation method and system based on deep learning |
CN111798410A (en) * | 2020-06-01 | 2020-10-20 | 深圳市第二人民医院(深圳市转化医学研究院) | Cancer cell pathological grading method, device, equipment and medium based on deep learning model |
CN111680687A (en) * | 2020-06-09 | 2020-09-18 | 江西理工大学 | Depth fusion model applied to mammary X-ray image anomaly identification and classification method thereof |
CN111680687B (en) * | 2020-06-09 | 2022-05-10 | 江西理工大学 | Depth fusion classification method applied to mammary X-ray image anomaly identification |
CN111833332A (en) * | 2020-07-15 | 2020-10-27 | 中国医学科学院肿瘤医院深圳医院 | Generation method and identification method of energy spectrum CT identification model of bone metastasis tumor and bone island |
CN112070809A (en) * | 2020-07-22 | 2020-12-11 | 中国科学院苏州生物医学工程技术研究所 | Pancreatic cancer accurate diagnosis system based on PET/CT dual-time imaging |
CN112070809B (en) * | 2020-07-22 | 2024-01-26 | 中国科学院苏州生物医学工程技术研究所 | Pancreatic cancer accurate diagnosis system based on PET/CT double-time imaging |
CN112381798A (en) * | 2020-11-16 | 2021-02-19 | 广东电网有限责任公司肇庆供电局 | Transmission line defect identification method and terminal |
CN112419306A (en) * | 2020-12-11 | 2021-02-26 | 长春工业大学 | Lung nodule detection method based on NAS-FPN |
CN112419306B (en) * | 2020-12-11 | 2024-03-15 | 长春工业大学 | NAS-FPN-based lung nodule detection method |
CN112951426A (en) * | 2021-03-15 | 2021-06-11 | 山东大学齐鲁医院 | Construction method and evaluation system of a model for judging the degree of inflammatory infiltration in pancreatic ductal adenocarcinoma |
CN113066110A (en) * | 2021-05-06 | 2021-07-02 | 北京爱康宜诚医疗器材有限公司 | Method and device for selecting marking points in pelvis registration |
CN113066110B (en) * | 2021-05-06 | 2024-08-30 | 北京爱康宜诚医疗器材有限公司 | Selection method and selection device for marker points in pelvis registration |
CN113449770A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Image detection method, electronic device and storage device |
CN113449770B (en) * | 2021-05-18 | 2024-02-13 | 科大讯飞股份有限公司 | Image detection method, electronic device and storage device |
CN113610751A (en) * | 2021-06-03 | 2021-11-05 | 迈格生命科技(深圳)有限公司 | Image processing method, image processing apparatus, and computer-readable storage medium |
CN113610751B (en) * | 2021-06-03 | 2024-06-11 | 迈格生命科技(深圳)有限公司 | Image processing method, device and computer readable storage medium |
CN114387220A (en) * | 2021-12-20 | 2022-04-22 | 复旦大学 | Brain MR image standardization system based on deep learning |
CN114332040A (en) * | 2021-12-30 | 2022-04-12 | 华中科技大学协和深圳医院 | Multi-mode-based thyroid tumor image classification method and terminal equipment |
CN115240854B (en) * | 2022-07-29 | 2023-10-03 | 中国医学科学院北京协和医院 | Pancreatitis prognosis data processing method and system |
CN115240854A (en) * | 2022-07-29 | 2022-10-25 | 中国医学科学院北京协和医院 | Method and system for processing pancreatitis prognosis data |
CN118212228A (en) * | 2024-04-23 | 2024-06-18 | 中国人民解放军海军军医大学第一附属医院 | Deep learning model for assisting pancreatic lesion diagnosis through multi-mode images |
CN118212228B (en) * | 2024-04-23 | 2024-10-18 | 中国人民解放军海军军医大学第一附属医院 | Deep learning system for assisting pancreatic lesion diagnosis through multi-mode images |
Also Published As
Publication number | Publication date |
---|---|
CN108537773B (en) | 2022-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537773A (en) | Intelligent auxiliary identification method for pancreatic cancer and pancreatic inflammatory disease | |
Sharif et al. | A comprehensive review on multi-organs tumor detection based on machine learning | |
Qureshi et al. | Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends | |
Li et al. | A 3D deep supervised densely network for small organs of human temporal bone segmentation in CT images | |
Hambarde et al. | Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net | |
Fan et al. | Lung nodule detection based on 3D convolutional neural networks | |
Sharma et al. | A survey on cancer detection via convolutional neural networks: Current challenges and future directions | |
Rahman et al. | Developing a retrieval based diagnostic aid for automated melanoma recognition of dermoscopic images | |
Nayan et al. | A deep learning approach for brain tumor detection using magnetic resonance imaging | |
Lakshmipriya et al. | Deep learning techniques in liver tumour diagnosis using CT and MR imaging: A systematic review | |
Li et al. | A novel deep learning framework based mask-guided attention mechanism for distant metastasis prediction of lung cancer | |
Liu et al. | Automated classification of cervical Lymph-Node-Level from ultrasound using depthwise separable convolutional swin transformer | |
Affane et al. | Robust deep 3-D architectures based on vascular patterns for liver vessel segmentation | |
CN109635866B (en) | Method of processing an intestinal image | |
Sivasankaran et al. | Lung cancer detection using image processing technique through deep learning algorithm | |
Chen et al. | Research related to the diagnosis of prostate cancer based on machine learning medical images: A review | |
Li et al. | Gleason grading of prostate cancer based on improved AlexNet | |
Zhang et al. | ASE-Net: A tumor segmentation method based on image pseudo enhancement and adaptive-scale attention supervision module | |
Serpa-Andrade et al. | An approach based on Fourier descriptors and decision trees to perform presumptive diagnosis of esophagitis for educational purposes | |
Sadremomtaz et al. | Improving the quality of pulmonary nodules segmentation using the new proposed U-Net neural network | |
Rehman et al. | Edge of discovery: Enhancing breast tumor MRI analysis with boundary-driven deep learning | |
Lonseko et al. | Early esophagus cancer segmentation from gastrointestinal endoscopic images based on U-Net++ model | |
Tang et al. | Researches advanced in medical detection based on deep learning | |
Mu et al. | Channel context and dual-domain attention based U-Net for skin lesion attributes segmentation | |
Bagherieh et al. | Mass detection in lung CT images using region growing segmentation and decision making based on fuzzy systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||