CN110189308A - Lesion detection method and device based on the fusion of BM3D and a dense convolutional network - Google Patents
Lesion detection method and device based on the fusion of BM3D and a dense convolutional network
- Publication number: CN110189308A (application CN201910415029.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/70
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/0012 — Biomedical image inspection
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30096 — Tumor; Lesion
(All under G06T — Image data processing or generation.)
Abstract
The present invention provides a lesion detection method and device based on the fusion of BM3D and a dense convolutional network. The method comprises: marking similar blocks; random dropping and fast labeling; optimizing and training the DenseNet network; and reconstructing, from the depth-trained data, the spatial information and structural information of the input image together with the extracted feature information. The spatial information of the input is abstracted into one dimension, reducing irreversible loss of initial features. A dense convolutional network fused with BM3D is constructed, in which the scaled exponential linear unit (SELU) activation function replaces the rectified linear unit (ReLU) activation; the introduced negative-part parameter improves network optimization and enhances network robustness. A max-pooling layer is added after each dense block to abstract image features and extract the key information points of the tumor. At the network output, feature reconstruction is performed using the aggregation step of BM3D, fusing gradient and spatial information to improve network performance. The accuracy of lesion detection is effectively improved.
Description
Technical field
The present invention relates to the technical field of medical image processing, and in particular to a lesion detection method and device based on the fusion of BM3D and a dense convolutional network.
Background art
Medical images come from imaging techniques, including computed tomography, magnetic resonance imaging, ultrasound, positron emission tomography, and medical ultrasonic examination. Medical imaging technology can obtain two-dimensional or three-dimensional images of the corresponding part of the human body. In a two-dimensional image, the minimum unit element representing specific information is called a pixel; in a three-dimensional image it is called a voxel. Under specified conditions, a three-dimensional image can be represented as a series of two-dimensional images, a usage that greatly reduces computational complexity and memory demand. However, although medical imaging technology has matured, many medical images have very low resolution owing to the mutual constraints of imaging devices, radioactive dose limits, human physiology and health, and other factors. For example, in the lesion detection problem, because of the similarity between lung nodules and tumors, the classification and detection task cannot be completed on low-resolution images; medical image processing technology therefore came into being. Medical image processing arises from the higher demands on medical image quality and from the related needs of pathological research, and it is an important step in medical imaging analysis. By applying secondary processing to an image, the image can be made clearer and easier to read, improving diagnostic efficiency and reducing the misdiagnosis rate. Technologies such as medical image fusion, ultrasonic imaging, and image reconstruction are applications of medical image processing. At the underlying level, medical image semantic segmentation is one of the important means of processing medical images.
Image semantic segmentation divides an image into regions with different meanings; each region satisfies regional connectivity, the regions are mutually disjoint, and the union of all regions constitutes the entire image. Separating regions of different meaning quickly and efficiently is one of the goals of image segmentation. Threshold segmentation is the simplest pixel-level segmentation algorithm. Taking a grayscale image as an example, even when the gray levels of two objects are very close, observing the gray-level histogram may reveal two distinct peaks; choosing the valley between the two peaks as the threshold can segment the image very well. Edge detection is another common image segmentation algorithm: it exploits the fact that the gray values of edge pixels change violently to locate object edge pixels and thereby complete object segmentation. Region-based segmentation is also relatively common; for example, split-and-merge uses the quadtree principle, selecting several mutually disjoint initial regions of the image and performing regional splitting and merging according to a given uniformity criterion until a minimal set of regions remains. However, because most medical images have poor pixel resolution, these methods do not perform very well on them. In order to process image features effectively on a low-resolution basis, methods using fuzzy information have appeared. Fuzzy clustering, a method based on fuzzy set theory and clustering that uses the principle of membership, is effective for this problem. Medical image segmentation based on the wavelet transform also performs well: applying wavelet decomposition to the image histogram makes it easy to detect objects with large gray-level variations and to apply thresholds of different granularity to the original image. In recent years, in addition to traditional semantic segmentation methods, with the rise of convolutional neural networks (CNN), CNN-based segmentation algorithms have also been widely applied in the medical image segmentation field.
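The bimodal-histogram thresholding idea described above can be sketched as follows. This is an illustrative example, not the patent's method: the function locates the two tallest histogram peaks and returns the gray level of the deepest valley between them as the segmentation threshold.

```python
import numpy as np

def valley_threshold(hist):
    """Pick a threshold at the deepest valley between the two highest
    peaks of a gray-level histogram (classic bimodal thresholding).
    `hist` is a 1-D sequence of gray-level counts."""
    # local maxima of the histogram (candidate peaks)
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    # keep the two tallest peaks, in left-to-right order
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    # threshold = gray level of the minimum count between the two peaks
    return p1 + int(np.argmin(hist[p1:p2 + 1]))
```

Pixels below the returned gray level are then assigned to one object and the rest to the other.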
In the development of CNNs, the resurgence is largely attributable to the AlexNet model proposed by Krizhevsky et al. in 2012, which won the image classification task of the ImageNet contest, surpassing the runner-up by 11%. The model uses the rectified linear unit (ReLU) as its activation function, and improves performance through dual-GPU parallel computation and local response normalization. It is worth noting that, in CNN implementations, the ReLU activation function is both simpler and more effective than the previously common sigmoid activation function. The sigmoid function, also called the S-shaped growth curve, has a graph resembling a reclining S with upper limit 1 and lower limit 0; it maps a variable onto the interval (0, 1). ReLU is a piecewise linear function whose effect is to keep positive values unchanged and set all negative values to zero; practice has shown that applying ReLU to a CNN fits the training data and mines data features better than sigmoid. After AlexNet, Karen Simonyan et al. proposed the VGG network, which improves performance by continually deepening the network structure and uses 1 × 1 convolutional layers to add linear transformations and refine features. Christian Szegedy et al. proposed GoogLeNet, which introduces two auxiliary loss branches to mitigate gradient vanishing when deepening the network and uses multiple convolution kernel sizes to enrich features when widening it. The highway network (Highway Network) focuses on how to perform feature extraction better, allowing high-speed information to pass unimpeded through every layer of the network and effectively alleviating the gradient problem. Furthermore, in 2015, the fully convolutional network (FCN) was proposed for object detection and semantic segmentation. FCN is an end-to-end, pixel-to-pixel CNN semantic segmentation model whose core idea is to build a "fully convolutional" network: it places no requirement on the input image size, and the final output matches the input. Deconvolution layers are added to the network to upsample the feature map of the preceding layer, restoring the data to the spatial resolution of the input. The model therefore predicts every pixel of the input image while successfully retaining the spatial information of the original input. In addition, FCN introduces skip connections between downsampling and upsampling, combining semantic information and features from deep and shallow layers to compensate for the loss of pixel-level resolution, which helps recover fine-grained information from the downsampling layers during upsampling.
However, when a traditional CNN or FCN propagates information, problems such as information loss and attrition occur to a greater or lesser extent; after input or gradient information passes through many layers, gradient vanishing or gradient explosion may result. The problem is especially prominent when the network is deep.
Summary of the invention
The present invention provides a lesion detection method based on the fusion of BM3D and a dense convolutional network that improves the accuracy of semantic detection and segmentation of the original image in the lesion detection task.
The method of the present invention comprises:
Step 1, marking similar blocks;
Step 2, random dropping and fast labeling;
Step 3, optimizing and training the DenseNet network;
Step 4, reconstructing, from the depth-trained data, the spatial information and structural information of the input image together with the extracted feature information.
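The four steps above can be sketched as a pipeline. This is a high-level sketch only: every stage body here is a placeholder stand-in (not the patent's actual operations, which are detailed later), intended to show how the stages compose.

```python
import numpy as np

# Hypothetical stage stubs; the real operations are defined in the description.
def mark_similar_blocks(img):            # step 1: BM3D-style block grouping
    return {"blocks": img, "coords": None}

def random_drop(groups, keep_prob=1.0):  # step 2: keep_prob=1 leaves input untouched
    return groups

def run_densenet(groups):                # step 3: optimized DenseNet forward pass
    return groups["blocks"]              # stand-in "feature map"

def reconstruct(features, img):          # step 4: BM3D-style aggregation back to image space
    return 0.5 * (features + img)

def detect_lesions(img):
    groups = mark_similar_blocks(img)
    groups = random_drop(groups, keep_prob=1.0)
    feats = run_densenet(groups)
    return reconstruct(feats, img)
```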
The present invention also provides a device for realizing the lesion detection method based on the fusion of BM3D and a dense convolutional network, comprising:
a memory for storing a computer program and the lesion detection method based on the fusion of BM3D and a dense convolutional network;
a processor for executing the computer program so as to realize the steps of the lesion detection method based on the fusion of BM3D and a dense convolutional network.
As can be seen from the above technical solutions, the invention has the following advantages.
Building on the strengths of DenseNet, the present invention applies DenseNet to the lesion detection task and fuses it with the image denoising algorithm BM3D to establish a DenseNet model that enhances semantic segmentation of medical tumor images. The lesion detection model of the present invention uses the similar-block grouping technique of BM3D and places it at the front end of DenseNet, reusing image features while optimizing the network structure and strengthening network robustness. In the constructed DenseNet network model, the scaled exponential linear unit (SELU) activation function replaces the ReLU activation function to optimize the parametric network structure, making full use of fine-grained features to avoid the loss of narrow features and improving the network's feature extraction capability. Meanwhile, a max-pooling layer is added at the end of each dense block, deepening the abstraction of feature information and making detection more accurate. Finally, at the network output, feature reconstruction is performed using the similarity aggregation of BM3D, fully exploiting the pixel-level spatial structure of the image.
The present invention uses the similar-block grouping technique of BM3D and places it at the front end of DenseNet; in the network architecture the SELU activation function replaces ReLU and optimizes the negative-part features; the dense block structure of DenseNet concatenates the feature maps learned by different layers, and max pooling is added for secondary feature extraction. Finally, at the network output, feature reconstruction is performed using similarity aggregation in BM3D, fully exploiting the pixel-level spatial structure of the image, and the aggregation method of BM3D is used for region fusion. The network of the invention is evaluated with multiple metrics, such as intersection over union and mean pixel accuracy, and shows an advantage. The network architecture of the invention has good robustness in medical image segmentation optimization.
Brief description of the drawings
In order to explain the technical solution of the present invention more clearly, the drawings needed in the description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic diagram of the lesion detection model based on the fusion of BM3D and a dense convolutional network;
Fig. 2 is a schematic diagram of the dense block structure of DenseNet;
Fig. 3 is a comparison of SELU with three other activation functions;
Fig. 4 is a comparison of ablation experiment a with experiments b-h;
Fig. 5 shows images comparing six state-of-the-art methods with the present method;
Fig. 6 is a flowchart of the lesion detection method based on the fusion of BM3D and a dense convolutional network.
Detailed description of the embodiments
In the field of brain injury segmentation, a 3D convolutional neural network framework known as DeepMedic has been proposed that uses a dual parallel network architecture to process high-resolution and low-resolution images simultaneously. The dense convolutional network (DenseNet) addresses the network degradation problem with the idea of the dense block: the network is built from dense blocks and pooling operations, and every layer takes the outputs of all preceding layers as its input. The architecture of DenseNet mainly references Highway Network, ResNet and GoogLeNet, and improves the final classification accuracy by deepening the network structure.
In the lesion detection task, the present invention improves the accuracy of semantic detection and segmentation of the original image. DenseNet has a good classification effect on the ImageNet dataset; verification of its structure in classification reveals that combining DenseNet with BM3D allows it to be transferred to the lesion detection task. DenseNet focuses on features: by using features intensively it achieves a better effect with fewer parameters, directly connecting all layers to each other with concatenation operations. Every layer uses the outputs of all preceding layers and passes its own output to all subsequent layers, performing iterative accumulation over previous feature maps. This approach solves the problem that, as network depth increases, input or gradient information may approach the infinitesimal after passing through several layers and finally be lost at the network output. On this basis, the present invention constructs a DenseNet model for detecting tumors and incorporates the relevant BM3D structures into it. The network structure is shown in Fig. 1, which includes similar-block marking, fast labeling, random dropping, a merge operation, dense blocks, transition layers, feature fusion, and a classification marker, among others. The transition layer consists of a convolution operation followed by a pooling operation.
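A transition layer of the form just described (a convolution followed by pooling; the description later fixes these as a 1 × 1 convolution and 2 × 2 pooling) can be sketched in NumPy. This is an illustrative sketch under those assumptions: the weight matrix and the choice of max pooling here are for demonstration only.

```python
import numpy as np

def transition(x, w):
    """1x1 convolution (per-pixel linear mix of channels, weights `w` of
    shape (out_ch, in_ch)) followed by 2x2 max pooling with stride 2,
    which halves the spatial size. `x` has shape (channels, H, W)."""
    c, h, w_ = x.shape
    y = np.tensordot(w, x, axes=([1], [0]))          # 1x1 conv -> (out_ch, h, w)
    y = y[:, :h - h % 2, :w_ - w_ % 2]               # trim odd edges
    y = y.reshape(y.shape[0], y.shape[1] // 2, 2, y.shape[2] // 2, 2)
    return y.max(axis=(2, 4))                        # 2x2 max pool
```

Reducing channels with the 1 × 1 convolution and halving spatial size with pooling is what keeps the feature-map count from exploding between dense blocks.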
Step 1 of the invention marks similar blocks.
Specifically, the similar-block marking operation in the framework of the present invention is similar to the similar-block grouping technique in BM3D: using the Euclidean distance [20], the set of similar blocks is found according to the Euclidean distance formula, in order to extract the corresponding two-dimensional and three-dimensional information of the input image and facilitate feature fusion and reconstruction.
BM3D is, to some extent, an extension of non-local means (non-local mean, NLM); its main idea is non-local block matching. When querying for similar blocks, unlike NLM, which searches directly with the Euclidean distance, BM3D first applies a hard-threshold linear transform to the Euclidean distance calculation, reducing its computational complexity. After the similar blocks are found, whereas direct averaging as in NLM would introduce the noise differences of the similar blocks, BM3D transforms the similar blocks into another domain, applies collaborative filtering for noise reduction, and weights the similar blocks in the aggregation operation, finally obtaining the denoised processed blocks.
The BM3D similar-block grouping operation is as follows. First, N_k reference blocks of size k × k are selected in the original image (considering algorithmic complexity, it is not necessary to select a reference block at every pixel; a reference block is usually chosen every S_k pixels, S_k < 10, which reduces the complexity relative to the original algorithm). A search range of size n × n around each reference block is then selected, all blocks in the region whose dissimilarity is below a threshold are found, and the similar blocks are assembled into a three-dimensional matrix; the reference block itself is also placed into that three-dimensional matrix. BM3D determines similar blocks with the Euclidean distance, preprocessing the block distance with a normalized two-dimensional linear transform and hard thresholding. Following the standard BM3D formulation, the distance is:
d(Z_{x_R}, Z_x) = ||γ'(T_2D(Z_{x_R})) − γ'(T_2D(Z_x))||_2^2 / N_1^2, (1)
where x is a pixel, Z_{x_R} is the target (reference) similar block, Z_x is a search block, N_1 is the size of the selected block, γ' is the hard-thresholding operation whose threshold is set proportional to the noise level σ, and T_2D is the normalized two-dimensional linear transform.
The set of similar blocks can then be found according to the Euclidean distance formula, as shown in formula (2):
S_{x_R} = { x ∈ X : d(Z_{x_R}, Z_x) ≤ τ_match }, (2)
where X is the image, τ_match is a hyperparameter that determines whether two blocks are similar, and S_{x_R} is the set of similar blocks.
The difference between the similar-block marking operation of the present invention and the similar-block grouping of BM3D is as follows: the several blocks whose dissimilarity to a reference block A is below the threshold, found using the reference block, are stored directly, together with A, in a single two-dimensional matrix, denoted the tag block (A, A1, A2, ...); the basic information in the tag block is then the two-dimensional spatial information of (A, A1, A2, ...) in one-dimensional format. The several two-dimensional matrices so obtained are then assembled, together with the spatial information, into one three-dimensional matrix. In short, the difference between the two is that the similar-block grouping of BM3D produces several three-dimensional groups, whereas after similar-block marking only a single three-dimensional group exists.
Using this similar-block analysis, features can be extracted from the image in a more objective way: the set of similar blocks provides a new data access pattern that increases the network's resistance to overfitting and its stability.
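The block-matching step underlying both BM3D grouping and the marking operation above can be sketched as follows. This is an illustrative sketch, not the patent's exact implementation: parameter names (`search`, `step`, `tau`, `hard`) and the simple element-wise hard threshold standing in for γ' are assumptions for demonstration.

```python
import numpy as np

def match_blocks(img, ref_xy, k=4, search=16, step=2, tau=500.0, hard=20.0):
    """Collect blocks similar to the k x k reference block at `ref_xy` using
    a hard-thresholded squared Euclidean distance, and stack them together
    with the reference block (the "tag block" of the marking operation)."""
    y0, x0 = ref_xy
    ref = img[y0:y0 + k, x0:x0 + k].astype(float)
    ref_t = np.where(np.abs(ref) < hard, 0.0, ref)       # stand-in for γ'
    stack = [ref]
    h, w = img.shape
    for y in range(max(0, y0 - search), min(h - k, y0 + search) + 1, step):
        for x in range(max(0, x0 - search), min(w - k, x0 + search) + 1, step):
            if (y, x) == (y0, x0):
                continue                                  # skip the reference itself
            cand = img[y:y + k, x:x + k].astype(float)
            cand_t = np.where(np.abs(cand) < hard, 0.0, cand)
            if np.sum((ref_t - cand_t) ** 2) / k**2 <= tau:
                stack.append(cand)
    return np.stack(stack)        # shape: (num_blocks, k, k)
```

Stepping the search grid every `step` pixels mirrors the S_k-pixel stride used to keep the complexity of block selection manageable.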
Step 2 of the invention performs random dropping and fast labeling.
Specifically, random dropping increases the network's resistance to overfitting and its robustness by randomly closing N ports of the input. The random dropping used in the present invention is, to a certain extent, similar to manually adding noise to an image or artificially deforming it, but the random dropping method is safer than such human intervention. In use, the parameter of random dropping can be set to 1, in which case no operation is applied to the input matrix.
Fast labeling in the framework can be divided into two classes according to function: label fast labeling and similar-block fast labeling. Label fast labeling can be regarded as a simplified version of the similar-block marking operation: only the reference block is recorded as the tag block, and that tag block is used as the input, greatly improving operational efficiency, although this method cannot guarantee the prominence of the final result. Label fast labeling can skip the similar-block marking and random dropping operations entirely; the reason is that when the input data are too large or memory is severely insufficient, it can quickly and efficiently produce intuitive experimental results, and it can serve as an early pre-training iteration that facilitates data analysis. The second operation can be regarded as the complement of similar-block marking: it randomly selects tag blocks generated by similar-block marking, which benefits feature reuse. The difference between similar-block fast labeling and random dropping is that random dropping is a deletion operation, whereas similar-block fast labeling is an addition operation. Similar-block fast labeling can be disabled by setting its parameter value to 0, so that no parameter input is required.
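The random dropping operation can be sketched as follows, under the assumption (stated in the text) that a parameter value of 1 leaves the input untouched; the masking scheme itself is illustrative, since the patent does not give an exact formula.

```python
import numpy as np

def random_drop(x, keep_prob=1.0, rng=None):
    """Randomly "close ports" by zeroing entries of the input, a noise-like
    augmentation safer than hand-added noise; keep_prob=1 is a no-op."""
    if keep_prob >= 1.0:
        return x                       # parameter set to 1: input unchanged
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) < keep_prob
    return x * mask
```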
Step 3 optimizes and trains the DenseNet network.
Specifically, a complete DenseNet structure is used in which the activation function is replaced by SELU and a max-pooling layer is added to each dense block, making full use of the feature reuse and connection mechanism of the dense blocks to further abstract and extract deep features. The advantage of DenseNet is that it deepens feature learning through the intensive use of features, solves the degradation problem, improves feature utilization, and lets all layers within a dense block receive supervisory information directly. The output of every layer in the network adds ψ (the growth rate parameter) feature maps; therefore, as the network deepens, the number of feature maps grows, the two being linearly related.
In a CNN containing N layers, there are just N connections; in DenseNet, however, there are N(N+1)/2. The dense block structure of the network is shown in Fig. 2: x_0 is the input, and the input of H_1 is x_0; the input of H_2 is x_0 and x_1, where x_1 is the output of H_1; the input of H_3 is x_0, x_1 and x_2, where x_2 is the output of H_2; and so on.
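The dense connectivity just described can be sketched numerically. This is a toy sketch: the stand-in H_l here is a channel mean expanded to `growth` feature maps (not a trained BN–activation–convolution block), chosen only to make the concatenation pattern and the linear channel growth visible.

```python
import numpy as np

def dense_block(x0, num_layers=3, growth=2):
    """Iterate x_l = H_l([x_{l-1}, ..., x_0]): each layer consumes the
    concatenation of all previous outputs and adds `growth` channels,
    so channels grow linearly: k0 + num_layers * growth."""
    outputs = [x0]                                    # x0: (channels, H, W)
    for _ in range(num_layers):
        cat = np.concatenate(outputs, axis=0)         # [x_{l-1}, ..., x_0]
        h = np.stack([cat.mean(axis=0)] * growth)     # toy H_l -> growth maps
        outputs.append(h)
    return np.concatenate(outputs, axis=0)

def num_connections(n):
    """Connections in an n-layer dense block: n(n+1)/2."""
    return n * (n + 1) // 2
```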
Let x_l be the output of layer l (l > 1) of a dense block in DenseNet. To understand the DenseNet structure more easily, note the following three points.
First, in a CNN, x_l is the output of applying a nonlinear transform H_l to the output x_{l-1} of the preceding layer, i.e. layer l-1:
x_l = H_l(x_{l-1}), (3)
where the nonlinear transform H_l is often defined as a convolution followed by a ReLU activation function, with some connections randomly dropped during training.
The ReLU activation function is a piecewise linear function that switches all negative values to 0 and keeps the positive input values unchanged. Its activation formula is:
f(x) = max(0, x), (4)
This one-sided suppression can better mine relevant features and, compared with other nonlinear functions, using the ReLU activation function mitigates the gradient vanishing problem.
Second, in ResNet, residual blocks are introduced to simplify the training of deep networks, allowing the gradient to flow directly into earlier layers and enabling feature reuse by adding an identity mapping to the output. The formula for the output x_l becomes:
x_l = H_l(x_{l-1}) + x_{l-1}, (5)
Third, DenseNet designs a denser connection pattern: it iteratively concatenates all feature outputs through direct connections between layers. The output xl of the l-th layer is therefore:
xl = Hl([xl-1, xl-2, …, x0]), (6)
where [...] denotes the concatenation operation, and feature reuse is achieved through the connections to the outputs. Hl is defined as a batch normalization (BN) layer, followed by a ReLU activation function, a convolutional layer and a dropout layer. Batch normalization mainly addresses the vanishing- and exploding-gradient problems and has both forward- and back-propagation structure. Dropout alleviates information redundancy by randomly deactivating neurons.
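The connectivity of formula (6) can be sketched numerically. The following NumPy mock-up is illustrative only: the composite transform Hl (batch normalization, ReLU, convolution and dropout in the text) is stood in for by a hypothetical random linear map followed by ReLU, which suffices to show the concatenation pattern and the linear growth of feature maps with ψ.

```python
import numpy as np

def dense_block(x0, num_layers=3, growth_rate=4, rng=np.random.default_rng(0)):
    """Mimic DenseNet connectivity: each layer sees the concatenation
    of ALL previous outputs and emits growth_rate new feature channels."""
    outputs = [x0]
    for _ in range(num_layers):
        concat = np.concatenate(outputs, axis=-1)        # [x_{l-1}, ..., x_0]
        w = rng.standard_normal((concat.shape[-1], growth_rate))
        x_l = np.maximum(concat @ w, 0.0)                # stand-in for H_l
        outputs.append(x_l)
    return np.concatenate(outputs, axis=-1)

x0 = np.ones((8, 8, 6))           # toy input with 6 channels
y = dense_block(x0)
# channels grow linearly: 6 + 3 layers x growth rate 4 = 18
print(y.shape)                    # (8, 8, 18)
```

With num_layers = N, the block contains N(N+1)/2 layer-to-layer connections, matching the count given in the text.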
The advantage of this connection pattern in DenseNet is feature reuse, which also lets all layers in the architecture directly receive supervision information. Each layer of the network outputs ψ feature maps, so as the network depth increases, the number of feature maps grows linearly. To avoid a data explosion and reduce the spatial dimensionality of the feature maps, a 1 × 1 convolution and a 2 × 2 pooling operation are applied at the end of the network.
When the data generated by similar-block labeling is input, batch normalization, SELU activation and a convolution operation are first applied to the input to extract features. The formula of SELU is:
selu(x) = λx for x > 0, and λα(e^x − 1) for x ≤ 0. (7)
Fig. 3 shows the curves of the SELU activation function together with ReLU, PReLU and ELU. The advantage of ReLU is that it effectively relieves the vanishing-gradient problem and gives the model sparse representation ability, but when a neuron's input is below 0 the dying-neuron phenomenon can occur. PReLU solves the dying-neuron problem by introducing a negative-slope parameter that preserves negative-axis features. ELU narrows the gap between the normal gradient and the unit gradient and speeds up learning. All three of these activation functions are flat on the negative half-axis, which suppresses gradient explosion when the variance is too large. The negative half-axis of SELU, by contrast, has a fixed point that lets the variance increase when it is too small, preventing gradient vanishing. Under particular parameters, the SELU activation drives the distribution to automatically normalize to zero mean and unit variance; this requires one of two conditions: taking the particular values of λ and α that satisfy formula (7), or initializing the weights according to the given parameters. Since the feature maps inside a dense block must keep the same size, transition layers are placed between feature maps of different sizes to perform down-sampling, which guarantees the feasibility of the algorithm. A transition layer consists of a batch normalization operation, a convolution operation and a pooling operation. After the transition layer, the invention adds an extra max-pooling operation to perform a second round of feature extraction.
Thanks to the dense blocks, features extracted by shallow layers can still be used directly by deeper layers; even outside the dense blocks, the transition layers use the features of all layers of the preceding dense block. The network of the invention uses the SELU activation function and adds a max-pooling operation inside the dense blocks, which raises the feature utilization rate in the module.
Step four of the invention reconstructs the data from deep training using the spatial information and structural information of the input image and the extracted feature information.
Specifically, the data from deep training are reconstructed using the spatial information and structural information of the input image together with the extracted feature information. The ground-truth transfer process passes the spatial information and dimensional characteristics of the original data, together with the label information and spatial structure of the data after random dropping, to the classification marker. The fast ground-truth transfer process passes only the spatial information and spatial structure of the original data to the classification marker for reconstruction; using fast ground-truth transfer builds a "coarse" model and speeds up model training. The classification marker contains the incoming deep features as well as the processed feature information of the similar blocks found in the input image; through the acquisition and connection performed by the classification marker, the corresponding information is put into one-to-one correspondence, which facilitates subsequent processing.
Feature fusion: the gray value of each pixel is updated by a weighted average of the values of the blocks at each corresponding position; the weights depend on the number of coefficients set to 0 and on the noise intensity. The two-dimensional blocks are fused back to their original positions to compute the final image, where the weight of each block is determined by the number of non-zero coefficients remaining after the hard-threshold operation.
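The feature-fusion step can be sketched as follows. This is a simplified stand-in for BM3D-style aggregation: the weight is taken here as the reciprocal of the number of retained coefficients, omitting the noise-variance factor of the full BM3D weight, so the numbers are illustrative only.

```python
import numpy as np

def aggregate(blocks, positions, shape, n_nonzero):
    """Fuse overlapping 2-D block estimates back to their original
    positions by a weighted average; weights ~ 1 / (number of non-zero
    coefficients kept by the hard threshold), sigma^2 factor omitted."""
    num = np.zeros(shape)
    den = np.zeros(shape)
    for blk, (r, c), n in zip(blocks, positions, n_nonzero):
        w = 1.0 / max(n, 1)                      # simplified BM3D weight
        h, wd = blk.shape
        num[r:r+h, c:c+wd] += w * blk
        den[r:r+h, c:c+wd] += w
    return num / np.maximum(den, 1e-12)

blocks = [np.full((2, 2), 4.0), np.full((2, 2), 8.0)]
positions = [(0, 0), (0, 1)]                     # the blocks overlap in column 1
img = aggregate(blocks, positions, (2, 3), n_nonzero=[1, 1])
print(img)   # the overlapping column is the average of 4 and 8
```

Pixels covered by several blocks receive the weighted mean of all block estimates, which is the region-fusion behavior described in the text.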
The model-fusion result is good: the network architecture combining BM3D with DenseNet can accomplish the tumor-recognition task of medical image semantic segmentation.
These advantages of DenseNet have led to very deep research and application in only the two years since its appearance. In semantic segmentation, Jégou et al. improved the up-sampling process and proposed the fully convolutional DenseNet, which reuses features while avoiding feature explosion; they applied DenseNet to semantic segmentation by training on a benchmark dataset of urban scenes. In image classification, Huang et al. proposed an improved network, CondenseNet, which starts from practical efficiency, combines direct connections within dense blocks and prunes unused connections, yielding a more efficient image-classification network model. In image super-resolution, Zhang et al. combined ResNet and DenseNet, merging the residual structure with the dense block structure to propose residual dense blocks, reconstructing high-resolution images from low-resolution originals.
Although research on DenseNet is developing rapidly, because of its "youth" much work remains to be perfected; especially on the practical side, it still needs to be applied step by step to bring out its advantages.
For the above lesion detection approach fusing BM3D with a dense convolutional network, the invention uses intersection over union, mean pixel accuracy and peak signal-to-noise ratio as evaluation indices. Intersection over union (IoU) is an exact measure for semantic segmentation, computed as the ratio between the intersection and the union of two sets. In the semantic segmentation problem the two sets are the ground truth and the prediction, and the ratio can be rewritten as the true positives divided by the sum of true positives, false negatives and false positives (the union).
Mean intersection over union (MIoU) computes the IoU within each class and then averages. Because it is concise and representative, MIoU is the most commonly used metric for semantic segmentation. Its value is at most 1, and larger values indicate better results.
Mean pixel accuracy (MPA) is a simple improvement of pixel accuracy. Pixel accuracy (PA) is one of the simplest metrics of detection accuracy: the proportion of correctly labeled pixels among all pixels. The formula is:
PA = (number of correctly labeled pixels) / (total number of pixels).
Mean pixel accuracy computes, within each class, the proportion of correctly classified pixels, and then averages over all classes. The pixel accuracy value is at most 1, and larger values indicate better results.
Peak signal-to-noise ratio (PSNR) is an objective standard for evaluating the similarity between an image and the original. To measure the quality of an image after processing, its PSNR value is usually inspected to judge whether a processing method meets the expected requirements. In this invention, PSNR is applied to the ablation study of the similar-block labeling and related modules; it is the logarithm of (2^n − 1)^2 relative to the mean squared error between the original image and the processed image:
PSNR = 10 · log10((2^n − 1)^2 / MSE).
The invention verifies the lesion detection approach on experimental images: slices of organs such as lung and brain containing tumors or nodules, with image labels annotated manually. Because medical images are confidential and highly dependent on the accuracy of the imaging equipment, acquiring related images faces some difficulty; the final total number of network images is 1 200, with a training set of 1 000, of which 600 images are 512 × 512 and 400 images are 128 × 128. The network structure in the experiments is implemented with the TensorFlow framework. The experimental hardware is an Intel(R) Xeon(R) E5-2643 v4 @ 3.40 GHz CPU, an NVIDIA GeForce GTX 1080M GPU and 256 GB of memory; the operating system is Ubuntu 14.04.
The purpose of the experiment is lesion detection. The model proposed by the invention is based on DenseNet fused with BM3D, uses SELU as the activation function, and adds a max-pooling operation to the dense blocks. For the BM3D-based similar-block segmentation and the DenseNet training, the block height and width can be neither too large nor too small. BM3D pursues a small number of block parameters, so in each training batch the invention uses splicing blocks in units of 8 as the input. Although DenseNet likewise cannot use overly large blocks when extracting features, blocks that are too small may fail to learn features; this part therefore sets the image-block size to 32 × 32, which both captures the image features fairly completely and guarantees the safety of the data. The experiment selects 150 images different from the training images as the test set, and 50 images entirely different from the training images and the label images as the validation set.
For the learning rate, the initial rate is set to 1e-3; when more than half of the images have been used, the learning rate is reduced to 1e-4. All images used for training were augmented by operations such as flipping and region distortion, which keeps training fast while suppressing over-fitting. The dropout rate of the model is set to 0.2, the weight decay to 1e-4, the Nesterov momentum to 0.9, and the number of iterations to 150. The experimental results are examined through IoU, pixel accuracy, mean pixel accuracy and other indices.
For the ablation study of the similar-block labeling and related parts, covering similar-block labeling, random dropping, label fast-labeling and similar-block fast-labeling, tests were run with iteration counts of 25, 50, 75, 100, 125, 150, 175, 200 and 225 (for brevity, the case where the number of iterations equals 125 is taken as the example). Each module was switched on or off in turn. The reference condition a is: similar-block label value 1 (present), random-dropping value 1 (parameters not dropped), label fast-labeling value 0 (parameter not input) and similar-block fast-labeling value 0 (parameter not input). A flag state of T denotes present, F denotes absent, and 0 denotes not occurring.
The ablation experiments on similar-block labeling, random dropping, label fast-labeling and similar-block fast-labeling were carried out using the fused images from the feature-fusion reconstruction process. For each module that is present, values of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9 were tested. Because exhaustive testing is complex, the invention takes the following conditions as an example: if the random-dropping module is present, its value is 0.7; if the label fast-labeling module is present, its value is also 0.7; if the similar-block fast-labeling module is present, 0.7 is likewise used as the test value. Unifying the module values helps standardize the ablation experiment; the peak signal-to-noise ratios of b-h against a are shown in Table 1:
Table 1. PSNR comparison of b-h with a (number of iterations = 125)
From the results in Table 1, the PSNR of b, c and d against a is better than that of e, f, g and h against a, which shows that the presence of the random-dropping method is a prerequisite for the good performance of the network of the invention.
The PSNR of b, c, d, e, f and g against a is clearly greater than that of h, from which it follows that the similar-block labeling method is a key element of the network. Among b, c and d, the PSNR of structure c against a is greater than that of b and d, which shows that feature reuse can sometimes cause over-fitting and slightly degrade the training result; the same phenomenon is also found in the PSNR results of e, f and g against a.
Fig. 4 shows the actual results of applying transforms a-h to a selected image. Although the PSNR values differ considerably, the overall images differ little and the overall structure is intact; the differences show mainly in the details. In the jointly selected image blocks, the detail rendering of a, b, c and d is clearly better than that of e, f, g and h. This confirms, from the visual side, the good optimization brought by the random-dropping method and the necessity of similar-block grouping. In addition, although the PSNR of the image processed by method f is relatively low, its detail restoration is not poor.
For the comparative experiment, the image-processing results of the network are compared with six networks: SegNet, DeconvNet, Dilated Convolutions, RefineNet, PSPNet and DeepLab v3. In model training the parameters are unified: the initial learning rate is set to 1e-3 and reduced to 1e-4 when more than half of the images have been used. The training images are all enhanced images produced by the same flipping and region-distortion operations; the dropout rate of the model is set to 0.2, the weight decay to 1e-4, the Nesterov momentum to 0.9, and the number of iterations to 150. The experimental hardware is identical throughout: an Intel(R) Xeon(R) E5-2643 v4 @ 3.40 GHz CPU, an NVIDIA GeForce GTX 1080M GPU and 256 GB of memory, with Ubuntu 14.04 as the operating system. The results are examined by computing the corresponding MIoU and MPA.
SegNet consists of an encoder, a decoder and a softmax classification layer; its advantage is that the decoder up-samples its low-resolution input feature maps, so the network does not need to learn the up-sampling, balancing quality and memory.
DeconvNet introduces deconvolution to solve pixel-level prediction, remedying the shortcomings of traditional CNN-based networks and allowing tiny objects to be recognized.
Dilated Convolutions enlarge the receptive field of the convolution kernel with the number of parameters unchanged and without pooling operations, while keeping the size of the output feature maps constant.
RefineNet is a multi-stage refinement network that fuses coarse high-level semantic features with fine low-level features, effectively retaining detailed information.
The pyramid pooling module proposed by PSPNet aggregates contextual information from different regions and improves the ability to capture global information.
DeepLab v3 studies dilated convolution further, designing cascaded dilated convolutions and a parallel architecture of dilated convolutions with different sampling rates.
Under essentially identical experimental conditions, the MIoU and MPA computed for the method of the invention and these six networks are shown in Table 3.
Table 3. MIoU and MPA of semantic segmentation images for different algorithms
MIoU is one of the core indices for evaluating semantic segmentation, and its accuracy largely determines the accuracy of an algorithm's segmentation; MPA forms a complementary reference to MIoU. As can be seen in Table 3, on the medical image dataset used in the invention, among the six compared methods the MIoU and MPA computed for DeepLab v3 are relatively the highest, followed by PSPNet, RefineNet and Dilated Convolutions, while the results of SegNet and DeconvNet fall short of those four.
On the medical image dataset used in the invention, whether by MIoU or by MPA, the method of the invention is optimal: its MIoU is 0.4 percentage points higher than that of DeepLab v3, and its MPA 0.5 percentage points higher.
Fig. 5 shows the results obtained by the seven network structures, including the present invention, on three images randomly selected from the validation set. The three original images are on the far left and the ground-truth images on the far right; in between are the compared methods, with the last method (located to the left of the ground-truth images) being the method of the invention. It can be seen that, in visual effect, the method of the invention is closest to the ground truth. Across the three images, DeconvNet, DeepLab v3 and the method of the invention are the most stable, with little fluctuation, whereas RefineNet and PSPNet are unstable, labeling the three images in different ways. In PSPNet, the narrow red region of the first image lies at the bottom edge, but the narrow red-labeled regions of the second and third images lie inside the lung instead. In RefineNet, the gap between the labeling colors and the background is too small for accurate identification. The method of the invention (taking these three images as reference) is both accurate and extremely stable. The reason why some methods have higher PSNR but less than ideal visual effect is currently unknown; its concrete cause is currently being tested.
The invention improves the dense convolutional network by introducing a block-matching and 3D filtering (BM3D) module, proposing an algorithm that combines BM3D with a dense convolutional network for medical image segmentation and lesion detection. The tumor detection model of the invention uses the similar-block grouping technique of BM3D and places it at the front end of DenseNet; in the network architecture, the SELU activation function replaces ReLU to optimize negative-axis features; the dense block structure of DenseNet concatenates the feature maps learned by different layers, while max pooling is added for second-stage feature extraction. Finally, at the end of the network, the similarity aggregation method of BM3D performs feature reconstruction, fully mining the pixel-level spatial structure of the image, and region fusion is carried out with BM3D aggregation. The network of the invention uses multiple evaluation indices, including intersection over union and mean pixel accuracy, and shows advantages. The experimental results show that the network architecture of the invention has good robustness in medical image segmentation optimization.
The invention also provides a device for realizing the lesion detection approach based on the fusion of BM3D and a dense convolutional network, comprising: a memory for storing a computer program of the lesion detection approach based on the fusion of BM3D and the dense convolutional network; and a processor for executing the computer program to realize the steps of the lesion detection approach based on the fusion of BM3D and the dense convolutional network.
The technology described herein may be implemented in hardware, software, firmware or any combination thereof. The various features described as modules, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices or other hardware devices. In some cases, the various features of the electronic circuitry may be implemented as one or more integrated circuit devices, such as integrated circuit chips or chipsets.
The code or instructions may be software and/or firmware executed by processing circuitry including one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs) or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementing the technology described herein. Furthermore, in some aspects, the functions described in this disclosure may be provided in software modules and hardware modules.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. A lesion detection approach based on the fusion of BM3D and a dense convolutional network, characterized in that the method comprises:
step 1, labeling similar blocks;
step 2, random dropping and fast labeling;
step 3, optimizing and training the DenseNet network;
step 4, reconstructing the data from deep training using the spatial information and structural information of the input image and the extracted feature information.
2. The lesion detection approach according to claim 1, characterized in that step 1 further comprises:
selecting Nk reference blocks of size k × k in the original image;
a reference block is generally taken every Sk (Sk < 10) pixels, which reduces the complexity of the original algorithm; similar blocks are searched in a range of size n × n around the reference block, finding all blocks in the region whose dissimilarity is below a threshold, and the similar blocks found are assembled into a three-dimensional matrix;
the reference block is also placed in the three-dimensional matrix; BM3D determines similar blocks by Euclidean distance, preprocessing the block distance with a normalized two-dimensional linear transform and a hard threshold, with the formula as follows:
where x is a pixel, … is the target similar block, Zx is the search block, … is the size of the selected block, γ′ is the hard-threshold operation with the threshold set as …, and … is the normalized two-dimensional linear transform;
the set of similar blocks is found according to the Euclidean distance formula, as shown in formula (2):
where X is the image, … is a hyperparameter that determines whether two blocks are similar, and … is the set of blocks similar to ….
3. The lesion detection approach according to claim 1 or 2, characterized in that step 2 further comprises:
setting the random-dropping parameter to 1, so that no operation is applied to the input matrix;
fast labeling is divided into two types: label fast-labeling and similar-block fast-labeling;
random dropping is a delete operation, and similar-block fast-labeling is an add operation; the parameter value of similar-block fast-labeling is set to 0;
label fast-labeling is a simplified operation of similar-block labeling: it records the reference block as a tag block and uses that tag block as the input, directly skipping the two operations of similar-block labeling and random dropping;
similar-block fast-labeling randomly selects tag blocks generated by similar-block labeling.
4. The lesion detection approach according to claim 1 or 2, characterized in that step 3 further comprises:
configuring the DenseNet structure, setting the activation function to SELU, and adding a max-pooling layer to each dense block, using the feature-reuse and connection mechanism of the dense blocks to extract deep features;
based on the DenseNet structure, deepening feature learning through heavy reuse of features, and letting all layers in a dense block receive supervision information;
each layer of the network outputs ψ feature maps, so that as the network depth increases the number of feature maps grows; ψ is the growth-rate parameter;
configuring the dense block structure of the network: x0 is the input, and the input of H1 is x0;
the input of H2 is x0 and x1, where x1 is the output of H1;
the input of H3 is x0, x1 and x2, where x2 is the output of H2; and so on;
let xl be the output of the l-th (l > 1) layer of a dense block in DenseNet;
configuring xl based on the DenseNet structure: xl is the output of applying the nonlinear transform Hl to the output xl-1 of the previous ((l-1)-th) layer, with the formula: xl = Hl(xl-1), (3)
where the nonlinear transform Hl is typically defined as a convolution followed by a ReLU activation function, with some connections randomly dropped during training;
the ReLU activation function is piecewise linear, mapping all negative values to 0 and keeping positive inputs unchanged;
as a nonlinear function, ReLU avoids the vanishing-gradient problem; its activation formula is: f(x) = max(0, x); (4)
in ResNet, residual blocks are introduced to train deep networks, allowing gradients to flow directly to earlier layers and enabling feature reuse by adding an identity mapping to the output, so that the output xl becomes:
xl = Hl(xl-1) + xl-1; (5)
in DenseNet, all feature outputs are iteratively concatenated through direct connections between layers; the output xl of the l-th layer is:
xl = Hl([xl-1, xl-2, …, x0]), (6)
where [...] denotes the concatenation operation, and feature reuse is achieved through the connections to the outputs; Hl is defined as a batch normalization layer, followed by a ReLU activation function, a convolutional layer and a dropout layer; batch normalization addresses the vanishing- and exploding-gradient problems and has forward- and back-propagation structure; dropout alleviates information redundancy by randomly deactivating neurons;
when the data generated by similar-block labeling is input, batch normalization, SELU activation and a convolution operation are first applied to the input to extract features, where the formula of SELU is:
selu(x) = λx for x > 0, and λα(e^x − 1) for x ≤ 0; (7)
under preset parameters, the SELU activation drives the distribution to automatically normalize to zero mean and unit variance;
the particular values of λ and α satisfying formula (7) are taken, or the weights are initialized according to the given parameters;
since the feature maps inside a dense block must keep the same size, transition layers are set between feature maps of different sizes to perform down-sampling, guaranteeing the feasibility of the algorithm;
a transition layer consists of a batch normalization operation, a convolution operation and a pooling operation;
after the transition layer, a max-pooling operation is configured to perform a second round of feature extraction.
5. The lesion detection approach according to claim 1, characterized in that step 4 further comprises:
reconstructing the data from deep training using the spatial information and structural information of the input image and the extracted feature information;
the ground-truth transfer process passes the spatial information and dimensional characteristics of the original data, together with the label information and spatial structure of the data after random dropping, to the classification marker;
the fast ground-truth transfer process passes the spatial information and spatial structure of the original data to the classification marker for reconstruction; the fast ground-truth transfer builds a rough model and speeds up model training;
the classification marker contains the incoming deep features and the processed feature information of the similar blocks found in the input image; through the acquisition and connection performed by the classification marker, the corresponding information is put into one-to-one correspondence, facilitating subsequent processing;
the feature information is fused.
6. The lesion detection approach according to claim 5, characterized in that the fusion method comprises: updating the gray value of each pixel by a weighted average of the values of the blocks at each corresponding position; the weights depend on the number of coefficients set to 0 and on the noise intensity; the two-dimensional blocks are fused back to their original positions to compute the final image, where the weight of each block is determined by the number of non-zero coefficients remaining after the hard-threshold operation;
the model-fusion result, based on the network architecture combining BM3D with DenseNet, accomplishes the tumor-recognition task of medical image semantic segmentation.
7. A device for realizing the lesion detection approach based on the fusion of BM3D and a dense convolutional network, characterized by comprising:
a memory for storing a computer program of the lesion detection approach based on the fusion of BM3D and the dense convolutional network;
a processor for executing the computer program to realize the steps of the lesion detection approach based on the fusion of BM3D and a dense convolutional network according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910415029.9A CN110189308B (en) | 2019-05-17 | 2019-05-17 | Tumor detection method and device based on fusion of BM3D and dense convolution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110189308A true CN110189308A (en) | 2019-08-30 |
CN110189308B CN110189308B (en) | 2020-10-23 |
Family
ID=67716746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910415029.9A Active CN110189308B (en) | 2019-05-17 | 2019-05-17 | Tumor detection method and device based on fusion of BM3D and dense convolution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110189308B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150187053A1 (en) * | 2013-12-26 | 2015-07-02 | Mediatek Inc. | Method and Apparatus for Image Denoising with Three-Dimensional Block-Matching |
CN107563381A (en) * | 2017-09-12 | 2018-01-09 | 国家新闻出版广电总局广播科学研究院 | The object detection method of multiple features fusion based on full convolutional network |
CN107729946A (en) * | 2017-10-26 | 2018-02-23 | 广东欧珀移动通信有限公司 | Picture classification method, device, terminal and storage medium |
CN108765290A (en) * | 2018-05-29 | 2018-11-06 | 天津大学 | A kind of super resolution ratio reconstruction method based on improved dense convolutional neural networks |
CN109360152A (en) * | 2018-10-15 | 2019-02-19 | 天津大学 | 3 d medical images super resolution ratio reconstruction method based on dense convolutional neural networks |
CN109544510A (en) * | 2018-10-24 | 2019-03-29 | 广州大学 | A kind of three-dimensional Lung neoplasm recognition methods based on convolutional neural networks |
CN109712111A (en) * | 2018-11-22 | 2019-05-03 | 平安科技(深圳)有限公司 | A kind of cutaneum carcinoma category identification method, system, computer equipment and storage medium |
2019-05-17: Application CN201910415029.9A filed in China; granted as CN110189308B (active).
Non-Patent Citations (1)
Title |
---|
KOSTADIN DABOV et al.: "Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering", IEEE Transactions on Image Processing * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110781793A (en) * | 2019-10-21 | 2020-02-11 | 合肥成方信息技术有限公司 | Artificial intelligence real-time image recognition method based on quadtree algorithm |
CN110879980A (en) * | 2019-11-13 | 2020-03-13 | 厦门大学 | Nuclear magnetic resonance spectrum denoising method based on neural network algorithm |
CN110879980B (en) * | 2019-11-13 | 2023-09-05 | 厦门大学 | Nuclear magnetic resonance spectrum denoising method based on neural network algorithm |
CN111341438A (en) * | 2020-02-25 | 2020-06-26 | 中国科学技术大学 | Image processing apparatus, electronic device, and medium |
CN111428586B (en) * | 2020-03-09 | 2023-05-16 | 同济大学 | Three-dimensional human body posture estimation method based on feature fusion and sample enhancement |
CN111428586A (en) * | 2020-03-09 | 2020-07-17 | 同济大学 | Three-dimensional human body posture estimation method based on feature fusion and sample enhancement |
CN111476802A (en) * | 2020-04-09 | 2020-07-31 | 山东财经大学 | Medical image segmentation and tumor detection method and device based on dense convolution model and readable storage medium |
CN111476802B (en) * | 2020-04-09 | 2022-10-11 | 山东财经大学 | Medical image segmentation and tumor detection method, equipment and readable storage medium |
CN111652840A (en) * | 2020-04-22 | 2020-09-11 | 北京航空航天大学 | Turbid screening and classifying device for X-ray chest X-ray image lung |
CN111652840B (en) * | 2020-04-22 | 2022-08-30 | 北京航空航天大学 | Turbid screening and classifying device for X-ray chest X-ray image lung |
CN111832621A (en) * | 2020-06-11 | 2020-10-27 | 国家计算机网络与信息安全管理中心 | Image classification method and system based on dense multipath convolutional network |
CN112257800A (en) * | 2020-10-30 | 2021-01-22 | 南京大学 | Visual identification method based on deep convolutional neural network model-regeneration network |
CN113313775A (en) * | 2021-05-26 | 2021-08-27 | 浙江科技学院 | Deep learning-based nonlinear optical encryption system attack method |
CN113313775B (en) * | 2021-05-26 | 2024-03-15 | 浙江科技学院 | Nonlinear optical encryption system attack method based on deep learning |
CN115564652A (en) * | 2022-09-30 | 2023-01-03 | 南京航空航天大学 | Reconstruction method for image super-resolution |
CN115564652B (en) * | 2022-09-30 | 2023-12-01 | 南京航空航天大学 | Reconstruction method for super-resolution of image |
CN115810016A (en) * | 2023-02-13 | 2023-03-17 | 四川大学 | Lung infection CXR image automatic identification method, system, storage medium and terminal |
CN116309582A (en) * | 2023-05-19 | 2023-06-23 | 之江实验室 | Portable ultrasonic scanning image identification method and device and electronic equipment |
CN116309582B (en) * | 2023-05-19 | 2023-08-11 | 之江实验室 | Portable ultrasonic scanning image identification method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110189308B (en) | 2020-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110189308A (en) | Tumor detection method and device based on fusion of BM3D and dense convolutional network | |
Aggarwal et al. | Generative adversarial network: An overview of theory and applications | |
Zhang et al. | ME‐Net: multi‐encoder net framework for brain tumor segmentation | |
Adegun et al. | Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art | |
Guibas et al. | Synthetic medical images from dual generative adversarial networks | |
WO2022127227A1 (en) | Multi-view semi-supervised lymph node classification method and system, and device | |
CN109903292A (en) | A kind of three-dimensional image segmentation method and system based on full convolutional neural networks | |
Benhammou et al. | A first study exploring the performance of the state-of-the art CNN model in the problem of breast cancer | |
Ding et al. | DCU-Net: a dual-channel U-shaped network for image splicing forgery detection | |
Zhang et al. | ST-unet: Swin transformer boosted U-net with cross-layer feature enhancement for medical image segmentation | |
Yamanakkanavar et al. | A novel M-SegNet with global attention CNN architecture for automatic segmentation of brain MRI | |
Zidan et al. | Swincup: Cascaded swin transformer for histopathological structures segmentation in colorectal cancer | |
Santos et al. | A new approach for detecting fundus lesions using image processing and deep neural network architecture based on yolo model | |
Ding et al. | FTransCNN: Fusing Transformer and a CNN based on fuzzy logic for uncertain medical image segmentation | |
Luo et al. | A deep convolutional neural network for diabetic retinopathy detection via mining local and long‐range dependence | |
Razavi et al. | Minugan: Dual segmentation of mitoses and nuclei using conditional gans on multi-center breast h&e images | |
Tan et al. | Pulmonary nodule detection using hybrid two‐stage 3D CNNs | |
Kothala et al. | Localization of mixed intracranial hemorrhages by using a ghost convolution-based YOLO network | |
Zhao et al. | SiUNet3+-CD: A full-scale connected Siamese network for change detection of VHR images | |
Banerjee et al. | A CADe system for gliomas in brain MRI using convolutional neural networks | |
Cao et al. | 3D convolutional neural networks fusion model for lung nodule detection on clinical CT scans | |
CN112488996A (en) | Inhomogeneous three-dimensional esophageal cancer energy spectrum CT (computed tomography) weak supervision automatic labeling method and system | |
Bhattacharjee et al. | An explainable computer vision in histopathology: techniques for interpreting black box model | |
Shah et al. | Classifying and localizing abnormalities in brain MRI using channel attention based semi-Bayesian ensemble voting mechanism and convolutional auto-encoder | |
Teng et al. | Semi-supervised leukocyte segmentation based on adversarial learning with reconstruction enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||