CN108090904A - Medical image instance segmentation method and device - Google Patents
Medical image instance segmentation method and device
- Publication number
- CN108090904A CN108090904A CN201810006159.2A CN201810006159A CN108090904A CN 108090904 A CN108090904 A CN 108090904A CN 201810006159 A CN201810006159 A CN 201810006159A CN 108090904 A CN108090904 A CN 108090904A
- Authority
- CN
- China
- Prior art keywords
- medical image
- network
- boundary
- image
- data augmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a medical image instance segmentation method and device that can accurately segment the instances in an image even when object boundaries in the medical image are blurred and the foreground differs little from the background. The method includes: performing data augmentation and preprocessing on a medical image; constructing a multi-channel neural network; using the multi-channel neural network to classify the augmented and preprocessed medical image, obtaining a foreground/background classification result, and to detect the boundaries of the structures in the augmented and preprocessed medical image, obtaining a boundary result; and using a fusion network to merge the foreground/background classification result with the boundary result, segmenting the instances in the medical image to obtain the final instance segmentation result.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a medical image instance segmentation method and device.
Background art
Traditional image segmentation with thresholding is a region-based technique: by setting different feature thresholds, the pixels of an image are divided into several classes. A threshold is a gray-level cutoff that separates target from background. If the image contains only two classes, target and background, a single threshold suffices; this is called single-threshold segmentation. The gray value of each pixel is compared with the threshold: pixels whose gray value exceeds the threshold form one class, and pixels whose gray value falls below it form the other. If the image contains multiple targets, multiple thresholds must be chosen to separate each target from the background; this is called multi-threshold segmentation.
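The single- and multi-threshold schemes described above can be sketched as follows (a minimal NumPy illustration; the toy image and threshold values are hypothetical):

```python
import numpy as np

def threshold_segment(image, thresholds):
    """Label each pixel by the number of thresholds it exceeds.

    With one threshold this is single-threshold (binary) segmentation;
    with several thresholds, each target/background class gets its own label.
    """
    labels = np.zeros(image.shape, dtype=np.int32)
    for t in sorted(thresholds):
        labels += (image > t).astype(np.int32)
    return labels

# A toy 2x3 grayscale "image" and two thresholds -> three classes (0, 1, 2).
img = np.array([[10, 120, 200],
                [30, 140, 250]], dtype=np.uint8)
print(threshold_segment(img, [100, 180]).tolist())  # [[0, 1, 2], [0, 1, 2]]
```

As the background section notes, this works only when the gray-level separation between classes is clean, which is exactly what pathological slides lack.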
In the course of making the present invention, the inventors found at least the following problems in the prior art:
In pathological slide images, the color difference between foreground and background is small, and color depth is often inconsistent across different slides, so threshold-based segmentation methods cannot segment the targets well. Biological tissue is composed of many cell types whose staining levels often differ considerably, and the proportions of the various cells are inconsistent across slides, making pathological slides noisy; existing segmentation algorithms therefore perform poorly.
Summary of the invention
In view of this, embodiments of the present invention provide a medical image instance segmentation method and device that can accurately segment the instances in an image even when object boundaries in the medical image are blurred and the foreground differs little from the background.
To achieve the above object, according to one aspect of the embodiments of the present invention, a medical image instance segmentation method is provided.
A medical image instance segmentation method of an embodiment of the present invention includes: performing data augmentation and preprocessing on a medical image; constructing a multi-channel neural network; using the multi-channel neural network to classify the augmented and preprocessed medical image, obtaining a foreground/background classification result, and to detect the boundaries of the structures in the augmented and preprocessed medical image, obtaining a boundary result; and using a fusion network to merge the foreground/background classification result with the boundary result, segmenting the instances in the medical image to obtain the final instance segmentation result.
Optionally, the data augmentation and preprocessing includes applying a series of augmentation methods to improve training, including rotation, scaling, translation, shearing, mirroring and elastic deformation.
Optionally, the foreground and the background of the medical image refer, respectively, to the instances to be extracted from the medical image and to the image background and noise other than the instances.
Optionally, the multi-channel neural network in the network construction module includes a region segmentation channel and a boundary detection channel.
Optionally, the fusion network in the fusion module is a fully convolutional neural network.
To achieve the above object, according to one aspect of the embodiments of the present invention, a medical image instance segmentation device is provided.
A medical image instance segmentation device of an embodiment of the present invention includes: an image preprocessing module for performing data augmentation and preprocessing on a medical image; a network construction module for constructing a multi-channel neural network; a region segmentation and boundary detection module for using the multi-channel neural network to classify the augmented and preprocessed medical image, obtaining a foreground/background classification result, and to detect the boundaries of the structures in the augmented and preprocessed medical image, obtaining a boundary result; and a fusion module for using a fusion network to merge the foreground/background classification result with the boundary result, segmenting the instances in the medical image to obtain the final instance segmentation result.
Optionally, the image preprocessing module applies a series of augmentation methods, including rotation, scaling, translation, shearing, mirroring and elastic deformation, to provide the large amount of training data a neural network requires.
Optionally, the region segmentation and boundary detection module uses a fully convolutional neural network to separate foreground from background; with upsampling inside the network, it can be trained and can predict end to end. The boundary network is trained with deep supervision, which helps balance positive and negative examples and integrates features at multiple scales from different depths to obtain the boundary detection result.
Optionally, building on the region segmentation and boundary detection results, the fusion module further integrates the outputs of the two channels through a convolutional neural network to obtain the final fine segmentation result.
To achieve the above object, according to another aspect of the embodiments of the present invention, an electronic device implementing the medical image instance segmentation method is provided.
An electronic device of an embodiment of the present invention includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the medical image instance segmentation method of the embodiments of the present invention.
To achieve the above object, according to yet another aspect of the embodiments of the present invention, a computer-readable medium is provided.
A computer-readable medium of an embodiment of the present invention stores a computer program which, when executed by a processor, causes the computer to perform the medical image instance segmentation method of the embodiments of the present invention.
An embodiment of the foregoing invention has the following advantages or beneficial effects: a multi-channel neural network is created, improving segmentation accuracy. Most pixels in an image are non-boundary pixels, so boundary and non-boundary classes are severely imbalanced and the network's loss function value is relatively small; combined with gradient vanishing during backpropagation, this makes the lower layers of the network converge very slowly or hardly at all. The boundary detection channel therefore uses a Deeply Supervised Net (DSN): deep supervision not only speeds up the convergence of the network but also lets the lower layers learn features with stronger representational power, helps balance positive and negative examples, and integrates features at multiple scales from different depths. The present invention provides an efficient and robust method for the automatic segmentation and diagnosis of medical pathology slide images, supporting the development of computer-aided diagnosis.
Further effects of the above optional modes are explained below in conjunction with specific embodiments.
Description of the drawings
The accompanying drawings are provided for a better understanding of the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of the main steps of the medical image instance segmentation method according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the medical image instance segmentation method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the main modules of the medical image instance segmentation device according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the present application.
Detailed description of embodiments
Exemplary embodiments of the present invention are explained below with reference to the accompanying drawings, including various details of the embodiments to aid understanding; these details should be regarded as merely exemplary. Those of ordinary skill in the art will therefore recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present invention. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
The technical solution of the embodiments of the present invention first performs data augmentation and preprocessing on the medical image, then constructs a multi-channel neural network based on an FCN (fully convolutional network) and an HED (holistically-nested edge detector, a boundary detection network). Through the multi-channel neural network, the medical image is classified to obtain a foreground/background classification result, and the structures in the image undergo boundary detection to obtain a boundary detection result. A fusion network, designed on a convolutional neural network, merges the two results to obtain the final segmentation result.
Fig. 1 is a schematic diagram of the main steps of the medical image instance segmentation method according to an embodiment of the present invention.
As shown in Fig. 1, the medical image instance segmentation method of the embodiment of the present invention mainly includes the following steps:
Step S11: perform data augmentation and preprocessing on the medical image. In this step, the augmentation and preprocessing may use at least one of rotation, scaling, translation, shearing, mirroring and elastic deformation.
Step S12: construct a multi-channel neural network. In this step, an FCN (fully convolutional network) may be used to segment foreground and background, and an HED may be used for boundary detection.
Step S13: through the multi-channel neural network, classify the augmented and preprocessed medical image to obtain a foreground/background classification result, and detect the boundaries of the structures in the augmented and preprocessed medical image to obtain a boundary result. In this step, the medical image may be classified at pixel level, labeling each pixel as foreground or background; the boundary detection module divides the structures in the medical image into two classes, labeled edge and non-edge respectively.
Step S14: through the fusion network, merge the foreground/background classification result with the boundary result and segment the instances in the medical image to obtain the final instance segmentation result. In this step, the foreground/background region segmentation result and the boundary detection result may be combined to mark out each instance in the image.
Fig. 2 is a schematic flow chart of the medical image instance segmentation method according to an embodiment of the present invention.
A specific implementation of medical image instance segmentation is as follows:
First, data augmentation and preprocessing are performed on the images, covering the following aspects. Rotation: each training image and its corresponding label are rotated at 15-degree intervals; the gaps produced by rotating the image are filled with the image mean, while the gaps produced by rotating the label are filled with 255, the maximum of an 8-bit integer. Scaling: each training image is shrunk to 0.8x and 0.9x and enlarged to 1.1x and 1.2x. Translation: during training, a part of each training image is randomly cropped with a 400 x 400 rectangle and used as input. Shearing: training images are sheared in both directions, with the shear inclination stepping by 5 degrees over the range -20 to 20 degrees. Mirroring: training images are flipped left-right. Although flipping upside down is also a valid transformation, an upside-down flip is equivalent to a left-right flip followed by a 180-degree rotation, so only the left-right flip is performed. Elastic deformation: pincushion, sinusoid and barrel deformations are applied to each training image. After data augmentation, each training image yields about 1,200 training images, and the final training set contains nearly 100,000 training images.
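A few of the augmentation steps above can be sketched as follows (a minimal NumPy illustration; the function names and toy inputs are our own, and the rotation, shear and elastic warps themselves are left to an image-processing library):

```python
import numpy as np

rng = np.random.default_rng(0)

def mirror(image):
    """Left-right flip only: an upside-down flip equals a left-right flip
    plus a 180-degree rotation, which the rotation step already covers."""
    return image[:, ::-1]

def random_crop(image, size=400):
    """Translation step: randomly cut a size x size patch as input."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return image[top:top + size, left:left + size]

def fill_rotation_gap(rotated, gap_mask, is_label):
    """Fill the gap left by rotation: image gaps with the image mean,
    label gaps with 255 (the 8-bit maximum), as the text prescribes."""
    out = rotated.copy()
    out[gap_mask] = 255 if is_label else int(rotated[~gap_mask].mean())
    return out

print(mirror(np.array([[1, 2], [3, 4]])).tolist())  # [[2, 1], [4, 3]]
```

The scaling and shear factors listed in the text would simply be further entries in the same augmentation pipeline.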
Next, the multi-channel neural network is constructed, comprising a region segmentation channel and a boundary detection channel. The region segmentation channel uses an FCN fully convolutional network derived from the VGG16 network: the last two fully connected layers of VGG16 are turned into convolutional layers with kernel size 1, and an upsampling layer performs bilinear interpolation on the feature maps, so the final network structure contains 15 convolutional layers, 5 max-pooling layers (taking the maximum within the receptive field) and one upsampling layer. Passing an image X through the fully convolutional network directly outputs, for each pixel, the probability of belonging to each class, i.e. the probability of being foreground or background. Let P_u denote this probability and ω_u the coefficient matrix of the fully convolutional network; then

P_u(y_j = k | X; ω_u) = μ_k(h(X; ω_u))  (1)

Formula (1) gives the probability that pixel j belongs to class k. The k that maximizes the probability P_u is selected as the prediction for that pixel, i.e.

ŷ_j = argmax_k P_u(y_j = k | X; ω_u)  (2)

Each pixel can thereby be labeled as belonging to foreground or background, yielding the segmentation of image foreground and background.
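The per-pixel softmax of formula (1) and the subsequent maximum-probability prediction can be illustrated as follows (a minimal NumPy sketch on hypothetical score maps, not the patent's trained network):

```python
import numpy as np

def softmax(scores):
    """mu_k in formula (1): per-pixel softmax over the class axis (axis 0)."""
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def predict_foreground(scores):
    """Pick, for every pixel, the class k that maximizes P_u."""
    return softmax(scores).argmax(axis=0)

# Hypothetical 2-class score maps h(X; w_u) for a 2x2 image.
scores = np.array([[[2.0, -1.0], [0.5, 3.0]],   # background scores
                   [[1.0,  4.0], [0.6, 0.0]]])  # foreground scores
print(predict_foreground(scores).tolist())  # [[0, 1], [1, 0]]
```

In the real channel, h(X; ω_u) would be the output of the convolutional layers rather than a hand-written array.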
Although a fully convolutional network can precisely locate targets and accurately separate them from the background, a standalone FCN cannot distinguish well between objects that are close to or touch each other. This is because the loss function of a fully convolutional network is insensitive to contact regions: the regions where objects touch occupy relatively little of an image, so misclassifying them does not noticeably increase the loss, and during backpropagation these regions are penalized little and end up misclassified. The embodiment of the present invention detects the boundary of each object separately through the boundary detection channel; once the boundary of each object is determined, different glands can be fully distinguished. This channel uses the HED boundary detection network, also derived from the VGG network. Unlike the FCN, HED adopts a deeply supervised training strategy, adding a loss function before each downsampling layer. Let P_b^(m) denote the probability of the m-th deep supervision output of the HED network, P_b the weighted average of the M deep supervision predictions with weighting coefficients α, and ω_b the weight matrix of the HED model.
Each pixel can thereby be labeled as belonging or not belonging to a boundary, yielding the boundary detection of the instances in the image.
Through the fusion network, the foreground/background segmentation result is combined with the boundary detection result: the network takes the predictions of the multi-task model as input and directly outputs a fine segmentation prediction. The network contains 4 convolutional layers and 2 downsampling layers, after which upsampling inside the network yields a probability value for each pixel. Let ω_f denote the weight matrix of the fusion network; then the final prediction is

P_f(y_j = k | P_u, P_b; ω_f) = μ_k(h(P_u, P_b; ω_f))  (5)

The model can be trained and can predict end to end: given an input image X, the model directly yields the fine segmentation prediction P_f without requiring any other post-processing:

P_f(y_j = k | X; ω, ω_u, ω_b, ω_f) = μ_k(h(X; ω, ω_u, ω_b, ω_f))  (6)

The image instance segmentation result can thereby be obtained.
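The role of the fusion step in formula (5) — mixing the region and boundary probability maps into a per-pixel class prediction — can be illustrated in miniature (a 1x1-convolution-style NumPy sketch with hypothetical weights, far smaller than the 4-convolutional-layer network described):

```python
import numpy as np

def fuse(p_region, p_boundary, w):
    """Stack the region and boundary probability maps, mix them per pixel
    into K class score maps with weight matrix w (playing the role of w_f),
    then take the per-pixel softmax-argmax as in formula (5)."""
    feats = np.stack([p_region, p_boundary])          # (2, H, W)
    scores = np.tensordot(w, feats, axes=([1], [0]))  # (K, H, W)
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    return probs.argmax(axis=0)

# Toy 1x2 maps: pixel 0 looks like background, pixel 1 sits on a boundary.
p_r = np.array([[0.9, 0.1]])
p_b = np.array([[0.0, 0.9]])
w = np.array([[1.0, 0.0],   # class 0 reads the region map
              [0.0, 1.0]])  # class 1 reads the boundary map
print(fuse(p_r, p_b, w).tolist())  # [[0, 1]]
```

The real fusion network learns w_f (and uses spatial convolutions), so the mixing is not a fixed identity as in this toy.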
After the network is constructed, it is trained. Although the network of the present invention can mathematically be trained end to end, in practice non-edge and edge pixels are extremely imbalanced, so the loss function value of the boundary detection channel is much smaller than that of the region segmentation channel. Direct optimization would bias training heavily toward the region segmentation channel, render the boundary detection channel ineffective, keep the network from converging, and let it converge easily to a local minimum. The present invention therefore first pre-trains the network module by module to initialize the network parameters, and then trains the whole network to convergence. In the pre-training stage, the network parameters ω_b and ω_f of the boundary detection module and the fusion module are fixed first, and only the feature extraction network parameters ω and the region segmentation module parameters ω_u are learned; this stage is equivalent to an FCN model alone. In this stage, for a training image X_n, the loss function of each predicted pixel is the softmax cross entropy of the prediction in formula (1), and the loss of the whole image is the sum of the losses over all pixels.
After the region segmentation module has trained for some time, the parameters ω and ω_u are well initialized; the learned ω and ω_u are then fixed and the boundary detection module is trained, learning the parameters ω_b. Because deep supervision is used, each pixel produces M + 1 losses, and the loss at each pixel is the sum of these M + 1 losses. In this stage, to further balance edge and non-edge pixels, a variant of the cross-entropy loss function that automatically balances positive and negative examples is adopted. The loss function of each deep supervision output is

ℓ_b^(m) = -β Σ_{j∈Z+} log P_b^(m)(y_j = 1) - (1 - β) Σ_{j∈Z-} log P_b^(m)(y_j = 0)

where β = |Z-|/|Z| and 1 - β = |Z+|/|Z|, with |Z+| and |Z-| denoting the numbers of edge and non-edge pixels in the label Z, and |Z| the total number of pixels of Z.
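The class-balanced cross entropy with β = |Z-|/|Z| can be sketched as follows (a minimal NumPy version for a single deep supervision output; the toy predictions are hypothetical):

```python
import numpy as np

def balanced_bce(pred, label, eps=1e-7):
    """Class-balanced cross entropy: edge pixels weighted by beta = |Z-|/|Z|
    and non-edge pixels by 1 - beta = |Z+|/|Z|, so the rare edge class is
    not drowned out by the non-edge majority."""
    pos = label == 1
    beta = (~pos).sum() / label.size          # |Z-| / |Z|
    pred = np.clip(pred, eps, 1 - eps)        # guard the logarithms
    loss = -(beta * np.log(pred[pos]).sum()
             + (1 - beta) * np.log(1 - pred[~pos]).sum())
    return loss / label.size

# Toy example: one edge pixel predicted 0.9, one non-edge predicted 0.1.
print(balanced_bce(np.array([0.9, 0.1]), np.array([1, 0])))
```

With M + 1 such outputs, the per-pixel loss of this stage is the sum of the M deep supervision losses plus the loss on the weighted-average prediction P_b.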
The loss on the weighted average of the deep supervision outputs takes the same class-balanced form over P_b, and the total loss function of this stage is the sum of these losses.
After the boundary detection module has been trained, the feature extraction network, the region segmentation module and the boundary detection module are fixed and the fusion module is trained; for each pixel the loss function is the cross entropy of the fusion prediction, and for the whole image the loss is the sum of the pixel losses.
After the pre-training process, every module parameter of the network has been initialized to a suitable value, and the feature conversion layers can convert region features into boundary features well. Next the modules need to adapt to and adjust one another so that the whole network performs best; in this stage the network learns all parameters simultaneously, and the loss function is the sum of the module loss functions, i.e.

L = L_u + L_b + L_f  (14)
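The staged schedule above — region channel first, then the boundary channel, then the fusion module, then joint fine-tuning with L = L_u + L_b + L_f — can be written down as data (a plain-Python sketch; the stage and parameter-group names are our own shorthand for ω, ω_u, ω_b, ω_f):

```python
# Each stage names the parameter groups that are trainable and its loss;
# everything not listed under "train" stays frozen in that stage.
STAGES = [
    {"name": "region",   "train": {"w", "w_u"},                "loss": "L_u"},
    {"name": "boundary", "train": {"w_b"},                     "loss": "L_b"},
    {"name": "fusion",   "train": {"w_f"},                     "loss": "L_f"},
    {"name": "joint",    "train": {"w", "w_u", "w_b", "w_f"},
     "loss": "L_u + L_b + L_f"},
]

def trainable(stage_name):
    """Return which weight groups are updated in a given stage."""
    for stage in STAGES:
        if stage["name"] == stage_name:
            return stage["train"]
    raise KeyError(stage_name)

print(sorted(trainable("boundary")))  # ['w_b']
```

In a deep learning framework, freezing a group would correspond to excluding those parameters from the optimizer (or disabling their gradients) during that stage.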
In the test phase there is no backpropagation to learn parameters; only the final prediction probabilities are needed. All loss functions in the network are therefore removed and replaced by the corresponding activation functions: the softmax loss is replaced by a softmax activation, and the sigmoid cross-entropy loss by a sigmoid activation. During testing, taking an image as input directly yields, for each pixel, the probability of belonging to each class, and the class with the largest predicted probability is chosen as the prediction for that pixel.
In the test phase the images undergo no data augmentation; the original image minus its mean is used directly as input, yielding the instance segmentation result.
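The test-time preprocessing described above (no augmentation, subtract the image mean) amounts to no more than the following (a NumPy sketch):

```python
import numpy as np

def prepare_test_input(image):
    """At test time there is no augmentation: the original image minus its
    own mean is fed directly to the network."""
    return image.astype(np.float32) - image.mean()

out = prepare_test_input(np.array([[1, 3]]))
print(out.tolist())  # [[-1.0, 1.0]]
```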
It can be seen from the medical image instance segmentation method according to the embodiment of the present invention that creating a multi-channel neural network improves segmentation accuracy. Most pixels in an image are non-boundary pixels, so boundary and non-boundary classes are severely imbalanced and the network's loss function value is relatively small; combined with gradient vanishing during backpropagation, this makes the lower layers of the network converge very slowly or hardly at all. The boundary detection channel therefore uses a Deeply Supervised Net (DSN): deep supervision not only speeds up the convergence of the network but also lets the lower layers learn features with stronger representational power, helps balance positive and negative examples, and integrates features at multiple scales from different depths. The present invention provides an efficient and robust method for the automatic segmentation and diagnosis of medical pathology slide images, supporting the development of computer-aided diagnosis.
Fig. 3 is a schematic diagram of the main modules of the medical image instance segmentation device according to an embodiment of the present invention.
As shown in Fig. 3, the medical image instance segmentation device 300 of the embodiment of the present invention mainly includes: an image preprocessing module 301, a network construction module 302, a region segmentation and boundary detection module 303, and a fusion module 304. Here:
The image preprocessing module 301 may be used to perform data augmentation and preprocessing on the medical image; the network construction module 302 may be used to construct the multi-channel neural network; the region segmentation and boundary detection module 303 may be used, through the multi-channel neural network, to classify the medical image, obtaining a foreground/background classification result, and to perform boundary segmentation on the structures in the augmented and preprocessed medical image, obtaining a boundary detection result; and the fusion module 304 may be used, through the fusion network, to merge the two results and obtain the final segmentation result, segmenting the objects in the image independently.
From the above it can be seen that creating a multi-channel neural network improves segmentation accuracy. Most pixels in an image are non-boundary pixels, so boundary and non-boundary classes are severely imbalanced and the network's loss function value is relatively small; combined with gradient vanishing during backpropagation, this makes the lower layers of the network converge very slowly or hardly at all. The boundary detection channel therefore uses a Deeply Supervised Net (DSN): deep supervision not only speeds up the convergence of the network but also lets the lower layers learn features with stronger representational power, helps balance positive and negative examples, and integrates features at multiple scales from different depths. The present invention provides an efficient and robust method for the automatic segmentation and diagnosis of medical pathology slide images, supporting the development of computer-aided diagnosis.
According to embodiments of the invention, the present invention also provides an electronic device and a readable medium.
The electronic device of the present invention includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the medical image instance segmentation method of the embodiments of the present invention.
The computer-readable medium of the present invention stores a computer program which, when executed by a processor, causes the computer to perform the medical image instance segmentation method of the embodiments of the present invention.
Fig. 4 is a schematic structural diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the present application.
As shown in Fig. 4, the computer system 400 is suitable for implementing a terminal device of the embodiment of the present application. The terminal device shown in Fig. 4 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 4, the computer system 400 includes a central processing unit (CPU) 401 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage portion 408 into a random access memory (RAM) 403. The RAM 403 also stores the various programs and data the system 400 needs in order to operate. The CPU 401, ROM 402 and RAM 403 are connected to one another by a bus 404, and an input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input portion 406 including a keyboard, a mouse and the like; an output portion 407 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker and the like; a storage portion 408 including a hard disk and the like; and a communication portion 409 including a network interface card such as a LAN card or a modem. The communication portion 409 performs communication processing via a network such as the Internet. A driver 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 410 as needed, so that the computer program read from it can be installed into the storage portion 408 as needed.
In particular, according to the disclosed embodiments of the present invention, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment disclosed by the present invention includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409 and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, it performs the above-described functions defined in the system of the present application.
It should be noted that the computer-readable medium shown in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program can be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, capable of sending, propagating, or transmitting a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted over any appropriate medium, including but not limited to: wireless links, wires, optical cables, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions, and operations that may be implemented by systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented in software or in hardware. The described modules may also be provided in a processor; for example, they may be described as: a processor including an image preprocessing module, a region segmentation module, a boundary detection module, and a fusion module. The names of these units do not, in some cases, limit the units themselves; for example, the image preprocessing module may also be described as "a data enhancement and preprocessing module for medical images".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the device described in the above embodiments, or may exist separately without being assembled into that device. The above computer-readable medium carries one or more programs which, when executed by the device, cause the device to: perform data enhancement and preprocessing on a medical image; construct a multi-channel neural network; classify the medical image through the multi-channel neural network to obtain classification results for foreground and background, and perform boundary segmentation on the gland structures in the medical image to obtain a boundary segmentation result; and fuse the above two results through a fusion network to obtain a final segmentation result in which the objects in the image are segmented into independent instances.
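The fusion idea described above can be illustrated with a minimal sketch (illustrative only; in the invention the fusion is performed by a learned fully convolutional network, and the function name here is hypothetical): foreground pixels that are not boundary pixels form instance interiors, and connected-component labeling then separates them into individual instances.

```python
import numpy as np

def fuse_region_and_boundary(foreground, boundary):
    """Fuse a foreground/background mask with a boundary mask into an
    instance label map. Pixels that are foreground but not boundary form
    instance interiors; a simple 4-connected flood fill then assigns a
    distinct label to each connected interior region."""
    interior = foreground & ~boundary
    labels = np.zeros(interior.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(interior)):
        if labels[seed]:
            continue                       # pixel already belongs to an instance
        current += 1
        labels[seed] = current
        stack = [seed]
        while stack:                       # iterative flood fill (4-connectivity)
            r, c = stack.pop()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < interior.shape[0] and 0 <= nc < interior.shape[1]
                        and interior[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    return labels
```

This makes concrete why the boundary channel matters for *instance* (rather than semantic) segmentation: two glands that touch share a boundary, and removing boundary pixels before labeling is what keeps them from merging into one region.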
The above product can perform the method provided by the embodiment of the present invention, and possesses the corresponding functional modules and beneficial effects of performing that method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
The technical solution according to the embodiments of the present invention constructs a multi-channel neural network, which improves the accuracy of segmentation. Most pixels in an image are non-border pixels, so border and non-border classes are severely imbalanced; as a result the loss function value of the network is relatively small, and gradients tend to vanish during backpropagation, causing the lower layers of the network to converge very slowly or hardly at all. The border detection channel therefore uses a deeply supervised net (Deeply Supervised Net, DSN). Deep supervision not only accelerates the convergence of the network, but also enables the lower layers to learn features with stronger representational power; it helps balance positive and negative examples and integrates features at multiple scales from different depths. The present invention provides an efficient and robust method for the automatic segmentation and diagnosis of medical pathology slice images, and provides support for the development of computer-aided diagnosis.
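The class-imbalance remedy described above can be sketched with a class-balanced cross-entropy of the kind used in HED-style deeply supervised networks (a sketch assuming per-pixel sigmoid probabilities; the function name is hypothetical, not taken from the patent):

```python
import numpy as np

def balanced_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy. Because border pixels are far
    rarer than non-border pixels, each class is weighted by the frequency
    of the *other* class, so the scarce border class is not drowned out.
    `pred` holds probabilities in (0, 1); `target` holds {0, 1} labels."""
    target = target.astype(float)
    n_pos = target.sum()
    n_neg = target.size - n_pos
    beta = n_neg / target.size          # weight for positive (border) pixels
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    loss = -(beta * target * np.log(pred)
             + (1 - beta) * (1 - target) * np.log(1 - pred))
    return loss.mean()
```

In a deeply supervised net, a loss of this form is attached to each side output as well as the final fused output and the per-output losses are summed, so supervision (and its gradient) reaches the lower layers directly rather than only through many layers of backpropagation.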
The above specific embodiments do not limit the scope of protection of the present invention. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A medical image instance segmentation method, characterized by comprising:
performing data enhancement and preprocessing on a medical image;
constructing a multi-channel neural network;
classifying, by the multi-channel neural network, the medical image after the data enhancement and preprocessing to obtain classification results for foreground and background, and performing boundary detection on the structures in the medical image after the data enhancement and preprocessing to obtain a boundary result;
fusing, by a fusion network, the classification results for foreground and background and the boundary result, so as to segment the instances in the medical image and obtain a final instance segmentation result.
2. The method according to claim 1, characterized in that the data enhancement and preprocessing include at least one of the following: rotation, scaling, translation, shearing, mirroring, and elastic deformation.
3. The method according to claim 1, characterized in that the multi-channel neural network includes a region segmentation channel and a boundary detection channel.
4. The method according to claim 1, characterized in that the fusion network is a fully convolutional neural network.
5. A medical image instance segmentation apparatus, characterized by comprising:
an image preprocessing module, configured to perform data enhancement and preprocessing on a medical image;
a network construction module, configured to construct a multi-channel neural network;
a region segmentation and boundary detection module, configured to classify, by the multi-channel neural network, the medical image after the data enhancement and preprocessing to obtain classification results for foreground and background, and to perform boundary detection on the structures in the medical image after the data enhancement and preprocessing to obtain a boundary result;
a fusion module, configured to fuse, by a fusion network, the classification results for foreground and background and the boundary result, so as to segment the instances in the medical image and obtain a final instance segmentation result.
6. The apparatus according to claim 5, characterized in that the data enhancement in the image preprocessing module includes at least one of the following: rotation, scaling, translation, shearing, mirroring, and elastic deformation.
7. The apparatus according to claim 5, characterized in that the multi-channel neural network in the network construction module includes a region segmentation channel and a boundary detection channel.
8. The apparatus according to claim 5, characterized in that the fusion network in the fusion module is a fully convolutional neural network.
9. An electronic device, characterized by comprising:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-4.
10. A computer-readable medium, on which a computer program is stored, characterized in that, when the program is executed by a processor, the method according to any one of claims 1-4 is implemented.
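The data enhancement transforms enumerated in claims 2 and 6 can be sketched as follows (rotation and mirroring only; scaling, translation, shearing, and elastic deformation need an interpolation library and are omitted; the function name is hypothetical):

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of a 2D image array: the
    identity, three right-angle rotations, and two mirror flips. These
    cover the 'rotation' and 'mirroring' transforms; the remaining
    claimed transforms (scaling, translation, shearing, elastic
    deformation) require resampling and are not shown here."""
    variants = [image]
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))  # 90/180/270 degree rotations
    variants.append(np.fliplr(image))        # horizontal mirror
    variants.append(np.flipud(image))        # vertical mirror
    return variants
```

In training, the same transform would be applied jointly to the image and its segmentation labels so that pixel-level annotations stay aligned with the augmented image.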
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810006159.2A CN108090904A (en) | 2018-01-03 | 2018-01-03 | A kind of medical image example dividing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108090904A true CN108090904A (en) | 2018-05-29 |
Family
ID=62181530
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810006159.2A Pending CN108090904A (en) | 2018-01-03 | 2018-01-03 | A kind of medical image example dividing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108090904A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791623A (en) * | 2016-12-09 | 2017-05-31 | 深圳市云宙多媒体技术有限公司 | A kind of panoramic video joining method and device |
CN106940816A (en) * | 2017-03-22 | 2017-07-11 | 杭州健培科技有限公司 | Connect the CT image Lung neoplasm detecting systems of convolutional neural networks entirely based on 3D |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791623A (en) * | 2016-12-09 | 2017-05-31 | 深圳市云宙多媒体技术有限公司 | A kind of panoramic video joining method and device |
CN106940816A (en) * | 2017-03-22 | 2017-07-11 | 杭州健培科技有限公司 | Connect the CT image Lung neoplasm detecting systems of convolutional neural networks entirely based on 3D |
Non-Patent Citations (3)
Title |
---|
SAINING XIE et al.: "Holistically-Nested Edge Detection", 《ARXIV》 *
YAN XU et al.: "Gland Instance Segmentation by Deep Multichannel Side Supervision", 《ARXIV》 *
YAN XU et al.: "Gland Instance Segmentation Using Deep Multichannel Neural Networks", 《IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING》 *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146899A (en) * | 2018-08-28 | 2019-01-04 | 众安信息技术服务有限公司 | CT image jeopardizes organ segmentation method and device |
CN110930427B (en) * | 2018-09-20 | 2022-05-24 | 银河水滴科技(北京)有限公司 | Image segmentation method, device and storage medium based on semantic contour information |
CN110930427A (en) * | 2018-09-20 | 2020-03-27 | 银河水滴科技(北京)有限公司 | Image segmentation method, device and storage medium based on semantic contour information |
CN110163862B (en) * | 2018-10-22 | 2023-08-25 | 腾讯科技(深圳)有限公司 | Image semantic segmentation method and device and computer equipment |
CN110163862A (en) * | 2018-10-22 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Image, semantic dividing method, device and computer equipment |
CN109544560B (en) * | 2018-10-31 | 2021-04-27 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111612808A (en) * | 2019-02-26 | 2020-09-01 | 北京嘀嘀无限科技发展有限公司 | Foreground area acquisition method and device, electronic equipment and storage medium |
CN111612808B (en) * | 2019-02-26 | 2023-12-08 | 北京嘀嘀无限科技发展有限公司 | Foreground region acquisition method and device, electronic equipment and storage medium |
CN109934223B (en) * | 2019-03-01 | 2022-04-26 | 北京地平线机器人技术研发有限公司 | Method and device for determining evaluation parameters of example segmentation result |
CN109934223A (en) * | 2019-03-01 | 2019-06-25 | 北京地平线机器人技术研发有限公司 | A kind of example segmentation determination method, neural network model training method and device neural network based |
CN109948510A (en) * | 2019-03-14 | 2019-06-28 | 北京易道博识科技有限公司 | A kind of file and picture example dividing method and device |
CN109948510B (en) * | 2019-03-14 | 2021-06-11 | 北京易道博识科技有限公司 | Document image instance segmentation method and device |
CN110139067A (en) * | 2019-03-28 | 2019-08-16 | 北京林业大学 | A kind of wild animal monitoring data management information system |
CN110265141A (en) * | 2019-05-13 | 2019-09-20 | 上海大学 | A kind of liver neoplasm CT images computer aided diagnosing method |
CN110265141B (en) * | 2019-05-13 | 2023-04-18 | 上海大学 | Computer-aided diagnosis method for liver tumor CT image |
CN110276289A (en) * | 2019-06-17 | 2019-09-24 | 厦门美图之家科技有限公司 | Generate the method and human face characteristic point method for tracing of Matching Model |
CN110276289B (en) * | 2019-06-17 | 2021-09-07 | 厦门美图之家科技有限公司 | Method for generating matching model and face characteristic point tracking method |
CN111161284B (en) * | 2019-12-31 | 2022-02-11 | 东南大学 | Medical image bone segmentation method based on combination of PSPNet and HED |
CN111179275A (en) * | 2019-12-31 | 2020-05-19 | 电子科技大学 | Medical ultrasonic image segmentation method |
CN111179275B (en) * | 2019-12-31 | 2023-04-25 | 电子科技大学 | Medical ultrasonic image segmentation method |
CN111161284A (en) * | 2019-12-31 | 2020-05-15 | 东南大学 | Medical image bone segmentation method based on combination of PSPNet and HED |
CN111462060A (en) * | 2020-03-24 | 2020-07-28 | 湖南大学 | Method and device for detecting standard section image in fetal ultrasonic image |
WO2021151272A1 (en) * | 2020-05-20 | 2021-08-05 | 平安科技(深圳)有限公司 | Method and apparatus for cell image segmentation, and electronic device and readable storage medium |
CN113724269A (en) * | 2021-08-12 | 2021-11-30 | 浙江大华技术股份有限公司 | Example segmentation method, training method of example segmentation network and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108090904A (en) | A kind of medical image example dividing method and device | |
CN111369563B (en) | Semantic segmentation method based on pyramid void convolutional network | |
EP3961484A1 (en) | Medical image segmentation method and device, electronic device and storage medium | |
CN111507993B (en) | Image segmentation method, device and storage medium based on generation countermeasure network | |
CN110428432B (en) | Deep neural network algorithm for automatically segmenting colon gland image | |
CN108109152A (en) | Medical Images Classification and dividing method and device | |
Zeng et al. | A machine learning model for detecting invasive ductal carcinoma with Google Cloud AutoML Vision | |
CN110287960A (en) | The detection recognition method of curve text in natural scene image | |
CN110276402B (en) | Salt body identification method based on deep learning semantic boundary enhancement | |
CN111563902A (en) | Lung lobe segmentation method and system based on three-dimensional convolutional neural network | |
CN109711448A (en) | Based on the plant image fine grit classification method for differentiating key field and deep learning | |
CN109635812B (en) | The example dividing method and device of image | |
CN109325589A (en) | Convolutional calculation method and device | |
CN112150476A (en) | Coronary artery sequence vessel segmentation method based on space-time discriminant feature learning | |
CN110705403A (en) | Cell sorting method, cell sorting device, cell sorting medium, and electronic apparatus | |
CN111275686B (en) | Method and device for generating medical image data for artificial neural network training | |
CN108229485A (en) | For testing the method and apparatus of user interface | |
CN110853009A (en) | Retina pathology image analysis system based on machine learning | |
CN106651887A (en) | Image pixel classifying method based convolutional neural network | |
CN111932529A (en) | Image segmentation method, device and system | |
WO2024016812A1 (en) | Microscopic image processing method and apparatus, computer device, and storage medium | |
CN110147753A (en) | The method and device of wisp in a kind of detection image | |
CN114550169A (en) | Training method, device, equipment and medium for cell classification model | |
CN114842238A (en) | Embedded mammary gland ultrasonic image identification method | |
CN111899259A (en) | Prostate cancer tissue microarray classification method based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180529 |