CN110211130A - Image partition method, computer equipment and storage medium - Google Patents
Image partition method, computer equipment and storage medium
- Publication number
- Publication number: CN110211130A (application number CN201910417977.6A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- image
- channel number
- split
- convolution operation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The present invention relates to an image segmentation method, a computer device, and a storage medium. The method uses a segmentation network to segment an input image containing multiple structures to be segmented. The segmentation network comprises multiple layers of convolution operations, and at least one of these layers is a network structure consisting of a first convolution, a second convolution, and a third convolution. Because the output channel number of the first convolution, the input and output channel numbers of the second convolution, and the input channel number of the third convolution are all compressed by a preset ratio, this three-convolution operation keeps the same overall input and output channel numbers as a conventional convolution operation while greatly reducing the number of model parameters. This in turn reduces the parameter count of a segmentation network containing at least one such layer, and therefore reduces the memory and GPU-memory consumption incurred when the network performs image segmentation.
Description
Technical field
This application relates to the field of medical image processing, and in particular to an image segmentation method, a computer device, and a storage medium.
Background art

With the popularization of deep learning, more and more research institutions have entered the field. In medical image analysis, convolutional neural networks are commonly used for classification, detection, and segmentation of medical images, and segmentation of medical images is currently an important part of medical detection. Because human tissues and organs are highly complex and may contain many substructures — a brain atlas, for example, can contain hundreds or even thousands of substructures — a variety of convolutional-neural-network methods for segmenting large numbers of substructures have emerged.

These methods fall mainly into two classes. The first class trains a separate segmentation model for each of the many substructures and then applies the trained models to segment each substructure contained in the input image. The second class trains a single segmentation model over all the substructures and uses it to segment every substructure in the input image in a single pass.

However, both approaches suffer from high memory and GPU-memory consumption.
Summary of the invention
Based on this, in view of the above technical problems, it is necessary to provide an image segmentation method, a computer device, and a storage medium that can effectively reduce memory and GPU-memory consumption.

In a first aspect, an image segmentation method comprises:

obtaining an image to be segmented, the image containing multiple structures to be segmented;

inputting the image to be segmented into a segmentation network for segmentation to obtain a segmented multi-structure image, the segmentation network comprising multiple layers of convolution operations, wherein at least one layer of convolution operation comprises:

performing a first convolution on the image output by the previous convolution layer using a single (1 × 1 × 1) convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being the input channel number of the first convolution compressed by a preset ratio;

performing a second convolution on the first convolution feature map using an original (3 × 3 × 3) convolution kernel to obtain a second convolution feature map, the input and output channel numbers of the second convolution both being equal to the output channel number of the first convolution;

performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and the output channel number of the third convolution being the input channel number of the first convolution.
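The three-convolution layer of the first aspect can be sketched as a bottleneck block in PyTorch. This is a minimal sketch under stated assumptions — 3D feature maps, a 3 × 3 × 3 "original" kernel with padding 1, and bias-free convolutions; the class and variable names are illustrative and not taken from the patent:

```python
import torch
import torch.nn as nn

class BottleneckConv3d(nn.Module):
    """First conv (1x1x1) compresses N -> N/C channels, second conv (3x3x3)
    stays at N/C, third conv (1x1x1) expands N/C -> N, so the block's overall
    input and output channel numbers match a plain 3x3x3 convolution."""
    def __init__(self, channels: int, ratio: int):
        super().__init__()
        squeezed = channels // ratio
        self.conv1 = nn.Conv3d(channels, squeezed, kernel_size=1, bias=False)
        self.conv2 = nn.Conv3d(squeezed, squeezed, kernel_size=3, padding=1, bias=False)
        self.conv3 = nn.Conv3d(squeezed, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv3(self.conv2(self.conv1(x)))

block = BottleneckConv3d(channels=64, ratio=4)
out = block(torch.zeros(1, 64, 8, 8, 8))
params = sum(p.numel() for p in block.parameters())
print(out.shape, params)  # torch.Size([1, 64, 8, 8, 8]) 8960
```

Note that the spatial size and the channel number of the output equal those of the input, which is what lets such a block replace a conventional convolution layer inside the network without changing any neighboring layer.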
In one embodiment, the segmentation network comprises a down-sampling section and an up-sampling section; the down-sampling section contains at least one such layer of convolution operation, or the up-sampling section contains at least one such layer of convolution operation.

In one embodiment, the image to be segmented is either a whole image or multiple region-block images, the region-block images together carrying all image features of the whole image.

In one embodiment, the center coordinates of the region-block images are randomly selected from the coordinates of non-background points in the image to be segmented.

In one embodiment, the size of a region-block image is a multiple of 16.

In one embodiment, the specific value of the preset ratio is any number from two to eight.

In one embodiment, the segmentation network is a fully convolutional neural network.

In one embodiment, the fully convolutional neural network is a V-shaped fully convolutional neural network.
In a second aspect, an image segmentation device comprises:

an obtaining module for obtaining an image to be segmented, the image containing multiple structures to be segmented; and

a segmentation module for inputting the image to be segmented into a segmentation network for segmentation to obtain a segmented multi-structure image, the segmentation network comprising multiple layers of convolution operations, wherein at least one layer of convolution operation comprises:

performing a first convolution on the image output by the previous convolution layer using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being the input channel number of the first convolution compressed by a preset ratio;

performing a second convolution on the first convolution feature map using an original convolution kernel to obtain a second convolution feature map, the input and output channel numbers of the second convolution both being the output channel number of the first convolution; and

performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and the output channel number of the third convolution being the input channel number of the first convolution.
In a third aspect, a computer device comprises a memory and a processor, the memory storing a computer program, and the processor implementing the image segmentation method of any embodiment of the first aspect when executing the computer program.

In a fourth aspect, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the image segmentation method of any embodiment of the first aspect.
In the image segmentation method, computer device, and storage medium provided by the present application, a segmentation network segments an input image containing multiple structures to be segmented. The segmentation network comprises multiple layers of convolution operations, and at least one layer comprises: performing a first convolution on the image output by the previous layer using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being its input channel number compressed by a preset ratio; performing a second convolution on the first convolution feature map using an original convolution kernel to obtain a second convolution feature map, the input and output channel numbers of the second convolution both being the output channel number of the first convolution; and performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and its output channel number being the input channel number of the first convolution. In this operation, the first and third convolutions use single convolution kernels and only the second uses an original convolution kernel, and the output channel number of the first convolution, the input and output channel numbers of the second convolution, and the input channel number of the third convolution are all compressed by the preset ratio. As is well known, the kernel size together with the input and output channel numbers determines the number of model parameters of a convolution operation; since the three-convolution operation contains only one original convolution kernel while its overall input and output channel numbers remain unchanged relative to a conventional convolution operation, its parameter count is greatly reduced. This in turn reduces the parameter count of the segmentation network containing at least one such layer, and therefore the memory and GPU-memory consumption when the network performs image segmentation.
Brief description of the drawings

Fig. 1 is a schematic diagram of the internal structure of a computer device provided by an embodiment;

Fig. 2 is a flowchart of an image segmentation method provided by an embodiment;

Fig. 3 is a schematic diagram of a connection mode of convolution operations provided by an embodiment;

Fig. 4 is a schematic structural diagram of a convolution operation provided by an embodiment;

Fig. 5 is a schematic structural diagram of a convolution operation provided by an embodiment;

Fig. 6 is a schematic structural diagram of a network structure provided by an embodiment;

Fig. 7 is a flowchart of a training method for a segmentation network provided by an embodiment;

Fig. 8 is a schematic structural diagram of an image segmentation device provided by an embodiment;

Fig. 9 is a schematic structural diagram of an image segmentation device provided by an embodiment.
Detailed description of embodiments

To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the application and do not limit it.

The image segmentation method provided by the present application can be applied to a computer device as shown in Fig. 1. The computer device may be a terminal, and its internal structure may be as shown in Fig. 1. The computer device comprises a processor, a memory, a network interface, a display screen, and an input unit connected by a system bus. The processor provides computing and control capability. The memory comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program. The network interface communicates with external terminals over a network connection. When executed by the processor, the computer program implements an image segmentation method. The display screen may be a liquid-crystal display or an electronic-ink display; the input unit may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the device housing, or an external keyboard, touchpad, mouse, or the like.

Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the parts relevant to the present solution and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.

The technical solution of the present application, and how it solves the above technical problem, are described in detail below through embodiments in conjunction with the drawings. The following embodiments may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of an image segmentation method provided by an embodiment. The execution subject of this embodiment is the computer device shown in Fig. 1, and the embodiment concerns the specific process by which the computer device uses a segmentation network to segment an image to be segmented. As shown in Fig. 2, the method comprises:
S101: obtain an image to be segmented, the image containing multiple structures to be segmented.

Here, the image to be segmented is the image on which multi-structure segmentation is currently required. It may include, without limitation, a conventional CT image, an MRI image, or a PET-MRI image; this embodiment imposes no restriction. The image to be segmented contains multiple structures to be segmented — possibly hundreds or even thousands. A typical brain image, for example, may contain thousands of substructures: the brain includes structures such as white matter, gray matter, and cerebrospinal fluid; white matter in turn includes the corpus callosum, brain stem, and so on; gray matter includes the frontal lobe, temporal lobe, parietal lobe, basal ganglia, and so on; the corpus callosum includes the body, rostrum, splenium, etc.; the temporal lobe includes the superior temporal gyrus, middle temporal gyrus, inferior temporal gyrus, temporal pole, etc.; and the basal ganglia include the amygdala, caudate nucleus, putamen, globus pallidus, and so on. This embodiment imposes no restriction on the structure types in the image to be segmented — they may for example be brain structures, vascular structures, organ structures, or lymphatic structures; any image containing multiple structures falls within the protection scope of this embodiment.
In addition, optionally, the image to be segmented is either a whole image or multiple region-block images that together carry all image features of the whole image. The whole image has the same size as the original image to be segmented, while a region-block image is a part cropped from the whole image; it may be rectangular or square, as long as the combined region-block images contain all the image features of the whole image, which this embodiment does not restrict.

In practice, the computer device can obtain the image to be segmented by scanning a human organ or tissue structure through a connected scanning device. Optionally, the computer device may also obtain the image to be segmented directly from a database or by downloading it from the Internet; this embodiment imposes no restriction. Note that, in the process of obtaining the image to be segmented, the computer device may obtain the region-block images by various methods: for example, it can extract fixed-size region blocks from the whole image by the sliding-window method, or preset windows of different sizes and extract region blocks of the corresponding sizes from the whole image, with the extracted region blocks together containing all image features of the whole image.
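The region-block extraction described above can be sketched as follows. This is a minimal sketch, assuming a 3D volume stored as a NumPy array with background voxels equal to zero; the patch size of 16 follows the "multiple of 16" embodiment, and the clipping strategy at volume borders is an assumption, not specified by the patent:

```python
import numpy as np

def random_patch(volume: np.ndarray, size: int = 16, rng=None) -> np.ndarray:
    """Extract one cubic region block centered (up to border clipping) on a
    randomly chosen non-background voxel of `volume`."""
    rng = np.random.default_rng(rng)
    nonzero = np.argwhere(volume != 0)          # candidate center coordinates
    center = nonzero[rng.integers(len(nonzero))]
    start = [int(np.clip(c - size // 2, 0, dim - size))
             for c, dim in zip(center, volume.shape)]
    sl = tuple(slice(s, s + size) for s in start)
    return volume[sl]

vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[10:20, 10:20, 10:20] = 1.0                  # a toy foreground structure
patch = random_patch(vol, size=16, rng=0)
print(patch.shape)  # (16, 16, 16)
```

Centering patches on non-background voxels keeps most extracted blocks on actual anatomy rather than empty space, which matches the embodiment in which center coordinates are drawn from non-background points.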
S102: input the image to be segmented into the segmentation network for segmentation to obtain a segmented multi-structure image.

Here, the segmentation network comprises multiple layers of convolution operations, and at least one layer of convolution operation comprises: performing a first convolution on the image output by the previous convolution layer using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being its input channel number compressed by a preset ratio; performing a second convolution on the first convolution feature map using an original convolution kernel to obtain a second convolution feature map, the input and output channel numbers of the second convolution both being the output channel number of the first convolution; and performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and its output channel number being the input channel number of the first convolution.
Optionally, the segmentation network may be a three-dimensional convolutional neural network capable of segmenting a large number of substructures in a single pass. The segmentation network of this embodiment comprises multiple layers of convolution operations; the compositions of the layers may be identical or, optionally, different. Specifically, the layers may be connected as shown in Fig. 3: the output of convolution operation #1 feeds the inputs of operations #2 and #9; the output of #2 feeds the inputs of #3 and #8; the output of #3 feeds the inputs of #4 and #7; the output of #4 feeds the inputs of #5 and #6; and the output of #5 feeds the input of #6. In addition, the input or output channel number of a lower layer is twice the input or output channel number of the layer above it. Note that the network structure in Fig. 3 only illustrates the connection relationships between convolution operations schematically and does not limit the number of operation layers or the channel numbers; this embodiment imposes no restriction.
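The connection pattern of Fig. 3 — each encoder level feeding both the next level down and a mirrored level on the way back up, with channel numbers doubling per level — can be illustrated with a small pure-Python sketch. The base channel number and level count below are illustrative; Fig. 3 fixes neither:

```python
def encoder_decoder_plan(base_channels: int, levels: int):
    """Return (channel numbers per encoder level, skip-connection pairs) for
    a V-shaped network in which channel numbers double at every level and
    encoder operation i connects to its mirror decoder operation."""
    enc = [base_channels * 2 ** i for i in range(levels)]
    # with 4 levels plus a bottleneck, operations #1..#4 pair with #9..#6
    # around the bottleneck operation #5, as in Fig. 3
    total_ops = 2 * levels + 1
    skips = [(i + 1, total_ops - i) for i in range(levels)]
    return enc, skips

channels, skips = encoder_decoder_plan(base_channels=16, levels=4)
print(channels)  # [16, 32, 64, 128]
print(skips)     # [(1, 9), (2, 8), (3, 7), (4, 6)]
```

The printed skip pairs reproduce the connections listed in the text (#1 to #9, #2 to #8, #3 to #7, #4 to #6), and the channel list reflects the stated doubling between adjacent layers.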
Optionally, a convolution operation in the segmentation network convolves its input image with certain convolution kernels, and the input and output channel numbers of the operation are identical. The operation may contain the network structure of a single convolution, or the network structures of multiple convolutions. When the operation contains a single convolution, that convolution may use either a single convolution kernel (kernel size 1 × 1 × 1) or an original convolution kernel (kernel size 3 × 3 × 3). When the operation contains multiple convolutions, the kernels used by the individual network structures may be the same or different. Examples follow:
In a first application scenario, the convolution operation can adopt the structure shown in Fig. 4: it contains the network structure of a single convolution (11 in the figure), whose input and output channel numbers are both N (N an integer greater than or equal to 1) and whose kernel is an original convolution kernel (3 × 3 × 3). As is well known, the number of model parameters of a network structure can be computed by the following relation (1):

M = N1 × N2 × a × a × a    (1)

where M is the number of model parameters, N1 the input channel number, N2 the output channel number, and a the kernel size. In the first application scenario, substituting the input and output channel numbers and the kernel size of convolution 11 into relation (1) gives the parameter count of the convolution operation: N × N × 3 × 3 × 3 = 27N².
In a second application scenario, the convolution operation can adopt the structure shown in Fig. 5: it contains the network structures of multiple convolutions (22, 33, and 44 in the figure). The input channel number of convolution 22 is N, its output channel number is compressed by the preset ratio C to N/C, and its kernel is a single convolution kernel (1 × 1 × 1). The input and output channel numbers of convolution 33 are both N/C, and its kernel is an original convolution kernel (3 × 3 × 3). The input channel number of convolution 44 is N/C, its output channel number is N, and its kernel is a single convolution kernel (1 × 1 × 1). In this scenario, substituting the channel numbers and kernel sizes of convolutions 22, 33, and 44 into relation (1) and summing the parameter counts of the three network structures gives the parameter count of the convolution operation: N × N/C × 1 × 1 × 1 + N/C × N/C × 3 × 3 × 3 + N/C × N × 1 × 1 × 1 = (2/C + 27/C²) × N².
Note that the preset ratio C is a proportionality coefficient for reducing the parameter count of the convolution operation. Its specific value can be determined according to the actual application — for example, it may take any value from 2 to 8. Different values of C yield different parameter counts when computed by relation (1).
As an example, in the two application scenarios above, when the input and output channel number N of the convolution operation is 64 and the preset ratio C takes the values 2, 4, and 8, computing the parameter counts in both scenarios with relation (1) gives the results in Table 1:

Table 1

C | Scenario 1 parameters | Scenario 2 parameters | Reduction
2 | 110592 | 31744 | ≈ 71.3%
4 | 110592 | 8960  | ≈ 91.9%
8 | 110592 | 2752  | ≈ 97.5%
As Table 1 shows, the parameter count of the convolution operation in the second application scenario is greatly reduced, and the amount of reduction varies with the preset ratio.
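The comparison around Table 1 can be reproduced directly from relation (1). The short pure-Python check below computes the parameter counts for both scenarios at N = 64; the channel numbers and kernel sizes are those stated in the text:

```python
def conv_params(n_in: int, n_out: int, a: int) -> int:
    """Relation (1): M = N1 x N2 x a^3."""
    return n_in * n_out * a ** 3

N = 64
standard = conv_params(N, N, 3)                    # scenario 1: one 3x3x3 conv
for C in (2, 4, 8):
    bottleneck = (conv_params(N, N // C, 1)        # first conv, 1x1x1
                  + conv_params(N // C, N // C, 3) # second conv, 3x3x3
                  + conv_params(N // C, N, 1))     # third conv, 1x1x1
    print(C, standard, bottleneck, f"{1 - bottleneck / standard:.1%}")
# 2 110592 31744 71.3%
# 4 110592 8960 91.9%
# 8 110592 2752 97.5%
```

The loop output confirms that the reduction grows with C, consistent with the closed form (2/C + 27/C²) × N² for the second scenario.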
In summary, in the actual process of segmenting the input image with the above segmentation network, when the computer device obtains the image to be segmented in S101, it may first preprocess the image — for example, with a sequence of operations such as rotation, resampling, skull stripping, non-uniformity correction, histogram matching, and gray-level normalization. The preprocessed image is then input into the segmentation network described in S102, which segments it and produces, in a single pass, a segmented image containing multiple segmented structures, completing large-scale structure segmentation.
In the image segmentation method provided by this embodiment, a segmentation network segments an input image containing multiple structures to be segmented. The segmentation network comprises multiple layers of convolution operations, and at least one layer comprises: performing a first convolution on the image output by the previous layer using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being its input channel number compressed by a preset ratio; performing a second convolution on the first convolution feature map using an original convolution kernel to obtain a second convolution feature map, the input and output channel numbers of the second convolution both being the output channel number of the first convolution; and performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and its output channel number being the input channel number of the first convolution. The operation contains the network structures of three convolutions, of which the first and third use single convolution kernels and only the second uses an original convolution kernel, and the output channel number of the first convolution, the input channel number of the second convolution, and the input channel number of the third convolution are all compressed by a certain ratio. As is well known, the kernel size together with the input and output channel numbers determines the number of model parameters of a convolution operation. Compared with a conventional convolution operation, the three-convolution operation contains only one original convolution kernel while its overall input and output channel numbers remain unchanged, so its parameter count is greatly reduced. This in turn reduces the parameter count of the segmentation network containing at least one such layer, and therefore the memory and GPU-memory consumption when the network performs image segmentation.
Based on the foregoing description, optionally, the present application also provides a segmentation network. As shown in Fig. 6, the segmentation network includes a down-sampling section and an up-sampling section; the down-sampling section contains at least one layer of the above convolution operation, or the up-sampling section contains at least one layer of the above convolution operation.
The structures of the convolution operations contained in the down-sampling section may be identical or different, and likewise for the convolution operations contained in the up-sampling section. When the convolution operations in the down-sampling or up-sampling section differ in structure, some layers may use the convolution operation shown in Fig. 4 and other layers the convolution operation shown in Fig. 5; when the convolution operations are identical in structure, each convolution operation in this embodiment may use the convolution operation shown in Fig. 5.
Specifically, the segmentation network may take many structural forms. The segmentation network provided by the present application may specifically be a fully convolutional neural network, optionally a V-shaped fully convolutional neural network, commonly known as V-Net. Because the segmentation network used in this embodiment is fully convolutional, images of arbitrary size can be fed into it for segmentation, and the network can be trained with the patch training method, which effectively reduces the memory and video-memory consumption of the computer device.
Based on the segmentation network described above, in one embodiment the present application also provides a method of training the segmentation network. Fig. 7 shows a training method for the segmentation network provided by one embodiment; this embodiment concerns the training process of the segmentation network. As shown in Fig. 7, the method comprises:
S201: obtaining multiple sample images, where a sample image is either a whole image or multiple region-block images, and the multiple region-block images together carry all the image features of the whole image.
Here, a whole image is the entire sample image. A region-block image, commonly called a patch image, is a partial image cropped from the entire sample image. A patch image may be rectangular or square; it suffices that the image features contained in the combination of the multiple patch images match all the image features of the entire sample image, and this embodiment imposes no restriction in this respect.
In practical applications, the computer device may obtain multiple patch images according to the actual application requirements and feed them into the segmentation network for training. For example, the computer device may extract multiple patch images of a fixed size from the sample image by the sliding-window method, or it may preset windows of different sizes and extract multiple patch images of the corresponding sizes from the sample image, such that the extracted patch images together contain all the image features of the entire sample image.
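The fixed-size sliding-window extraction mentioned above can be sketched as follows (a minimal NumPy illustration, not code from the application; `extract_patches`, `patch_size` and `stride` are names invented for this example):

```python
import numpy as np


def extract_patches(volume, patch_size, stride):
    """Collect every fully contained cubic patch of edge `patch_size`,
    sliding the window by `stride` along each axis of the 3D volume."""
    d, h, w = volume.shape
    patches = []
    for z in range(0, d - patch_size + 1, stride):
        for y in range(0, h - patch_size + 1, stride):
            for x in range(0, w - patch_size + 1, stride):
                patches.append(volume[z:z + patch_size,
                                      y:y + patch_size,
                                      x:x + patch_size])
    return patches


vol = np.zeros((8, 8, 8))
print(len(extract_patches(vol, 4, 4)))  # two window positions per axis
```

Windows of different preset sizes, as described above, would simply call this routine once per window size.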
As an example, in one training process of the segmentation network, suppose the entire sample image obtained by the computer device has a size of 256 × 256 × 256, the segmentation network to be trained is a fully convolutional neural network, and the maximum stride of that network is 16. Effective training can then be completed as long as the patch size is a multiple of 16; the patch size chosen during training in this embodiment therefore includes, but is not limited to, 96 × 96 × 96.
Optionally, after the patch size has been determined, the center coordinates of the patch images must also be determined. In this embodiment, the center coordinates of a patch image are randomly selected from the non-background point coordinates in the whole image. This avoids selecting patch images that are entirely background, reducing the memory and video-memory consumption that invalid patch images would cause during training; in particular, when dealing with sample images in large batches, it can greatly reduce memory and video-memory consumption.
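The random selection of patch centers from non-background points can be sketched as follows (an illustrative assumption-based example, not the application's implementation; `sample_patch_center` and the foreground test `volume > 0` are invented for this sketch, and the center is additionally clamped so the whole patch fits inside the volume):

```python
import numpy as np


def sample_patch_center(volume, patch_size, rng):
    """Pick a random non-background voxel as the patch center, then clamp it
    so that the cubic patch of edge `patch_size` lies inside the volume."""
    nz = np.argwhere(volume > 0)          # non-background point coordinates
    if len(nz) == 0:
        raise ValueError("volume contains only background")
    center = nz[rng.integers(len(nz))]
    half = patch_size // 2
    # valid center range: [half, dim - (patch_size - half)] per axis
    return np.clip(center, half, np.array(volume.shape) - (patch_size - half))


vol = np.zeros((32, 32, 32))
vol[10:20, 10:20, 10:20] = 1.0            # a foreground block
c = sample_patch_center(vol, 16, np.random.default_rng(0))
```

Because centers are drawn only from foreground voxels, patches that are entirely background are never produced, matching the motivation described above.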
S202: inputting the sample images into the segmentation network to be trained and training it, to obtain the segmentation network.
In this embodiment, when training the segmentation network, the first training method is to input whole images into the segmentation network to be trained; for example, the size of a whole image may be 256 × 256 × 256. The second training method is to input multiple patch images into the segmentation network to be trained, where the sizes of the patch images may be identical, for example 96 × 96 × 96 each, or, optionally, different.
In the first training process above, when the computer device has obtained multiple sample images each of which is a whole image, it may first preprocess the sample images, for example with a series of operations such as rotation, resampling, skull stripping, image non-uniformity correction, histogram matching and gray-scale normalization, so that all sample images reach a uniform size, for example 256 × 256 × 256. The computer device then inputs the preprocessed sample images into the segmentation network to be trained and trains it, obtaining the segmentation network used afterwards.
In the second training process above, when the computer device has obtained multiple sample images that are patch images, it may first determine the size of each patch image, and then determine the center coordinates of each patch image by the method described in S201. Next, the computer device may extract the patch images from the sample image according to the center coordinates and the size of each patch image. The patch images can then be fed into the segmentation network to be trained, obtaining the segmentation network used afterwards.
In the training method of the segmentation network provided by the above embodiment, the segmentation network to be trained is trained with the patch training method. Because the center coordinates of the selected patch images are non-background point coordinates in the sample image, image processing of invalid patch images is reduced, which in turn reduces the memory and video-memory consumption of the computer device.
It should be understood that although the steps in the flowcharts of Fig. 2 and Fig. 7 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, the execution of these steps is not strictly ordered and they may be executed in other orders. Moreover, at least some of the steps in Fig. 2 and Fig. 7 may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and the execution order of these sub-steps or stages is likewise not necessarily sequential.
In one embodiment, as shown in Fig. 8, an image segmentation apparatus is provided, comprising an acquisition module 101 and a segmentation module 102, wherein:

the acquisition module 101 is configured to obtain an image to be segmented, the image to be segmented containing multiple structures to be segmented;

the segmentation module 102 is configured to input the image to be segmented into a segmentation network for segmentation, obtaining a segmented multi-structure image, the segmentation network including multiple layers of convolution operations, of which at least one layer of convolution operation includes: performing a first convolution on the image output by the upper-layer convolution operation using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being obtained by compressing the input channel number of the first convolution by a preset ratio; performing a second convolution on the first convolution feature map using the original convolution kernel to obtain a second convolution feature map, the input channel number and the output channel number of the second convolution both being the output channel number of the first convolution; and performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and the output channel number of the third convolution being the input channel number of the first convolution.
In one embodiment, as shown in Fig. 9, the above image segmentation apparatus further includes:

a training module 103, configured to obtain multiple sample images, where a sample image is a whole image or multiple region-block images and the multiple region-block images carry all the image features of the whole image, and to input the multiple sample images into the segmentation network to be trained and train it, obtaining the segmentation network.
The image segmentation apparatus provided by the above embodiment has an implementation principle and technical effects similar to those of the above method embodiments, and will not be repeated here.
For specific limitations on the image segmentation apparatus, reference may be made to the limitations on the image segmentation method above, which will not be repeated here. Each module in the above image segmentation apparatus may be implemented wholly or partly in software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:

obtaining an image to be segmented, the image to be segmented containing multiple structures to be segmented;

inputting the image to be segmented into a segmentation network for segmentation, obtaining a segmented multi-structure image, the segmentation network including multiple layers of convolution operations, of which at least one layer of convolution operation includes:

performing a first convolution on the image output by the upper-layer convolution operation using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being obtained by compressing the input channel number of the first convolution by a preset ratio;

performing a second convolution on the first convolution feature map using the original convolution kernel to obtain a second convolution feature map, the input channel number and the output channel number of the second convolution both being the output channel number of the first convolution; and

performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and the output channel number of the third convolution being the input channel number of the first convolution.

The computer device provided by the above embodiment has an implementation principle and technical effects similar to those of the above method embodiments, and will not be repeated here.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, implements the following steps:

obtaining an image to be segmented, the image to be segmented containing multiple structures to be segmented;

inputting the image to be segmented into a segmentation network for segmentation, obtaining a segmented multi-structure image, the segmentation network including multiple layers of convolution operations, of which at least one layer of convolution operation includes:

performing a first convolution on the image output by the upper-layer convolution operation using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being obtained by compressing the input channel number of the first convolution by a preset ratio;

performing a second convolution on the first convolution feature map using the original convolution kernel to obtain a second convolution feature map, the input channel number and the output channel number of the second convolution both being the output channel number of the first convolution; and

performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and the output channel number of the third convolution being the input channel number of the first convolution.

The computer-readable storage medium provided by the above embodiment has an implementation principle and technical effects similar to those of the above method embodiments, and will not be repeated here.
A person of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or another medium used in the embodiments provided by the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the invention patent shall be subject to the appended claims.
Claims (10)
1. An image segmentation method, characterized in that the method comprises:
obtaining an image to be segmented, the image to be segmented containing multiple structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation, obtaining a segmented multi-structure image, the segmentation network including multiple layers of convolution operations, of which at least one layer of convolution operation comprises:
performing a first convolution on the image output by the upper-layer convolution operation using a single convolution kernel to obtain a first convolution feature map, the output channel number of the first convolution being obtained by compressing the input channel number of the first convolution by a preset ratio;
performing a second convolution on the first convolution feature map using the original convolution kernel to obtain a second convolution feature map, the input channel number of the second convolution and the output channel number of the second convolution both being the output channel number of the first convolution; and
performing a third convolution on the second convolution feature map using a single convolution kernel to obtain a third convolution feature map, the input channel number of the third convolution being the output channel number of the second convolution and the output channel number of the third convolution being the input channel number of the first convolution.
2. The method according to claim 1, characterized in that the segmentation network includes a down-sampling section and an up-sampling section; the down-sampling section includes at least one layer of the convolution operation, or the up-sampling section includes at least one layer of the convolution operation.
3. The method according to claim 1, characterized in that the image to be segmented is a whole image or multiple region-block images, the multiple region-block images carrying all the image features of the whole image.
4. The method according to claim 3, characterized in that the center coordinates of a region-block image are randomly selected from the non-background point coordinates in the whole image.
5. The method according to claim 4, characterized in that the size of the region-block image is a multiple of 16.
6. The method according to claim 1, characterized in that the specific value of the preset ratio is any value from two to eight.
7. The method according to claim 1, characterized in that the segmentation network is a fully convolutional neural network.
8. The method according to claim 7, characterized in that the fully convolutional neural network is a V-shaped fully convolutional neural network.
9. A computer device, including a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program implements the steps of the method of any one of claims 1 to 8 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910417977.6A CN110211130A (en) | 2019-05-20 | 2019-05-20 | Image partition method, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110211130A true CN110211130A (en) | 2019-09-06 |
Family
ID=67787807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910417977.6A Pending CN110211130A (en) | 2019-05-20 | 2019-05-20 | Image partition method, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110211130A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107895192A (en) * | 2017-12-06 | 2018-04-10 | 广州华多网络科技有限公司 | Depth convolutional network compression method, storage medium and terminal |
CN109472791A (en) * | 2018-10-31 | 2019-03-15 | 深圳大学 | Ultrasonic image division method and computer equipment |
CN109583576A (en) * | 2018-12-17 | 2019-04-05 | 上海联影智能医疗科技有限公司 | A kind of medical image processing devices and method |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110930290A (en) * | 2019-11-13 | 2020-03-27 | 东软睿驰汽车技术(沈阳)有限公司 | Data processing method and device |
CN112825141A (en) * | 2019-11-21 | 2021-05-21 | 上海高德威智能交通系统有限公司 | Method and device for recognizing text, recognition equipment and storage medium |
CN112825141B (en) * | 2019-11-21 | 2023-02-17 | 上海高德威智能交通系统有限公司 | Method and device for recognizing text, recognition equipment and storage medium |
US11928872B2 (en) | 2019-11-21 | 2024-03-12 | Shanghai Goldway Intelligent Transportation System Co., Ltd. | Methods and apparatuses for recognizing text, recognition devices and storage media |
CN112801282A (en) * | 2021-03-24 | 2021-05-14 | 东莞中国科学院云计算产业技术创新与育成中心 | Three-dimensional image processing method, three-dimensional image processing device, computer equipment and storage medium |
CN113160265A (en) * | 2021-05-13 | 2021-07-23 | 四川大学华西医院 | Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation |
CN113160265B (en) * | 2021-05-13 | 2022-07-19 | 四川大学华西医院 | Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation |
CN113269764A (en) * | 2021-06-04 | 2021-08-17 | 重庆大学 | Automatic segmentation method and system for intracranial aneurysm, sample processing method and model training method |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190906 |