This application is based on and claims the benefit of priority to U.S. Provisional Application No. 62/578,907, filed on October 30, 2017, the entire contents of which are incorporated herein by reference.
DETAILED DESCRIPTION
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used throughout the drawings to refer to the same or similar parts.
Fig. 2 shows an exemplary image segmentation system 200, according to some embodiments of the present disclosure. Consistent with the disclosure, image segmentation system 200 is configured to segment a medical image acquired by an image acquisition device 205. Image segmentation system 200 may receive the medical image directly from image acquisition device 205. Alternatively, the medical image may first be stored in an image database, such as a medical image database 204, and image segmentation system 200 may receive the medical image from that database. In some embodiments, the medical image may be a two-dimensional (2D) or three-dimensional (3D) image; a 3D image may include several 2D image slices.
In some embodiments, image acquisition device 205 may acquire the medical image using any suitable imaging modality, including, for example, functional MRI (e.g., fMRI, DCE-MRI, and diffusion MRI), cone-beam CT (CBCT), spiral CT, positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray imaging, optical tomography, fluorescence imaging, ultrasound imaging, and radiotherapy portal imaging, among others.
For example, image acquisition device 205 may be an MRI scanner. The MRI scanner includes a magnet that surrounds a patient bore with a magnetic field, and the patient is positioned on a padded table that can be moved into the bore. The MRI scanner further includes gradient coils in several directions (e.g., the x, y, and z directions) to create spatially varying magnetic fields superimposed on the uniform magnetic field generated by the magnet. The strength of the uniform magnetic field used by MRI scanners is typically between 0.2 T and 7 T, for example, about 1.5 T or 3 T. The MRI scanner also includes RF coils to excite the tissue inside the patient's body, and a transceiver to receive the electromagnetic signals generated by the tissue as it returns to an equilibrium state.
As another example, image acquisition device 205 may be a CT scanner. The CT scanner includes an X-ray source that emits X-rays toward body tissue and a receiver that receives the residual X-rays after attenuation by the body tissue. The CT scanner also includes a rotating mechanism to capture X-ray images at different view angles. The rotating mechanism may be a turntable that rotates the patient, or a rotating structure that moves the X-ray source and the receiver in a gantry around the patient. The X-ray images of the different angles are then processed by a computer system to construct a two-dimensional (2D) cross-sectional image or a three-dimensional (3D) image.
As shown in Fig. 2, image segmentation system 200 may include components for performing two phases, a training phase and a segmentation phase. To perform the training phase, image segmentation system 200 may include a training database 201 and a model training device 202. To perform the segmentation phase, image segmentation system 200 may include an image processing device 203 and/or medical image database 204. In some embodiments, image segmentation system 200 may include more or fewer of the components shown in Fig. 2. For example, when a segmentation network for segmenting the medical images is pre-trained and provided, image segmentation system 200 may omit training database 201 and model training device 202, and include only image processing device 203 and medical image database 204. As another example, when medical image database 204 is a third-party database or is located remotely from image processing device 203, image segmentation system 200 may include only image processing device 203.
Image segmentation system 200 may optionally include a network 206 to facilitate communication among its various components, such as databases 201 and 204 and devices 202, 203, and 205. For example, network 206 may be a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service), a client-server environment, a wide area network (WAN), the Internet, or the like. In some embodiments, network 206 may be replaced by a wired data communication system or device.
In some embodiments, as shown in Fig. 2, the various components of image segmentation system 200 may be remote from each other or in different locations, and connected through network 206. In some alternative embodiments, certain components of image segmentation system 200 may be located on the same site or within one integrated device. For example, training database 201 may be located on-site with model training device 202, or be part of model training device 202. As another example, model training device 202 and image processing device 203 may be located within the same computer or processing device.
As shown in Fig. 2, model training device 202 may communicate with training database 201 to receive one or more sets of training data. Each set of training data may include a medical image and its corresponding ground-truth label map, which provides the segmentation result for each pixel of the image. The training images stored in training database 201 may be obtained from a medical image database containing previously acquired medical images, and may be 2D or 3D images. The training images may be segmented in advance by classifying each pixel/voxel and assigning it a label value, for example, a value of 1 if the pixel/voxel corresponds to an object of interest (e.g., a cancer region), or a value of 0 if the pixel/voxel corresponds to background (e.g., a non-cancer region).
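The 1/0 labeling above can be sketched as follows. This is a minimal illustration with a synthetic image and a synthetic stand-in for an expert annotation; the variable names are hypothetical and not from the disclosure.

```python
import numpy as np

# Synthetic training image: a small bright "lesion" on a dark background.
image = np.zeros((8, 8), dtype=np.float32)
image[2:5, 3:6] = 1.0

# Stand-in for a manual contour: in practice this comes from an expert.
object_mask = image > 0.5

# Ground-truth label map: 1 for the object of interest, 0 for background.
label_map = np.where(object_mask, 1, 0).astype(np.uint8)
```

Each pixel of `label_map` then supplies the per-pixel segmentation result used as ground truth during training.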
Model training device 202 may use the training data received from training database 201 to train a segmentation network for segmenting medical images received from, for example, image acquisition device 205 or medical image database 204. Model training device 202 may be implemented with hardware specially programmed by software that performs the training process. For example, model training device 202 may include a processor and a non-transitory computer-readable medium, and the processor may conduct the training by executing the instructions of the training process stored in the computer-readable medium. Model training device 202 may additionally include input and output interfaces to communicate with training database 201, network 206, and/or a user interface (not shown). The user interface may be used for selecting sets of training data, adjusting one or more parameters of the training process, selecting or modifying a framework of the learning network, and/or manually or semi-automatically providing detection results associated with an image for training.
As used herein, "training" a learning network refers to determining one or more parameters of at least one layer in the learning network. For example, a convolutional layer of a CNN model may include at least one filter or kernel, and one or more parameters of the filter (such as kernel weights, size, shape, and structure) may be determined by, e.g., a back-propagation-based training process. Consistent with the disclosure, a multi-level learning network may be trained by model training device 202 using the training data.
Consistent with the disclosure, the segmentation network used for segmenting medical images may be a machine learning network, such as a multi-level learning network. The segmentation network may be trained using supervised learning. The architecture of the segmentation network includes a stack of distinct blocks and layers, each of which transforms one or more inputs into one or more outputs. Examples of the different layers include one or more convolutional layers or fully convolutional layers, non-linear operator layers, pooling or subsampling layers, fully connected layers, and/or final loss layers. Each layer may connect one upstream layer and one downstream layer.
Consistent with the disclosure, the segmentation network may include a convolutional ladder, which contains several cascaded convolution blocks that generate feature maps at different levels (resolutions). The disclosed convolutional-ladder-based segmentation network is compact and efficient because: (1) it simplifies the decoder path through multi-resolution feature fusion; (2) it reduces the number of parameters used in the network; and (3) it maintains the spatial resolution during convolution. In some embodiments, the convolutional-ladder-based network architecture is also scalable. In some embodiments, because segmentation results can be generated at several resolutions, the user may control the depth of the convolutional ladder by stopping early once a desired segmentation result is reached. As a result, the disclosed segmentation network can substantially reduce run time without sacrificing accuracy.
For example, Fig. 3 shows an exemplary multi-level learning network 300 for segmenting medical images, according to embodiments of the present disclosure. In some embodiments, multi-level learning network 300 includes cascaded convolution blocks at different levels. For example, multi-level learning network 300 has an initial convolution block 310 at level 0, connected with a parallel convolution block 320 at level 1, which is further connected with a series of parallel convolution blocks at level 2, level 3, ..., level n.
In some embodiments, multi-level learning network 300 uses multi-resolution feature fusion. For example, the feature map of each level is combined with the feature map of the previous level to generate the segmentation result of that level. In conventional networks such as U-Net, half of the computation is dedicated to a decoding network, which continuously merges features of different resolutions to recover the spatial resolution while performing the prediction of the segmented output image. In common segmentation tasks, such as segmenting a cat from a camera scene, high-level global features with larger receptive fields are more critical than local features for making a correct prediction. For such tasks, a decoding network that performs the correct prediction while recovering the spatial resolution may therefore be important and unavoidable. For medical image segmentation tasks, however, local image features can be as important as global features. For example, in a CT image, the intensity of each local voxel is defined on the Hounsfield unit (HU) scale, on which the radiodensity of distilled water is 0 HU and the radiodensity of air is -1000 HU. To roughly mask out the pure air in a CT image, a value slightly above -1000 HU can be used as an image threshold. The disclosed multi-level learning network therefore fuses features of different scales and resolutions to save computational cost.
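The HU-based thresholding mentioned above can be sketched as follows. The CT values here are synthetic, and the exact threshold (-950 HU) is an illustrative choice of "slightly above -1000 HU," not a value specified by the disclosure.

```python
import numpy as np

# Synthetic CT patch in Hounsfield units: air (~-1000 HU), water (0 HU),
# and soft tissue (roughly 40-60 HU).
ct = np.array([[-1000.0, -998.0, 40.0],
               [-1000.0,    0.0, 60.0]])

# A threshold slightly above -1000 HU roughly isolates pure air.
air_mask = ct < -950.0
```

This illustrates why a purely local feature (a single voxel's HU value) already carries strong semantic information in CT images.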
In some embodiments, feature maps of different levels may be extracted successively in the segmentation network (e.g., a CNN). These features may be combined directly, pixel by pixel, and the final decision may be made by fusing them with an additional convolution block. In some embodiments, the combination can be performed at the original spatial resolution, and the subsequent convolution blocks can maintain that spatial resolution, so that the output segmented image has the same resolution as the input image. If the spatial resolution of a feature map is lower than that of the original image due to pooling or other operations, the feature map can be up-sampled accordingly before the combination. The up-sampling may be performed, for example, by a simple interpolation algorithm such as nearest-neighbor, linear, or B-spline interpolation, or by a trained deconvolution layer. For example, as shown in Fig. 3, a level-0 feature map 312 can be generated from an original image 302 (e.g., a medical image acquired by image acquisition device 205) by initial convolution block 310. A feature map 332 can be obtained by generating a feature map 322 from the level-0 feature map 312 with parallel convolution block 320 and then up-sampling feature map 322 with an up-sampling block 330. The level-0 feature map 312 and feature map 332 are combined to generate a level-1 feature map 334.
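The up-sample-then-combine step above can be sketched as follows, using nearest-neighbor interpolation (one of the simple options listed) and pixel-wise addition as the combination rule. The values are synthetic and the function names are hypothetical.

```python
import numpy as np

def upsample_nearest(fmap, factor):
    """Nearest-neighbor up-sampling: repeat each pixel 'factor' times per axis."""
    return np.repeat(np.repeat(fmap, factor, axis=0), factor, axis=1)

# Feature map 322 at half resolution; level-0 feature map 312 at full resolution.
fmap_322 = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
fmap_312 = np.ones((4, 4))

fmap_332 = upsample_nearest(fmap_322, 2)  # back to the original resolution
fmap_334 = fmap_312 + fmap_332            # pixel-wise combination
```

Because the combination happens at full resolution, the subsequent convolution block can produce a segmented image with the same resolution as the input.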
In some embodiments, each parallel convolution block (e.g., parallel convolution block 320) may include several convolutional layers arranged in parallel with each other. For example, Fig. 3 shows k convolutional layers in each parallel convolution block. In some embodiments, an input feature map (e.g., level-0 feature map 312) can be distributed to the different convolutional layers to generate several intermediate feature maps concurrently. In some embodiments, the intermediate feature maps output by these layers can be combined to generate a new convolutional feature map, such as feature map 322. It is also contemplated that the convolutional layers may have different configurations, making it possible to extract and fuse image features at different levels.
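A parallel convolution block of this kind might be sketched as below. The 2D "same" convolution is hand-rolled for self-containment, and combining the intermediate maps by averaging is one illustrative choice; the disclosure only requires that the intermediate feature maps be combined.

```python
import numpy as np

def conv2d_same(x, kernel):
    """3x3 'same' convolution (cross-correlation) with zero padding."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def parallel_conv_block(fmap, kernels):
    """Apply several convolutional layers in parallel to the same input and
    combine their intermediate feature maps (here, by pixel-wise averaging)."""
    intermediates = [conv2d_same(fmap, k) for k in kernels]
    return np.mean(intermediates, axis=0)

identity = np.zeros((3, 3)); identity[1, 1] = 1.0   # passes the input through
blur = np.full((3, 3), 1.0 / 9.0)                   # local averaging kernel
x = np.ones((4, 4))
y = parallel_conv_block(x, [identity, blur])
```

In a trained network the kernels are learned rather than fixed, and the branches may differ in configuration, as the text notes.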
In a conventional CNN, the number of feature-map filters in the segmentation network may be increased continually, because each level needs extra units to "memorize" useful low-level features and pass that information on to non-adjacent layers. The increased number of filters can dramatically increase the number of parameters in the network, and therefore its computational complexity. For example, for a convolutional layer that takes 512 feature maps as input and outputs 1,024 feature maps, the required number of parameters is 512 x 1024 x K, where K is the size of the kernel. This is 512 times the number of parameters of a convolutional layer that takes 32 feature maps as input and outputs 32 feature maps. Because the disclosed segmentation network combines all feature maps when making the prediction, no extra units are needed to pass low-level features to non-adjacent layers. In some embodiments, the high-level image features in certain segmentation tasks (such as medical image segmentation) are not overly complex, and for those tasks the same number of feature maps can be used for each convolution block.
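The parameter-count arithmetic above works out as follows (bias terms ignored; the 3 x 3 x 3 kernel size is an illustrative choice giving K = 27):

```python
def conv_params(in_maps, out_maps, kernel_elems):
    """Weight count of a convolutional layer: in_maps * out_maps * K."""
    return in_maps * out_maps * kernel_elems

K = 3 * 3 * 3                         # e.g., a 3x3x3 kernel, so K = 27
wide = conv_params(512, 1024, K)      # 512 -> 1024 feature maps
narrow = conv_params(32, 32, K)       # 32 -> 32 feature maps
ratio = wide // narrow                # 512x, as stated in the text
```

Keeping the number of feature maps constant per block, as the disclosed network does, avoids this quadratic growth in weights.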
In some embodiments, pooling layers can be introduced into the convolutional neural network and placed between the convolutional layers to down-sample the image. Using pooling layers in this way can increase the receptive field of the succeeding convolutional layers, eliminate redundant spatial features, and drive the network to learn hierarchical information (from local to global). For example, Fig. 4 shows an exemplary multi-level learning network 400, according to embodiments of the present disclosure, that includes a pooling layer in each convolution block. For example, the initial convolution block may include a convolutional layer 412 and a max-pooling layer 414, and each subsequent parallel convolution block may include a convolutional layer 422 and a max-pooling layer 424. In the example shown in Fig. 4, convolutional layers 412 and 422 may use 32 filters with a kernel size of 3 x 3 x 3, and max-pooling layers 414 and 424 may have a stride of 2 in each dimension. As a result, the receptive fields of the layers at the successive levels are 3 x 3 x 3 (convolutional layer 412), 6 x 6 x 6 (max-pooling layer 414), 8 x 8 x 8 (convolutional layer 422), 16 x 16 x 16 (max-pooling layer 424), 18 x 18 x 18 (the level-2 convolutional layer), and 36 x 36 x 36 (the level-2 max-pooling layer), increasing continually across successive layers.
In some other embodiments, dilated convolutions may be used instead of pooling layers to increase the receptive field. Consistent with the disclosure, a dilated convolution may also be referred to as an atrous convolution or a convolution with holes. The operation can enlarge the receptive field of the convolution without introducing additional parameters. If the parameters are chosen properly, the size of the receptive field can grow exponentially with the number of sequentially cascaded convolutional layers. For example, Fig. 5 shows an exemplary multi-level learning network 500, according to embodiments of the present disclosure, that includes a dilated convolutional layer in each convolution block. For example, the initial convolution block may include a dilated convolutional layer 510 using 32 filters with a 3 x 3 kernel size and a 1 x 1 dilation. Each subsequent parallel convolution block includes a dilated convolutional layer 520 using 32 filters with a 3 x 3 kernel size and a 2^i x 2^i dilation; for example, the dilation is 2 x 2 at level 1. Accordingly, the receptive field of the layer at level i is (2^(i+2) - 1) x (2^(i+2) - 1), i.e., 3 x 3 at level 0, 7 x 7 at level 1, 15 x 15 at level 2, and so on. The receptive field of the layers thus continues to increase level by level.
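The exponential receptive-field growth quoted above follows from the standard recurrence for stacked dilated convolutions, where each 3 x 3 layer with dilation d adds (3 - 1) x d to the receptive field. A short sketch, with the dilation at level i set to 2^i as in Fig. 5:

```python
def dilated_receptive_fields(levels, kernel=3):
    """Receptive field after each of 'levels' stacked dilated conv layers,
    where the layer at level i uses dilation 2**i."""
    rf, fields = 1, []
    for i in range(levels):
        rf += (kernel - 1) * (2 ** i)  # each layer adds (k-1) * dilation
        fields.append(rf)
    return fields
```

This reproduces the 3, 7, 15, ... sequence, i.e., 2^(i+2) - 1 at level i, without any pooling and without extra parameters.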
Referring back to Fig. 2, image processing device 203 may receive the segmentation network, e.g., multi-level learning network 300/400/500, from model training device 202. Image processing device 203 may include a processor and a non-transitory computer-readable medium (discussed in detail in connection with Fig. 6). The processor may execute instructions of an image segmentation process stored in the medium. Image processing device 203 may additionally include input and output interfaces (discussed in detail in connection with Fig. 6) to communicate with medical image database 204, network 206, and/or a user interface (not shown). The user interface may be used for selecting a medical image to be segmented, initiating the segmentation process, and displaying the medical image and/or the segmentation result.
Image processing device 203 may communicate with medical image database 204 to receive one or more medical images. In some embodiments, the medical images stored in medical image database 204 may include images of one or more imaging modalities, acquired by image acquisition devices 205 such as an MRI scanner and a CT scanner. Image processing device 203 may use the trained segmentation network received from model training device 202 to predict whether each pixel (if the medical image is 2D) or voxel (if it is 3D) corresponds to an object of interest, and to output the segmented image.
In some embodiments, image processing device 203 may apply multi-level learning network 300 to original image 302. At level 0, image processing device 203 may determine the level-0 feature map 312 by applying initial convolution block 310. At level 1, image processing device 203 may determine feature map 322 by applying parallel convolution block 320 to the level-0 feature map 312. If feature map 322 has a lower spatial resolution than original image 302, image processing device 203 may use up-sampling block 330 to up-sample feature map 322 and obtain a feature map 332 with the same spatial resolution as original image 302. Image processing device 203 may combine feature map 332 with the level-0 feature map 312 to generate the level-1 feature map 334, and may apply another convolution block 340 to the level-1 feature map 334 to obtain the level-1 segmented image 342. In some embodiments, image processing device 203 may continue down the "convolutional ladder" to apply the successive parallel convolution blocks and obtain the segmented images at the different levels, in a manner similar to that used above to obtain the level-1 segmented image 342.
In some embodiments, the segmentation network can be scalable when applied by image processing device 203 to segment an image. In some embodiments, because the segmentation network yields the segmented images of the different levels successively, image processing device 203 can decide to stop the network early once the segmented image at a particular level is good enough. In some embodiments, that decision may be based on the computation of a predefined parameter associated with the segmented images. For example, image processing device 203 may determine that the difference between the segmented image of level i and the segmented image of level (i+1) is smaller than a threshold. In some embodiments, the segmented images of the different levels can be displayed to the user, and the user can manually stop further application of the segmentation network.
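One possible form of the early-stop test above is sketched below. The specific criterion (fraction of pixels that changed between successive levels) and the threshold value are illustrative assumptions; the disclosure only requires some predefined parameter of the segmented images.

```python
import numpy as np

def should_stop(seg_prev, seg_curr, threshold=0.01):
    """Stop descending the convolutional ladder when the fraction of pixels
    that differ between successive levels falls below 'threshold'."""
    changed = np.mean(seg_prev != seg_curr)
    return bool(changed < threshold)

seg_level_1 = np.zeros((10, 10), dtype=np.uint8)
seg_level_2 = seg_level_1.copy()  # identical segmentations -> stop early
```

When successive levels no longer change the result, further refinement offers little benefit, so the remaining convolution blocks can be skipped.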
In some embodiments, the number of levels in the segmentation network can be predetermined and set by model training device 202. For example, model training device 202 may determine the size of the network based on testing before providing the segmentation network to image processing device 203. For example, if the segmentation output of a certain level is good enough and cannot be further improved by the subsequent levels, the subsequent levels can be discarded from the segmentation network. As another example, if the segmented image of a lower level does not provide reasonable performance, the relevant convolution blocks can likewise be eliminated from the segmentation network.
Fig. 6 shows an exemplary image processing device 203, according to some embodiments of the present disclosure. In some embodiments, image processing device 203 may be a special-purpose computer or a general-purpose computer. For example, image processing device 203 may be a computer custom-built for a hospital to perform image acquisition and image processing tasks. As shown in Fig. 6, image processing device 203 may include a communication interface 602, a processor 604, a memory 606, a storage 608, and a display 610.
Communication interface 602 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as fiber, USB 3.0, or Thunderbolt), a wireless network adapter (such as a WiFi adapter), a telecommunication (3G, 4G/LTE, etc.) adapter, or the like. Image processing device 203 may be connected to other components of image segmentation system 200 and to network 206 through communication interface 602. In some embodiments, communication interface 602 receives medical images from image acquisition device 205, for example, an MRI scanner or a CT scanner. In some embodiments, communication interface 602 also receives the segmentation network, e.g., multi-level learning network 300/400/500, from model training device 202.
Processor 604 may be a processing device that includes one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), or the like. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor may also be one or more dedicated processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), or the like. Processor 604 may be communicatively coupled to memory 606 and configured to execute the computer-executable instructions stored thereon, to perform an exemplary image segmentation process such as the one described in connection with Fig. 7.
Memory 606 and storage 608 may be non-transitory computer-readable media, such as a read-only memory (ROM), a random access memory (RAM), a phase-change random access memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), other types of random access memories, a flash disk or other forms of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a cassette tape or other magnetic storage device, or any other possible non-transitory medium used to store information or instructions that can be accessed by a computer device.
In some embodiments, storage 608 may store the trained network (e.g., multi-level learning network 300/400/500) and data, such as the original medical images and the extracted image features (e.g., the feature maps of level i and the intermediate feature maps) received, used, or generated while executing the computer programs. In some embodiments, memory 606 may store computer-executable instructions, such as one or more image processing programs.
In some embodiments, processor 604 may present a visualization of the segmented image and/or other data on a display 610. Display 610 may include a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display, or any other type of display, and may provide a graphical user interface (GUI) presented on the display for user input and image/data display. The display may include a number of different types of materials, such as plastic or glass, and may be touch-sensitive to receive commands from the user. For example, the display may include a substantially rigid touch-sensitive material, such as Gorilla Glass(TM), or a substantially pliable touch-sensitive material, such as Willow Glass(TM).
Consistent with the disclosure, model training device 202 may have the same or a similar structure as image processing device 203. In some embodiments, model training device 202 includes a processor and other components configured to train the segmentation network using training images.
Fig. 7 shows a flowchart of an exemplary method 700 for segmenting medical images, according to embodiments of the present disclosure. For example, method 700 may be implemented by image processing device 203 in Fig. 2. However, method 700 is not limited to that exemplary embodiment. Method 700 may include steps S702-S724 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in Fig. 7.
In step S702, image processing device 203 receives a medical image acquired by image acquisition device 205, for example, from medical image database 204. The medical image can be of any imaging modality, such as MRI or CT. In step S704, image processing device 203 receives a segmentation network, e.g., multi-level learning network 300/400/500. For example, the segmentation network may be trained by model training device 202.
In step S706, image processing device 203 determines a level-0 feature map by applying an initial convolution block to the medical image. For example, in the embodiment shown in Fig. 3, image processing device 203 may apply multi-level learning network 300 to original image 302. At level 0, image processing device 203 may determine the level-0 feature map 312 by applying initial convolution block 310. In some embodiments, initial convolution block 310 may include a convolutional layer 412 and a max-pooling layer 414. In some other embodiments, initial convolution block 310 may include a dilated convolutional layer 510.
In step S708, image processing device 203 sets the level index i = 1. In step S710, image processing device 203 may determine a feature map by applying a parallel convolution block to the feature map of the previous level. For example, as shown in Fig. 3, at level 1, image processing device 203 may determine feature map 322 by applying parallel convolution block 320 to the level-0 feature map 312. In some embodiments, parallel convolution block 320 may include a convolutional layer 422 and a max-pooling layer 424. In some other embodiments, parallel convolution block 320 may include a dilated convolutional layer 520.
In some embodiments, a parallel convolution block (e.g., parallel convolution block 320) may include several convolutional layers arranged in parallel. For example, as shown in Fig. 3, parallel convolution block 320 includes k convolutional layers arranged in parallel, and the level-0 feature map 312 can be distributed to the k convolutional layers to generate several intermediate feature maps, which can be combined to generate feature map 322.
In step S712, image processing device 203 determines whether the spatial resolution of the feature map matches the spatial resolution of the medical image being segmented. If the feature map has a spatial resolution lower than that of the medical image (S712: No), method 700 proceeds to step S714, where image processing device 203 may up-sample the feature map, e.g., using up-sampling block 330, to obtain a feature map with the same spatial resolution as the medical image. Otherwise (S712: Yes), method 700 proceeds directly to step S716.
In step S716, image processing device 203 may combine the up-sampled feature map with the feature map of level (i-1) to generate the feature map of level i. For example, as shown in Fig. 3, image processing device 203 may combine feature map 332 with the level-0 feature map 312 to generate the level-1 feature map 334. In some embodiments, the pixel values of the feature maps may be combined pixel by pixel. For example, corresponding pixel values of the feature maps may be added, averaged, or otherwise aggregated to generate the pixel values of the combined feature map.
In step S718, image processing device 203 may obtain the segmented image of level i by applying another convolution block to the level-i feature map obtained in step S716. For example, as shown in Fig. 3, image processing device 203 may apply another convolution block 340 to the level-1 feature map 334 to obtain the level-1 segmented image 342.
In step S720, image processing device 203 may determine whether the segmentation result obtained in step S718 is satisfactory. In some embodiments, image processing device 203 may calculate certain predetermined parameters associated with the segmented image. For example, image processing device 203 may determine that the difference between the segmented image of level i and the segmented image of level (i-1) is smaller than a threshold, indicating that the improvement gained by advancing a level is sufficiently small that subsequent refinement may be unnecessary. In that case, the segmentation result can be considered satisfactory. If the segmentation result is satisfactory (S720: Yes), image processing device 203 may decide to stop applying additional levels of the segmentation network, and, in step S724, provides the segmented image of level i as the final segmentation result. Otherwise (S720: No), method 700 proceeds to step S722 to increment the level index i, and returns to step S710, where image processing device 203 continues down the "convolutional ladder" by repeating steps S710-S720, applying the subsequent parallel convolution blocks and obtaining the segmented images at the subsequent levels.
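The control flow of steps S706-S724 can be sketched end to end as below. The building blocks here are deliberately trivial stand-ins (the real convolution blocks are learned networks); only the ladder descent, up-sampling, pixel-wise combination, and early-stop logic mirror the method, and all names are hypothetical.

```python
import numpy as np

def initial_conv_block(img):
    return img * 0.5                      # stand-in for initial block 310

def parallel_conv_block(fmap):
    return fmap[::2, ::2]                 # stand-in: halves the resolution

def upsample_to(fmap, shape):
    fy = shape[0] // fmap.shape[0]
    fx = shape[1] // fmap.shape[1]
    return np.repeat(np.repeat(fmap, fy, axis=0), fx, axis=1)

def final_conv_block(fmap):
    return (fmap > 0.4).astype(np.uint8)  # stand-in for conv block 340

def segment(image, max_levels=3, threshold=0.01):
    prev_map = initial_conv_block(image)            # S706: level-0 features
    prev_seg = None
    for i in range(1, max_levels + 1):              # S708/S722: level index
        fmap = parallel_conv_block(prev_map)        # S710
        if fmap.shape != image.shape:               # S712/S714: up-sample
            fmap = upsample_to(fmap, image.shape)
        level_map = prev_map + fmap                 # S716: pixel-wise combine
        seg = final_conv_block(level_map)           # S718: segmented image
        if prev_seg is not None and np.mean(seg != prev_seg) < threshold:
            return seg                              # S720/S724: stop early
        prev_seg, prev_map = seg, level_map
    return prev_seg

result = segment(np.ones((8, 8)))
```

Because every level produces a full-resolution segmented image, the loop can return as soon as two successive levels agree, which is the scalability property the disclosure describes.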
Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods described above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable media or computer-readable storage devices. For example, the computer-readable medium may be a storage device or a memory module having the computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and related methods. It is intended that the specification and examples be considered as exemplary only, with the true scope being indicated by the following claims and their equivalents.