WO2020108525A1 - Image segmentation method, apparatus, diagnostic system, storage medium, and computer device

Info

Publication number
WO2020108525A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmentation
tumor
network
layer
Application number
PCT/CN2019/121246
Other languages
English (en)
French (fr)
Inventor
胡一凡
郑冶枫
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Priority to EP19889004.8A (patent EP3828825A4)
Publication of WO2020108525A1
Priority to US17/204,894 (patent US11954863B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/24 Character recognition characterised by the processing or recognition method
    • G06V 30/248 Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V 30/2504 Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/031 Recognition of patterns in medical or anatomical images of internal organs

Definitions

  • The present application relates to the field of computer technology, and in particular to an image segmentation method, apparatus, diagnostic system, storage medium, and computer device.
  • Gliomas are the most common primary malignant brain tumors. They exhibit different degrees of invasiveness and are commonly divided into a whole tumor region, a tumor core region, and an enhanced tumor core region.
  • Magnetic resonance imaging (MRI) is the most commonly used method for brain tumor examination and diagnosis in clinical practice. Accurately segmenting the regions contained in a brain tumor from the images generated by a multi-modal MRI scan has very high medical value.
  • At present, tumor image segmentation is mainly based on deep learning, for example, using fully convolutional neural networks (FCNNs).
  • However, the inventors found that the features learned by the fully convolutional neural network method are all based on parts of the complete image, and its ability to learn features of the full image is poor, which easily leads to a poor segmentation effect.
  • To address this, embodiments of the present application provide an image segmentation method, apparatus, diagnostic system, storage medium, and computer device.
  • According to one aspect, an image segmentation method includes: acquiring a tumor image; performing tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image; inputting the candidate image to a cascade segmentation network constructed based on a machine learning model; and, starting with the first-level segmentation network in the cascade segmentation network, performing image segmentation of the whole tumor region in the candidate image, then stepping level by level to the last-level segmentation network, which performs image segmentation of the enhanced tumor core region to obtain a segmented image.
  • According to another aspect, an image segmentation apparatus includes: an image acquisition module for acquiring a tumor image; an image coarse segmentation module for performing tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image; an image input module for inputting the candidate image to a cascade segmentation network constructed based on a machine learning model; and an image fine segmentation module for, starting with the first-level segmentation network in the cascade segmentation network, performing image segmentation of the whole tumor region in the candidate image, then stepping level by level to the last-level segmentation network, which performs image segmentation of the enhanced tumor core region to obtain a segmented image.
  • According to another aspect, a diagnostic system includes an acquisition end, a segmentation end, and a diagnosis end. The acquisition end is used to acquire a tumor image and send it to the segmentation end.
  • The segmentation end is used to perform tumor localization on the tumor image sent by the acquisition end to obtain a candidate image indicating the position of the whole tumor region in the tumor image, and to input the candidate image to a cascade segmentation network constructed based on a machine learning model; starting with the first-level segmentation network in the cascade segmentation network, image segmentation of the whole tumor region in the candidate image is performed, stepping level by level to the last-level segmentation network, which performs image segmentation of the enhanced tumor core region to obtain a segmented image.
  • The diagnosis end is used to receive the segmented image sent by the segmentation end and display it, so as to assist diagnostic staff in performing tumor diagnosis through the segmented image.
  • According to another aspect, a computer device includes a processor and a memory; computer-readable instructions are stored on the memory, and when executed by the processor, the computer-readable instructions implement the image segmentation method described above.
  • According to another aspect, a storage medium stores a computer program thereon; when the computer program is executed by a processor, the image segmentation method described above is implemented.
  • FIG. 1 is a schematic diagram of an implementation environment involved in this application.
  • Fig. 2 is a block diagram of the hardware structure of a segmentation end according to an exemplary embodiment.
  • Fig. 3 is a flow chart showing an image segmentation method according to an exemplary embodiment.
  • FIG. 4 is a schematic diagram of the segmentation results of the segmentation networks at all levels in the cascade segmentation network according to the embodiment corresponding to FIG. 3.
  • FIG. 5 is a schematic diagram of a tumor image, a tumor positioning process, and candidate images involved in the corresponding embodiment of FIG. 3.
  • FIG. 6 is a schematic structural diagram of the cascade segmentation network involved in the embodiment corresponding to FIG. 3.
  • FIG. 7a is a flowchart of step 330 in an embodiment corresponding to the embodiment of FIG. 3.
  • Fig. 7b is a schematic structural diagram of a U-net-based network according to an exemplary embodiment.
  • FIG. 8 is a flowchart of step 410 in an embodiment corresponding to the embodiment of FIG. 7a.
  • FIG. 9 is a schematic diagram of the network structure of the 3D U-net network involved in the embodiment corresponding to FIG. 8.
  • Fig. 10 is a schematic diagram of the network structure of a segmentation network according to an exemplary embodiment.
  • FIG. 11 is a schematic structural diagram of a dense module layer involved in the embodiment corresponding to FIG. 10.
  • Fig. 12 is a flowchart of an image segmentation process according to an exemplary embodiment.
  • Fig. 13 is a flowchart of another image segmentation method according to an exemplary embodiment.
  • FIG. 14 is a schematic diagram of an image segmentation method in a specific embodiment.
  • FIG. 15 is a flowchart of the image segmentation method according to the specific embodiment of FIG. 14.
  • Fig. 16 is a block diagram of an image segmentation device according to an exemplary embodiment.
  • Fig. 17 is a structural block diagram of a computer device according to an exemplary embodiment.
  • The embodiments of the present application propose an image segmentation method based on stepwise image segmentation, which can effectively improve the segmentation effect in tumor image segmentation.
  • This image segmentation method is applicable to a tumor image segmentation device, and the tumor image segmentation device is deployed in a computer device with a von Neumann architecture.
  • For example, the computer device may be a personal computer (PC), a server, or the like.
  • FIG. 1 is a schematic diagram of an implementation environment involved in an image segmentation method.
  • the implementation environment includes a diagnosis system 100 that includes an acquisition end 110, a segmentation end 130, and a diagnosis end 150.
  • The acquisition end 110 is an electronic device for acquiring tumor images, for example, an MRI device or a CT (Computed Tomography) device, which is not limited herein.
  • the segmentation end 130 is an electronic device that provides a background service for users, such as a personal computer, a server, etc.
  • This background service includes an image segmentation service.
  • The segmentation end 130 may be one server, a server cluster composed of multiple servers, or even a cloud computing center composed of multiple servers, so as to better provide background services to a large number of users; this does not constitute a specific limitation here.
  • The segmentation end 130 deploys a tumor localization network 131 for locating the position of the whole tumor region in the tumor image, and a cascade segmentation network 132 constructed based on a machine learning model, so as to realize stepwise image segmentation.
  • The cascade segmentation network 132 includes multi-level segmentation networks 1321, 1322, ..., 132X.
  • The diagnosis end 150 is an electronic device for assisting diagnostic personnel in performing tumor diagnosis, for example, a personal computer equipped with a display screen.
  • The segmentation end 130 establishes a wireless or wired network connection with the acquisition end 110 and the diagnosis end 150 respectively, so as to realize data transmission within the diagnosis system 100 through the network connections.
  • this data transmission includes tumor images, segmented images, etc.
  • Through this interaction, the acquisition end 110 sends the acquired tumor image to the segmentation end 130.
  • At the segmentation end 130, the tumor image 111 sent by the acquisition end 110 is received; tumor localization is performed on the tumor image 111 based on the tumor localization network 131 to obtain a candidate image 1311 indicating the position of the whole tumor region in the tumor image 111, which is then input to the cascade segmentation network 132 to obtain a segmented image 1301.
  • At the diagnosis end 150, the segmented image 1301 can be displayed on the configured display screen to assist diagnostic personnel in performing tumor diagnosis.
  • Fig. 2 is a block diagram of the hardware structure of a segmentation end according to an exemplary embodiment. This kind of segmentation end is applicable to the segmentation end 130 of the implementation environment shown in FIG. 1.
  • It should be noted that this segmentation end is merely an example adapted to the present application and cannot be considered as limiting the scope of use of the present application in any way.
  • Nor can this kind of segmentation end be interpreted as needing to depend on, or having to include, one or more components of the exemplary segmentation end 200 shown in FIG. 2.
  • As shown in FIG. 2, the segmentation end 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
  • The power supply 210 is used to provide an operating voltage for each hardware device on the segmentation end 200.
  • The interface 230 includes at least one wired or wireless network interface for interacting with external devices, for example, with the acquisition end 110 of the implementation environment shown in FIG. 1, or with the diagnosis end 150 of the implementation environment shown in FIG. 1.
  • In addition, the interface 230 may further include at least one serial-to-parallel conversion interface 233, at least one input-output interface 235, at least one USB interface 237, and so on, as shown in FIG. 2, which does not constitute a specific limitation here.
  • The memory 250 serves as a carrier for resource storage and may be a read-only memory, a random access memory, a magnetic disk, or an optical disk.
  • The resources stored on the memory 250 include an operating system 251, application programs 253, and data 255.
  • The operating system 251 is used to manage and control the hardware devices and application programs 253 on the segmentation end 200, so as to implement the computation and processing of the massive data 255 in the memory 250 by the central processing unit 270; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
  • The application program 253 is a computer program that completes at least one specific job based on the operating system 251 and may include at least one module (not shown in FIG. 2), each of which may separately contain a series of computer-readable instructions.
  • the tumor image segmentation device can be regarded as an application program 253 deployed on the segmentation end 200 to implement the image segmentation method.
  • the data 255 may be photos, pictures, etc. stored in the magnetic disk, or may be tumor images, segmented images, etc., stored in the memory 250.
  • The central processing unit 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus to read the computer-readable instructions stored in the memory 250, thereby implementing the computation and processing of the massive data 255 in the memory 250. For example, the image segmentation method is completed by the central processing unit 270 reading a series of computer-readable instructions stored in the memory 250.
  • In addition, the present application can also be implemented through a hardware circuit or a combination of hardware circuits and software; therefore, implementing the present application is not limited to any specific hardware circuit, software, or combination of the two.
  • Referring to FIG. 3, in an exemplary embodiment, an image segmentation method is applicable to the segmentation end of the implementation environment shown in FIG. 1; the structure of this segmentation end may be as shown in FIG. 2.
  • This image segmentation method can be performed by the segmentation end and may include the following steps:
  • Step 310: Acquire a tumor image.
  • The tumor image is generated by the acquisition end scanning a body part where a tumor may exist, so as to facilitate subsequent image segmentation of the tumor image.
  • the acquisition end may be an MRI device, a CT device, and so on.
  • The tumor image can be derived from an image scanned in real time by the acquisition end, or it can be an image previously sent by the acquisition end and stored at the segmentation end. For example, if the segmentation end is a server, the server can obtain the tumor image by local reading or by network transmission.
  • In other words, the segmentation end can acquire images scanned by the acquisition end in real time to facilitate real-time image segmentation of the tumor image, or it can acquire images scanned by the acquisition end within a historical time period and perform image segmentation when there are few processing tasks, or perform image segmentation under the instruction of an operator; this is not specifically limited in this embodiment.
  • Further, the received tumor image may be denoised so as to improve the accuracy of subsequent image segmentation.
  • For example, the denoising process may include removing the skull and background from the tumor image.
  • The tumor images acquired by the segmentation end include, but are not limited to, one or more of the four-modality MRI images FLAIR, T1, T1c, and T2.
  • Step 330: Perform tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image.
  • For the regions contained in a brain tumor, the most important characteristic is the true inclusion relationship between them, as shown in sub-figures (a) to (c) of FIG. 4: the whole tumor region 3011 contains the tumor core region 3021, and the tumor core region 3021 contains the enhanced tumor core region 3031.
  • Tumor localization refers to locating the rough position of the whole tumor region in the tumor image, so that the whole tumor region can be contained in the candidate image according to the located position.
  • That is, the candidate image contains the whole tumor region of the tumor image within a designated area.
  • the shape of the designated area may be rectangular, triangular, circular, etc., which is not limited herein.
  • In an embodiment, the designated area is a rectangular frame: its bounds are taken from the maximum and minimum values of the segmentation coordinates and then extended outward by a specified number of pixels.
  • The specified number can be flexibly adjusted according to the actual needs of the application scenario; for example, in one application scenario, the specified number is 5.
  • As shown in FIG. 5, 305 represents a tumor image, 306 represents the tumor localization process, and 307 represents a candidate image.
  • In the candidate image 307, the whole tumor region 3071 is contained in the designated area 3072, where the designated area 3072 is a rectangular frame.
  • It can be understood that the candidate image is only a part of the tumor image.
  • In this way, the candidate image contains the whole tumor region within the designated area, thereby indicating the rough position of the whole tumor region in the tumor image, which facilitates subsequent, finer image segmentation based on the candidate image.
  • Optionally, tumor localization can be achieved by image segmentation, that is, by segmenting the whole tumor region and the non-tumor region in the tumor image so that the positioning frame can contain the segmented whole tumor region.
  • the image segmentation includes: ordinary segmentation, semantic segmentation, instance segmentation, etc., where the ordinary segmentation further includes: threshold segmentation, region segmentation, edge segmentation, histogram segmentation, etc., which is not specifically limited in this embodiment.
  • the image segmentation may be implemented by a machine learning model.
  • the machine learning model may be a convolutional neural network model, a residual neural network model, or the like.
  • Step 340: Input the candidate image to a cascade segmentation network constructed based on a machine learning model.
  • Step 350: Starting with the first-level segmentation network in the cascade segmentation network, perform image segmentation of the whole tumor region in the candidate image, then step level by level to the last-level segmentation network, which performs image segmentation of the enhanced tumor core region to obtain a segmented image.
  • The cascade segmentation network, which includes multiple levels of segmentation networks, is constructed based on machine learning models.
  • machine learning models can be convolutional neural network models, residual neural network models, and so on.
  • Among the segmentation networks at all levels in the cascade segmentation network, the first-level segmentation network and its parameters first segment the whole tumor region in the candidate image and output the segmentation result to the second-level segmentation network, and so on, level by level.
  • The segmentation result of the last-level segmentation network is used as the segmented image.
  • As mentioned above, a brain tumor can be divided into a whole tumor region, a tumor core region, and an enhanced tumor core region; therefore, in an embodiment, the cascade segmentation network includes a three-level segmentation network.
  • As shown in FIG. 6, the cascade segmentation network 400 includes a first-level segmentation network 401, a second-level segmentation network 402, and a third-level segmentation network 403.
  • the first-level segmentation network 401 performs image segmentation of the candidate image to obtain a first-level intermediate segmented image 405.
  • The second-level segmentation network 402 performs image segmentation of the first-level intermediate segmented image 405 to obtain a second-level intermediate segmented image 406.
  • the third-level segmentation network 403 performs image segmentation of the second-level intermediate segmented image 406 to obtain the segmented image.
  • Specifically, the first-level intermediate segmented image 301 is the segmentation result of the first-level segmentation network 401, in which the whole tumor region 3011 contained in the image is marked, as shown in sub-figure (a) of FIG. 4.
  • The second-level intermediate segmented image 302 is the segmentation result of the second-level segmentation network 402, in which the whole tumor region 3011 and the tumor core region 3021 are marked differently so as to reflect the true inclusion relationship between them, as shown in sub-figure (b) of FIG. 4.
  • The segmented image 303 is the segmentation result of the third-level segmentation network 403, in which the whole tumor region 3011, the tumor core region 3021, and the enhanced tumor core region 3031 are marked differently, as shown in sub-figure (c) of FIG. 4. That is, the segmented image 303 reflects the true inclusion relationship among the whole tumor region 3011, the tumor core region 3021, and the enhanced tumor core region 3031.
  • It should be noted that the parameters used by the segmentation networks at different levels are different, so as to better adapt to image segmentation between the different regions contained in a brain tumor, which helps improve the segmentation effect on the tumor image.
  • Through the above process, tumor image segmentation based on machine learning is realized, and the segmentation effect on the tumor image is effectively improved through image segmentation at different scales.
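  • As an illustration of this stepwise scheme, the following is a minimal sketch of the cascaded inference loop in PyTorch. The class name, the way the intermediate mask is passed to the next stage (concatenated with the candidate image as an extra channel), and all variable names are assumptions for illustration, not identifiers from the patent.

```python
import torch
import torch.nn as nn

class CascadeSegmenter(nn.Module):
    """Sketch of the three-level cascade: whole tumor -> tumor core ->
    enhanced tumor core, each stage refining the previous stage's output."""

    def __init__(self, stage_nets):
        super().__init__()
        # stage_nets: the first-, second-, and third-level segmentation
        # networks; each later stage is assumed to take one extra input
        # channel for the mask produced by the stage before it.
        self.stages = nn.ModuleList(stage_nets)

    def forward(self, candidate):
        # candidate: (N, C, D, H, W) volume cropped around the whole tumor
        x = candidate
        masks = []
        for stage in self.stages:
            logits = stage(x)  # two-class scores per voxel at every stage
            mask = logits.argmax(dim=1, keepdim=True).float()
            masks.append(mask)
            # pass the intermediate segmented image on to the finer stage
            x = torch.cat([candidate, mask], dim=1)
        # masks: whole tumor, tumor core, enhanced tumor core
        return masks
```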
  • Referring to FIG. 7a, in an embodiment, step 330 may include the following steps:
  • Step 410: Based on a 3D U-Net network, extract the corresponding feature map from the acquired tumor image.
  • The tumor image generated by scanning at the acquisition end is often a three-dimensional image, that is, the tumor image is composed of many slices. If a two-dimensional machine learning model were used to process the three-dimensional tumor image, not only would the segmentation effect be poor, but the segmentation efficiency would also be poor, because each slice constituting the tumor image would have to be input to the machine learning model separately for training or class prediction, which is too cumbersome.
  • For this reason, in this embodiment, tumor localization is realized by a three-dimensional machine learning model, that is, a 3D U-Net network.
  • The 3D U-Net network is also called a three-dimensional U-Net-based network; it can be understood that the 3D U-Net network is built on the basis of a U-Net-based network, and its network structure is also U-shaped.
  • As shown in FIG. 7b, the U-Net-based network includes a contraction path 105 and an expansion path 107.
  • The input image undergoes multiple rounds of convolution and reduction along the contraction path 105 to obtain multiple feature maps, and the expansion path 107 then performs multiple rounds of deconvolution and expansion.
  • During expansion, the features are also merged with the multiple feature maps obtained by the contraction path 105, as shown by 1051-1054 in FIG. 7b, so as to obtain features of different dimensions of the input image, thereby improving the segmentation effect.
  • the 3D U-Net network includes an encoder network and a decoder network.
  • the encoder network is used to extract the context features of the tumor image to accurately describe the tumor image locally/globally based on the context features, so as to capture the context information in the tumor image.
  • the decoder network is used to extract the localization features of the tumor image, so as to accurately localize the regions in the tumor image that need to be segmented by the localization features.
  • In addition, feature fusion between the context features and the positioning features is also performed to obtain features of different dimensions of the tumor image, so that the segmentation effect of the image segmentation is better.
  • Step 430: Perform category prediction on the pixels in the feature map corresponding to the tumor image to obtain the categories of the pixels in the feature map.
  • the category prediction is implemented based on the classifier set by the 3D U-Net network, that is, the classifier is used to calculate the probability that the pixels in the feature map corresponding to the tumor image belong to different categories.
  • As mentioned above, the essence of tumor localization is to segment the tumor region and the non-tumor region in the tumor image.
  • Correspondingly, the categories include the whole tumor region category and the non-tumor region category.
  • For any pixel in the feature map corresponding to the tumor image, the probabilities that the pixel belongs to the different categories are calculated separately: P1, the probability that the pixel belongs to the whole tumor region category, and P2, the probability that the pixel belongs to the non-tumor region category; the category with the larger probability is taken as the category of the pixel.
  • Once the category prediction of all pixels in the feature map corresponding to the tumor image is completed, the segmentation of the tumor region and the non-tumor region in the tumor image is completed; that is, the rough position of the whole tumor region in the tumor image has been located.
  • Step 450: Obtain the candidate image that contains the whole tumor region in the designated area according to the pixels belonging to the whole tumor region category in the feature map.
  • After the categories of the pixels are obtained, the pixels belonging to the whole tumor region category can be extracted, and a designated area can be constructed from them.
  • Enclosing the pixels that belong to the whole tumor region category within the designated area is regarded as containing the whole tumor region in the designated area, thereby generating a candidate image that contains the whole tumor region in the designated area, as shown by 307 in FIG. 5.
  • Further, the designated area is taken as the center and expanded outward so that the size of the candidate image reaches a specified size, so as to fully guarantee the segmentation effect on the tumor image.
  • The specified size can be flexibly set according to the actual needs of the application scenario, which is not limited in this embodiment.
  • For example, the specified size is 96 × 96 × 96.
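  • As an illustration, the localization-to-crop step described above might be sketched as follows with NumPy. The 5-voxel margin and the 96 × 96 × 96 target size come from the text; the function name and the padding strategy are assumptions.

```python
import numpy as np

def crop_candidate(volume, tumor_mask, margin=5, target=(96, 96, 96)):
    """Bound the predicted whole-tumor voxels with a rectangular box, pad
    it by `margin` voxels, then grow it around its center to `target` size.
    Assumes `tumor_mask` contains at least one non-zero voxel."""
    coords = np.argwhere(tumor_mask > 0)          # whole-tumor-class voxels
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    for axis, size in enumerate(target):          # expand to the target size
        extra = size - (hi[axis] - lo[axis])
        if extra > 0:
            lo[axis] = max(lo[axis] - extra // 2, 0)
            hi[axis] = min(lo[axis] + size, volume.shape[axis])
            lo[axis] = max(hi[axis] - size, 0)
    return volume[tuple(slice(l, h) for l, h in zip(lo, hi))]
```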
  • Through the above process, coarse segmentation of the tumor image is realized based on the 3D U-Net network. This not only locates the rough position of the whole tumor region in the tumor image from a macro perspective, avoiding loss of image segmentation accuracy, but also reduces the tumor image to the candidate image, which effectively reduces the image size: it lowers the proportion of background, helps improve the segmentation granularity for small tumors, and also allows a deeper network to be designed, thereby improving the segmentation effect.
  • In addition, the size of the designated area can change dynamically with the size of the whole tumor region, which helps ensure the balance of positive and negative samples in the subsequent training of the segmentation network model.
  • Referring to FIG. 8, in an embodiment, step 410 may include the following steps:
  • Step 411: Use the encoder network to extract the context features of the tumor image.
  • the 3D U-Net network 500 includes an encoder network 501 and a decoder network 502.
  • the encoder network 501 includes several down-sampling layers 5011-5015, and the decoder network 502 includes several up-sampling layers 5021-5025.
  • Between the encoder network 501 and the decoder network 502, a number of feature propagation layers 5031-5034 are established in order from shallow to deep.
  • In addition, the 3D U-Net network 500 includes a classification layer 503, which is configured to calculate the probabilities that the pixels in the feature map corresponding to the tumor image belong to the different categories, so as to achieve category prediction for the pixels in the feature map corresponding to the tumor image.
  • For the encoder network 501, the context features of the tumor image can be extracted through the down-sampling layers 5011-5015, and the extracted context features can be transmitted to the decoder network 502 via the feature propagation layers 5031-5034.
  • Specifically, the tumor image is input to the shallowest down-sampling layer 5011 in the encoder network 501 and convolved by the shallowest down-sampling layer 5011 to obtain the local features corresponding to the shallowest down-sampling layer 5011; after down-sampling, these local features are input to the next down-sampling layer 5012.
  • Following the feature propagation sequence, the down-sampling layers 5012, 5013, and 5014 in the encoder network 501 are traversed in turn to obtain the local features corresponding to the down-sampling layers 5012, 5013, and 5014.
  • The above local features are propagated through the feature propagation layers 5031-5034, respectively.
  • Finally, the global features corresponding to the deepest down-sampling layer 5015 are obtained and transmitted directly to the deepest up-sampling layer 5025 in the decoder network 502.
  • In FIG. 9, within the encoder network 501, horizontal arrows indicate convolution processing and downward arrows indicate down-sampling processing.
  • Herein, both the local features and the global features are regarded as the context features of the tumor image, so as to accurately describe the tumor image locally/globally.
  • In this way, the feature extraction of the tumor image is gradually abstracted from local description to global description, thereby describing the tumor image more accurately, which helps ensure the accuracy of image segmentation.
  • Next, the decoder network is used to extract the positioning features of the whole tumor region, and the context features are merged with the positioning features to obtain the feature map corresponding to the tumor image.
  • Specifically, the context features (global features) corresponding to the deepest down-sampling layer 5015 in the encoder network 501 are used as the positioning features corresponding to the deepest up-sampling layer 5025.
  • Up-sampling is performed on the positioning features corresponding to the deepest up-sampling layer 5025 to obtain the features to be fused.
  • The features to be fused are input to the next-deepest up-sampling layer 5024 and fused with the context features (local features) corresponding to the next-deepest down-sampling layer 5014, and the positioning features corresponding to the next-deepest up-sampling layer 5024 are obtained through deconvolution processing.
  • Following the feature propagation sequence, the remaining up-sampling layers 5023, 5022, and 5021 are traversed to obtain the positioning features corresponding to each up-sampling layer.
  • Finally, the feature map corresponding to the tumor image is obtained from the positioning features corresponding to the shallowest up-sampling layer 5021.
  • In FIG. 9, within the decoder network 502, horizontal arrows indicate deconvolution processing and upward arrows indicate up-sampling processing.
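  • For readers who want the encoder/decoder interplay in code, the following is a minimal 3D U-Net-style sketch in PyTorch. The channel counts, depth, and names are illustrative assumptions; only the overall pattern (convolve and down-sample for context features, then up-sample and fuse with the symmetric encoder features) follows the description above.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm3d(cout),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Two down-sampling levels; input sides must be divisible by 4."""

    def __init__(self, in_ch=4, base=16, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)           # shallowest layer
        self.enc2 = conv_block(base, base * 2)
        self.bottom = conv_block(base * 2, base * 4)  # deepest: global context
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)    # fuses skip features
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, 1)     # classification layer

    def forward(self, x):
        e1 = self.enc1(x)               # local features kept for the skip path
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))  # context (global) features
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # feature fusion
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)            # per-voxel class scores
```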
  • The segmentation network at each level of the cascade segmentation network is described below. Here, the input image is the candidate image, the first-level intermediate segmented image, or the second-level intermediate segmented image, and the output image is correspondingly the first-level intermediate segmented image, the second-level intermediate segmented image, or the segmented image.
  • The segmentation network refers to a segmentation network at any level in the cascade segmentation network, and the parameters of the segmentation network are the parameters of the segmentation network at that level.
  • As shown in FIG. 10, the segmentation network 600 includes a down-sampling (DownSampling) stage 610 and an up-sampling (UpSampling) stage 630.
  • The down-sampling stage 610 includes several first basic network layers 611, 612 and several first dense block (DenseBlock) layers 613, 614 connected in sequence.
  • the up-sampling stage 630 includes several third dense module layers 634, 633 and several second basic network layers 632, 631 connected in sequence.
  • The up-sampling stage 630 and the down-sampling stage 610 are symmetrical to each other: the first basic network layer 611 and the second basic network layer 631 are symmetrical to each other, the first basic network layer 612 and the second basic network layer 632 are symmetrical to each other, the first dense module layer 613 and the third dense module layer 633 are symmetrical to each other, and the first dense module layer 614 and the third dense module layer 634 are symmetrical to each other.
  • the first basic network layer 611 includes a second dense module layer 6111 and a pooling layer 6112 connected in sequence.
  • the first basic network layer 612 includes a second dense module layer 6121 and a pooling layer 6122 connected in sequence.
  • the second basic network layer 631 includes an up-sampling layer 6311 and a fourth dense module layer 6312 connected in sequence.
  • the second basic network layer 632 includes an up-sampling layer 6321 and a fourth dense module layer 6322 connected in sequence.
  • Based on the symmetry between the up-sampling stage 630 and the down-sampling stage 610, the second dense module layer 6111 and the fourth dense module layer 6312 are symmetrical to each other, and the second dense module layer 6121 and the fourth dense module layer 6322 are symmetrical to each other.
  • Further, each of the above dense module layers includes an input unit and at least one dense unit, and each dense unit in turn includes a convolution layer, an activation layer, and a normalization layer connected in sequence; this avoids using a plain convolution layer or a residual convolution layer, thereby ensuring the accuracy of image segmentation.
  • As shown in FIG. 11, the dense module layer includes one input unit and four dense units H1, H2, H3, and H4, where each dense unit in turn includes a convolution layer Conv, an activation layer Relu, and a normalization layer BN.
  • The feature x0 corresponding to the input image is received by the input unit and simultaneously output to the dense units H1, H2, H3, and H4.
  • The feature x1 output by the dense unit H1 is simultaneously output to the dense units H2, H3, and H4.
  • The feature x2 output by the dense unit H2 is simultaneously output to the dense units H3 and H4, and the feature x3 output by the dense unit H3 is output to the dense unit H4.
  • That is, for the dense unit H2, the features x0 and x1 are combined as its input; for the dense unit H3, the features x0, x1, and x2 are combined; and for the dense unit H4, the features x0, x1, x2, and x3 are combined.
  • In this way, the dense module layer can not only reuse shallow features, such as x0 and x1, to fully ensure the integrity of the input image features, but also combine deep and shallow features, such as the combination of x0, x1, and x2, which helps reduce the complexity of image segmentation and thus effectively improves the segmentation effect.
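  • A minimal sketch of such a dense module layer in PyTorch follows. The growth rate and unit count are assumptions; the Conv, Relu, BN ordering of each dense unit and the concatenation of the input feature with all earlier unit outputs follow the description above.

```python
import torch
import torch.nn as nn

class DenseUnit(nn.Module):
    """One dense unit: convolution, activation, normalization in sequence,
    matching the Conv -> Relu -> BN ordering described above."""

    def __init__(self, cin, growth):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(cin, growth, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm3d(growth),
        )

    def forward(self, x):
        return self.body(x)

class DenseModuleLayer(nn.Module):
    """Four dense units H1..H4; each unit receives the concatenation of the
    input feature x0 and the outputs of all earlier units."""

    def __init__(self, cin, growth=12, n_units=4):
        super().__init__()
        self.units = nn.ModuleList(
            DenseUnit(cin + i * growth, growth) for i in range(n_units)
        )

    def forward(self, x0):
        feats = [x0]
        for unit in self.units:
            feats.append(unit(torch.cat(feats, dim=1)))  # reuse shallow features
        return torch.cat(feats, dim=1)
```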
  • Further, the convolution layers in the first dense module layers and the third dense module layers include several three-dimensional convolution kernels (not shown in FIG. 10).
  • The convolution layers in the second dense module layers and the fourth dense module layers include several slice convolution kernels and several normal convolution kernels, such as 6111a, 6121a, 6322a, and 6312a in FIG. 10.
  • In other words, in the convolution layers of the second dense module layers and the fourth dense module layers, each three-dimensional convolution kernel (k × k × k) is converted into a slice convolution kernel (k × k × 1) and a normal convolution kernel (1 × 1 × k).
  • In this way, 2.5-dimensional image segmentation is realized, which not only avoids the high memory consumption and computational complexity of the three-dimensional convolution kernel, but, more importantly, suits the particularity of the tumor image: because the tumor image is composed of many slices, when the slices are synthesized into a 3D image there is a large difference between the in-plane (slice) resolution and the through-plane (normal-direction) resolution.
  • As a result, purely 3D image segmentation incurs large errors, while purely 2D segmentation directly ignores the local/global correlation between the slices; only 2.5-dimensional segmentation is most suitable for tumor image segmentation.
  • Therefore, in this embodiment, the three-dimensional characteristic of the first and third dense module layers is combined with the 2.5-dimensional characteristic of the second and fourth dense module layers. This not only combines the advantages of both, but also fuses the features of each dimension of the input image on the basis of the latter, which ensures the collection and fusion of the largest possible set of features and further effectively improves the segmentation effect of image segmentation.
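  • The decomposition of a k × k × k three-dimensional convolution kernel into a k × k × 1 slice kernel followed by a 1 × 1 × k normal kernel can be sketched as follows (PyTorch; the channel counts and names are placeholders):

```python
import torch.nn as nn

class Conv25D(nn.Module):
    """One 3D convolution (k x k x k) replaced by a slice convolution
    (k x k x 1) followed by a normal-direction convolution (1 x 1 x k)."""

    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.slice_conv = nn.Conv3d(cin, cout, kernel_size=(k, k, 1),
                                    padding=(k // 2, k // 2, 0))
        self.normal_conv = nn.Conv3d(cout, cout, kernel_size=(1, 1, k),
                                     padding=(0, 0, k // 2))

    def forward(self, x):
        # x: (N, C, H, W, D), with the last axis along the slice normal
        return self.normal_conv(self.slice_conv(x))
```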
  • In an embodiment, the image segmentation process, that is, performing image segmentation of the input image through the segmentation network to obtain the output image, may include the following steps:
  • Step 510: In the down-sampling stage of the segmentation network, extract key features from the input image.
  • Specifically, the input image is input to the down-sampling stage 610 of the segmentation network 600, and convolution and down-sampling are performed through the first basic network layers 611 and 612 to obtain intermediate features.
  • The intermediate features are then convolved through the first dense module layers 613 and 614 to obtain the key features.
  • More specifically, the input image is input to the first first basic network layer 611 and convolved by the second dense module layer 6111 in that layer.
  • The convolved features are down-sampled by the pooling layer 6112 in the first first basic network layer 611 and output to the second first basic network layer 612.
  • Following the feature propagation sequence, the remaining first basic network layers are traversed; when the traversal is complete, the features down-sampled by the second first basic network layer 612, which is the last first basic network layer, are used as the intermediate features.
  • Step 530: Input the key features into the up-sampling stage of the segmentation network and perform multi-scale feature fusion to obtain the feature map corresponding to the input image.
  • Specifically, the key features are input to the up-sampling stage 630 of the segmentation network 600 and deconvolved through the third dense module layers 634 and 633 to obtain first-scale features 651, which are input to the first several second basic network layers.
  • Here, the first several second basic network layers are the second basic network layers connected between the last third dense module layer 633 and the last second basic network layer 631 in the up-sampling stage 630.
  • The features output by the fusion in the first several second basic network layers are up-sampled to obtain second-scale features 652.
  • In this embodiment, the feature map output corresponding to the input image is based not only on the 1x up-sampled features (the second-scale features 652), the 2x up-sampled features (the features 653 output by convolution in the second dense module layer 6121), and the 4x up-sampled features (the first-scale features 651), but also on the features without up-sampling (the features output by convolution in the second dense module layer 6111). Multi-scale feature fusion is thereby achieved, so that the segmentation results of the networks at all levels can reach the best segmentation results both locally and globally, effectively improving the segmentation effect.
  • The feature fusion process between the mutually symmetric fourth dense module layer and second dense module layer is further described as follows.
  • The first-scale features 651 are input to the first second basic network layer 632 and up-sampled through the up-sampling layer 6321 in that layer, then fused with the features output by the symmetric second dense module layer 6121.
  • The fused features are deconvolved so as to be output to the second second basic network layer 631.
  • Following the feature propagation sequence, the remaining second basic network layers among the first several second basic network layers are traversed, and when the traversal is complete, the feature fusion between the mutually symmetric fourth dense module layers and second dense module layers is complete.
  • It should be noted that, in this embodiment, the second second basic network layer 631 is essentially the last second basic network layer, so there is no need to continue traversing the remaining second basic network layers among the first several second basic network layers; the deconvolution performed by the first second basic network layer 632 already completes the feature fusion between the mutually symmetric fourth dense module layer 6322 and second dense module layer 6121.
  • Step 550: Perform category prediction on the pixels in the feature map corresponding to the input image to obtain the categories of the pixels in the feature map.
  • the category prediction is implemented based on the classifier set in the segmentation network, that is, the classifier is used to calculate the probability that the pixels in the feature map corresponding to the input image belong to different categories.
  • the category may be the remaining region category, the whole tumor region category, the tumor core region category, and the enhanced tumor core region category.
  • It is worth mentioning that each segmentation network is constructed as a two-class classifier. In the first-level segmentation network, the categories include the remaining region category and the whole tumor region category; here, the remaining region is simply the non-tumor region.
  • In the second-level segmentation network, the categories include the remaining region category and the tumor core region category; here, the remaining region refers to the non-tumor region plus the part of the whole tumor region that does not include the tumor core region.
  • In the third-level segmentation network, the categories include the remaining region category and the enhanced tumor core region category; here, the remaining region refers to the non-tumor region plus the part of the whole tumor region that does not include the enhanced tumor core region.
  • For example, the second-level segmentation network performs image segmentation between the remaining region and the tumor core region.
  • Step 570: Mark the pixels of a specified category in the feature map corresponding to the input image to obtain the output image.
  • Marking is based on the categories of the pixels and can be done with colors or with symbols such as asterisks, which is not limited here.
  • For example, pixels of different categories are marked with different colors, as shown in sub-figure (c) of FIG. 4.
  • The specified category differs across the levels of the segmentation network: in the first-level segmentation network, the specified category is the whole tumor region category; in the second-level segmentation network, it is the tumor core region category; in the third-level segmentation network, it is the enhanced tumor core region category.
  • Once the category prediction of all pixels in the feature maps corresponding to all input images is completed, the segmentation of the whole tumor region, the tumor core region, and the enhanced tumor core region in the tumor image is completed; that is, the finer positions of the whole tumor region, the tumor core region, and the enhanced tumor core region in the tumor image have been located.
  • In an embodiment, the method further includes: constructing the cascade segmentation network based on a machine learning model, where the machine learning model is a convolutional neural network model.
  • Constructing the cascade segmentation network based on the machine learning model may include the following steps:
  • Step 710: Obtain training samples carrying labels.
  • A training sample is a tumor image in which the whole tumor region, the tumor core region, and the enhanced tumor core region are marked by labels of different types.
  • Here, labeling refers to adding non-zero marks only to the whole tumor region, the tumor core region, or the enhanced tumor core region in the tumor image, and marking the pixels of the remaining regions in the tumor image with zeros.
  • For example, if the whole tumor region in a tumor image is labeled, a training sample carrying the whole tumor region label is obtained.
  • In addition, the pixels in the tumor image are normalized to improve the accuracy of image segmentation.
  • Step 730: Create multiple training sample sets according to the types of labels carried by the training samples, each training sample set corresponding to one category.
  • It can be understood that model training relies on a large number of training samples; for this reason, in this embodiment, sample augmentation is performed on each training sample.
  • Sample augmentation includes flipping, rotation, zooming, contrast enhancement, and so on.
  • Here, flipping means flipping the tumor image front-to-back, left-to-right, and so on; rotation means rotating the tumor image by a specified angle; zooming means enlarging or shrinking the tumor image; and contrast enhancement means changing the contrast of the pixels in the tumor image.
  • For example, enlarging means interpolating the 96 × 96 × 96 tumor image into an image of size 120 × 120 × 120 and then cropping the central 96 × 96 × 96 image from the 120 × 120 × 120 image, while shrinking means reducing the 120 × 120 × 120 tumor image to 96 × 96 × 96.
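  • The augmentations listed above might be sketched as follows with NumPy and SciPy. The interpolate-to-120-then-crop enlargement follows the text; the rotation angle, contrast factor, and function names are illustrative assumptions, and in practice the label volume would be transformed identically.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment(vol):
    """Yield flipped, rotated, zoomed, and contrast-adjusted variants of a
    96 x 96 x 96 tumor volume."""
    yield vol[::-1, :, :]                    # front-to-back flip
    yield vol[:, :, ::-1]                    # left-to-right flip
    yield rotate(vol, angle=10, axes=(0, 1), reshape=False, order=1)
    big = zoom(vol, 120 / 96, order=1)       # interpolate 96^3 -> 120^3
    off = (big.shape[0] - 96) // 2
    yield big[off:off + 96, off:off + 96, off:off + 96]  # central 96^3 crop
    mean = vol.mean()                        # simple contrast change
    yield np.clip((vol - mean) * 1.2 + mean, vol.min(), vol.max())
```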
  • A corresponding training sample set is established from the training samples carrying the same type of label; thus, for training samples carrying multiple types of labels, multiple corresponding training sample sets can be established.
  • For example, a training sample set is constructed from the training samples carrying the whole tumor region label; after model training on it, a convolutional neural network model that performs image segmentation of the whole tumor region is obtained.
  • Step 750: Perform model training on multiple convolutional neural network models with a specified model structure using the multiple training sample sets.
  • Model training is essentially the iterative optimization of the parameters of a convolutional neural network model with the specified model structure through a training sample set, so that a specified algorithm function constructed from these parameters meets the convergence condition.
  • In this embodiment, the specified model structure is shown in FIG. 10.
  • The specified algorithm function includes, but is not limited to, an expectation-maximization function, a loss function, and so on.
  • Specifically, the parameters of the convolutional neural network model are randomly initialized, and probability computation is performed by forward propagation on a training sample in the training sample set based on the randomly initialized parameters; a loss function is constructed from the Dice distance between the computed probabilities and the correct labels, and the loss value of the loss function is calculated.
  • If the loss value of the loss function does not meet the convergence condition, the parameters of the convolutional neural network model are updated by back propagation, probability computation is performed on the next training sample in the training sample set based on the updated parameters, the loss function is reconstructed from the Dice distance, and the loss value of the reconstructed loss function is calculated again.
  • This iteration continues until the loss value meets the convergence condition; when the convolutional neural network model converges and meets the preset accuracy requirement, model training is complete, and the cascade segmentation network can then be constructed.
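  • A minimal sketch of a Dice-distance loss of the kind described, in PyTorch (the smoothing constant eps is a common convention, not taken from the patent):

```python
import torch

def dice_loss(probs, target, eps=1e-6):
    """Dice distance between predicted foreground probabilities (after a
    softmax/sigmoid) and binary ground-truth labels; minimized by back
    propagation during training."""
    probs = probs.reshape(probs.size(0), -1)
    target = target.reshape(target.size(0), -1).float()
    inter = (probs * target).sum(dim=1)
    denom = probs.sum(dim=1) + target.sum(dim=1)
    dice = (2 * inter + eps) / (denom + eps)  # per-sample Dice coefficient
    return 1.0 - dice.mean()                  # loss value to minimize
```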
  • Step 770: Cascade the multiple convolutional neural network models that have completed model training to obtain the cascade segmentation network.
  • As described above, each convolutional neural network model that has completed model training corresponds to one training sample set.
  • For example, if the training samples in a training sample set are tumor images in which the whole tumor region is labeled, the convolutional neural network model trained on it performs image segmentation of the whole tumor region.
  • Thus, as shown in FIG. 6, the cascade segmentation network 400 includes three levels of segmentation networks 401, 402, and 403.
  • In an embodiment, the method described above may further include the following step:
  • using a morphological algorithm to correct the segmented image.
  • The morphological algorithm includes, but is not limited to, erosion, dilation, hole filling, dense CRF (conditional random field), and so on, which is not specifically limited in this embodiment.
  • In this way, correction of the segmented image is achieved: the segmentation edges between the whole tumor region, the tumor core region, and the enhanced tumor core region in the segmented image are smoothed, and/or the noise in the segmented image is eliminated, further effectively improving the image segmentation effect.
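  • For instance, hole filling and edge smoothing with binary morphology could look like the following SciPy sketch; dense CRF would require a separate library and is omitted here.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_fill_holes

def refine_mask(mask):
    """Smooth segmentation edges and fill cavities in a binary region mask."""
    smoothed = binary_closing(mask, structure=np.ones((3, 3, 3)))
    return binary_fill_holes(smoothed)
```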
  • In a specific embodiment, the segmentation end divides the tumor image segmentation task into a coarse segmentation subtask and a fine segmentation subtask.
  • Here, the tumor image is generated by scanning with the MRI equipment at the acquisition end and is essentially a four-modality MRI image.
  • As shown in FIG. 14, the segmentation end acquires the tumor image 811 generated by scanning at the acquisition end.
  • The coarse segmentation subtask is based on the tumor image 811: tumor localization is performed through the 3D U-net network 820 to obtain the candidate image 812, in which the whole tumor region is contained in a rectangular frame.
  • That is, step 802 is performed: based on the tumor image 811, coarse segmentation of the whole tumor region is performed.
  • The candidate image 812 serves as the basis of the fine segmentation subtask; it can be seen that the candidate image 812 is greatly reduced in size compared with the tumor image 811.
  • Specifically, the candidate image 812 is input to the first-level segmentation network 831 in the cascade segmentation network 830, and image segmentation is performed to obtain a first-level intermediate segmented image 813 in which the whole tumor region is marked.
  • At this point, the whole tumor region contained in the first-level intermediate segmented image 813 is no longer roughly enclosed in a rectangular frame but is specifically marked, achieving the first fine segmentation of the tumor image. That is, step 803 is performed: based on the candidate image 812, image segmentation of the whole tumor region is performed.
  • Further, the first-level intermediate segmented image 813 is used as the input of the second-level segmentation network 832 for image segmentation, obtaining a second-level intermediate segmented image 814 that reflects the true inclusion relationship between the whole tumor region and the tumor core region; this achieves the second fine segmentation of the tumor image, that is, step 804 is performed: based on the first-level intermediate segmented image 813, image segmentation of the tumor core region is performed.
  • Finally, the second-level intermediate segmented image 814 is used as the input of the third-level segmentation network 833 for image segmentation, obtaining a segmented image 815 that reflects the true inclusion relationship among the whole tumor region, the tumor core region, and the enhanced tumor core region.
  • At this point, the fine segmentation subtask is complete.
  • At the diagnosis end, the segmented image 815 obtained by the segmentation end can be received, so that the doctor can promptly see the three regions of different severity in the brain tumor, assisting the doctor in diagnosing the tumor more quickly and accurately, for example, analyzing whether the patient's tumor is benign or malignant, its degree of malignancy, and so on.
  • In the above application scenario, the network structure of the 3D U-net network 820 is shown in FIG. 9, and the network structure of each level of segmentation network in the cascade segmentation network 830 is shown in FIG. 10.
  • Further, for each level of segmentation network, the structure of the dense module layer is shown in FIG. 11; the feature extraction and feature fusion processes are achieved through the combination of the three-dimensional convolution kernels, slice convolution kernels, and normal convolution kernels in the convolution layers.
  • Table 1: Settings of the three-dimensional, slice, and normal convolution kernels in the down-sampling stage
  • [3×3×1 conv] denotes a slice convolution kernel,
  • [1×1×3 conv] denotes a normal convolution kernel, and
  • [3×3×3 conv] denotes a three-dimensional convolution kernel.
  • The number of each type of convolution kernel configured in different dense module layers can be flexibly adjusted according to the actual needs of the application scenario, which is not specifically limited here. For example, in this application scenario, the first dense module layer in the down-sampling stage is configured with 12 three-dimensional convolution kernels and 3 three-dimensional convolution kernels, respectively.
  • In essence, four channels are configured in each convolutional layer, so that the four-modality MRI image is input through different channels into the cascade segmentation network for image segmentation, thereby fully guaranteeing the integrity of the tumor image and helping improve the segmentation effect.
  • End-to-end automatic image segmentation is thus realized: as long as the different-modality MRI images corresponding to a patient are input, three regions of different severity can be obtained, which not only effectively assists the doctor in further analyzing a treatment plan for the patient, but also makes it possible to determine the surgical area for the patient, so as to treat the lesion more precisely.
  • The following are apparatus embodiments of this application, which can be used to perform the image segmentation method involved in this application.
  • For details not disclosed in the apparatus embodiments of this application, please refer to the method embodiments of the image segmentation method involved in this application.
  • An image segmentation apparatus 900 includes, but is not limited to: an image acquisition module 910, an image coarse segmentation module 930, an image input module 940, and an image fine segmentation module 950.
  • The image acquisition module 910 is configured to acquire a tumor image.
  • The image coarse segmentation module 930 is configured to perform tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image.
  • The image input module 940 is configured to input the candidate image into a cascade segmentation network constructed based on a machine learning model.
  • The image fine segmentation module 950 is configured to, starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, step level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image.
  • It should be noted that, when the image segmentation apparatus provided in the above embodiment performs tumor image segmentation, the division into the above functional modules is used only as an example.
  • In practical applications, the above functions can be assigned to different functional modules as needed; that is, the internal structure of the image segmentation apparatus is divided into different functional modules to complete all or part of the functions described above.
  • The image segmentation apparatus provided in the above embodiment and the embodiments of the image segmentation method belong to the same concept.
  • The specific manner in which each module performs operations has been described in detail in the method embodiments and will not be repeated here.
  • A computer device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
  • The memory 1002 stores computer-readable instructions, and the processor 1001 reads, through the communication bus 1003, the computer-readable instructions stored in the memory 1002; when executed by the processor 1001, the computer-readable instructions implement the image segmentation method in the above embodiments.
  • A computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the image segmentation method in the above embodiments.

Abstract

This application discloses an image segmentation method, apparatus, diagnosis system, storage medium, and computer device. The image segmentation method includes: acquiring a tumor image; performing tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image; inputting the candidate image into a cascade segmentation network constructed based on a machine learning model; and, starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, stepping level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image.

Description

Image segmentation method, apparatus, diagnosis system, storage medium, and computer device
This application claims priority to the Chinese patent application No. 201811462063.3, entitled "Image segmentation method, apparatus, diagnosis system and storage medium", filed with the China Patent Office on November 30, 2018.
Technical Field
This application relates to the field of computer technology, and in particular to an image segmentation method, apparatus, diagnosis system, storage medium, and computer device.
Background
Glioma, the most common primary malignant brain tumor (also called a brain tumor), exhibits varying degrees of invasiveness and is typically divided into a whole tumor region, a tumor core region, and an enhanced tumor core region.
Magnetic Resonance Imaging (MRI) is the most commonly used clinical means of brain tumor examination and diagnosis. Accurately segmenting the individual regions of a brain tumor from images generated by multi-modality MRI scans is of extremely high medical value.
At present, tumor image segmentation is mainly based on deep learning, for example, using fully convolutional neural networks (FCNNs). However, the inventors found through research that the features learned by fully convolutional neural networks are all based on local parts of the complete image, while the ability to learn features of the complete image is poor, which easily leads to a poor segmentation effect.
Summary
To solve the problem of poor tumor image segmentation in the related art, the embodiments of this application provide an image segmentation method, apparatus, diagnosis system, storage medium, and computer device.
The technical solutions adopted by this application are as follows:
According to an aspect of the embodiments of this application, an image segmentation method includes: acquiring a tumor image; performing tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image; inputting the candidate image into a cascade segmentation network constructed based on a machine learning model; and, starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, stepping level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image.
According to an aspect of the embodiments of this application, an image segmentation apparatus includes: an image acquisition module, configured to acquire a tumor image; an image coarse segmentation module, configured to perform tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image; an image input module, configured to input the candidate image into a cascade segmentation network constructed based on a machine learning model; and an image fine segmentation module, configured to, starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, step level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image.
According to an aspect of the embodiments of this application, a diagnosis system includes an acquisition end, a segmentation end, and a diagnosis end, wherein the acquisition end is configured to acquire a tumor image and send it to the segmentation end; the segmentation end is configured to perform tumor localization on the tumor image sent by the acquisition end to obtain a candidate image indicating the position of the whole tumor region in the tumor image, input the candidate image into a cascade segmentation network constructed based on a machine learning model, and, starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, step level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image; and the diagnosis end is configured to receive the segmented image sent by the segmentation end and display it, so as to assist diagnosticians in tumor diagnosis through the segmented image.
According to an aspect of the embodiments of this application, a computer device includes a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, implement the image segmentation method described above.
According to an aspect of the embodiments of this application, a storage medium has a computer program stored thereon which, when executed by a processor, implements the image segmentation method described above.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit this application.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the specification, serve to explain the principles of this application.
FIG. 1 is a schematic diagram of the implementation environment involved in this application.
FIG. 2 is a block diagram of the hardware structure of a segmentation end according to an exemplary embodiment.
FIG. 3 is a flowchart of an image segmentation method according to an exemplary embodiment.
FIG. 4 is a schematic diagram of the segmentation results of the segmentation networks at all levels in the cascade segmentation network involved in the embodiment corresponding to FIG. 3.
FIG. 5 is a schematic diagram of the tumor image, the tumor localization process, and the candidate image involved in the embodiment corresponding to FIG. 3.
FIG. 6 is a schematic structural diagram of the cascade segmentation network involved in the embodiment corresponding to FIG. 3.
FIG. 7a is a flowchart of step 330 of FIG. 3 in one embodiment.
FIG. 7b is a schematic structural diagram of a U-Net-based network according to an exemplary embodiment.
FIG. 8 is a flowchart of step 410 of FIG. 7a in one embodiment.
FIG. 9 is a schematic diagram of the network structure of the 3D U-Net network involved in the embodiment corresponding to FIG. 8.
FIG. 10 is a schematic diagram of the network structure of a segmentation network according to an exemplary embodiment.
FIG. 11 is a schematic structural diagram of the dense module layer involved in the embodiment corresponding to FIG. 10.
FIG. 12 is a flowchart of an image segmentation process according to an exemplary embodiment.
FIG. 13 is a flowchart of another image segmentation method according to an exemplary embodiment.
FIG. 14 is a schematic diagram of an implementation of an image segmentation method in a specific embodiment.
FIG. 15 is a flowchart of the image segmentation method involved in the specific embodiment corresponding to FIG. 14.
FIG. 16 is a block diagram of an image segmentation apparatus according to an exemplary embodiment.
FIG. 17 is a structural block diagram of a computer device according to an exemplary embodiment.
The above drawings show explicit embodiments of this application, which will be described in more detail below. These drawings and the text description are not intended to limit the scope of the concept of this application in any way, but to explain the concept of this application to those skilled in the art by reference to specific embodiments.
Detailed Description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application as detailed in the appended claims.
The embodiments of this application propose an image segmentation method that, based on step-wise image segmentation, can effectively improve the segmentation effect in tumor image segmentation. Accordingly, the image segmentation method is applicable to a tumor image segmentation apparatus deployed in a computer device with a Von Neumann architecture, for example, a personal computer (PC) or a server.
FIG. 1 is a schematic diagram of the implementation environment involved in an image segmentation method. The implementation environment includes a diagnosis system 100, which includes an acquisition end 110, a segmentation end 130, and a diagnosis end 150.
The acquisition end 110 is an electronic device that acquires tumor images, for example, MRI equipment or CT (Computed Tomography) equipment, which is not limited here.
The segmentation end 130 is an electronic device that provides background services for users, for example, a personal computer or a server; the background services include an image segmentation service.
Of course, depending on actual operational needs, the segmentation end 130 may be a single server, a server cluster composed of multiple servers, or even a cloud computing center composed of multiple servers, so as to better provide background services for a massive number of users; no specific limitation is made here.
Further, the segmentation end 130 deploys a tumor localization network 131 for locating the position of the whole tumor region in the tumor image and a cascade segmentation network 132 constructed based on a machine learning model, thereby implementing step-wise image segmentation. The cascade segmentation network 132 includes multiple levels of segmentation networks 1321, 1322, ..., 132X.
The diagnosis end 150 is an electronic device used to assist diagnosticians in tumor diagnosis, for example, a personal computer configured with a display screen.
Wireless or wired network connections are established between the segmentation end 130 and each of the acquisition end 110 and the diagnosis end 150, so as to realize data transmission within the diagnosis system 100 through the network connections. For example, the transmitted data includes tumor images, segmented images, and so on.
Through interaction between the acquisition end 110 and the segmentation end 130, the acquisition end 110 sends the acquired tumor image to the segmentation end 130.
The segmentation end 130 receives the tumor image 111 sent by the acquisition end 110, performs tumor localization on the tumor image 111 based on the tumor localization network 131 to obtain a candidate image 1311 indicating the position of the whole tumor region in the tumor image 111, and then inputs it into the cascade segmentation network 132.
Starting with image segmentation of the whole tumor region in the candidate image 1311 by the first-level segmentation network 1321 in the cascade segmentation network 132, the process steps level by level to the last-level segmentation network 132X, which performs image segmentation of the enhanced tumor core region, to obtain a segmented image 1301.
The diagnosis end 150 can then display the segmented image 1301 on its display screen to assist diagnosticians in tumor diagnosis.
FIG. 2 is a block diagram of the hardware structure of a segmentation end according to an exemplary embodiment. This segmentation end is applicable as the segmentation end 130 of the implementation environment shown in FIG. 1.
It should be noted that this segmentation end is only an example adapted to this application and cannot be considered as providing any limitation on the scope of use of this application. Nor can it be interpreted as needing to depend on, or having to include, one or more components of the exemplary segmentation end 200 shown in FIG. 2.
The hardware structure of the segmentation end 200 may vary greatly depending on configuration or performance. As shown in FIG. 2, the segmentation end 200 includes a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Specifically, the power supply 210 provides operating voltage for the hardware devices on the segmentation end 200.
The interface 230 includes at least one wired or wireless network interface for interacting with external devices, for example, with the acquisition end 110 of the implementation environment shown in FIG. 1, or with the diagnosis end 150 of the implementation environment shown in FIG. 1.
Of course, in other examples adapted to this application, the interface 230 may further include at least one serial-parallel conversion interface 233, at least one input-output interface 235, at least one USB interface 237, and so on, as shown in FIG. 2; this does not constitute a specific limitation here.
The memory 250, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like. The resources stored on it include an operating system 251, application programs 253, data 255, and so on, and the storage may be transient or permanent.
The operating system 251 is used to manage and control the hardware devices and application programs 253 on the segmentation end 200, so as to realize the computation and processing of the massive data 255 in the memory 250 by the central processing unit 270. It may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
An application program 253 is a computer program that completes at least one specific task on top of the operating system 251. It may include at least one module (not shown in FIG. 2), and each module may contain a series of computer-readable instructions for the segmentation end 200. For example, the tumor image segmentation apparatus can be regarded as an application program 253 deployed on the segmentation end 200 to implement the image segmentation method.
The data 255 may be photos or pictures stored on a magnetic disk, or tumor images, segmented images, and the like, stored in the memory 250.
The central processing unit 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus to read the computer-readable instructions stored in the memory 250, thereby realizing the computation and processing of the massive data 255 in the memory 250. For example, the image segmentation method is completed by the central processing unit 270 reading a series of computer-readable instructions stored in the memory 250.
In addition, this application can also be implemented by hardware circuits or by hardware circuits combined with software; therefore, implementing this application is not limited to any specific hardware circuit, software, or combination of the two.
Referring to FIG. 3, in an exemplary embodiment, an image segmentation method is applicable to the segmentation end of the implementation environment shown in FIG. 1; the structure of the segmentation end may be as shown in FIG. 2.
The image segmentation method may be performed by the segmentation end and may include the following steps:
Step 310: acquire a tumor image.
The tumor image is generated by the acquisition end scanning a part of a person where a tumor may exist, so that image segmentation can subsequently be performed on the tumor image. For example, the acquisition end may be MRI equipment, CT equipment, and so on.
The tumor image may come from an image scanned by the acquisition end in real time, or may be an image sent by the acquisition end and pre-stored at the segmentation end; for example, if the segmentation end is a server, it may obtain the image by local reading or network transmission.
In other words, regarding acquisition of the tumor image, the image scanned by the acquisition end may be obtained in real time, so that image segmentation is performed on the tumor image in real time; or images scanned by the acquisition end within a historical period may be obtained, so that the segmentation end performs image segmentation when it has fewer processing tasks, or performs image segmentation under the instruction of an operator. This embodiment does not specifically limit this.
Further, after receiving the tumor image sent by the acquisition end, the segmentation end may perform denoising processing on the received tumor image to improve the accuracy of subsequent image segmentation.
For example, for a brain tumor, denoising may include removing the skull and the background from the tumor image.
Optionally, when the acquisition end is MRI equipment, the tumor image acquired by the segmentation end includes, but is not limited to, one or more of the four-modality MRI images FLAIR, T1, T1c, and T2.
Step 330: perform tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image.
As mentioned above, a brain tumor has varying degrees of invasiveness and can be divided into three regions: the whole tumor region, the tumor core region, and the enhanced tumor core region. The most important property of these three regions is the true inclusion relationship between them, as shown in sub-figures (a) to (c) of FIG. 4: the whole tumor region 3011 contains the tumor core region 3021, and the tumor core region 3021 contains the enhanced tumor core region 3031.
Accordingly, tumor localization refers to locating the rough position of the whole tumor region in the tumor image, so that the whole tumor region is contained in the candidate image according to the located position.
Specifically, the candidate image contains the whole tumor region of the tumor image within a designated region.
The shape of the designated region may be a rectangle, a triangle, a circle, and so on, which is not limited here.
In one embodiment, the designated region is a rectangular frame whose maximum size is the maximum of the segmentation coordinates expanded outward by a designated number of pixels. The designated number can be flexibly adjusted according to the actual needs of the application scenario; for example, in one application scenario, the designated number is 5.
For example, as shown in FIG. 5, 305 denotes the tumor image, 306 denotes the tumor localization process, and 307 denotes the candidate image; in the candidate image 307, the whole tumor region 3071 is contained in the designated region 3072, which is a rectangular frame.
That is to say, the candidate image is only a part of the tumor image; it contains the whole tumor region within the designated region, thereby indicating the rough position of the whole tumor region in the tumor image, which facilitates subsequent finer image segmentation based on the candidate image.
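As an illustration of this localization step, the following minimal numpy sketch derives such a rectangular frame from the predicted whole-tumor pixels, expanded outward by 5 voxels as in the application scenario above (the function name and array layout are assumptions):

    import numpy as np

    def candidate_region(whole_tumor_mask, pad=5):
        # Smallest rectangular frame containing every whole-tumor voxel,
        # expanded outward by `pad` voxels (5 in the application scenario above).
        idx = np.nonzero(whole_tumor_mask)
        lo = [max(int(i.min()) - pad, 0) for i in idx]
        hi = [min(int(i.max()) + pad + 1, s) for i, s in zip(idx, whole_tumor_mask.shape)]
        return tuple(slice(l, h) for l, h in zip(lo, hi))

    # usage: candidate = tumor_image[candidate_region(mask)]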
Further, tumor localization can be achieved by image segmentation, that is, by segmenting the whole tumor region from the non-tumor region in the tumor image, so that the localization frame can contain the segmented whole tumor region.
Optionally, image segmentation includes ordinary segmentation, semantic segmentation, instance segmentation, and so on, where ordinary segmentation further includes threshold segmentation, region segmentation, edge segmentation, histogram segmentation, and so on; this embodiment does not specifically limit this.
In one embodiment, image segmentation can be implemented by a machine learning model, for example, a convolutional neural network model or a residual neural network model.
Step 340: input the candidate image into a cascade segmentation network constructed based on a machine learning model.
Step 350: starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, step level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image.
The cascade segmentation network includes multiple levels of segmentation networks and is constructed based on a machine learning model, for example, a convolutional neural network model or a residual neural network model.
Among the segmentation networks at all levels in the cascade segmentation network, based on the first-level segmentation network and its parameters, image segmentation is performed on the whole tumor region in the candidate image, and the segmentation result is output to the second-level segmentation network.
Based on the second-level segmentation network and its parameters, image segmentation is performed on the segmentation result output by the first-level segmentation network, and the segmentation result is output to the third-level segmentation network, stepping level by level until the last-level segmentation network performs image segmentation of the enhanced tumor core region; the segmentation result of the last-level segmentation network is taken as the segmented image.
In this way, step-wise image segmentation is realized: starting from the position boundary of the whole tumor region in the candidate image and segmenting inward step by step to the enhanced tumor core region.
As mentioned above, a brain tumor can be divided into the whole tumor region, the tumor core region, and the enhanced tumor core region; therefore, in one embodiment, the cascade segmentation network includes three levels of segmentation networks.
As shown in FIG. 6, the cascade segmentation network 400 includes a first-level segmentation network 401, a second-level segmentation network 402, and a third-level segmentation network 403.
Specifically, image segmentation of the candidate image is performed by the first-level segmentation network 401 to obtain a first-level intermediate segmented image 405.
Image segmentation of the first-level intermediate segmented image 405 is performed by the second-level segmentation network 402 to obtain a second-level intermediate segmented image.
Image segmentation of the second-level intermediate segmented image 406 is performed by the third-level segmentation network 403 to obtain the segmented image.
As shown in sub-figures (a) to (c) of FIG. 4, the first-level intermediate segmented image 301 is the segmentation result of the first-level segmentation network 401, in which the whole tumor region 3011 contained in the image is marked, as shown in sub-figure (a) of FIG. 4.
The second-level intermediate segmented image 302 is the segmentation result of the second-level segmentation network 402, in which the whole tumor region 3011 and the tumor core region 3021 are marked differently, thereby reflecting the true inclusion relationship between them, as shown in sub-figure (b) of FIG. 4.
The segmented image 303 is the segmentation result of the third-level segmentation network 403, in which the whole tumor region 3011, the tumor core region 3021, and the enhanced tumor core region 3031 are marked differently, as shown in sub-figure (c) of FIG. 4. That is, the segmented image 303 reflects the true inclusion relationship among the whole tumor region 3011, the tumor core region 3021, and the enhanced tumor core region 3031.
Optionally, in step-wise image segmentation, the parameters used by the segmentation networks at different levels are different, so as to better adapt to image segmentation between the different regions of a brain tumor, which in turn helps improve the segmentation effect of the tumor image.
Through the above process, tumor image segmentation based on machine learning is realized, and the segmentation effect of the tumor image is effectively improved through image segmentation processes at different scales.
Referring to FIG. 7a, in an exemplary embodiment, step 330 may include the following steps:
Step 410: based on a three-dimensional U-shaped fully convolutional neural (3D U-Net) network, extract the corresponding feature map from the acquired tumor image.
It should be understood that the tumor image generated by the acquisition end is often a three-dimensional image, that is, a tumor image composed of many slices. If a two-dimensional machine learning model is used to process a three-dimensional tumor image, not only is the segmentation effect poor, but the segmentation efficiency is also low, because every slice composing the tumor image has to be input into the machine learning model one by one for training or class prediction, which is too cumbersome.
For this reason, in this embodiment, tumor localization is implemented by a three-dimensional machine learning model, namely the 3D U-Net network.
The 3D U-Net network, also called a U-Net-based three-dimensional network, is built on the prototype of a U-Net-based network, and its network structure is also U-shaped.
Referring to the U-Net-based network shown in FIG. 7b, it is an improved fully convolutional neural network. The U-Net-based network includes a contracting path 105 and an expanding path 107. The input image is repeatedly convolved and reduced through the contracting path 105 to obtain multiple feature maps, and then repeatedly deconvolved and enlarged by the expanding path 107; in this process, the feature maps are also correspondingly merged with the multiple feature maps obtained by the contracting path 105, as shown by 1051-1054 in FIG. 7b, so as to obtain features of the input image at different dimensions and thereby improve the segmentation effect.
Specifically, the 3D U-Net network includes an encoder network and a decoder network.
The encoder network is used to extract context features of the tumor image so as to accurately describe the tumor image locally/globally through the context features, thereby capturing the context information in the tumor image. The decoder network is used to extract localization features of the tumor image so as to accurately locate, through the localization features, the region of the tumor image that needs image segmentation.
In addition, in the decoder network, feature fusion of the context features and the localization features is also performed to obtain features of the tumor image at different dimensions, making the segmentation effect of image segmentation better.
Step 430: perform class prediction on the pixels in the feature map corresponding to the tumor image to obtain the classes of the pixels in the feature map corresponding to the tumor image.
In this embodiment, class prediction is implemented based on a classifier configured in the 3D U-Net network; that is, the classifier is used to calculate the probabilities that the pixels in the feature map corresponding to the tumor image belong to different classes.
As mentioned above, tumor localization essentially first segments the tumor region from the non-tumor region in the tumor image; therefore, the classes include the whole-tumor-region class and the non-tumor-region class.
For example, for a certain pixel in the feature map corresponding to the tumor image, the probabilities that the pixel belongs to different classes are calculated respectively. Suppose the probability that the pixel belongs to the whole-tumor-region class is P1 and the probability that it belongs to the non-tumor-region class is P2. If P1 > P2, the pixel belongs to the whole-tumor-region class; conversely, if P1 < P2, the pixel belongs to the non-tumor-region class.
Once class prediction is completed for all pixels in the feature map corresponding to the tumor image, the segmentation of the tumor region and the non-tumor region in the tumor image is completed; that is, the rough position of the whole tumor region is located in the tumor image.
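A one-line illustration of this binary decision rule, assuming the classifier output is a numpy array of per-voxel probabilities (the function name and array layout are placeholders):

    import numpy as np

    # probs: (2, D, H, W) classifier output; channel 0 = P1 (whole-tumor class),
    # channel 1 = P2 (non-tumor class). Each voxel takes the class with the larger probability.
    def predict_whole_tumor(probs):
        return (probs[0] > probs[1]).astype(np.uint8)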
Step 450: according to the pixels in the feature map that belong to the whole-tumor-region class, obtain the candidate image in which the whole tumor region is contained in the designated region.
After the classes of all pixels in the feature map corresponding to the tumor image are obtained, the pixels belonging to the whole-tumor-region class can be obtained, so that the designated region is constructed based on the obtained pixels.
In other words, enclosing the pixels belonging to the whole-tumor-region class within the designated region is regarded as containing the whole tumor region in the designated region; thus, a candidate image containing the whole tumor region in the designated region is generated, as shown by 307 in FIG. 5.
Optionally, considering the varying degrees of invasiveness of brain tumors, in the process of constructing the designated region, the designated region is taken as the center and expanded outward so that the size of the candidate image reaches a designated size, thereby fully guaranteeing the segmentation effect of the tumor image.
The designated size can be flexibly set according to the actual needs of the application scenario, and this embodiment does not limit it. For example, in one application scenario, the designated size is 96×96×96.
Under the effect of the above embodiment, coarse segmentation of the tumor image is realized based on the 3D U-Net network. Not only is the rough position of the whole tumor region in the tumor image located from a macroscopic perspective, avoiding a loss of accuracy in image segmentation, but reducing the tumor image to the candidate image also effectively reduces the image size, which lowers the proportion of background, helps improve the segmentation granularity of small tumors, and allows deeper networks to be designed, thereby improving the segmentation effect.
In addition, through the coarse segmentation of the tumor image, the size of the designated region can change dynamically with the size of the whole tumor region, which helps to fully ensure the balance of positive and negative samples during subsequent model training of the segmentation networks.
Referring to FIG. 8, in an exemplary embodiment, step 410 may include the following steps:
Step 411: use the encoder network to extract the context features of the tumor image.
As shown in FIG. 9, the 3D U-Net network 500 includes an encoder network 501 and a decoder network 502.
In order from shallow to deep, the encoder network 501 includes several down-sampling layers 5011-5015, and the decoder network 502 includes several up-sampling layers 5021-5025.
Between the encoder network 501 and the decoder network 502, several feature propagation layers 5031-5034 are established in order from shallow to deep.
It is worth mentioning that the 3D U-Net network 500 also includes a classification layer 503, in which a classifier is configured to calculate the probabilities that the pixels in the feature map corresponding to the tumor image belong to different classes, thereby realizing class prediction for the pixels in the feature map corresponding to the tumor image.
For the encoder network 501, the context features of the tumor image are extracted through the several down-sampling layers 5011-5015 and transmitted to the decoder network 502 via the several feature propagation layers 5031-5034.
Specifically, the tumor image is input to the shallowest down-sampling layer 5011 in the encoder network 501; the shallowest down-sampling layer 5011 performs convolution processing on the input tumor image to obtain the local features corresponding to the shallowest down-sampling layer 5011, which are then input to the next-shallowest down-sampling layer 5012 after down-sampling processing.
In order from shallow to deep, the down-sampling layers 5012, 5013, and 5014 in the encoder network 501 are traversed to obtain the local features corresponding to the traversed down-sampling layers 5012, 5013, and 5014.
Between the encoder network 501 and the decoder network 502, feature propagation of the above local features is performed through the several feature propagation layers 5031-5034.
Through the convolution processing of the deepest down-sampling layer 5015, the global features corresponding to the deepest down-sampling layer 5015 are obtained and directly transmitted to the deepest up-sampling layer 5025 in the decoder network 502.
In the encoder network 501, horizontal arrows indicate convolution processing, and downward arrows indicate down-sampling processing.
It should be noted that both local features and global features are regarded as context features of the tumor image, thereby accurately describing the tumor image locally/globally.
That is to say, as the encoder network deepens, the feature extraction of the tumor image gradually abstracts from local description to global description, thereby describing the tumor image more accurately, which helps to guarantee the accuracy of image segmentation.
Step 413: use the decoder network to extract the localization features of the whole tumor region, and fuse the context features with the localization features to obtain the feature map corresponding to the tumor image.
The decoder network not only extracts the localization features of the tumor image through several up-sampling operations, but also performs feature fusion of the context features and localization features of the tumor image.
The feature extraction and feature fusion process of the decoder network is described with reference to FIG. 9.
Specifically, in the decoder network 502, the context features (global features) corresponding to the deepest down-sampling layer 5015 in the encoder network 501 are taken as the localization features corresponding to the deepest up-sampling layer 5025.
Up-sampling processing is performed on the localization features corresponding to the deepest up-sampling layer 5025 to obtain the features to be fused.
The features to be fused are input to the next-deepest up-sampling layer 5024, merged with the context features (local features) corresponding to the next-deepest down-sampling layer 5014, and deconvolved to obtain the localization features corresponding to the next-deepest up-sampling layer 5024.
In order from deep to shallow, the remaining up-sampling layers 5023, 5022, and 5021 are traversed to obtain the localization features corresponding to the traversed up-sampling layers.
When the traversal is completed, the feature map corresponding to the tumor image is obtained from the localization features corresponding to the shallowest up-sampling layer 5021.
In the decoder network 502, horizontal arrows indicate deconvolution processing, and upward arrows indicate up-sampling processing.
Through the above process, the combination of the encoder network and the decoder network not only effectively reduces the computational load of image segmentation, which helps improve segmentation efficiency, but also fully guarantees the accuracy of image segmentation.
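For orientation only, a drastically reduced PyTorch sketch of such an encoder-decoder with a single skip connection; the channel counts and depth are placeholders and do not reproduce the five-layer network 500 of FIG. 9:

    import torch
    import torch.nn as nn

    def block(cin, cout):
        return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet3D(nn.Module):
        # Two-level sketch: deeper layers abstract local features into global ones,
        # and the decoder fuses upsampled features with same-depth encoder features.
        def __init__(self, cin=4, base=16, classes=2):
            super().__init__()
            self.enc1, self.enc2 = block(cin, base), block(base, base * 2)
            self.pool = nn.MaxPool3d(2)
            self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
            self.dec1 = block(base * 2, base)      # concatenated skip doubles the channels
            self.head = nn.Conv3d(base, classes, 1)

        def forward(self, x):
            e1 = self.enc1(x)                      # shallow, local context features
            e2 = self.enc2(self.pool(e1))          # deepest layer: global context features
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # fuse localization + context
            return self.head(d1)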
It should be understood that the image segmentation processes performed by the segmentation networks at all levels share the same principle; the differences lie only in the input objects, the output objects, and the parameters used. Therefore, before image segmentation is further detailed, the differences among the image segmentation processes based on the segmentation networks at all levels are defined as follows, so as to better describe their commonality later.
The input image is the candidate image, the first-level intermediate segmented image, or the second-level intermediate segmented image.
The output image is the first-level intermediate segmented image, the second-level intermediate segmented image, or the segmented image.
The segmentation network is any level of segmentation network in the cascade segmentation network; the parameters of the segmentation network are the parameters of the corresponding level of segmentation network in the cascade segmentation network.
As shown in FIG. 10, the segmentation network 600 includes a down-sampling (DownSampling) stage 610 and an up-sampling (UpSampling) stage 630.
Specifically, the down-sampling stage 610 includes several first basic network layers 611, 612 and several first dense module (DenseBlock) layers 613, 614 connected in sequence.
The up-sampling stage 630 includes several third dense module layers 634, 633 and several second basic network layers 632, 631 connected in sequence. The up-sampling stage 630 and the down-sampling stage 610 are mutually symmetric: the first basic network layer 611 is symmetric to the second basic network layer 631, the first basic network layer 612 is symmetric to the second basic network layer 632, the first dense module layer 613 is symmetric to the third dense module layer 633, and the first dense module layer 614 is symmetric to the third dense module layer 634.
The first basic network layer 611 includes a second dense module layer 6111 and a pooling layer 6112 connected in sequence. The first basic network layer 612 includes a second dense module layer 6121 and a pooling layer 6122 connected in sequence.
The second basic network layer 631 includes an up-sampling layer 6311 and a fourth dense module layer 6312 connected in sequence. The second basic network layer 632 includes an up-sampling layer 6321 and a fourth dense module layer 6322 connected in sequence.
Correspondingly, based on the mutual symmetry of the up-sampling stage 630 and the down-sampling stage 610, the second dense module layer 6111 is symmetric to the fourth dense module layer 6312, and the second dense module layer 6121 is symmetric to the fourth dense module layer 6322.
Further, each of the above dense module layers (Dense Block) includes one input unit and at least one dense unit, and each dense unit further includes a convolutional layer, an activation layer, and a normalization layer connected in sequence, avoiding the use of plain convolutional layers or residual convolutional layers, thereby guaranteeing the accuracy of image segmentation.
As shown in FIG. 11, the dense module layer includes 1 input unit and 4 dense units H1, H2, H3, H4, each of which further includes a convolutional layer Conv, an activation layer Relu, and a normalization layer BN.
The feature x0 corresponding to the input image input is fed in by the input unit and simultaneously output to the dense units H1, H2, H3, H4. Moreover, during feature extraction of the input image input, the feature x1 output by dense unit H1 is simultaneously output to dense units H2, H3, H4; similarly, the feature x2 output by dense unit H2 is simultaneously output to dense units H3 and H4, and the feature x3 output by dense unit H3 is output to dense unit H4.
In other words, dense unit H2 combines the features x0 and x1 corresponding to the input image input; dense unit H3 combines the features x0, x1, and x2; and dense unit H4 combines the features x0, x1, x2, and x3.
With this arrangement, the dense module layer can not only reuse shallow features such as x0 and x1, fully guaranteeing the integrity of the input image, but the combination of shallow and deep features, for example, x0, x1, and x2, also helps reduce the complexity of image segmentation and thereby effectively improves the segmentation effect.
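A minimal PyTorch sketch of this dense connectivity, with the growth rate and unit count chosen arbitrarily for illustration (the real layers use the kernel configurations described below):

    import torch
    import torch.nn as nn

    class DenseUnit(nn.Module):
        # One dense unit: convolution, activation, then normalization, matching FIG. 11.
        def __init__(self, cin, growth):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(cin, growth, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.BatchNorm3d(growth),
            )

        def forward(self, x):
            return self.body(x)

    class DenseBlock(nn.Module):
        # Every unit receives the concatenation of the input and all earlier unit outputs,
        # so shallow features (x0, x1, ...) are reused by the deeper units.
        def __init__(self, cin, growth=12, units=4):
            super().__init__()
            self.units = nn.ModuleList(
                DenseUnit(cin + i * growth, growth) for i in range(units)
            )

        def forward(self, x0):
            feats = [x0]
            for unit in self.units:
                feats.append(unit(torch.cat(feats, dim=1)))
            return torch.cat(feats, dim=1)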
Furthermore, the convolutional layers in the above first dense module layers and third dense module layers all include several three-dimensional convolution kernels (not shown in FIG. 10).
The convolutional layers in the above second dense module layers and fourth dense module layers all include several slice convolution kernels (not shown in FIG. 10) and several normal convolution kernels, as shown by 6111a, 6121a, 6322a, and 6312a in FIG. 10. In other words, the convolutional layers in the second and fourth dense module layers convert several three-dimensional convolution kernels (k×k×k) into several slice convolution kernels (k×k×1) and several normal convolution kernels (1×1×k).
With this arrangement, image segmentation based on 2.5 dimensions is realized, which not only avoids the high memory usage and computational complexity of three-dimensional convolution kernels, but, more importantly, accounts for the special nature of the tumor image: since the tumor image is composed of many slices, the in-plane resolution and the normal (through-plane) resolution differ greatly when many slices are combined into a three-dimensional image. Purely three-dimensional image segmentation has large errors, while purely two-dimensional segmentation simply ignores the local/global correlations of the image; only 2.5 dimensions is best suited for tumor image segmentation.
In addition, as shown in FIG. 10, the combination of the three-dimensional characteristics of the first and third dense module layers with the 2.5-dimensional characteristics of the second and fourth dense module layers not only integrates the respective advantages of the two, but also integrates the features of the input image in all dimensions on the basis of the latter, guaranteeing maximum feature collection and fusion and further effectively improving the segmentation effect of image segmentation.
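A sketch of this kernel factorization for k = 3, assuming PyTorch's Conv3d with the last spatial axis taken as the slice-normal direction, matching the k×k×1 / 1×1×k notation above:

    import torch.nn as nn

    # A k x k x k three-dimensional kernel is replaced by a slice kernel (k x k x 1)
    # acting within each slice and a normal kernel (1 x 1 x k) acting across slices.
    def conv_2_5d(cin, cout, k=3):
        return nn.Sequential(
            nn.Conv3d(cin, cout, kernel_size=(k, k, 1), padding=(k // 2, k // 2, 0)),
            nn.Conv3d(cout, cout, kernel_size=(1, 1, k), padding=(0, 0, k // 2)),
        )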
Correspondingly, in an exemplary embodiment, as shown in FIG. 12, the image segmentation process, that is, performing image segmentation of the input image through the segmentation network to obtain the output image, may include the following steps:
Step 510: in the down-sampling stage of the segmentation network, extract key features from the input image.
The key feature extraction process is described with reference to FIG. 10.
Specifically, the input image input is fed into the down-sampling stage 610 of the segmentation network 600, and convolution and down-sampling processing are performed through the several first basic network layers 611 and 612 to obtain intermediate features.
After the intermediate features are obtained, convolution processing is performed on them through the several first dense module layers 613 and 614 to obtain the key features.
The process of obtaining the intermediate features is further described as follows.
In the down-sampling stage 610 of the segmentation network 600, the input image input is fed into the first first-basic-network-layer 611, and the second dense module layer 6111 in this layer performs convolution processing on the input image input.
The pooling layer 6112 in this first first-basic-network-layer 611 performs down-sampling processing on the convolved features, which are output to the second first-basic-network-layer 612.
In feature propagation order, the remaining first basic network layers 612 among the several first basic network layers are traversed; when the traversal is completed, the features after down-sampling by the second first-basic-network-layer 612, that is, the last first-basic-network-layer 612, are taken as the intermediate features.
Step 530: input the key features into the up-sampling stage of the segmentation network and perform multi-scale feature fusion to obtain the feature map corresponding to the input image.
The multi-scale feature fusion process is described with reference to FIG. 10.
Specifically, the key features are input into the up-sampling stage 630 of the segmentation network 600 and deconvolved through the several third dense module layers 634 and 633 to obtain first-scale features 651, which are input to the first several second basic network layers.
In the first several second basic network layers, feature fusion is performed between the fourth dense module layers in these layers and the mutually symmetric second dense module layers in the several first basic network layers. The first several second basic network layers are connected, in the up-sampling stage 630, between the last third dense module layer 633 and the last second basic network layer 631.
In the last second basic network layer 631, the up-sampling layer 6311 in this layer performs up-sampling processing on the features fused and output by the first several second basic network layers to obtain second-scale features 652.
The features convolved and output by the second dense module layers 6111 and 6121 in the several first basic network layers 611 and 612 are obtained and taken as third-scale features 653.
The fourth dense module layer 6312 in the last second basic network layer 631 fuses the first-scale features 651, the second-scale features 652, and the third-scale features 653 and performs deconvolution processing to obtain the feature map output corresponding to the input image input.
That is to say, the feature map output corresponding to the input image input is based not only on the 1× up-sampled features (second-scale features 652), the 2× up-sampled features (the features 653 convolved and output by the second dense module layer 6121), and the 4× up-sampled features (first-scale features 651), but also on the features without up-sampling (the features 653 convolved and output by the second dense module layer 6111), thereby realizing multi-scale feature fusion, so that the segmentation results of the segmentation networks at all levels achieve the best segmentation effect both locally and globally, effectively improving the segmentation effect.
The feature fusion process between the mutually symmetric fourth dense module layers and second dense module layers is further described as follows.
The first-scale features 651 are input to the first second-basic-network-layer 632, and the up-sampling layer 6321 in this layer performs up-sampling processing on the input first-scale features 651.
Based on the fourth dense module 6322 in this first second-basic-network-layer 632, the features 654 convolved and output by the mutually symmetric second dense module 6121 in the several first basic network layers 612 are obtained and fused with the up-sampled features to obtain merged features.
The fourth dense module 6322 in this first second-basic-network-layer 632 performs deconvolution processing on the merged features, which are output to the second second-basic-network-layer 631.
In feature propagation order, the remaining second basic network layers among the first several second basic network layers are traversed; when the traversal is completed, the feature fusion between the mutually symmetric fourth dense module layers and second dense module layers is completed.
It should be noted that since FIG. 10 contains only two second basic network layers, in the up-sampling stage 630 the second second-basic-network-layer is in fact the last second basic network layer. Therefore, there is no need to traverse the remaining second basic network layers among the first several; the feature fusion between the mutually symmetric fourth dense module layer 6322 and second dense module layer 6121 is completed as soon as the deconvolution processing performed by the first second-basic-network-layer 632 is finished.
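For illustration, a hypothetical sketch of bringing the three scales to a common resolution before the final deconvolution; in the network of FIG. 10 this fusion is performed by the fourth dense module layer 6312, so the explicit interpolation here is an assumption:

    import torch
    import torch.nn.functional as F

    def fuse_scales(first_scale, second_scale, third_scale):
        # Inputs are (B, C, D, H, W) feature maps. Bring the 4x-upsampled, 1x-upsampled
        # and non-upsampled features to a common resolution and concatenate them.
        size = third_scale.shape[2:]
        aligned = [
            F.interpolate(f, size=size, mode="trilinear", align_corners=False)
            for f in (first_scale, second_scale)
        ]
        return torch.cat(aligned + [third_scale], dim=1)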
Step 550: perform class prediction on the pixels in the feature map corresponding to the input image to obtain the classes of the pixels in the feature map corresponding to the input image.
In this embodiment, class prediction is implemented based on a classifier configured in the segmentation network; that is, the classifier is used to calculate the probabilities that the pixels in the feature map corresponding to the input image belong to different classes.
The classes may be the remaining-region class, the whole-tumor-region class, the tumor-core-region class, and the enhanced-tumor-core-region class.
It should be noted that each segmentation network is constructed based on binary classification; that is, in the first-level segmentation network, the classes include the remaining-region class and the whole-tumor-region class. Here, the remaining region is the non-tumor region.
Similarly, in the second-level segmentation network, the classes include the remaining-region class and the tumor-core-region class. Here, the remaining region refers to the non-tumor region and the part of the whole tumor region that does not contain the tumor core region.
In the third-level segmentation network, the classes include the remaining-region class and the enhanced-tumor-core-region class. Here, the remaining region refers to the non-tumor region and the part of the whole tumor region that does not contain the enhanced tumor core region.
The image segmentation of the remaining region and the tumor core region by the second-level segmentation network is taken as an example for description.
For a certain pixel in the feature map corresponding to the input image, the probabilities that the pixel belongs to different classes are calculated respectively. Suppose the probability that the pixel belongs to the remaining-region class is P1 and the probability that it belongs to the tumor-core-region class is P2. If P1 is larger, the pixel belongs to the remaining-region class; conversely, if P2 is larger, the pixel belongs to the tumor-core-region class.
Step 570: mark the pixels of the designated class in the feature map corresponding to the input image to obtain the output image.
Marking is performed according to the class to which a pixel belongs; it can be done with colors or with symbols such as asterisks, which is not limited here.
In one embodiment, pixels of different classes are marked with different colors, as shown in sub-figure (c) of FIG. 4.
It is worth mentioning that the designated class differs among the segmentation networks at different levels. For example, in the first-level segmentation network, the designated class is the whole-tumor-region class; in the second-level segmentation network, it is the tumor-core-region class; and in the third-level segmentation network, it is the enhanced-tumor-core-region class.
For the segmentation networks at all levels, once class prediction of all pixels in the feature maps corresponding to all input images is completed, the segmentation of the whole tumor region, the tumor core region, and the enhanced tumor region in the tumor image is completed; that is, the finer positions of the whole tumor region, the tumor core region, and the enhanced tumor region are located in the tumor image.
Referring to FIG. 13, in an exemplary embodiment, the method further includes: constructing the cascade segmentation network based on a machine learning model, the machine learning model being a convolutional neural network model.
Constructing the cascade segmentation network based on a machine learning model may include the following steps:
Step 710: obtain training samples carrying labels.
The training samples are tumor images in which the whole tumor region, the tumor core region, and the enhanced tumor core region are annotated with different kinds of labels.
For a brain tumor, annotation means adding non-zero marks only to the whole tumor region, the tumor core region, or the enhanced tumor core region in the tumor image, while zero-marking the pixels of the remaining regions in the tumor image.
For example, for the same tumor image, if non-zero marks are added only to the whole tumor region and the pixels of the remaining regions are zero-marked, the annotation of the tumor image is completed, yielding a training sample carrying a whole-tumor-region label.
If non-zero marks are added only to the tumor core region and the pixels of the remaining regions are zero-marked, a training sample carrying a tumor-core-region label is obtained.
Similarly, if non-zero marks are added only to the enhanced tumor core region and the pixels of the remaining regions are zero-marked, a training sample carrying an enhanced-tumor-core-region label is obtained.
Optionally, before annotation, the pixels in the tumor image are normalized to improve the accuracy of image segmentation.
Step 730: according to the kinds of labels carried by the training samples, establish multiple training sample sets, each corresponding to one kind.
It should be understood that since tumors have no fixed shape, size, or orientation, model training will be based on a large number of training samples; therefore, in this embodiment, sample augmentation is performed for each training sample.
Sample augmentation includes flipping, rotation, scaling, contrast enhancement, and so on. Flipping refers to flipping the tumor image front-to-back, left-to-right, and so on; rotation refers to rotating the tumor image by a designated angle; scaling refers to enlarging or shrinking the tumor image; and contrast enhancement refers to changing the contrast of the pixels in the tumor image.
Taking scaling as an example: enlargement means interpolating a 96×96×96 tumor image to a 120×120×120 image and then cropping the central part of the 120×120×120 image back to 96×96×96, while shrinking means reducing the size of a 120×120×120 tumor image to 96×96×96.
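A minimal numpy/scipy sketch of such sample augmentation, covering flipping, the 96→120→96 scaling described above, and a simple contrast change (the probabilities and ranges are arbitrary placeholders):

    import numpy as np
    from scipy.ndimage import zoom

    def augment(volume, rng):
        # volume: (C, 96, 96, 96); the same transform is applied to the label volume in practice.
        if rng.random() < 0.5:
            volume = volume[:, :, ::-1, :]                       # left-right flip
        if rng.random() < 0.5:
            big = zoom(volume, (1, 1.25, 1.25, 1.25), order=1)   # interpolate 96 -> 120 per axis
            c = [(s - 96) // 2 for s in big.shape[1:]]
            volume = big[:, c[0]:c[0] + 96, c[1]:c[1] + 96, c[2]:c[2] + 96]  # crop centre back
        if rng.random() < 0.5:
            volume = volume * rng.uniform(0.9, 1.1)              # simple contrast change
        return np.ascontiguousarray(volume)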
A training sample set is established from training samples carrying the same kind of label; thus, training samples with multiple kinds of labels can establish multiple corresponding training sample sets. For example, a training sample set constructed from training samples carrying whole-tumor-region labels will, after model training of the convolutional neural network model, be used for image segmentation of the whole tumor region.
With this arrangement, the number of training samples is effectively enlarged, which helps improve the training value of tumors with different orientations, shapes, and sizes, thereby fully guaranteeing the accuracy of image segmentation.
Step 750: perform model training on multiple convolutional neural network models with a designated model structure, using the multiple training sample sets respectively.
Model training essentially performs iterative optimization, through a training sample set, of the parameters of a convolutional neural network model with a designated model structure, so that a designated algorithm function constructed from these parameters satisfies a convergence condition.
In this embodiment, the designated model structure is as shown in FIG. 10. The designated algorithm function includes, but is not limited to, an expectation-maximization function, a loss function, and so on.
For example, the parameters of the convolutional neural network model are randomly initialized; according to the current training sample in the training sample set, a probability is computed by forward propagation based on the randomly initialized parameters; a loss function is constructed from the Dice distance between the computed probability and the correct annotation, and the loss value of the loss function is further calculated.
If the loss value of the loss function has not reached the minimum, the parameters of the convolutional neural network model are updated by back propagation; according to the next training sample in the training sample set, the probability is computed based on the updated parameters, the loss function is reconstructed from the Dice distance between the computed probability and the correct annotation, and the loss value of the reconstructed loss function is calculated again.
This iterative loop continues until the loss value of the constructed loss function reaches the minimum, at which point the loss function is considered converged; at this time, the convolutional neural network model has also converged and, if it meets the preset accuracy requirement, iteration stops.
Otherwise, the parameters of the convolutional neural network model are iteratively updated, and the loss value of the loss function constructed accordingly is calculated from the remaining training samples in the training sample set and the updated parameters, until the loss function converges.
It is worth mentioning that if the number of iterations reaches an iteration threshold before the loss function converges, iteration will also stop, thereby guaranteeing the efficiency of model training.
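For illustration, a minimal PyTorch sketch of a Dice-based loss and one training iteration; the sigmoid output head and the optimizer are assumptions consistent with the binary segmentation networks described above:

    import torch

    def dice_loss(probs, target, eps=1e-6):
        # 1 - Dice overlap between predicted foreground probabilities and the 0/1 annotation;
        # minimizing it optimizes the Dice-distance criterion described above.
        inter = (probs * target).sum()
        return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

    # one training iteration (net and optimizer assumed already constructed):
    # loss = dice_loss(torch.sigmoid(net(sample)), annotation)
    # loss.backward()                      # back propagation updates the parameters
    # optimizer.step(); optimizer.zero_grad()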
When the convolutional neural network model converges and meets the preset accuracy requirement, it indicates that the convolutional neural network model has completed model training, and the cascade segmentation network can then be further constructed.
Step 770: cascade the multiple convolutional neural network models that have completed model training to obtain the cascade segmentation network.
For multiple different training sample sets, multiple trained convolutional neural network models can be obtained, each corresponding to one training sample set. For example, if the training samples in a training sample set are tumor images annotated with labels for the whole tumor region, the convolutional neural network model trained on that set will perform image segmentation of the whole tumor region.
Taking each trained convolutional neural network model as one level of segmentation network and cascading the multiple levels of segmentation networks, the cascade segmentation network is constructed. For example, referring back to FIG. 6, the cascade segmentation network 400 includes three levels of segmentation networks 401, 402, and 403.
Through the above process, based on the segmentation networks at all levels in the constructed cascade segmentation network, the segmentation end has the capability to perform pixel class prediction on the feature map corresponding to the input image.
Then, when the input image is input to the cascade segmentation network, class prediction can be performed on the pixels in the feature map corresponding to the input image, thereby obtaining the classes of the pixels in the feature map and realizing image segmentation of the input image.
In an exemplary embodiment, after step 350, the method described above may further include the following step:
using a morphological algorithm to correct the segmented image.
The morphological algorithm includes, but is not limited to, erosion, dilation, hole filling, dense CRF (conditional random field), and so on; this embodiment does not specifically limit it.
In this way, correction of the segmented image is realized, so that the segmentation edges among the whole tumor region, the tumor core region, and the enhanced tumor core region in the segmented image are smoothed and/or the noise in the segmented image is eliminated, further effectively improving the segmentation effect of image segmentation.
An image segmentation method is described below with reference to a specific embodiment.
In this specific embodiment, for a brain tumor, the segmentation end divides the tumor image segmentation task into a coarse segmentation subtask and a fine segmentation subtask. The tumor image is generated by MRI equipment scanning at the acquisition end and is essentially a four-modality MRI image.
Specifically, as shown in FIGS. 14 to 15, by executing step 801, the segmentation end acquires the tumor image 811 generated by scanning at the acquisition end.
As shown in FIG. 14, the coarse segmentation subtask performs tumor localization based on the tumor image 811 through the 3D U-net network 820 to obtain the candidate image 812 in which the whole tumor region is contained in a rectangular frame; this completes the coarse segmentation subtask, that is, step 802 is performed: based on the tumor image 811, coarse segmentation of the whole tumor region.
In the fine segmentation subtask, the candidate image 812 serves as the basis. It can be seen that the candidate image 812 is greatly reduced in size compared with the tumor image 811.
The candidate image 812 is input to the first-level segmentation network 831 in the cascade segmentation network 830 for image segmentation, obtaining a first-level intermediate segmented image 813 in which the whole tumor region is marked. Compared with the candidate image 812, the whole tumor region contained in the first-level intermediate segmented image 813 is no longer roughly contained in a rectangular frame but is specifically marked, realizing the first fine segmentation of the tumor image; that is, step 803 is performed: based on the candidate image 812, image segmentation of the whole tumor region.
The first-level intermediate segmented image 813 is used as the input of the second-level segmentation network 832 for image segmentation, obtaining a second-level intermediate segmented image 814 that reflects the true inclusion relationship between the whole tumor region and the tumor core region, realizing the second fine segmentation of the tumor image; that is, step 804 is performed: based on the first-level intermediate segmented image 813, image segmentation of the tumor core region.
Finally, the second-level intermediate segmented image 814 is used as the input of the third-level segmentation network 833 for image segmentation, obtaining the segmented image 815, which reflects the true inclusion relationship among the whole tumor region, the tumor core region, and the enhanced tumor core region, realizing the third fine segmentation of the tumor image; that is, step 805 is performed: based on the second-level intermediate segmented image 814, image segmentation of the enhanced tumor core region.
Thus, addressing the different characteristics of the three regions of a brain tumor, the fine segmentation subtask is completed through step-wise image segmentation, that is, image segmentation of different regions based on different input images.
Then, by executing step 806, the diagnosis end can receive the segmented image 815 obtained by the segmentation end, allowing the doctor to promptly see the three regions of different severity in the brain tumor and assisting the doctor in diagnosing the tumor more quickly and accurately, for example, analyzing whether the patient's tumor is benign or malignant, its degree of malignancy, and so on.
The network structure of the 3D U-net network 810 is shown in FIG. 9, and the network structure of each level of segmentation network in the cascade segmentation network 830 is shown in FIG. 10.
In the up-sampling and down-sampling stages of the segmentation networks at all levels, the structure of the dense module layer is shown in FIG. 11, and the feature extraction and feature fusion processes are realized through the combination of the three-dimensional convolution kernels, slice convolution kernels, and normal convolution kernels in the convolutional layers.
Table 1: Settings of the three-dimensional, slice, and normal convolution kernels in the down-sampling stage
Taking the down-sampling stage as an example, as shown in Table 1, [3×3×1 conv] denotes a slice convolution kernel, [1×1×3 conv] denotes a normal convolution kernel, and [3×3×3 conv] denotes a three-dimensional convolution kernel. The number of each type of convolution kernel configured in different dense module layers can be flexibly adjusted according to the actual needs of the application scenario, which is not specifically limited here. For example, in this application scenario, the first dense module layer in the down-sampling stage is configured with 12 three-dimensional convolution kernels and 3 three-dimensional convolution kernels, respectively.
In addition, for the four-modality MRI image, four channels are in essence configured in each convolutional layer, so that the four-modality MRI image is input through different channels into the cascade segmentation network for image segmentation, thereby fully guaranteeing the integrity of the tumor image and helping improve the segmentation effect.
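A trivial sketch of this channel arrangement, with the variable names as placeholders for the co-registered modality volumes:

    import numpy as np

    def stack_modalities(flair, t1, t1c, t2):
        # Co-registered (D, H, W) volumes become the four input channels.
        return np.stack([flair, t1, t1c, t2], axis=0)   # shape (4, D, H, W)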
Through the above process, end-to-end automatic image segmentation is realized; that is, as long as the different-modality MRI images corresponding to a patient are input, three regions of different severity can be obtained, which not only effectively assists the doctor in further analyzing a treatment plan for the patient, but also makes it possible to determine the surgical area for the patient, so as to treat the lesion more precisely.
The following are apparatus embodiments of this application, which can be used to perform the image segmentation method involved in this application. For details not disclosed in the apparatus embodiments of this application, please refer to the method embodiments of the image segmentation method involved in this application.
Referring to FIG. 16, in an exemplary embodiment, an image segmentation apparatus 900 includes, but is not limited to: an image acquisition module 910, an image coarse segmentation module 930, an image input module 940, and an image fine segmentation module 950.
The image acquisition module 910 is configured to acquire a tumor image.
The image coarse segmentation module 930 is configured to perform tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image.
The image input module 940 is configured to input the candidate image into a cascade segmentation network constructed based on a machine learning model.
The image fine segmentation module 950 is configured to, starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network, step level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image.
It should be noted that, when the image segmentation apparatus provided in the above embodiment performs tumor image segmentation, the division into the above functional modules is used only as an example. In practical applications, the above functions can be assigned to different functional modules as needed; that is, the internal structure of the image segmentation apparatus is divided into different functional modules to complete all or part of the functions described above.
In addition, the image segmentation apparatus provided in the above embodiment and the embodiments of the image segmentation method belong to the same concept; the specific manner in which each module performs operations has been described in detail in the method embodiments and will not be repeated here.
Referring to FIG. 17, in an exemplary embodiment, a computer device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
The memory 1002 stores computer-readable instructions, and the processor 1001 reads, through the communication bus 1003, the computer-readable instructions stored in the memory 1002.
When the computer-readable instructions are executed by the processor 1001, the image segmentation method in the above embodiments is implemented.
In an exemplary embodiment, a computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the image segmentation method in the above embodiments.
The above content is merely preferred exemplary embodiments of this application and is not intended to limit the implementations of this application. Those of ordinary skill in the art can conveniently make corresponding variations or modifications according to the main concept and spirit of this application; therefore, the protection scope of this application shall be subject to the protection scope claimed in the claims.

Claims (27)

  1. An image segmentation method, performed by a computer device, comprising:
    acquiring a tumor image;
    performing tumor localization on the acquired tumor image to obtain a candidate image indicating the position of a whole tumor region in the tumor image;
    inputting the candidate image into a cascade segmentation network constructed based on a machine learning model;
    starting with image segmentation of the whole tumor region in the candidate image by a first-level segmentation network in the cascade segmentation network, stepping level by level to a last-level segmentation network to perform image segmentation of an enhanced tumor core region, to obtain a segmented image.
  2. The method according to claim 1, wherein the cascade segmentation network comprises three levels of segmentation networks;
    the starting with image segmentation of the whole tumor region in the candidate image by the first-level segmentation network in the cascade segmentation network and stepping level by level to the last-level segmentation network to perform image segmentation of the enhanced tumor core region, to obtain a segmented image, comprises:
    performing image segmentation of the candidate image through the first-level segmentation network to obtain a first-level intermediate segmented image in which the whole tumor region is marked;
    performing image segmentation of the first-level intermediate segmented image through a second-level segmentation network to obtain a second-level intermediate segmented image in which the whole tumor region and the tumor core region are marked;
    performing image segmentation of the second-level intermediate segmented image through a third-level segmentation network to obtain the segmented image in which the whole tumor region, the tumor core region, and the enhanced tumor core region are marked.
  3. The method according to claim 2, wherein, where an input image is the candidate image, the first-level intermediate segmented image, or the second-level intermediate segmented image; an output image is the first-level intermediate segmented image, the second-level intermediate segmented image, or the segmented image; and a segmentation network is any level of segmentation network in the cascade segmentation network, the segmentation network comprising an up-sampling stage and a down-sampling stage;
    performing image segmentation of the input image through the segmentation network to obtain the output image comprises:
    in the down-sampling stage of the segmentation network, extracting key features from the input image;
    inputting the key features into the up-sampling stage of the segmentation network and performing multi-scale feature fusion to obtain a feature map corresponding to the input image;
    performing class prediction on pixels in the feature map corresponding to the input image to obtain classes of the pixels in the feature map corresponding to the input image;
    marking pixels of a designated class in the feature map corresponding to the input image to obtain the output image.
  4. The method according to claim 3, wherein the down-sampling stage comprises several first basic network layers and several first dense module layers connected in sequence;
    the extracting, in the down-sampling stage of the segmentation network, key features from the input image comprises:
    inputting the input image into the down-sampling stage of the segmentation network and performing convolution and down-sampling processing through the several first basic network layers to obtain intermediate features;
    performing convolution processing on the intermediate features through the several first dense module layers to obtain the key features.
  5. The method according to claim 4, wherein the first basic network layer comprises a second dense module layer and a pooling layer connected in sequence;
    the inputting the input image into the down-sampling stage of the segmentation network and performing convolution and down-sampling processing through the several first basic network layers to obtain intermediate features comprises:
    in the down-sampling stage of the segmentation network, inputting the input image into the first of the first basic network layers and performing convolution processing on the input image through the second dense module layer in that first basic network layer;
    performing down-sampling processing on the convolved features through the pooling layer in that first basic network layer, for output to the second of the first basic network layers;
    traversing, in feature propagation order, the remaining first basic network layers among the several first basic network layers, and, when the traversal is completed, taking the features down-sampled by the last first basic network layer as the intermediate features.
  6. The method according to claim 5, wherein the up-sampling stage and the down-sampling stage are mutually symmetric, the up-sampling stage comprises several third dense module layers and several second basic network layers connected in sequence, and the second basic network layer comprises an up-sampling layer and a fourth dense module layer connected in sequence;
    the inputting the key features into the up-sampling stage of the segmentation network and performing multi-scale feature fusion to obtain the feature map corresponding to the input image comprises:
    inputting the key features into the up-sampling stage of the segmentation network and performing deconvolution processing through the several third dense module layers to obtain first-scale features, for input to the first several second basic network layers;
    in the first several second basic network layers, performing feature fusion between the fourth dense module layers in the first several second basic network layers and the mutually symmetric second dense module layers in the several first basic network layers;
    in the last second basic network layer, performing up-sampling processing, through the up-sampling layer in the last second basic network layer, on the features fused and output by the first several second basic network layers, to obtain second-scale features;
    obtaining the features convolved and output by the second dense module layers in the several first basic network layers, and taking the obtained features as third-scale features;
    fusing the first-scale features, the second-scale features, and the third-scale features through the fourth dense module layer in the last second basic network layer, and performing deconvolution processing, to obtain the feature map corresponding to the input image.
  7. The method according to claim 6, wherein the performing, in the first several second basic network layers, feature fusion between the fourth dense module layers in the first several second basic network layers and the mutually symmetric second dense module layers in the several first basic network layers comprises:
    inputting the first-scale features into the first of the second basic network layers and performing up-sampling processing on the input first-scale features through the up-sampling layer in that second basic network layer;
    based on the fourth dense module in that second basic network layer, obtaining the features convolved and output by the mutually symmetric second dense module in the several first basic network layers and merging them with the up-sampled features to obtain merged features;
    performing deconvolution processing on the merged features through the fourth dense module in that second basic network layer, for output to the second of the second basic network layers;
    traversing, in feature propagation order, the remaining second basic network layers among the first several second basic network layers, and, when the traversal is completed, completing the feature fusion between the mutually symmetric fourth dense module layers and second dense module layers.
  8. The method according to claim 6, wherein the convolutional layers in the first dense module layers and the third dense module layers each comprise several three-dimensional convolution kernels;
    the convolutional layers in the second dense module layers and the fourth dense module layers each comprise several slice convolution kernels and several normal convolution kernels.
  9. The method according to claim 1, further comprising:
    constructing the cascade segmentation network based on a machine learning model, the machine learning model being a convolutional neural network model.
  10. The method according to claim 9, wherein the constructing the cascade segmentation network based on a machine learning model comprises:
    obtaining training samples carrying labels, the training samples being tumor images in which the whole tumor region, the tumor core region, and the enhanced tumor core region are annotated with different kinds of labels;
    establishing, according to the kinds of labels carried by the training samples, multiple training sample sets, each corresponding to one kind;
    performing model training on multiple convolutional neural network models with a designated model structure, using the multiple training sample sets respectively;
    cascading the multiple convolutional neural network models that have completed model training to obtain the cascade segmentation network.
  11. The method according to claim 1, wherein the performing tumor localization on the acquired tumor image to obtain a candidate image indicating the position of the whole tumor region in the tumor image comprises:
    extracting, based on a three-dimensional U-shaped fully convolutional neural network, the corresponding feature map from the acquired tumor image;
    performing class prediction on the pixels in the feature map corresponding to the tumor image to obtain the classes of the pixels in the feature map corresponding to the tumor image;
    obtaining, according to the pixels in the feature map belonging to the whole-tumor-region class, the candidate image in which the whole tumor region is contained in a designated region.
  12. The method according to claim 11, wherein the three-dimensional U-shaped fully convolutional neural network comprises an encoder network and a decoder network;
    the extracting, based on the three-dimensional U-shaped fully convolutional neural network, the corresponding feature map from the acquired tumor image comprises:
    extracting the context features of the tumor image using the encoder network;
    extracting the localization features of the whole tumor region using the decoder network, and performing feature fusion of the context features and the localization features, to obtain the feature map corresponding to the tumor image.
  13. The method according to claim 12, wherein the encoder network comprises several down-sampling layers and the decoder network comprises several up-sampling layers;
    the extracting the localization features of the whole tumor region using the decoder network and performing feature fusion of the context features and the localization features to obtain the feature map corresponding to the tumor image comprises:
    in the decoder network, performing up-sampling processing on the localization features corresponding to the deepest up-sampling layer to obtain features to be fused, the localization features corresponding to the deepest up-sampling layer being the context features corresponding to the deepest down-sampling layer in the encoder network;
    inputting the features to be fused into the next-deepest up-sampling layer, merging them with the context features corresponding to the next-deepest down-sampling layer, and performing deconvolution processing, to obtain the localization features corresponding to the next-deepest up-sampling layer;
    traversing the remaining up-sampling layers in order from deep to shallow to obtain the localization features corresponding to the traversed up-sampling layers;
    when the traversal is completed, obtaining the feature map corresponding to the tumor image from the localization features corresponding to the shallowest up-sampling layer.
  14. An image segmentation apparatus, comprising:
    an image acquisition module, configured to acquire a tumor image;
    an image coarse segmentation module, configured to perform tumor localization on the acquired tumor image to obtain a candidate image indicating the position of a whole tumor region in the tumor image;
    an image input module, configured to input the candidate image into a cascade segmentation network constructed based on a machine learning model;
    an image fine segmentation module, configured to, starting with image segmentation of the whole tumor region in the candidate image by a first-level segmentation network in the cascade segmentation network, step level by level to a last-level segmentation network to perform image segmentation of an enhanced tumor core region, to obtain a segmented image.
  15. The apparatus according to claim 14, wherein the cascade segmentation network comprises three levels of segmentation networks; the image fine segmentation module is configured to: perform image segmentation of the candidate image through the first-level segmentation network to obtain a first-level intermediate segmented image in which the whole tumor region is marked; perform image segmentation of the first-level intermediate segmented image through a second-level segmentation network to obtain a second-level intermediate segmented image in which the whole tumor region and the tumor core region are marked; and perform image segmentation of the second-level intermediate segmented image through a third-level segmentation network to obtain the segmented image in which the whole tumor region, the tumor core region, and the enhanced tumor core region are marked.
  16. The apparatus according to claim 15, wherein, where an input image is the candidate image, the first-level intermediate segmented image, or the second-level intermediate segmented image; an output image is the first-level intermediate segmented image, the second-level intermediate segmented image, or the segmented image; and a segmentation network is any level of segmentation network in the cascade segmentation network, the segmentation network comprising an up-sampling stage and a down-sampling stage;
    the image fine segmentation module is configured to: in the down-sampling stage of the segmentation network, extract key features from the input image; input the key features into the up-sampling stage of the segmentation network and perform multi-scale feature fusion to obtain a feature map corresponding to the input image; perform class prediction on the pixels in the feature map corresponding to the input image to obtain the classes of the pixels in the feature map corresponding to the input image; and mark the pixels of a designated class in the feature map corresponding to the input image to obtain the output image.
  17. The apparatus according to claim 16, wherein the down-sampling stage comprises several first basic network layers and several first dense module layers connected in sequence; the image fine segmentation module is configured to: input the input image into the down-sampling stage of the segmentation network and perform convolution and down-sampling processing through the several first basic network layers to obtain intermediate features; and perform convolution processing on the intermediate features through the several first dense module layers to obtain the key features.
  18. The apparatus according to claim 14, wherein the image input module is further configured to construct the cascade segmentation network based on a machine learning model, the machine learning model being a convolutional neural network model.
  19. The apparatus according to claim 18, wherein the image input module is configured to: obtain training samples carrying labels, the training samples being tumor images in which the whole tumor region, the tumor core region, and the enhanced tumor core region are annotated with different kinds of labels; establish, according to the kinds of labels carried by the training samples, multiple training sample sets, each corresponding to one kind; perform model training on multiple convolutional neural network models with a designated model structure, using the multiple training sample sets respectively; and cascade the multiple convolutional neural network models that have completed model training to obtain the cascade segmentation network.
  20. The apparatus according to claim 14, wherein the image coarse segmentation module is configured to: extract, based on a three-dimensional U-shaped fully convolutional neural network, the corresponding feature map from the acquired tumor image; perform class prediction on the pixels in the feature map corresponding to the tumor image to obtain the classes of the pixels in the feature map corresponding to the tumor image; and obtain, according to the pixels in the feature map belonging to the whole-tumor-region class, the candidate image in which the whole tumor region is contained in a designated region.
  21. The apparatus according to claim 20, wherein the three-dimensional U-shaped fully convolutional neural network comprises an encoder network and a decoder network; the image coarse segmentation module is configured to: extract the context features of the tumor image using the encoder network; and extract the localization features of the whole tumor region using the decoder network, and perform feature fusion of the context features and the localization features, to obtain the feature map corresponding to the tumor image.
  22. A diagnosis system, comprising an acquisition end, a segmentation end, and a diagnosis end, wherein
    the acquisition end is configured to acquire a tumor image and send it to the segmentation end;
    the segmentation end is configured to perform tumor localization on the tumor image sent by the acquisition end to obtain a candidate image indicating the position of a whole tumor region in the tumor image, input the candidate image into a cascade segmentation network constructed based on a machine learning model, and, starting with image segmentation of the whole tumor region in the candidate image by a first-level segmentation network in the cascade segmentation network, step level by level to a last-level segmentation network to perform image segmentation of an enhanced tumor core region, to obtain a segmented image;
    the diagnosis end is configured to receive the segmented image sent by the segmentation end and display it, so as to assist diagnosticians in tumor diagnosis through the segmented image.
  23. The system according to claim 22, wherein the cascade segmentation network comprises three levels of segmentation networks; the segmentation end is configured to: perform image segmentation of the candidate image through the first-level segmentation network to obtain a first-level intermediate segmented image in which the whole tumor region is marked; perform image segmentation of the first-level intermediate segmented image through a second-level segmentation network to obtain a second-level intermediate segmented image in which the whole tumor region and the tumor core region are marked; and perform image segmentation of the second-level intermediate segmented image through a third-level segmentation network to obtain the segmented image in which the whole tumor region, the tumor core region, and the enhanced tumor core region are marked.
  24. The system according to claim 22, wherein the segmentation end is further configured to construct the cascade segmentation network based on a machine learning model, the machine learning model being a convolutional neural network model.
  25. The system according to claim 22, wherein the segmentation end is configured to: extract, based on a three-dimensional U-shaped fully convolutional neural network, the corresponding feature map from the acquired tumor image; perform class prediction on the pixels in the feature map corresponding to the tumor image to obtain the classes of the pixels in the feature map corresponding to the tumor image; and obtain, according to the pixels in the feature map belonging to the whole-tumor-region class, the candidate image in which the whole tumor region is contained in a designated region.
  26. A storage medium having a computer program stored thereon which, when executed by a processor, implements the image segmentation method according to any one of claims 1 to 13.
  27. A computer device, comprising:
    a processor; and
    a memory having computer-readable instructions stored thereon which, when executed by the processor, implement the image segmentation method according to any one of claims 1 to 13.
PCT/CN2019/121246 2018-11-30 2019-11-27 Image segmentation method and apparatus, diagnosis system, storage medium, and computer device WO2020108525A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19889004.8A EP3828825A4 (en) 2018-11-30 2019-11-27 IMAGE SEGMENTATION PROCESS AND APPARATUS, DIAGNOSIS SYSTEM, STORAGE MEDIA, AND COMPUTER DEVICE
US17/204,894 US11954863B2 (en) 2018-11-30 2021-03-17 Image segmentation method and apparatus, diagnosis system, storage medium, and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811462063.3A CN109598728B (zh) 2018-11-30 2018-11-30 图像分割方法、装置、诊断系统及存储介质
CN201811462063.3 2018-11-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/204,894 Continuation US11954863B2 (en) 2018-11-30 2021-03-17 Image segmentation method and apparatus, diagnosis system, storage medium, and computer device

Publications (1)

Publication Number Publication Date
WO2020108525A1 true WO2020108525A1 (zh) 2020-06-04

Family

ID=65959310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121246 WO2020108525A1 (zh) 2018-11-30 2019-11-27 图像分割方法、装置、诊断系统、存储介质及计算机设备

Country Status (4)

Country Link
US (1) US11954863B2 (zh)
EP (1) EP3828825A4 (zh)
CN (1) CN109598728B (zh)
WO (1) WO2020108525A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037171A (zh) * 2020-07-30 2020-12-04 西安电子科技大学 基于多模态特征融合的多任务mri脑瘤图像分割方法
CN112258526A (zh) * 2020-10-30 2021-01-22 南京信息工程大学 一种基于对偶注意力机制的ct肾脏区域级联分割方法
CN112767417A (zh) * 2021-01-20 2021-05-07 合肥工业大学 一种基于级联U-Net网络的多模态图像分割方法
CN112785605A (zh) * 2021-01-26 2021-05-11 西安电子科技大学 基于语义迁移的多时相ct图像肝肿瘤分割方法
CN114092815A (zh) * 2021-11-29 2022-02-25 自然资源部国土卫星遥感应用中心 一种大范围光伏发电设施遥感智能提取方法
CN114170244A (zh) * 2021-11-24 2022-03-11 北京航空航天大学 一种基于级联神经网络结构的脑胶质瘤分割方法
CN114372944A (zh) * 2021-12-30 2022-04-19 深圳大学 一种多模态和多尺度融合的候选区域生成方法及相关装置
EP3958184A3 (en) * 2021-01-20 2022-05-11 Beijing Baidu Netcom Science And Technology Co., Ltd. Image processing method and apparatus, device, and storage medium
CN114612479A (zh) * 2022-02-09 2022-06-10 苏州大学 一种基于全局与局部特征重建网络的医学图像分割方法
CN114372944B (zh) * 2021-12-30 2024-05-17 深圳大学 一种多模态和多尺度融合的候选区域生成方法及相关装置

Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160346A (zh) * 2018-11-07 2020-05-15 电子科技大学 基于三维卷积的缺血性脑卒中分割系统
CN109598728B (zh) 2018-11-30 2019-12-27 腾讯科技(深圳)有限公司 图像分割方法、装置、诊断系统及存储介质
CN110084297B (zh) * 2019-04-23 2023-09-15 东华大学 一种面向小样本的影像语义对齐系统
CN110211134B (zh) * 2019-05-30 2021-11-05 上海商汤智能科技有限公司 一种图像分割方法及装置、电子设备和存储介质
CN110097921B (zh) * 2019-05-30 2023-01-06 复旦大学 基于影像组学的胶质瘤内基因异质性可视化定量方法和系统
CN110211140B (zh) * 2019-06-14 2023-04-07 重庆大学 基于3D残差U-Net和加权损失函数的腹部血管分割方法
CN110232361B (zh) * 2019-06-18 2021-04-02 中国科学院合肥物质科学研究院 基于三维残差稠密网络的人体行为意图识别方法与系统
CN110276755B (zh) * 2019-06-25 2021-07-06 广东工业大学 一种肿瘤位置定位系统及相关装置
CN110363776B (zh) * 2019-06-28 2021-10-22 联想(北京)有限公司 图像处理方法及电子设备
CN110390680A (zh) * 2019-07-04 2019-10-29 上海联影智能医疗科技有限公司 图像分割方法、计算机设备和存储介质
CN110310280B (zh) * 2019-07-10 2021-05-11 广东工业大学 肝胆管及结石的图像识别方法、系统、设备及存储介质
CN110349170B (zh) * 2019-07-13 2022-07-08 长春工业大学 一种全连接crf级联fcn和k均值脑肿瘤分割算法
CN110378976B (zh) * 2019-07-18 2020-11-13 北京市商汤科技开发有限公司 图像处理方法及装置、电子设备和存储介质
CN110415234A (zh) * 2019-07-29 2019-11-05 北京航空航天大学 基于多参数磁共振成像的脑部肿瘤分割方法
US11816870B2 (en) * 2019-08-01 2023-11-14 Boe Technology Group Co., Ltd. Image processing method and device, neural network and training method thereof, storage medium
CN110458833B (zh) * 2019-08-15 2023-07-11 腾讯科技(深圳)有限公司 基于人工智能的医学图像处理方法、医学设备和存储介质
CN110717913B (zh) * 2019-09-06 2022-04-22 浪潮电子信息产业股份有限公司 一种图像分割方法及装置
CN110874842B (zh) * 2019-10-10 2022-04-29 浙江大学 一种基于级联残差全卷积网络的胸腔多器官分割方法
CN110866908B (zh) * 2019-11-12 2021-03-26 腾讯科技(深圳)有限公司 图像处理方法、装置、服务器及存储介质
CN111028236A (zh) * 2019-11-18 2020-04-17 浙江工业大学 一种基于多尺度卷积U-Net的癌细胞图像分割方法
CN111047602A (zh) * 2019-11-26 2020-04-21 中国科学院深圳先进技术研究院 图像分割方法、装置及终端设备
CN111047606B (zh) * 2019-12-05 2022-10-04 北京航空航天大学 一种基于级联思想的病理全切片图像分割算法
CN111145186B (zh) * 2019-12-17 2023-08-22 中国科学院深圳先进技术研究院 神经网络结构、图像分割方法、装置及存储介质
CN111192320B (zh) * 2019-12-30 2023-07-25 上海联影医疗科技股份有限公司 一种位置信息确定方法、装置、设备和存储介质
US11645505B2 (en) * 2020-01-17 2023-05-09 Servicenow Canada Inc. Method and system for generating a vector representation of an image
CN111275721B (zh) * 2020-02-14 2021-06-08 推想医疗科技股份有限公司 一种图像分割方法、装置、电子设备及存储介质
CN111507215B (zh) * 2020-04-08 2022-01-28 常熟理工学院 基于时空卷积循环神经网络与空洞卷积的视频目标分割方法
CN111626298B (zh) * 2020-04-17 2023-08-18 中国科学院声学研究所 一种实时图像语义分割装置及分割方法
CN111696084A (zh) * 2020-05-20 2020-09-22 平安科技(深圳)有限公司 细胞图像分割方法、装置、电子设备及可读存储介质
CN111368849B (zh) * 2020-05-28 2020-08-28 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质
CN111369576B (zh) * 2020-05-28 2020-09-18 腾讯科技(深圳)有限公司 图像分割模型的训练方法、图像分割方法、装置及设备
CN111640100B (zh) * 2020-05-29 2023-12-12 京东方科技集团股份有限公司 肿瘤图像的处理方法和装置、电子设备、存储介质
CN112085743A (zh) * 2020-09-04 2020-12-15 厦门大学 一种肾肿瘤的图像分割方法
CN112070752A (zh) * 2020-09-10 2020-12-11 杭州晟视科技有限公司 一种医学图像的心耳分割方法、装置及存储介质
CN112150470B (zh) * 2020-09-22 2023-10-03 平安科技(深圳)有限公司 图像分割方法、装置、介质及电子设备
CN111968137A (zh) * 2020-10-22 2020-11-20 平安科技(深圳)有限公司 头部ct图像分割方法、装置、电子设备及存储介质
CN112017189B (zh) * 2020-10-26 2021-02-02 腾讯科技(深圳)有限公司 图像分割方法、装置、计算机设备和存储介质
CN112396620A (zh) * 2020-11-17 2021-02-23 齐鲁工业大学 一种基于多阈值的图像语义分割方法及系统
KR20220080249A (ko) 2020-12-07 2022-06-14 삼성전자주식회사 영상 처리 방법 및 장치
KR102321427B1 (ko) * 2021-01-20 2021-11-04 메디컬아이피 주식회사 의료영상을 이용한 인체성분 분석 방법 및 그 장치
CN112862830B (zh) * 2021-01-28 2023-12-22 陕西师范大学 一种多模态图像分割方法、系统、终端及可读存储介质
CN112767407B (zh) * 2021-02-02 2023-07-07 南京信息工程大学 一种基于级联门控3DUnet模型的CT图像肾脏肿瘤分割方法
CN113781449A (zh) * 2021-09-14 2021-12-10 上海布眼人工智能科技有限公司 一种基于多尺度特征融合的纺织品瑕疵分类方法
CN113569865B (zh) * 2021-09-27 2021-12-17 南京码极客科技有限公司 一种基于类别原型学习的单样本图像分割方法
CN113658180B (zh) * 2021-10-20 2022-03-04 北京矩视智能科技有限公司 一种基于空间上下文引导的表面缺陷区域分割方法和装置
CN114267443B (zh) * 2021-11-08 2022-10-04 东莞市人民医院 基于深度学习的胰腺肿瘤纤维化程度预测方法及相关装置
US11961618B2 (en) * 2021-11-17 2024-04-16 City University Of Hong Kong Task interaction network for prostate cancer diagnosis
CN114445726B (zh) * 2021-12-13 2022-08-02 广东省国土资源测绘院 一种基于深度学习的样本库建立方法和装置
CN114241344B (zh) * 2021-12-20 2023-05-02 电子科技大学 一种基于深度学习的植物叶片病虫害严重程度评估方法
CN114299288A (zh) * 2021-12-23 2022-04-08 广州方硅信息技术有限公司 图像分割方法、装置、设备和存储介质
CN114496145B (zh) * 2022-01-27 2023-02-10 深圳市铱硙医疗科技有限公司 一种医疗图像档案管理方法与系统
CN114648529B (zh) * 2022-05-19 2022-09-23 深圳市中科先见医疗科技有限公司 一种基于cnn网络的dpcr液滴荧光检测方法
CN115115577A (zh) * 2022-05-19 2022-09-27 北京深睿博联科技有限责任公司 一种基于混合感知的多阶段器官分割方法及装置
CN116912502B (zh) * 2023-09-08 2024-01-16 南方医科大学珠江医院 全局视角辅助下图像关键解剖结构的分割方法及其设备
CN117455935B (zh) * 2023-12-22 2024-03-19 中国人民解放军总医院第一医学中心 基于腹部ct医学图像融合及器官分割方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018156778A1 (en) * 2017-02-22 2018-08-30 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks
CN108492297A (zh) * 2017-12-25 2018-09-04 重庆理工大学 基于深度级联卷积网络的mri脑肿瘤定位与瘤内分割方法
CN108564582A (zh) * 2018-04-23 2018-09-21 重庆大学 一种基于深度神经网络的mri脑肿瘤自动识别方法
CN109598728A (zh) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 图像分割方法、装置、诊断系统及存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136516B2 (en) * 2002-01-25 2006-11-14 General Electric Company Method and system for segmenting magnetic resonance images
WO2006114003A1 (en) * 2005-04-27 2006-11-02 The Governors Of The University Of Alberta A method and system for automatic detection and segmentation of tumors and associated edema (swelling) in magnetic resonance (mri) images
US20110286654A1 (en) * 2010-05-21 2011-11-24 Siemens Medical Solutions Usa, Inc. Segmentation of Biological Image Data
CN104123417B (zh) * 2014-07-22 2017-08-01 上海交通大学 一种基于聚类融合的图像分割的方法
CN106709568B (zh) * 2016-12-16 2019-03-22 北京工业大学 基于深层卷积网络的rgb-d图像的物体检测和语义分割方法
EP3612981A1 (en) * 2017-04-19 2020-02-26 Siemens Healthcare GmbH Target detection in latent space
CN108268870B (zh) * 2018-01-29 2020-10-09 重庆师范大学 基于对抗学习的多尺度特征融合超声图像语义分割方法
CN108830855B (zh) * 2018-04-02 2022-03-25 华南理工大学 一种基于多尺度低层特征融合的全卷积网络语义分割方法
CN108765422A (zh) 2018-06-13 2018-11-06 云南大学 一种视网膜图像血管自动分割方法
CN109271992A (zh) * 2018-09-26 2019-01-25 上海联影智能医疗科技有限公司 一种医学图像处理方法、系统、装置和计算机可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018156778A1 (en) * 2017-02-22 2018-08-30 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks
CN108492297A (zh) * 2017-12-25 2018-09-04 重庆理工大学 基于深度级联卷积网络的mri脑肿瘤定位与瘤内分割方法
CN108564582A (zh) * 2018-04-23 2018-09-21 重庆大学 一种基于深度神经网络的mri脑肿瘤自动识别方法
CN109598728A (zh) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 图像分割方法、装置、诊断系统及存储介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HU, KE ET AL.: "A 2.5D Cancer Segmentation for MRI Image Based on U-Net", 2018 5th International Conference on Information Science and Control Engineering, 22 July 2018 (2018-07-22), pp. 6-10, XP033501645 *
See also references of EP3828825A4

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037171B (zh) * 2020-07-30 2023-08-15 西安电子科技大学 基于多模态特征融合的多任务mri脑瘤图像分割方法
CN112037171A (zh) * 2020-07-30 2020-12-04 西安电子科技大学 基于多模态特征融合的多任务mri脑瘤图像分割方法
CN112258526A (zh) * 2020-10-30 2021-01-22 南京信息工程大学 一种基于对偶注意力机制的ct肾脏区域级联分割方法
CN112258526B (zh) * 2020-10-30 2023-06-27 南京信息工程大学 一种基于对偶注意力机制的ct肾脏区域级联分割方法
EP3958184A3 (en) * 2021-01-20 2022-05-11 Beijing Baidu Netcom Science And Technology Co., Ltd. Image processing method and apparatus, device, and storage medium
CN112767417A (zh) * 2021-01-20 2021-05-07 合肥工业大学 一种基于级联U-Net网络的多模态图像分割方法
US11893708B2 (en) 2021-01-20 2024-02-06 Beijing Baidu Netcom Science Technology Co., Ltd. Image processing method and apparatus, device, and storage medium
CN112767417B (zh) * 2021-01-20 2022-09-13 合肥工业大学 一种基于级联U-Net网络的多模态图像分割方法
CN112785605B (zh) * 2021-01-26 2023-07-28 西安电子科技大学 基于语义迁移的多时相ct图像肝肿瘤分割方法
CN112785605A (zh) * 2021-01-26 2021-05-11 西安电子科技大学 基于语义迁移的多时相ct图像肝肿瘤分割方法
CN114170244A (zh) * 2021-11-24 2022-03-11 北京航空航天大学 一种基于级联神经网络结构的脑胶质瘤分割方法
CN114092815B (zh) * 2021-11-29 2022-04-15 自然资源部国土卫星遥感应用中心 一种大范围光伏发电设施遥感智能提取方法
CN114092815A (zh) * 2021-11-29 2022-02-25 自然资源部国土卫星遥感应用中心 一种大范围光伏发电设施遥感智能提取方法
CN114372944A (zh) * 2021-12-30 2022-04-19 深圳大学 一种多模态和多尺度融合的候选区域生成方法及相关装置
CN114372944B (zh) * 2021-12-30 2024-05-17 深圳大学 一种多模态和多尺度融合的候选区域生成方法及相关装置
CN114612479A (zh) * 2022-02-09 2022-06-10 苏州大学 一种基于全局与局部特征重建网络的医学图像分割方法

Also Published As

Publication number Publication date
CN109598728B (zh) 2019-12-27
US11954863B2 (en) 2024-04-09
CN109598728A (zh) 2019-04-09
EP3828825A4 (en) 2021-11-17
EP3828825A1 (en) 2021-06-02
US20210241027A1 (en) 2021-08-05

Similar Documents

Publication Publication Date Title
WO2020108525A1 (zh) 图像分割方法、装置、诊断系统、存储介质及计算机设备
US11568533B2 (en) Automated classification and taxonomy of 3D teeth data using deep learning methods
CN109741343B (zh) 一种基于3D-Unet和图论分割的T1WI-fMRI图像肿瘤协同分割方法
Chen et al. Automatic segmentation of individual tooth in dental CBCT images from tooth surface map by a multi-task FCN
CN110475505B (zh) 利用全卷积网络的自动分割
US20210174543A1 (en) Automated determination of a canonical pose of a 3d objects and superimposition of 3d objects using deep learning
Wang et al. Smartphone-based wound assessment system for patients with diabetes
TWI777092B (zh) 一種圖像處理方法、電子設備及存儲介質
CN104717925A (zh) 图像处理装置、方法及程序
WO2019037654A1 (zh) 3d图像检测方法、装置、电子设备及计算机可读介质
CN113936011A (zh) 基于注意力机制的ct影像肺叶图像分割系统
CN112991365B (zh) 一种冠状动脉分割方法、系统及存储介质
Wang et al. Left atrial appendage segmentation based on ranking 2-D segmentation proposals
Dangi et al. Cine cardiac MRI slice misalignment correction towards full 3D left ventricle segmentation
EP3847665A1 (en) Determination of a growth rate of an object in 3d data sets using deep learning
Ben-Hamadou et al. 3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
Mortaheb et al. Metal artifact reduction and segmentation of dental computerized tomography images using least square support vector machine and mean shift algorithm
Wu et al. Semiautomatic segmentation of glioma on mobile devices
CN112750124B (zh) 模型生成、图像分割方法、装置、电子设备及存储介质
CN115100050A (zh) Ct图像环状伪影的去除方法、装置、设备及存储介质
CN115546174B (zh) 图像处理方法、装置、计算设备及存储介质
Sattar TADOC: Tool for automated detection of oral cancer
JP7462188B2 (ja) 医用画像処理装置、医用画像処理方法、およびプログラム
US20230119535A1 (en) Systems and methods for automatically detecting anatomical features for preoperative cardiac implant simulations
EP4152255A1 (en) System and method for differentiating a tissue of interest from another part of a medical scanner image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19889004

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019889004

Country of ref document: EP

Effective date: 20210224

NENP Non-entry into the national phase

Ref country code: DE