CN109636813B - Segmentation method and system for prostate magnetic resonance image


Info

Publication number
CN109636813B
CN109636813B (application CN201811538977.3A)
Authority
CN
China
Prior art keywords
label
segmentation
image
magnetic resonance
convolution network
Prior art date
Legal status
Active
Application number
CN201811538977.3A
Other languages
Chinese (zh)
Other versions
CN109636813A (en
Inventor
谌先敢
刘海华
刘李漫
唐奇伶
Current Assignee
South Central Minzu University
Original Assignee
South Central University for Nationalities
Priority date
Filing date
Publication date
Application filed by South Central University for Nationalities filed Critical South Central University for Nationalities
Priority to CN201811538977.3A priority Critical patent/CN109636813B/en
Publication of CN109636813A publication Critical patent/CN109636813A/en
Application granted granted Critical
Publication of CN109636813B publication Critical patent/CN109636813B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate

Abstract

The invention discloses a segmentation method and a segmentation system for a prostate magnetic resonance image, and relates to the field of medical image processing. The method comprises the following steps: in the training stage, inputting the image into a full convolution network to obtain corresponding output probability, and calculating the cross entropy between the output probability and the label; calculating a weight map according to the image and the label, multiplying the cross entropy and the weight map in a pixel-to-pixel mode to obtain final loss, and adjusting parameters of the full convolution network to enable the loss to reach the minimum value; in the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained full convolution network, and an initial segmentation result is obtained. The invention enables automatic segmentation of the central and peripheral regions within the prostate from the magnetic resonance image.

Description

Segmentation method and system for prostate magnetic resonance image
Technical Field
The invention relates to the field of medical image processing, in particular to a segmentation method and a segmentation system for a prostate magnetic resonance image.
Background
Prostate disease is common in older men. In particular, prostate cancer has become the second most common cancer threatening men's health. In the United States, approximately 1 in 6 men will develop prostate cancer and 1 in 36 will die of the disease. Among the many examination methods, MRI (Magnetic Resonance Imaging) has become the most effective means of prostate cancer examination.
The anatomy of the prostate can be divided into the Central Gland (CG) and the Peripheral Zone (PZ); approximately 70% to 75% of prostate cancers arise in the PZ, and cancers arising in the PZ and the CG appear differently in images. Accurate segmentation of the prostate from MR images is an important step in treatment planning and is crucial for the diagnosis of prostate cancer.
Currently, prostate segmentation is performed manually by doctors; the quality of the segmentation depends mainly on the doctor's experience, and manual segmentation is time-consuming and subjective. A rapid prostate segmentation method is therefore urgently needed in the clinic.
However, automatic segmentation of the prostate based on Magnetic Resonance (MR) images is very difficult, mainly due to several factors:
first, the prostate is similar to the surrounding tissue, lacking clear borders;
second, variation across subjects, diseases, and imaging conditions causes the prostate to vary greatly in shape and size.
Many prostate segmentation methods have been proposed, but their results still differ greatly from manual segmentation. Moreover, most segmentation methods address only the prostate tissue as a whole and do not segment the central gland and peripheral zone of the prostate.
The problem of automatically segmenting the central gland and peripheral zone within prostate tissue can be seen as semantic segmentation of medical images, i.e. assigning a class label to each pixel in an image. At present, the Fully Convolutional Network (FCN) has been proven to be an effective tool for semantic segmentation, as it can segment every object in an image simultaneously.
In the process of implementing the invention, the inventors found that the prior art has at least the following problems: the results obtained by a full convolution network are not accurate enough; when segmenting medical images, some details are segmented poorly, and the performance still needs further improvement.
Disclosure of Invention
The present invention aims to overcome the above-mentioned drawbacks of the background art and provides a method and a system for segmenting a prostate magnetic resonance image that can automatically segment the central gland and peripheral zone of the prostate from the magnetic resonance image.
In a first aspect, a method for segmenting a prostate magnetic resonance image is provided, which includes the following steps:
in the training stage, inputting the image into a full convolution network to obtain corresponding output probability, and calculating the cross entropy between the output probability and the label; calculating a weight map according to the image and the label, multiplying the cross entropy and the weight map in a pixel-to-pixel mode to obtain final loss, and adjusting parameters of the full convolution network to enable the loss to reach the minimum value;
in the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained full convolution network, and an initial segmentation result is obtained.
The above technical solution enables automatic segmentation of various regions within the prostate, i.e. the central gland and peripheral regions within the prostate tissue, from the magnetic resonance image.
According to the first aspect, in a first possible implementation manner of the first aspect, the weight map is calculated as:
w_i(x) = a_i / Grad(I_x) · Morphology(y_x) + b_i
where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, and Grad(I_x) is the gradient of the original image; the term a_i / Grad(I_x) · Morphology(y_x) represents the increase in weight value and is inversely proportional to the gradient of the original image; Morphology(y_x) is a morphological operation that controls the spatial extent of the pixels whose weight is increased; a_i is a coefficient controlling how much the weight is increased; b_i, the weight given to pixels that receive no increase, is the basic part of the final loss function; and i = 0, 1 or 2, corresponding to the background, the peripheral zone and the central gland in the label.
According to the technical scheme, a new weight map calculation mode is designed for the prostate MR image and is used for weighting a loss function, pixels which are difficult to segment in the prostate MR image are endowed with higher weight, the weight map comprises three components which respectively correspond to a background, a peripheral region and a central gland, and a deep learning model is promoted to segment each region of the prostate better.
According to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, Morphology(y_x) is obtained by dilating and eroding the label map separately and then subtracting: Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is a structuring element used to control the extent of the dilation and erosion operations.
According to the first aspect, in a third possible implementation manner of the first aspect, after obtaining the initial segmentation result, the method further includes the following steps:
and manually adjusting the segmentation result on the basis of the initial segmentation result to obtain a final segmentation result.
On the result of the automatic segmentation, manual adjustment by a physician may be further performed.
According to the first aspect, in a fourth possible implementation manner of the first aspect, the parameter of the full convolutional network refers to a weight of a neuron in a full convolutional network model.
In a second aspect, a segmentation system for a magnetic resonance image of a prostate is provided, comprising:
a training unit to: in the training stage, inputting the image into a full convolution network to obtain corresponding output probability, and calculating the cross entropy between the output probability and the label; calculating a weight map according to the image and the label, multiplying the cross entropy and the weight map in a pixel-to-pixel mode to obtain final loss, and adjusting parameters of the full convolution network to enable the loss to reach the minimum value;
a segmentation unit to: in the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained full convolution network, and an initial segmentation result is obtained.
The above technical solution enables automatic segmentation of various regions within the prostate, i.e. the central gland and peripheral regions within the prostate tissue, from the magnetic resonance image.
According to the second aspect, in a first possible implementation manner of the second aspect, the weight map is calculated as:
w_i(x) = a_i / Grad(I_x) · Morphology(y_x) + b_i
where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, and Grad(I_x) is the gradient of the original image; the term a_i / Grad(I_x) · Morphology(y_x) represents the increase in weight value and is inversely proportional to the gradient of the original image; Morphology(y_x) is a morphological operation that controls the spatial extent of the pixels whose weight is increased; a_i is a coefficient controlling how much the weight is increased; b_i, the weight given to pixels that receive no increase, is the basic part of the final loss function; and i = 0, 1 or 2, corresponding to the background, the peripheral zone and the central gland in the label.
According to the technical scheme, a new weight map calculation mode is designed for the prostate MR image and is used for weighting a loss function, pixels which are difficult to segment in the prostate MR image are endowed with higher weight, the weight map comprises three components which respectively correspond to a background, a peripheral region and a central gland, and a deep learning model is promoted to segment each region of the prostate better.
According to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, Morphology(y_x) is obtained by dilating and eroding the label map separately and then subtracting: Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is a structuring element used to control the extent of the dilation and erosion operations.
According to the second aspect, in a third possible implementation manner of the second aspect, the system further includes:
a manual adjustment unit for: and manually adjusting the segmentation result on the basis of the initial segmentation result to obtain a final segmentation result.
On the result of the automatic segmentation, manual adjustment by a physician may be further performed.
According to the second aspect, in a fourth possible implementation manner of the second aspect, the parameter of the full convolutional network refers to a weight of a neuron in a full convolutional network model.
Compared with the prior art, the invention has the following advantages:
in the training stage, inputting an image into a full convolution network to obtain a corresponding output probability, calculating a cross entropy between the output probability and a label, meanwhile, calculating a weight graph according to the image and the label, multiplying the cross entropy and the weight graph in a pixel-to-pixel mode to obtain a final loss, and continuously adjusting parameters of the full convolution network to enable the loss to reach a minimum value; the parameters of the full convolution network refer to weights of neurons in the full convolution network model. In the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained full convolution network, and an initial segmentation result is obtained. The invention can realize automatic segmentation of each region in the prostate from the magnetic resonance image, namely the central gland and the peripheral region in the prostate tissue, and can be further manually adjusted by a doctor on the basis of the automatic segmentation result.
Drawings
FIG. 1 is a flow chart of the computation of a loss function for training a model in an embodiment of the present invention.
Fig. 2 is a flow chart of a method of segmentation of a magnetic resonance image of the prostate without a manual adjustment step in an embodiment of the present invention.
Fig. 3 is a flow chart of a method of segmentation of a magnetic resonance image of the prostate including a manual adjustment step in an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that they are not intended to limit the invention to the embodiments described. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and that any functional block or functional arrangement may be implemented as a physical entity or a logical entity, or a combination of both.
In order that those skilled in the art will better understand the present invention, the following detailed description of the invention is provided in conjunction with the accompanying drawings and the detailed description of the invention.
Note that: the example described next is only a specific example and does not limit embodiments of the present invention to the specific steps, values, conditions, data, order, and the like described below. Those skilled in the art can, upon reading this specification, use the concepts of the present invention to construct more embodiments than are specifically described herein.
The embodiment of the invention provides a segmentation method of a prostate magnetic resonance image, which comprises the following steps:
in the training stage, as shown in fig. 1, inputting an image into a full convolution network to obtain a corresponding output probability, calculating a cross entropy between the output probability and a label, meanwhile, calculating a weight map according to the image and the label, multiplying the cross entropy and the weight map in a pixel-to-pixel manner to obtain a final loss, and continuously adjusting parameters of the full convolution network to enable the loss to reach a minimum value; the parameters of the full convolution network refer to weights of neurons in the full convolution network model.
In the segmentation stage, as shown in fig. 2, the prostate magnetic resonance image to be segmented is input into the trained full convolution network to obtain an initial segmentation result.
The above technical solution enables automatic segmentation of various regions within the prostate, i.e. the central gland and peripheral regions within the prostate tissue, from the magnetic resonance image.
As an alternative embodiment, on the basis of the initial segmentation result, as shown in fig. 3, manual adjustment may also be performed by the physician to obtain the final segmentation result.
The embodiment of the present invention further provides a segmentation system for a prostate magnetic resonance image, including:
a training unit to: in the training stage, inputting the image into a full convolution network to obtain corresponding output probability, and calculating the cross entropy between the output probability and the label; calculating a weight map according to the image and the label, multiplying the cross entropy and the weight map in a pixel-to-pixel mode to obtain final loss, and adjusting parameters of the full convolution network to enable the loss to reach the minimum value; the parameters of the full convolution network refer to weights of neurons in the full convolution network model.
A segmentation unit to: in the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained full convolution network, and an initial segmentation result is obtained.
The above technical solution enables automatic segmentation of various regions within the prostate, i.e. the central gland and peripheral regions within the prostate tissue, from the magnetic resonance image.
As an optional implementation, the system further comprises:
a manual adjustment unit for: and manually adjusting the segmentation result on the basis of the initial segmentation result to obtain a final segmentation result.
Prostate MR image segmentation can be considered a semantic segmentation problem, i.e. assigning a class label to each pixel in the image. Fully Convolutional Networks (FCNs) have been proven to be an effective tool for semantic segmentation, as they can segment every object in an image simultaneously. As one of the basic problems of pattern recognition, semantic segmentation interprets new data with knowledge learned from known data and labels. The process is divided into two stages: first, a model is trained using the existing data and labels; second, the trained model infers the labels of new data, i.e. each pixel in a new image is assigned a class label. The training process can be described as follows: given a data set, a parameterized model is trained to minimize the corresponding loss function, expressed mathematically as:
min_θ { Σ_{(x,y)∈D} L(f_θ(x), y) }    (1)
where θ is the parameters of the deep network, y is the label, Σ_{(x,y)} denotes the sum of the losses over all training samples, L(f_θ(x), y) is a loss function used to penalize incorrect labels, and D is the set of training samples.
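As a minimal, self-contained sketch of the minimization in formula (1), the following uses plain gradient descent on a toy linear model with a squared-error loss; the model, data, and learning rate are illustrative stand-ins for the full convolution network and weighted loss used by the invention:

```python
import numpy as np

# Minimize the summed loss over a training set D by gradient descent,
# as in formula (1). Here f_theta(x) = x @ theta and L is squared error;
# both are toy stand-ins chosen only to keep the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))            # training inputs
true_theta = np.array([1.5, -2.0])
y = X @ true_theta                      # labels from a known linear model

theta = np.zeros(2)                     # parameters theta to be learned
lr = 0.1
for _ in range(200):
    residual = X @ theta - y                 # f_theta(x) - y per sample
    grad = 2.0 * X.T @ residual / len(X)     # gradient of the mean loss
    theta -= lr * grad                       # adjust theta to reduce loss
```

After training, `theta` recovers the parameters that minimize the loss over D, which is the same principle the invention applies to the weights of the full convolution network.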
The mathematical formula for the weighted loss function is:
L_w = (1/n) Σ_x w(x) · CE(p̂_x, y_x)    (2)
where w(x) is the weight map, CE(p̂_x, y_x) is the cross entropy between the output probability p̂_x of the model and the label y_x, and n is the total number of pixels.
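A minimal NumPy sketch of the weighted loss in formula (2): per-pixel cross entropy between the output probabilities and the label, multiplied pixel by pixel by the weight map and averaged over the n pixels. The shapes, class count, and the `weighted_loss` helper are illustrative assumptions, not part of the patent:

```python
import numpy as np

def weighted_loss(probs, labels, weight_map, eps=1e-12):
    """probs: (C, H, W) softmax output; labels: (H, W) ints in [0, C);
    weight_map: (H, W) per-pixel weights w(x)."""
    h, w = labels.shape
    # probability assigned to the true class at every pixel
    p_true = probs[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    ce = -np.log(p_true + eps)            # per-pixel cross entropy
    return float(np.mean(weight_map * ce))  # pixelwise product, then mean

probs = np.full((3, 2, 2), 1 / 3)   # uniform prediction over 3 classes
labels = np.zeros((2, 2), dtype=int)
wmap = np.ones((2, 2))
loss = weighted_loss(probs, labels, wmap)
```

With a uniform weight map of ones, this reduces to the plain mean cross entropy; larger weights at hard pixels scale their contribution up, which is the behavior formula (2) describes.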
The design idea of the weight map w(x) provided by the embodiment of the invention is as follows: pixels that are difficult to segment in the prostate MR image are given higher weights. The weight map w(x) comprises three components, corresponding to the background, the peripheral zone, and the central gland, and can therefore be written as w_i(x). In the embodiment of the invention, the weight map is calculated as:
w_i(x) = a_i / Grad(I_x) · Morphology(y_x) + b_i    (3)
where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, and Grad(I_x) is the gradient of the original image; the term a_i / Grad(I_x) · Morphology(y_x) represents the increase in weight value and is inversely proportional to the gradient of the original image; Morphology(y_x) is a morphological operation that controls the spatial extent of the pixels whose weight is increased; a_i is a coefficient controlling how much the weight is increased; b_i, the weight given to pixels that receive no increase, is the basic part of the final loss function; and i = 0, 1 or 2, corresponding to the background, the peripheral zone and the central gland in the label.
The part representing the increase in weight value is inversely proportional to the gradient of the original image because a smaller gradient value near the edge of the prostate region means a less distinct edge, so the weight value needs to be increased more.
The specific values of the parameters in formula (3) above are set manually based on experience.
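A minimal NumPy sketch of the weight-map computation in formula (3); the parameter values `a`, `b` and the assumption that the morphology term is a precomputed 0/1 band around the region boundary are illustrative:

```python
import numpy as np

def weight_map(image, morph_band, a=5.0, b=1.0, eps=1e-6):
    """Sketch of formula (3): w(x) = a / Grad(I_x) * Morphology(y_x) + b.
    morph_band is the 0/1 output of the dilation-minus-erosion step;
    a, b are illustrative stand-ins for the empirically set a_i, b_i."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)                   # Grad(I_x), image gradient
    increase = a / (grad + eps) * morph_band  # larger where edges are weak
    return increase + b                       # base weight b everywhere

img = np.tile(np.arange(8.0), (8, 1))   # ramp image: gradient magnitude 1
band = np.zeros((8, 8))
band[3:5, 3:5] = 1.0                    # pixels selected by the morphology
wmap = weight_map(img, band, a=2.0, b=1.0)
```

Inside the band the weight rises above the base value `b`, and it rises more where the gradient is small, matching the design idea that indistinct edges receive the largest increase.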
The weight map w_i(x) in the embodiment of the invention consists of two parts:
the first part, a_i / Grad(I_x) · Morphology(y_x), represents the increase in weight value;
the second part, b_i, represents the base part of the weight values.
Morphology(y_x) is a morphological operation that controls the spatial extent of the pixels whose weight is increased, i.e. it specifies which pixel locations in each MR image are to be weighted.
Morphology(y_x) is obtained by dilating and eroding the label map separately and then subtracting: Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is a structuring element used to control the extent of the dilation and erosion operations.
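The dilation-minus-erosion step can be sketched with scipy.ndimage; the 3x3 structuring element stands in for sm_i, which the text leaves as a tunable morphological element:

```python
import numpy as np
from scipy import ndimage

# Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i):
# subtracting the eroded label map from the dilated one leaves a band of
# pixels straddling the region boundary -- exactly the pixels whose loss
# weight is increased.
label = np.zeros((9, 9), dtype=bool)
label[3:6, 3:6] = True                       # a 3x3 foreground region

sm = np.ones((3, 3), dtype=bool)             # structuring element sm_i
dilated = ndimage.binary_dilation(label, structure=sm)
eroded = ndimage.binary_erosion(label, structure=sm)
band = dilated & ~eroded                     # dilation minus erosion
```

A larger structuring element widens the band, which is how sm_i controls the spatial extent of the weighted pixels.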
Prostate MR image segmentation in the embodiment of the invention proceeds in two stages: first, a model is trained with the existing data and labels; second, the trained model infers the labels of new data, i.e. produces the segmentation result.
In the embodiment of the invention, a model is trained on a given data set so that the corresponding loss function reaches its minimum. The calculation process is shown in figure 1: first, image data is input into the full convolution network to obtain the corresponding output probability, and the cross entropy between the output probability and the label is calculated; meanwhile, a weight map is calculated from the image and the label; the cross entropy and the weight map are then multiplied pixel by pixel to obtain the final loss. The training process of the model is a process of continuously adjusting the parameters of the full convolution network (specifically, the weights of the neurons in the full convolution network model) so that the loss function reaches its minimum value.
Referring to fig. 2, the segmentation process is as follows: the image to be segmented is first input into the trained full convolution network to obtain an output result, and some post-processing, such as median filtering, is applied to the output result to obtain the initial segmentation result.
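A minimal sketch of the median-filtering post-processing on a toy label map; the filter size is an illustrative choice:

```python
import numpy as np
from scipy import ndimage

# Median filtering the raw label map produced by the network removes
# isolated misclassified pixels before the initial segmentation result
# is reported.
raw = np.zeros((7, 7), dtype=int)
raw[2:5, 2:5] = 2          # a coherent central-gland region (label 2)
raw[0, 0] = 1              # an isolated misclassified speckle pixel

smoothed = ndimage.median_filter(raw, size=3)
# the speckle is removed while the interior of the region is preserved
```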
Optionally, referring to fig. 3, the final segmentation result is obtained by manual adjustment by the doctor based on the initial segmentation result.
The manual adjustment in the embodiments of the present invention is not a necessary step, and is not necessary if the results obtained in the previous steps are relatively accurate.
The embodiment of the invention was tested on a public data set (without the manual adjustment step). Segmentation performance was evaluated with the Dice similarity coefficient (DSC), which is twice the overlap between the reference segmentation and the automatic segmentation divided by the sum of their sizes; its value lies between 0 and 1, and a higher value indicates a more accurate segmentation result.
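The DSC evaluation can be sketched as follows (Dice = 2|A ∩ B| / (|A| + |B|), computed per class); the toy masks are illustrative:

```python
import numpy as np

def dice(reference, segmentation, cls):
    """Dice similarity coefficient between two label maps for class cls."""
    a = reference == cls
    b = segmentation == cls
    denom = a.sum() + b.sum()
    # convention: two empty masks count as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

ref = np.zeros((4, 4), dtype=int)
ref[1:3, 1:3] = 2                    # reference mask: 4 pixels of class 2
seg = np.zeros((4, 4), dtype=int)
seg[1:3, 1:4] = 2                    # segmentation: 6 pixels, 4 overlapping
score = dice(ref, seg, cls=2)        # 2*4 / (4 + 6)
```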
The method of the present example was compared to the cross entropy loss function and the comparison results are shown in table 1.
TABLE 1 Comparison of the method of this example with the cross-entropy loss function (DSC)
Loss function                | Central gland | Peripheral zone
Cross-entropy loss           | 0.8557        | 0.7171
Weighted loss (this example) | 0.8831        | 0.7576
As can be seen from Table 1, evaluated with DSC, the segmentation performance of the inventive example on the central gland (0.8831) is 0.0274 higher than that of the cross-entropy loss function (0.8557); on the peripheral zone, the performance of the invention (0.7576) is 0.0405 higher than that of the cross-entropy loss function (0.7171). The method of the invention thus exceeds the cross-entropy loss function on the segmentation of every region of the prostate.
The advantage of the embodiment of the invention is that a new weight map calculation method is designed for the prostate MR image and used to weight the loss function, so that the deep learning model segments each region of the prostate better. Tested on the public data set, the method of the embodiment achieves excellent performance; it is also general enough to be extended to other medical image segmentation tasks, such as liver segmentation and heart segmentation.
Note that: the above-described embodiments are merely examples and are not intended to be limiting, and those skilled in the art can combine and combine some steps and devices from the above-described separately embodiments to achieve the effects of the present invention according to the concept of the present invention, and such combined and combined embodiments are also included in the present invention, and such combined and combined embodiments are not described herein separately.
Advantages, effects, and the like, which are mentioned in the embodiments of the present invention, are only examples and are not limiting, and they cannot be considered as necessarily possessed by the various embodiments of the present invention. Furthermore, the foregoing specific details disclosed herein are merely for purposes of example and for purposes of clarity of understanding, and are not intended to limit the embodiments of the invention to the particular details which may be employed to practice the embodiments of the invention.
The block diagrams of devices, apparatuses, and systems involved in the embodiments of the present invention are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. As used in connection with embodiments of the present invention, the term "or" means "and/or" and is used interchangeably therewith unless the context clearly dictates otherwise. The phrase "such as" means "such as, but not limited to," and is used interchangeably therewith.
The flow charts of steps in the embodiments of the present invention and the above description of the methods are merely illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by those skilled in the art, the order of the steps in the above embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are only used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example, using the articles "a," "an," or "the" is not to be construed as limiting the element to the singular.
In addition, the steps and devices in the embodiments of the present invention are not limited to be implemented in a certain embodiment, and in fact, some steps and devices in the embodiments of the present invention may be combined according to the concept of the present invention to conceive new embodiments, and these new embodiments are also included in the scope of the present invention.
The respective operations in the embodiments of the present invention may be performed by any appropriate means capable of performing the corresponding functions. The means may comprise various hardware and/or software components and/or modules, including, but not limited to, a hardware Circuit, an ASIC (Application Specific Integrated Circuit), or a processor.
In practical applications, the various illustrated Logic blocks, modules and circuits may be implemented by a general purpose Processor, a DSP (digital signal Processor), an ASIC, an FPGA (Field Programmable Gate Array) or CPLD (Complex Programmable Logic Device), discrete Gate or transistor Logic, discrete hardware components or any combination thereof designed to perform the functions described above. Wherein a general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that can be used include RAM (Random Access Memory), ROM (Read-Only Memory), flash memory, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, a hard disk, a removable disk, CD-ROM (Compact Disc Read-Only Memory), and the like. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
The method of an embodiment of the invention includes one or more acts for implementing the method described above. The methods and/or acts may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.
The functions in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, DVD (Digital Versatile Disc), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Accordingly, a computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions stored (and/or encoded) thereon that are executable by one or more processors to perform the operations described herein. The computer program product may include packaged material.
Software or instructions in embodiments of the present invention may also be transmitted over a transmission medium. For example, the software may be transmitted from a website, server, or other remote source using a transmission medium such as coaxial cable, fiber optic cable, twisted pair, DSL (Digital Subscriber Line), or wireless technologies such as infrared, radio, or microwave.
Further, modules and/or other suitable means for implementing the methods and techniques of embodiments of the present invention may be downloaded and/or otherwise obtained by a user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a CD or floppy disk) so that the user terminal and/or base station can obtain the various methods when coupled to or providing storage means to the device. Further, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the embodiments of the invention and the following claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hard-wired, or any combination of these. Features implementing functions may also be physically located at various locations, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items beginning with "at least one" indicates a separate list, such that a list of "A, B or at least one of C" means a or B or C, or AB or AC or BC, or ABC (i.e., a and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions and alterations to the techniques described herein may be made by those skilled in the art without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the invention to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (6)

1. A segmentation method for a prostate magnetic resonance image, characterized by comprising the following steps:
in the training stage, inputting the image into a full convolution network to obtain a corresponding output probability, and calculating the cross entropy between the output probability and the label; calculating a weight map according to the image and the label, multiplying the cross entropy and the weight map pixel-wise to obtain the final loss, and adjusting the parameters of the full convolution network to minimize the loss;
in the segmentation stage, inputting the prostate magnetic resonance image to be segmented into a trained full convolution network to obtain an initial segmentation result;
the calculation formula of the weight map is as follows:
Figure FDA0002604009630000011
wherein, wi(x) Is a weight map, IxIs the gray value of the original, yxTo label the graph, Grad (I)x) The gradient of the original image is represented,
Figure FDA0002604009630000012
the portion representing the increase in weight value is inversely proportional to the gradient of the original image, Morphology (y)x) Is a morphological operation to control the spatial extent of the pixels with increasing weight values, aiIs a coefficient controlling how much the weight value is increased, biIs the basic part of the final loss function of the pixels without adding weight, i is 0, 1 or 2, corresponding to the background, peripheral region and central gland in the label respectively;
the Morphology (y)x) The label graph is subjected to expansion and corrosion respectively and then subtracted to obtain:
Morphology(yx)=Dilation(yx,smi)-Erosion(yx,smi) Wherein, the relationship (y)x,smi) Results of swelling the label graphs, respectively, Erosis (y)x,smi) Is the result of etching the label map, smiIs a morphological element used to control the extent of the expansion and erosion operations.
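The boundary-band weight map described in claim 1 can be sketched with standard image-processing tools. This is an illustrative reading, not the patented implementation: the Gaussian gradient, the single structuring-element size `sm` shared by all classes, and the coefficient values `a` and `b` below are assumptions.

```python
import numpy as np
from scipy import ndimage

def weight_map(image, label, a=(1.0, 2.0, 3.0), b=(1.0, 1.0, 1.0), sm=3):
    """Boundary-weighted map in the spirit of claim 1 (hypothetical parameters).

    Classes: i = 0 background, 1 peripheral zone, 2 central gland.
    w_i(x) = a_i * Morphology(y_x) / (Grad(I_x) + 1) + b_i, where
    Morphology(y_x) = Dilation - Erosion marks a band around each contour.
    """
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    structure = np.ones((sm, sm), dtype=bool)   # one element for all classes (simplified)
    w = np.zeros_like(grad)
    for i, (a_i, b_i) in enumerate(zip(a, b)):
        mask = label == i
        # Dilation(y_x, sm_i) XOR Erosion(y_x, sm_i): a band straddling the contour of class i
        band = ndimage.binary_dilation(mask, structure) ^ ndimage.binary_erosion(mask, structure)
        # increased weight is inversely proportional to the image gradient, plus the base b_i
        w[mask] = a_i * band[mask] / (grad[mask] + 1.0) + b_i
    return w
```

The claim allows a per-class structuring element sm_i; the sketch uses one size for all three classes for brevity.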
2. The method for segmenting a prostate magnetic resonance image as set forth in claim 1, further comprising, after obtaining the initial segmentation result:
manually adjusting the segmentation result on the basis of the initial segmentation result to obtain a final segmentation result.
3. The method for segmenting a prostate magnetic resonance image as set forth in claim 1, wherein: the parameters of the full convolution network refer to weights of neurons in a full convolution network model.
4. A segmentation system for prostate magnetic resonance images, comprising:
a training unit to: in the training stage, input the image into a full convolution network to obtain a corresponding output probability, and calculate the cross entropy between the output probability and the label; calculate a weight map according to the image and the label, multiply the cross entropy and the weight map pixel-wise to obtain the final loss, and adjust the parameters of the full convolution network to minimize the loss;
a segmentation unit to: in the segmentation stage, inputting the prostate magnetic resonance image to be segmented into a trained full convolution network to obtain an initial segmentation result;
the calculation formula of the weight map is as follows:
Figure FDA0002604009630000021
wherein, wi(x) Is a weight map, IxIs the gray value of the original, yxTo label the graph, Grad (I)x) The gradient of the original image is represented,
Figure FDA0002604009630000022
the portion representing the increase in weight value is inversely proportional to the gradient of the original image, Morphology (y)x) Is a morphological operation to control the spatial extent of the pixels with increasing weight values, aiIs a coefficient controlling how much the weight value is increased, biIs the basic part of the final loss function of the pixels without adding weight, i is 0, 1 or 2, corresponding to the background, peripheral region and central gland in the label respectively;
the Morphology (y)x) The label graph is subjected to expansion and corrosion respectively and then subtracted to obtain:
Morphology(yx)=Dilation(yx,smi)-Erosion(yx,smi) Wherein, the relationship (y)x,smi) Results of swelling the label graphs, respectively, Erosis (y)x,smi) Is the result of etching the label map, smiIs a morphological element used to control the extent of the expansion and erosion operations.
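The training-stage loss recited above, cross entropy multiplied by the weight map pixel-wise, can be sketched as follows. The (H, W, C) softmax layout and the mean reduction are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def weighted_cross_entropy_loss(prob, label, weight, eps=1e-12):
    """Pixel-wise weighted cross entropy (illustrative sketch).

    prob:   (H, W, C) per-pixel class probabilities from the full convolution network
    label:  (H, W)    integer labels (0 background, 1 peripheral zone, 2 central gland)
    weight: (H, W)    weight map w_i(x)
    """
    h, w = label.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    ce = -np.log(prob[rows, cols, label] + eps)   # cross entropy at every pixel
    return float(np.mean(ce * weight))            # multiplied by the weight map, then averaged
```

Minimizing this quantity over the network parameters is the training objective; pixels in the boundary band of the weight map contribute more to the gradient than interior pixels.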
5. The segmentation system for prostate magnetic resonance images as set forth in claim 4, further comprising:
a manual adjustment unit to: manually adjust the segmentation result on the basis of the initial segmentation result to obtain a final segmentation result.
6. The segmentation system for prostate magnetic resonance images as set forth in claim 4, wherein: the parameters of the full convolution network refer to weights of neurons in a full convolution network model.
CN201811538977.3A 2018-12-14 2018-12-14 Segmentation method and system for prostate magnetic resonance image Active CN109636813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811538977.3A CN109636813B (en) 2018-12-14 2018-12-14 Segmentation method and system for prostate magnetic resonance image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811538977.3A CN109636813B (en) 2018-12-14 2018-12-14 Segmentation method and system for prostate magnetic resonance image

Publications (2)

Publication Number Publication Date
CN109636813A CN109636813A (en) 2019-04-16
CN109636813B true CN109636813B (en) 2020-10-30

Family

ID=66074440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811538977.3A Active CN109636813B (en) 2018-12-14 2018-12-14 Segmentation method and system for prostate magnetic resonance image

Country Status (1)

Country Link
CN (1) CN109636813B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189332B (en) * 2019-05-22 2021-03-02 中南民族大学 Prostate magnetic resonance image segmentation method and system based on weight map design
CN110689548B (en) * 2019-09-29 2023-01-17 浪潮电子信息产业股份有限公司 Medical image segmentation method, device, equipment and readable storage medium
CN111028206A (en) * 2019-11-21 2020-04-17 万达信息股份有限公司 Prostate cancer automatic detection and classification system based on deep learning
CN113476033B (en) * 2021-08-18 2022-06-07 华中科技大学同济医学院附属同济医院 Deep neural network-based automatic generation method for prostatic hyperplasia target area
CN115619810B (en) * 2022-12-19 2023-10-03 中国医学科学院北京协和医院 Prostate partition segmentation method, system and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8811701B2 (en) * 2011-02-23 2014-08-19 Siemens Aktiengesellschaft Systems and method for automatic prostate localization in MR images using random walker segmentation initialized via boosted classifiers
CN107240102A (en) * 2017-04-20 2017-10-10 合肥工业大学 Malignant tumour area of computer aided method of early diagnosis based on deep learning algorithm
CN107886510A (en) * 2017-11-27 2018-04-06 杭州电子科技大学 A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks
CN108053417A (en) * 2018-01-30 2018-05-18 浙江大学 A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature
CN108345887A (en) * 2018-01-29 2018-07-31 清华大学深圳研究生院 The training method and image, semantic dividing method of image, semantic parted pattern


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Nodule segmentation method based on U-Net" (《基于U-Net的结节分割方法》); Xu Feng et al.; Software Guide (《软件导刊》); 2018-08-15; Vol. 17, No. 8; pp. 161-164 *

Also Published As

Publication number Publication date
CN109636813A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109636813B (en) Segmentation method and system for prostate magnetic resonance image
Myronenko 3D MRI brain tumor segmentation using autoencoder regularization
Anitha et al. Brain tumour classification using two‐tier classifier with adaptive segmentation technique
CN105574859B (en) A kind of liver neoplasm dividing method and device based on CT images
Chen et al. Liver tumor segmentation in CT volumes using an adversarial densely connected network
Kamal et al. Lung cancer tumor region segmentation using recurrent 3d-denseunet
Saranya et al. Blood vessel segmentation in retinal fundus images for proliferative diabetic retinopathy screening using deep learning
Aranguren et al. Improving the segmentation of magnetic resonance brain images using the LSHADE optimization algorithm
CN113808146B (en) Multi-organ segmentation method and system for medical image
Zotin et al. Techniques for medical images processing using shearlet transform and color coding
WO2021136368A1 (en) Method and apparatus for automatically detecting pectoralis major region in molybdenum target image
Li et al. Superpixel-guided label softening for medical image segmentation
CN112634231A (en) Image classification method and device, terminal equipment and storage medium
Mohagheghi et al. Incorporating prior shape knowledge via data-driven loss model to improve 3D liver segmentation in deep CNNs
CN110189332B (en) Prostate magnetic resonance image segmentation method and system based on weight map design
Zheng et al. Segmentation of thyroid glands and nodules in ultrasound images using the improved U-Net architecture
Kumar et al. E-fuzzy feature fusion and thresholding for morphology segmentation of brain MRI modalities
CN116485813A (en) Zero-sample brain lesion segmentation method, system, equipment and medium based on prompt learning
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
Kaur et al. Deep learning in medical applications: Lesion segmentation in skin cancer images using modified and improved encoder-decoder architecture
Yuan et al. Automatic construction of filter tree by genetic programming for ultrasound guidance image segmentation
Choong et al. Extending upon a transfer learning approach for brain tumour segmentation
CN111598870B (en) Method for calculating coronary artery calcification ratio based on convolutional neural network end-to-end reasoning
Lu et al. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior
CN115578400A (en) Image processing method, and training method and device of image segmentation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant