CN115937234A - Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment - Google Patents

Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment

Info

Publication number
CN115937234A
CN115937234A
Authority
CN
China
Prior art keywords
segmentation
image
tumor
net
segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310217540.4A
Other languages
Chinese (zh)
Other versions
CN115937234B (en)
Inventor
李鹏宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhiyuan Artificial Intelligence Research Institute
Original Assignee
Beijing Zhiyuan Artificial Intelligence Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhiyuan Artificial Intelligence Research Institute
Priority to CN202310217540.4A
Publication of CN115937234A
Application granted
Publication of CN115937234B
Legal status: Active (current)

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a tumor image segmentation method and device based on preprocessing acceleration, and an electronic device, belonging to the technical field of intelligent medicine. The method comprises the following steps: performing a first segmentation operation on an original 3D tumor image to obtain a first segmented image; performing a second segmentation operation on the first segmented image to obtain a second segmented image; performing a third segmentation operation on the second segmented image to obtain a third segmented image; and registering the first, second and third segmented images with the original 3D tumor image and marking them to obtain the final segmentation result. The method segments the tumor image in successive stages, segments the multi-orientation data of the image separately, and finally fuses the per-orientation segmentation results, so the segmentation effect is better than that of a single orientation. For divergent segmentation results, the segmentation is refined by MPR reconstruction and loop-iteration judgment. Meanwhile, pre-segmenting the image reduces the input size of the network model and improves computational efficiency.

Description

Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment
Technical Field
The invention relates to the technical field of intelligent medicine, and in particular to a tumor image segmentation method and device based on preprocessing acceleration and an electronic device.
Background
In order to diagnose and treat tumors accurately, it is usually necessary in clinical practice to segment the tumor image to distinguish the Whole Tumor (WT), the Tumor Core (TC) and the Enhancing Tumor (ET).
Currently, common clinical tumor segmentation methods include manual, semi-automatic and automatic segmentation. Manual and semi-automatic segmentation not only consume a large amount of time but are also limited by the physician's skill and stamina, so long-term stability cannot be guaranteed. Automatic segmentation avoids these problems and can approach or match the manual segmentation of professional clinicians. Automatic segmentation methods can be divided into 2D data segmentation and 3D data segmentation according to the data type. A 3D segmentation model achieves a better segmentation effect than a 2D model, but 3D data segmentation has the following problems:
First, the large size of 3D data places higher demands on the computer configuration. Computer performance, segmentation quality and computational efficiency constrain one another, forcing trade-offs.
Second, conventional segmentation methods used for pre-segmentation are exposed to interference from other tissue signals of the brain, so their applicability is poor and the segmentation effect is often unstable. For example, when segmenting a tumor in brain tissue, the high signal of the skull is difficult to avoid, while separating the skull itself is computationally expensive and difficult, which makes such methods hard to popularize in practical scenarios.
Finally, the segmentation quality of a network model is directly tied to the sample data, and improving segmentation precision with the limited training data available in practice is very difficult.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides the following technical solutions.
A first aspect of the invention provides a tumor image segmentation method based on preprocessing acceleration, comprising the following steps:
performing a first segmentation operation on an original 3D tumor image to obtain a first segmented image;
performing a second segmentation operation on the first segmented image to obtain a second segmented image;
performing a third segmentation operation on the second segmented image to obtain a third segmented image;
and registering the first, second and third segmented images with the original 3D tumor image and marking them to obtain a final segmentation result;
wherein in each of the first, second and third segmentation operations, the multi-orientation data of the image are segmented separately and the segmentation results are fused to obtain the corresponding segmented image; during fusion, if the segmentation results of the different orientations agree, the fusion succeeds and yields the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by multi-planar reconstruction (MPR), the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
Preferably, performing the first, second and third segmentation operations and registering and marking the results specifically comprise:
segmenting the original 3D tumor image with 3D WT-Net to obtain a 3D WT segmented image;
segmenting the 3D WT segmented image with 3D TC-Net to obtain a 3D TC segmented image;
segmenting the 3D TC segmented image with 3D ET-Net to obtain a 3D ET segmented image;
and registering the 3D WT, 3D TC and 3D ET segmented images with the original 3D tumor image and marking them to obtain the final segmentation result.
Preferably, the tumor image segmentation method based on preprocessing acceleration further comprises: reconstructing the original 3D tumor image into a three-dimensional iso-voxel image and acquiring a 3D WT region image of the iso-voxel image;
segmenting the original 3D tumor image with 3D WT-Net to obtain the 3D WT segmented image then means: segmenting the 3D WT region image of the iso-voxel image with 3D WT-Net to obtain the 3D WT segmented image.
Preferably, acquiring the 3D WT region image of the three-dimensional iso-voxel image comprises:
subtracting the T1 image from the T1CE image of the iso-voxel image to obtain a 3D ET silhouette image;
pre-segmenting the 3D ET region by a dilation-erosion method;
and calculating the local-maximum-mean region of the 3D ET region, registering it as a seed point onto the Flair image of the corresponding iso-voxel image, and pre-segmenting the 3D WT region by a region-growing method, thereby obtaining the 3D WT region image of the iso-voxel image.
Preferably, pre-segmenting the 3D WT region further comprises: performing bicubic interpolation on the pre-segmented 3D WT region.
Preferably, calculating the local-maximum-mean region of the 3D ET region comprises: taking a point as a center, computing the mean of that point and its surrounding neighbors; the region whose mean is largest is the local-maximum-mean region.
Preferably, the segmentation method further comprises: merging and cropping the 3D WT segmented image with the 3D WT region image to obtain a precisely segmented 3D WT region image; segmenting the 3D WT segmented image with 3D TC-Net then means: segmenting the precisely segmented 3D WT region image with 3D TC-Net to obtain the 3D TC segmented image.
Preferably, the segmentation method further comprises: merging and cropping the 3D TC segmented image with the precisely segmented 3D WT region image to obtain a precisely segmented 3D TC region image; segmenting the 3D TC segmented image with 3D ET-Net then means: segmenting the precisely segmented 3D TC region image with 3D ET-Net to obtain the 3D ET segmented image.
Preferably, 3D WT-Net uses a 3D U-Net network model to segment the multi-orientation data of the image, 3D TC-Net likewise uses a 3D U-Net network model, and 3D ET-Net uses a 3D U-Net++ network model to segment the multi-orientation data of the image.
Preferably, during the fusion, whether the fusion succeeds is judged as follows:
a segmentation result takes one of two values: the current point lies inside the segmentation range, or the current point lies outside it; let x be the count of orientations whose result places the current point inside the segmentation range, y the count placing it outside, and k the set acceptable judgment value.
If x/(x + y) ≥ k or y/(x + y) ≥ k, the segmentation results of the different orientations are judged to agree and the fusion succeeds, yielding the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
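For illustration, a minimal Python sketch of this per-point test follows, assuming the majority-fraction reading of the condition above (x/(x + y) ≥ k or y/(x + y) ≥ k with k > 0.5); the function name and default threshold are illustrative, not disclosed by the patent:

```python
def vote_accepted(x: int, y: int, k: float = 0.8) -> bool:
    """Per-point fusion test: x and y count the orientations whose
    result places the point inside and outside the segmentation range;
    k is the set acceptable judgment value (assumed > 0.5)."""
    total = x + y
    if total == 0:
        return False
    # For k > 0.5, "x/total >= k or y/total >= k" equals max(x, y)/total >= k.
    return max(x, y) / total >= k
```

For example, with three orientations a 3-0 vote passes at k = 0.8 (fraction 1.0), while a 2-1 split (fraction 2/3) fails and triggers the MPR reconstruction described above.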
A second aspect of the invention provides a tumor image segmentation apparatus, comprising:
a first segmented-image acquisition module, configured to perform a first segmentation operation on the original 3D tumor image to obtain a first segmented image;
a second segmented-image acquisition module, configured to perform a second segmentation operation on the first segmented image to obtain a second segmented image;
a third segmented-image acquisition module, configured to perform a third segmentation operation on the second segmented image to obtain a third segmented image;
a final-segmentation-result acquisition module, configured to register the first, second and third segmented images with the original 3D tumor image and mark them to obtain a final segmentation result;
wherein in each of the first, second and third segmentation operations, the multi-orientation data of the image are segmented separately and the segmentation results are fused to obtain the corresponding segmented image; during fusion, if the segmentation results of the different orientations agree, the fusion succeeds and yields the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
A third aspect of the invention provides a memory storing a plurality of instructions for implementing the method according to the first aspect.
A fourth aspect of the present invention provides an electronic device comprising a processor and a memory coupled to the processor, the memory storing a plurality of instructions that are loadable and executable by the processor to enable the processor to carry out the method according to the first aspect.
The invention has the following beneficial effects:
(1) The method uses 3D WT-Net, 3D TC-Net and 3D ET-Net to segment and fuse the multi-orientation data of the image; during segmentation, each orientation is segmented with 3D U-Net or 3D U-Net++, so the segmentation effect is better than that of a single orientation. When different orientations produce divergent results in the network-model segmentation, new orientation data are generated by MPR reconstruction and loop-iteration judgment, the new data are segmented, and their segmentation results are added to the judgment, yielding the optimal segmentation result.
(2) A pre-segmentation step is added. During pre-segmentation, the 3D WT region is segmented by a region-growing method, and the local-maximum-mean region of the silhouette of the enhanced image and the mask image is creatively used as the seed point, which removes the influence of the high extracranial skull signal and of a poorly chosen seed point on the region segmentation. Bicubic interpolation after the 3D WT pre-segmentation raises the resolution of the segmented region and improves the subsequent segmentation.
(3) The 3D U-Net and 3D U-Net++ models segment better than 2D models, but the larger size of 3D data places higher demands on the computer. The invention therefore preprocesses the image and feeds the reduced computation region into each stage of the network, balancing computational efficiency and segmentation effect.
Drawings
FIG. 1 is a schematic flow chart of the tumor image segmentation method according to the present invention;
FIG. 2 is a schematic diagram of an implementation process of tumor image segmentation according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the structures and operation processes of 3D WT-Net, 3D TC-Net and 3D ET-Net according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the 3D U-Net network model structure and working process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the 3D U-Net++ network model structure and working process according to an embodiment of the present invention;
FIG. 6 is a functional structure diagram of the tumor image segmentation apparatus according to the present invention.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
The method provided by the invention can be implemented in the following terminal environment, and the terminal can comprise one or more of the following components: a processor, a memory, and a display screen. Wherein the memory has stored therein at least one instruction that is loaded and executed by the processor to implement the methods described in the embodiments described below.
A processor may include one or more processing cores. The processor connects the various parts of the terminal using various interfaces and lines, and performs the functions of the terminal and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory and by calling data stored in the memory.
The memory may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory may be used to store instructions, programs, code sets or instruction sets.
The display screen is used for displaying user interfaces of all the application programs.
In addition, those skilled in the art will appreciate that the above-described terminal configurations are not intended to be limiting, and that the terminal may include more or fewer components, or some of the components may be combined, or a different arrangement of components. For example, the terminal further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a power supply, and other components, which are not described herein again.
Example one
As shown in fig. 1, an embodiment of the present invention provides a tumor image segmentation method based on preprocessing acceleration, including:
s101, obtaining a first segmentation image by performing a first segmentation operation on an original 3D tumor image;
s102, obtaining a second segmentation image by performing a second segmentation operation on the first segmentation image;
s103, performing third segmentation operation on the second segmentation image to obtain a third segmentation image;
s104, registering the first segmentation image, the second segmentation image and the third segmentation image with the original 3D tumor image, and marking to obtain a final segmentation result;
wherein in each of the first, second and third segmentation operations, the multi-orientation data of the image are segmented separately and the segmentation results are fused to obtain the corresponding segmented image; during fusion, if the segmentation results of the different orientations agree, the fusion succeeds and yields the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
In a preferred embodiment of the invention, performing the first, second and third segmentation operations and registering and marking the results specifically comprise:
segmenting the original 3D tumor image with 3D WT-Net to obtain a 3D WT segmented image;
segmenting the 3D WT segmented image with 3D TC-Net to obtain a 3D TC segmented image;
segmenting the 3D TC segmented image with 3D ET-Net to obtain a 3D ET segmented image;
and registering the 3D WT, 3D TC and 3D ET segmented images with the original 3D tumor image and marking them to obtain the final segmentation result.
As shown in fig. 2, in practical application the following steps may be performed:
step 1: the method comprises the steps of obtaining an original 3D tumor image to be processed, such as an MRI brain tumor image, preprocessing the original 3D tumor image, and reconstructing the original image into a three-dimensional voxel image serving as input data through three-dimensional reconstruction processing. The three-dimensional isocontological images contain sets of contrast sequence images (T1, T2, flair, T1CE, etc.). Each set of contrast images has three directions of image data, namely, a transverse axis (TRA), a sagittal plane (SAG), and a coronal plane (COR).
Step 2: subtract the plain 3D T1 image from the enhanced 3D T1CE image to obtain a 3D ET silhouette image. Pre-segment the 3D ET region by a dilation-erosion method; the segmented 3D ET region may be slightly larger than the actual 3D ET region. If the dilation-erosion yields several non-connected 3D ET regions (i.e., multiple tumors), the following steps can be performed for each region separately and the results combined at the end.
Step 3: calculate the local-maximum-mean region of the 3D ET region, register it as a seed point onto the corresponding 3D Flair image, pre-segment the 3D WT region by a region-growing method, and apply dilation to obtain a region slightly larger than the 3D WT region. This yields the 3D WT region images of T1, T2, Flair and T1CE; bicubic interpolation can then be applied to these region images to raise their resolution and improve the accuracy of the next segmentation step.
Here, taking a point as a center, the mean of that point and its surrounding neighbors is computed; the region whose mean is largest is the local-maximum-mean region.
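As an illustration of steps 2 and 3, the pre-segmentation pipeline might be sketched as follows, assuming co-registered NumPy volumes and SciPy/scikit-image dependencies; the percentile threshold, filter size, iteration counts, growth tolerance and zoom factor are all illustrative assumptions, not values disclosed by the patent:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import flood

def presegment_wt(t1ce, t1, flair, zoom_factor=2.0):
    # Step 2: 3D ET silhouette = enhanced image minus mask (plain) image.
    silhouette = np.clip(t1ce.astype(np.float32) - t1, 0.0, None)
    et = silhouette > np.percentile(silhouette, 99)    # rough ET guess
    et = ndimage.binary_dilation(et, iterations=3)     # dilation-erosion:
    et = ndimage.binary_erosion(et, iterations=1)      # slightly enlarged ET

    # Step 3: seed = center of the local-maximum-mean region, i.e. the
    # voxel whose neighborhood mean of the silhouette is largest.
    local_mean = ndimage.uniform_filter(silhouette, size=5)
    local_mean[~et] = 0.0
    seed = np.unravel_index(int(np.argmax(local_mean)), local_mean.shape)

    # Region growing on the registered Flair volume from that seed,
    # then dilation so the region is slightly larger than the 3D WT.
    wt = flood(flair, seed, tolerance=float(flair.std()) * 0.5)
    wt = ndimage.binary_dilation(wt, iterations=2)

    # Bicubic (order-3 spline) interpolation raises the resolution of
    # the pre-segmented region before it is fed to the networks.
    return ndimage.zoom(wt.astype(np.float32), zoom_factor, order=3) > 0.5
```

Restricting all later network inputs to this slightly enlarged, upsampled WT region is what sidesteps the high extracranial skull signal without ever segmenting the skull itself.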
Step 4: feed the T2 and Flair 3D WT region images through 3D WT-Net (fusion of multi-orientation 3D U-Net segmentation results) to obtain the 3D WT segmented image (WT mask).
Further, the structure and operation of 3D WT-Net can be as shown in fig. 3; the specific operation can be as follows:
(1) Segment the three existing orientation images (axial TRA, sagittal SAG and coronal COR) separately with a 3D U-Net network model;
(2) Fuse the segmented data by coordinates. If the segmentation results of the three data sets agree at a point, the fusion succeeds there. If the three data sets produce two results (a result has only two possibilities, the current point lying inside or outside the segmentation range; let the respective counts be x and y), then: if x/(x + y) ≥ k or y/(x + y) ≥ k, where k is the set acceptable judgment value, the segmentation results of the different orientations are judged to agree and the fusion succeeds, yielding the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
Specifically, if the segmentation results of the different orientations differ and the fusion fails, oblique-section data can be obtained from the image data by MPR (multi-planar reconstruction), and the network segmentation result of the oblique data is added to the judgment. If the condition is then met, the loop exits; if not, the segmentation results of further new oblique data continue to be added to the judgment. To prevent the loop from never exiting or iterating too many times, an upper limit N on the number of iterations can be set; when the upper limit N is reached, the value of k may be decreased and the result thus obtained is output.
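The loop-iteration judgment might be sketched as follows, again under the majority-fraction reading of the acceptance test; segment() is a hypothetical per-orientation segmentation callable, and a rotation of the volume stands in for full MPR oblique resampling:

```python
import numpy as np
from scipy.ndimage import rotate

def fuse_with_mpr(volume, segment, k=0.8, n_max=5):
    """segment(volume, name) -> boolean mask on the volume grid."""
    masks = [segment(volume, o) for o in ("tra", "sag", "cor")]
    n = 0
    while True:
        votes = np.sum(masks, axis=0)               # per-voxel 'inside' votes
        frac = np.maximum(votes, len(masks) - votes) / len(masks)
        if np.all(frac >= k):
            break                                   # every voxel is decisive
        n += 1
        if n > n_max:
            k *= 0.9                                # cap N reached: relax k
            continue                                # and re-check the votes
        # Divergence remains: obtain oblique-section data by MPR (here
        # approximated by a rotation) and add its segmentation to the vote.
        angle = 30.0 * n
        oblique = rotate(volume, angle, axes=(0, 1), reshape=False)
        back = rotate(segment(oblique, "obl").astype(np.float32), -angle,
                      axes=(0, 1), reshape=False)
        masks.append(back > 0.5)
    return np.sum(masks, axis=0) * 2 >= len(masks)  # majority-vote mask
```

Because the winning fraction is always at least 0.5 and k shrinks geometrically once the cap is hit, this loop is guaranteed to terminate.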
As shown in fig. 4, the 3D U-Net network may perform image segmentation by the following specific steps:
(1) Apply two 3D convolutions to the data X(0,0), then max-pooling downsampling, to obtain data X(1,0);
(2) Apply two 3D convolutions to X(1,0), then max-pooling downsampling, to obtain data X(2,0);
(3) Apply two 3D convolutions to X(2,0), then max-pooling downsampling, to obtain data X(3,0);
(4) Apply two 3D convolutions and a 3D transposed convolution to X(3,0), upsampling to obtain data X(2,1);
(5) Feature-fuse X(2,0) with X(2,1), apply two 3D convolutions and a 3D transposed convolution, upsampling to obtain data X(1,1);
(6) Feature-fuse X(1,0) with X(1,1), apply two 3D convolutions and a 3D transposed convolution, upsampling to obtain data X(0,1);
(7) Feature-fuse X(0,0) with X(0,1), apply two 3D convolutions, and finally rearrange the data to obtain the final result, i.e., the 3D WT segmented image.
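For concreteness, a compact PyTorch sketch of one such 3D U-Net follows; the channel widths, the two-channel output, and the final 1x1 convolution standing in for the data-rearrangement step are illustrative assumptions, not values taken from the patent:

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    """The 'two 3D convolutions' block used at every node."""
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=2, w=16):
        super().__init__()
        self.enc0 = double_conv(in_ch, w)          # X(0,0)
        self.enc1 = double_conv(w, 2 * w)          # X(1,0)
        self.enc2 = double_conv(2 * w, 4 * w)      # X(2,0)
        self.enc3 = double_conv(4 * w, 8 * w)      # X(3,0)
        self.pool = nn.MaxPool3d(2)                # max-pooling downsampling
        self.up2 = nn.ConvTranspose3d(8 * w, 4 * w, 2, stride=2)
        self.dec2 = double_conv(8 * w, 4 * w)      # X(2,1)
        self.up1 = nn.ConvTranspose3d(4 * w, 2 * w, 2, stride=2)
        self.dec1 = double_conv(4 * w, 2 * w)      # X(1,1)
        self.up0 = nn.ConvTranspose3d(2 * w, w, 2, stride=2)
        self.dec0 = double_conv(2 * w, w)          # X(0,1)
        self.head = nn.Conv3d(w, out_ch, 1)        # final rearrangement

    def forward(self, x):
        x00 = self.enc0(x)                         # steps (1)-(3): encoder
        x10 = self.enc1(self.pool(x00))
        x20 = self.enc2(self.pool(x10))
        x30 = self.enc3(self.pool(x20))
        # Steps (4)-(7): upsample, feature-fuse with the same-level
        # encoder node, and convolve again.
        x21 = self.dec2(torch.cat([x20, self.up2(x30)], dim=1))
        x11 = self.dec1(torch.cat([x10, self.up1(x21)], dim=1))
        x01 = self.dec0(torch.cat([x00, self.up0(x11)], dim=1))
        return self.head(x01)
```

An input of shape (1, 1, 64, 64, 64), with spatial sides divisible by 8, yields an output of shape (1, 2, 64, 64, 64), one score map per class.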
Step 5: merge and crop the 3D WT segmented image obtained in step 4 with the preprocessed 3D WT region image to obtain a precisely segmented 3D WT region image.
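A sketch of this merge-and-crop step (also reused in step 7), assuming NumPy boolean masks; the margin is an illustrative assumption:

```python
import numpy as np

def merge_and_crop(net_mask, region_mask, margin=2):
    """Intersect the network output with the pre-segmented region and
    crop to the bounding box, so the next network sees a smaller input."""
    merged = np.logical_and(net_mask, region_mask)
    idx = np.argwhere(merged)
    if idx.size == 0:                              # nothing segmented
        return merged, tuple(slice(0, s) for s in merged.shape)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, merged.shape)
    window = tuple(slice(a, b) for a, b in zip(lo, hi))
    return merged[window], window                  # window kept for step 9
```

This progressive cropping is the "preprocessing acceleration" of the title: each successive network (3D TC-Net, 3D ET-Net) runs on a smaller sub-volume than the one before.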
Step 6: take the precisely segmented 3D WT region images of T1/T2/Flair as input to the 3D TC-Net network (fusion of multi-orientation 3D U-Net image segmentation results) to obtain the 3D TC segmented image (TC mask). 3D TC-Net uses the same model as 3D WT-Net; for the specific steps, see step 4.
Step 7: merge and crop the 3D TC segmented image with the precisely segmented 3D WT region image to obtain a precisely segmented 3D TC region image.
Step 8: take the precisely segmented 3D TC region images of T1CE/T2 as input to the 3D ET-Net network (fusion of multi-orientation 3D U-Net++ image segmentation results) to obtain the 3D ET segmented image (ET mask). The structure and operation of 3D ET-Net are the same as those of 3D WT-Net (see step 4), except that a 3D U-Net++ network replaces the 3D U-Net for segmenting the multi-orientation data. As shown in fig. 5, image segmentation with 3D U-Net++ may comprise the following steps:
(1) Apply two 3D convolutions to the data X(0,0), then max-pooling downsampling, to obtain data X(1,0);
(2) Apply two 3D convolutions to X(1,0), then max-pooling downsampling, to obtain data X(2,0);
(3) Apply two 3D convolutions to X(2,0), then max-pooling downsampling, to obtain data X(3,0);
(4) Apply two 3D convolutions and a 3D transposed convolution to X(3,0), upsampling to obtain data X(2,1);
(5) Apply one 3D convolution to X(2,0), feature-fuse it with X(2,1), then apply two 3D convolutions and a 3D transposed convolution, upsampling to obtain data X(1,2);
(6) Apply one 3D convolution to each of X(1,0) and X(1,1), feature-fuse them with X(1,2), then apply two 3D convolutions and a 3D transposed convolution, upsampling to obtain data X(0,3). Here X(1,1) is formed by applying one 3D convolution to X(1,0) and feature-fusing it with X(2,0) after one 3D convolution and a 3D transposed convolution;
(7) Apply one 3D convolution to each of X(0,0), X(0,1) and X(0,2), feature-fuse them with X(0,3), then apply two 3D convolutions and data rearrangement to obtain the output image. Here X(0,1) is formed by applying one 3D convolution to X(0,0) and feature-fusing it with X(1,0) after one 3D convolution and a 3D transposed convolution; X(0,2) is formed by applying one 3D convolution to each of X(0,0) and X(0,1) and feature-fusing them with X(1,1) after one 3D convolution and a 3D transposed convolution.
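The nested skip wiring of steps (5)-(7) can be condensed into one helper, assuming blocks like those in the U-Net sketch above; `pre` models the single 3D convolution applied to each same-level input before fusion, and all names here are illustrative:

```python
import torch

def unetpp_node(conv, up, pre, same_level, below):
    """X(i,j) = conv(concat(pre(X(i,0)), ..., pre(X(i,j-1)), up(X(i+1,j-1)))).

    conv: two 3D convolutions producing the node's features
    up:   3D (convolution +) transposed convolution upsampling the lower node
    pre:  one 3D convolution applied to each same-level skip input
    """
    feats = [pre(s) for s in same_level] + [up(below)]
    return conv(torch.cat(feats, dim=1))

# e.g. X(0,2) = unetpp_node(conv02, up11, pre0, [x00, x01], x11)
```

Compared with the plain U-Net, each decoder node thus sees every earlier node at its own level, which is what densifies the skip connections in U-Net++.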
Step 9: register the 3D WT, 3D TC and 3D ET segmented images with the original 3D tumor image and color-label them to obtain the final segmentation result (final image).
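Finally, a sketch of the labeling in step 9, assuming the masks only need nearest-neighbor resampling back to the original grid (a full registration transform is omitted); the label values are illustrative:

```python
import numpy as np

def label_result(original, wt, tc, et):
    """Color map: 0 = background, 1 = WT, 2 = TC, 3 = ET; the nested
    labels overwrite one another, so ET wins over TC, and TC over WT."""
    out = np.zeros(original.shape, dtype=np.uint8)
    for value, mask in ((1, wt), (2, tc), (3, et)):
        # Nearest-neighbor lookup from the original grid into the mask grid.
        idx = [np.minimum(np.arange(o) * m // o, m - 1)
               for o, m in zip(original.shape, mask.shape)]
        resampled = mask[np.ix_(*idx)]
        out[resampled] = value
    return out
```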
Example two
As shown in fig. 6, another aspect of the invention provides a functional-module architecture corresponding exactly to the foregoing method flow; that is, an embodiment of the invention also provides a tumor image segmentation apparatus based on preprocessing acceleration, comprising:
a first segmented-image acquisition module 601, configured to perform a first segmentation operation on the original 3D tumor image to obtain a first segmented image;
a second segmented-image acquisition module 602, configured to perform a second segmentation operation on the first segmented image to obtain a second segmented image;
a third segmented-image acquisition module 603, configured to perform a third segmentation operation on the second segmented image to obtain a third segmented image;
a final-segmentation-result acquisition module 604, configured to register the first, second and third segmented images with the original 3D tumor image and mark them to obtain the final segmentation result;
wherein in each of the first, second and third segmentation operations, the multi-orientation data of the image are segmented separately and the segmentation results are fused to obtain the corresponding segmented image; during fusion, if the segmentation results of the different orientations agree, the fusion succeeds and yields the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
Further, performing the first, second and third segmentation operations and registering and marking the results specifically comprise: segmenting the original 3D tumor image with 3D WT-Net to obtain a 3D WT segmented image; segmenting the 3D WT segmented image with 3D TC-Net to obtain a 3D TC segmented image; segmenting the 3D TC segmented image with 3D ET-Net to obtain a 3D ET segmented image; and registering the 3D WT, 3D TC and 3D ET segmented images with the original 3D tumor image and marking them to obtain the final segmentation result.
Further, the apparatus also comprises a preprocessing module, configured to reconstruct the original 3D tumor image into a three-dimensional iso-voxel image and acquire the 3D WT region image of the iso-voxel image; segmenting the original 3D tumor image with 3D WT-Net then means: segmenting the 3D WT region image of the iso-voxel image with 3D WT-Net to obtain the 3D WT segmented image.
Further, acquiring the 3D WT region image of the iso-voxel image comprises: subtracting the T1 image from the T1CE image of the iso-voxel image to obtain a 3D ET silhouette image; pre-segmenting the 3D ET region by a dilation-erosion method; and calculating the local-maximum-mean region of the 3D ET region, registering it as a seed point onto the Flair image of the corresponding iso-voxel image, and pre-segmenting the 3D WT region by a region-growing method, thereby obtaining the 3D WT region image of the iso-voxel image.
Further, pre-segmenting the 3D WT region further comprises: performing bicubic interpolation on the pre-segmented 3D WT region.
Further, calculating the local-maximum-mean region of the 3D ET region comprises: taking a point as a center, computing the mean of that point and its surrounding neighbors; the region whose mean is largest is the local-maximum-mean region.
Further, the apparatus also comprises a first merge-and-crop module, configured to merge and crop the 3D WT segmented image with the 3D WT region image to obtain a precisely segmented 3D WT region image; segmenting the 3D WT segmented image with 3D TC-Net then means: segmenting the precisely segmented 3D WT region image with 3D TC-Net to obtain the 3D TC segmented image.
Further, the apparatus also comprises a second merge-and-crop module, configured to merge and crop the 3D TC segmented image with the precisely segmented 3D WT region image to obtain a precisely segmented 3D TC region image; segmenting the 3D TC segmented image with 3D ET-Net then means: segmenting the precisely segmented 3D TC region image with 3D ET-Net to obtain the 3D ET segmented image.
Further, 3D WT-Net and 3D TC-Net each use a 3D U-Net network model to segment the multi-orientation data of the image, and 3D ET-Net uses a 3D U-Net++ network model to segment the multi-orientation data of the image.
Further, during the fusion, whether the fusion succeeds is judged as follows:
a segmentation result takes one of two values: the current point lies inside the segmentation range, or the current point lies outside it; let x be the count of orientations placing the current point inside the segmentation range, y the count placing it outside, and k the set acceptable judgment value.
If x/(x + y) ≥ k or y/(x + y) ≥ k, the segmentation results of the different orientations are judged to agree and the fusion succeeds, yielding the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
The apparatus can implement the tumor image segmentation method provided in the first embodiment; for the specific implementation, refer to the description in the first embodiment, which is not repeated here.
The invention also provides a memory storing a plurality of instructions for implementing the method according to the first embodiment.
The invention also provides an electronic device comprising a processor and a memory connected to the processor, wherein the memory stores a plurality of instructions, and the instructions can be loaded and executed by the processor to enable the processor to execute the method according to the first embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (13)

1. A method for tumor image segmentation based on preprocessing acceleration, comprising:
performing a first segmentation operation on an original 3D tumor image to obtain a first segmented image;
performing a second segmentation operation on the first segmented image to obtain a second segmented image;
performing a third segmentation operation on the second segmented image to obtain a third segmented image;
registering the first, second and third segmented images with the original 3D tumor image, and marking them to obtain a final segmentation result;
wherein in each of the first, second and third segmentation operations, the multi-orientation data of the image are segmented separately and the segmentation results are fused to obtain the corresponding segmented image; during fusion, if the segmentation results of the different orientations agree, the fusion succeeds and yields the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
2. The pre-processing acceleration-based tumor image segmentation method according to claim 1, wherein performing the first, second and third segmentation operations and registering and marking the results specifically comprise:
segmenting the original 3D tumor image with 3D WT-Net to obtain a 3D WT segmented image;
segmenting the 3D WT segmented image with 3D TC-Net to obtain a 3D TC segmented image;
segmenting the 3D TC segmented image with 3D ET-Net to obtain a 3D ET segmented image;
and registering the 3D WT, 3D TC and 3D ET segmented images with the original 3D tumor image and marking them to obtain the final segmentation result.
3. The pre-processing acceleration-based tumor image segmentation method according to claim 2, further comprising: reconstructing the original 3D tumor image into a three-dimensional iso-voxel image and acquiring a 3D WT region image of the iso-voxel image;
wherein segmenting the original 3D tumor image with 3D WT-Net means: segmenting the 3D WT region image of the iso-voxel image with 3D WT-Net to obtain the 3D WT segmented image.
4. The pre-processing acceleration-based tumor image segmentation method according to claim 3, wherein acquiring the 3D WT region image of the three-dimensional iso-voxel image comprises:
subtracting the T1 image from the T1CE image of the iso-voxel image to obtain a 3D ET silhouette image;
pre-segmenting the 3D ET region by a dilation-erosion method;
and calculating the local-maximum-mean region of the 3D ET region, registering it as a seed point onto the Flair image of the corresponding iso-voxel image, and pre-segmenting the 3D WT region by a region-growing method, thereby obtaining the 3D WT region image of the iso-voxel image.
5. The pre-processing acceleration-based tumor image segmentation method according to claim 4, wherein pre-segmenting the 3D WT region further comprises: performing bicubic interpolation on the pre-segmented 3D WT region.
6. The pre-processing acceleration-based tumor image segmentation method according to claim 4, wherein calculating the local-maximum-mean region of the 3D ET region comprises: taking a point as a center, computing the mean of that point and its surrounding neighbors, the region whose mean is largest being the local-maximum-mean region.
7. The pre-processing acceleration-based tumor image segmentation method according to claim 3, further comprising: merging and cropping the 3D WT segmented image with the 3D WT region image to obtain a precisely segmented 3D WT region image; wherein segmenting the 3D WT segmented image with 3D TC-Net means: segmenting the precisely segmented 3D WT region image with 3D TC-Net to obtain the 3D TC segmented image.
8. The pre-processing acceleration-based tumor image segmentation method according to claim 7, further comprising: merging and cropping the 3D TC segmented image with the precisely segmented 3D WT region image to obtain a precisely segmented 3D TC region image; wherein segmenting the 3D TC segmented image with 3D ET-Net means: segmenting the precisely segmented 3D TC region image with 3D ET-Net to obtain the 3D ET segmented image.
9. The pre-processing acceleration-based tumor image segmentation method according to any one of claims 2 to 8, wherein 3D WT-Net and 3D TC-Net each segment the multi-orientation data of the image with a 3D U-Net network model, and 3D ET-Net segments the multi-orientation data of the image with a 3D U-Net++ network model.
10. The method for tumor image segmentation based on preprocessing acceleration as claimed in claim 1, wherein during the fusion, whether the fusion succeeds is judged as follows:
a segmentation result takes one of two values: the current point lies inside the segmentation range, or the current point lies outside it; let x be the count of orientations placing the current point inside the segmentation range, y the count placing it outside, and k the set acceptable judgment value;
if x/(x + y) ≥ k or y/(x + y) ≥ k, the segmentation results of the different orientations are judged to agree and the fusion succeeds, yielding the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
11. A device for tumor image segmentation based on preprocessing acceleration, comprising:
a first segmented-image acquisition module, configured to perform a first segmentation operation on the original 3D tumor image to obtain a first segmented image;
a second segmented-image acquisition module, configured to perform a second segmentation operation on the first segmented image to obtain a second segmented image;
a third segmented-image acquisition module, configured to perform a third segmentation operation on the second segmented image to obtain a third segmented image;
a final-segmentation-result acquisition module, configured to register the first, second and third segmented images with the original 3D tumor image and mark them to obtain a final segmentation result;
wherein in each of the first, second and third segmentation operations, the multi-orientation data of the image are segmented separately and the segmentation results are fused to obtain the corresponding segmented image; during fusion, if the segmentation results of the different orientations agree, the fusion succeeds and yields the corresponding segmented image; otherwise, new orientation data are generated from the disagreeing orientations by MPR reconstruction, the segmentation result of the new orientation data is automatically added to the judgment, and the judgment is iterated in a loop until the segmentation results of all orientations agree and the fusion succeeds.
12. A memory having stored thereon a plurality of instructions for implementing the method of any one of claims 1-10.
13. An electronic device comprising a processor and a memory coupled to the processor, the memory storing a plurality of instructions that are loadable and executable by the processor to enable the processor to perform the method according to any of claims 1-10.
CN202310217540.4A 2023-03-03 2023-03-03 Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment Active CN115937234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310217540.4A CN115937234B (en) 2023-03-03 2023-03-03 Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310217540.4A CN115937234B (en) 2023-03-03 2023-03-03 Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment

Publications (2)

Publication Number Publication Date
CN115937234A (en) 2023-04-07
CN115937234B (en) 2023-05-30

Family

ID=85818514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310217540.4A Active CN115937234B (en) 2023-03-03 2023-03-03 Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment

Country Status (1)

Country Link
CN (1) CN115937234B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130097299A (en) * 2012-02-24 2013-09-03 김웅식 Method and apparatus for image reconstruction by digital hand scanner
CN110047080A (en) * 2019-03-12 2019-07-23 天津大学 A method of the multi-modal brain tumor image fine segmentation based on V-Net
CN111046921A (en) * 2019-11-25 2020-04-21 天津大学 Brain tumor segmentation method based on U-Net network and multi-view fusion
CN111192245A (en) * 2019-12-26 2020-05-22 河南工业大学 Brain tumor segmentation network and method based on U-Net network
CN115272389A (en) * 2022-07-20 2022-11-01 华中科技大学同济医学院附属协和医院 Aortic dissection method with intimal valve attention module

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152181A (en) * 2023-10-31 2023-12-01 北京智源人工智能研究院 Tumor image segmentation method, device, electronic equipment and readable storage medium
CN117152181B (en) * 2023-10-31 2024-02-20 北京智源人工智能研究院 Tumor image segmentation method, device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN115937234B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN107784647B (en) Liver and tumor segmentation method and system based on multitask deep convolutional network
WO2020108525A1 (en) Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
CN110689548B (en) Medical image segmentation method, device, equipment and readable storage medium
CN111046921B (en) Brain tumor segmentation method based on U-Net network and multi-view fusion
CN108805871A (en) Blood-vessel image processing method, device, computer equipment and storage medium
CN109035261B (en) Medical image processing method and device, electronic device and storage medium
JPH0638274B2 (en) Image recognition apparatus and image recognition method
CN104657986B (en) A kind of quasi- dense matching extended method merged based on subspace with consistency constraint
CN112802046B (en) Image generation system for generating pseudo CT from multi-sequence MR based on deep learning
CN112991365B (en) Coronary artery segmentation method, system and storage medium
CN115937234A (en) Tumor image segmentation method and device based on preprocessing acceleration and electronic equipment
CN111311705B (en) High-adaptability medical image multi-plane reconstruction method and system based on webgl
CN111242969B (en) Boundary node determination method, grid division method and medical equipment
CN116188479B (en) Hip joint image segmentation method and system based on deep learning
CN112070752A (en) Method, device and storage medium for segmenting auricle of medical image
CN105719333A (en) 3D image data processing method and 3D image data processing device
JPH09198490A (en) Three-dimensional discrete data projector
CN115018825B (en) Coronary artery dominant type classification method, classification device and storage medium
CN112017161A (en) Pulmonary nodule detection method and device based on central point regression
CN116468838A (en) Regional resource rendering method, system, computer and readable storage medium
CN116703992A (en) Accurate registration method, device and equipment for three-dimensional point cloud data and storage medium
CN104239874B (en) A kind of organ blood vessel recognition methods and device
CN114170367B (en) Method, apparatus, storage medium, and device for infinite-line-of-sight pyramidal heatmap rendering
CN115661170A (en) Method, device and medium for automatically segmenting abdomen three-dimensional CT image
CN114299096A (en) Outline delineation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant