CN115700762B - Target element segmentation method, model and electronic device for medical image

Publication number: CN115700762B
Authority
CN
China
Prior art keywords: module, layer, residual, segmentation, label
Prior art date
Legal status: Active
Application number: CN202211689087.9A
Other languages: Chinese (zh)
Other versions: CN115700762A (en)
Inventor
戴亚康
周志勇
耿辰
钱旭升
胡冀苏
黄智宏
Current Assignee
Suzhou Guoke Kangcheng Medical Technology Co ltd
Original Assignee
Suzhou Guoke Kangcheng Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Guoke Kangcheng Medical Technology Co ltd
Priority to CN202211689087.9A
Publication of CN115700762A
Application granted
Publication of CN115700762B


Abstract

The application discloses a target element segmentation method for medical images, a segmentation model, and an electronic device. The method comprises the following steps: acquiring a medical image to be segmented, and performing enhancement and augmentation processing to obtain input data; inputting the input data into a segmentation model, which outputs a corresponding segmentation result, wherein the segmentation model is provided with a multi-scale feature activation module, a residual label module, and a distance map weighting module; and segmenting the target elements in the medical image according to the segmentation result. The method solves the problem in the related art that, when segmenting blood vessels in medical images, the accuracy of the vessel segmentation result is low due to a poor model structure.

Description

Target element segmentation method, model and electronic equipment for medical image
Technical Field
The application relates to the field of image segmentation, and in particular to a target element segmentation method, model, and electronic device for medical images.
Background
Conventional medical image segmentation mostly uses the UNet segmentation network as the model backbone. However, blood vessels in medical images are small in size and, compared with other organ tissues, are harder to identify and locate effectively, so the conventional UNet segmentation network cannot segment them effectively.
No effective solution has yet been proposed for the problem in the related art that, when segmenting blood vessels in medical images, the accuracy of the vessel segmentation result is low due to a poor model structure.
Disclosure of Invention
The present application mainly aims to provide a method, a model, and an electronic device for segmenting a target element of a medical image, so as to solve the problem in the related art that when a blood vessel in the medical image is segmented, the accuracy of the segmentation result of the blood vessel is low due to a poor model structure.
In order to achieve the above object, according to one aspect of the present application, there is provided a target element segmentation method of a medical image, the method including: acquiring a medical image to be segmented, and performing enhancement and augmentation processing to obtain input data; inputting the input data into a segmentation model, which outputs a corresponding segmentation result, wherein the segmentation model is provided with a multi-scale feature activation module, a residual label module, and a distance map weighting module; and segmenting the target element in the medical image according to the segmentation result.
Optionally, acquiring the medical image to be segmented and performing enhancement and augmentation processing to obtain input data includes: acquiring the medical image, wherein the medical image comprises a region of interest of the target element; enhancing the target element to obtain an enhanced image; performing channel merging on the medical image and the enhanced image to obtain a merged image; performing resampling, intensity adjustment, and normalization on the merged image; and performing data augmentation on the processed merged image to obtain a plurality of image feature sets of a preset size as the input data.
Optionally, inputting the input data into a segmentation model, which outputs a corresponding segmentation result, includes: inputting image features of a preset size into the segmentation model, wherein the backbone network of the segmentation model is a UNet segmentation network comprising multiple layers of encoding and decoding modules; the decoding module and the encoding module of each layer are connected through a skip connection, the input of a lower-layer encoding module is the downsampled output of the upper-layer encoding module, the input of an upper-layer decoding module includes the output of the encoding module of the same layer, and the output of a lower-layer decoding module is upsampled; inputting the output data of the multi-layer encoding modules into the multi-scale feature activation module for feature activation processing, and sending the processed fusion feature data to the decoding module corresponding to each layer's encoding module; inputting the output data of the multi-layer decoding modules into the residual label module and calculating residual labels; inputting the residual labels into the distance map weighting module, and taking the output of the distance map weighting module as the input of the upsampling of each layer's decoding module; and outputting a final segmentation result through the UNet segmentation network.
Optionally, inputting the output data of the multi-layer encoding modules into the multi-scale feature activation module for feature activation processing, and sending the processed fusion feature data to the decoding module corresponding to each layer's encoding module, includes: in the multi-scale feature activation module, inputting the output data of the multi-layer encoding modules through their respective channels into a squeeze-and-excitation unit, performing feature fusion on the output data of each layer's encoding module, and screening to obtain the fusion features of each channel; and sending the fusion features of the channels to the decoding module corresponding to each layer's encoding module.
Optionally, inputting the output data of the multi-layer decoding modules into the residual label module and calculating the residual label includes: in the residual label module, calculating the residual label from adjacent slices of the element label in the medical image according to the residual label calculation formula:

R_{i,i+1}(v) = L_i(v) - L_{i+1}(v)

where i and i+1 denote two adjacent slices of size H x W in the i-th and (i+1)-th layers, R_{i,i+1} is the residual label, and L_i(v) is the element label value of each voxel v in slice i.
Optionally, inputting the residual label into the distance map weighting module and taking the output of the distance map weighting module as the input of the upsampling of each layer's decoding module includes: in the distance map weighting module, performing a distance map calculation on the residual labels to obtain an approximate distance weighted map; and outputting the approximate distance weighted map as the upsampling input of each layer's decoding module, wherein in the decoding module the approximate distance weighted map is multiplied pixel-wise with the feature map before upsampling, and the product serves as the upsampling input.
Optionally, the method further includes: calculating a residual loss through the residual label module; calculating the error loss between the segmentation result and the gold standard through a deep supervision module; calculating a connectivity loss from the output of each layer's decoding module; and determining the total loss of the segmentation result from the residual loss, the error loss, and the connectivity loss according to their respective weights.
In order to achieve the above object, according to another aspect of the present application, there is provided a target element segmentation model of a medical image, including: a UNet segmentation network, a deep supervision module, a multi-scale feature activation module, a residual label module, and a distance map weighting module. The UNet segmentation network comprises multiple layers of encoding and decoding modules and is used for determining the segmentation result of a target element of a medical image, wherein the decoding module of each layer is connected with the corresponding encoding module through a skip connection, the input of a lower-layer encoding module is the downsampled output of the upper-layer encoding module, the input of an upper-layer decoding module includes the output of the encoding module of the same layer, and the output of a lower-layer decoding module is upsampled. The input of the multi-scale feature activation module is connected with the outputs of the multi-layer encoding modules and its output is connected with the inputs of the multi-layer decoding modules; it is used for performing feature activation processing on the output data of the multi-layer encoding modules and sending the processed fusion feature data to the decoding module corresponding to each layer's encoding module. The input of the residual label module is connected with the outputs of the multi-layer decoding modules and is used for calculating residual labels from the output data of the multi-layer decoding modules and determining the residual loss. The input of the distance map weighting module is connected with the output of the residual label module and its output is connected with the input of each layer's decoding module; it is used for receiving the residual labels output by the residual label module and outputting an approximate distance weighted map as the input of the upsampling of each layer's decoding module. The input of the deep supervision module is connected with the output of each layer's decoding module and is used for determining the error loss of the segmentation result according to the output of each layer's decoding module.
In order to achieve the above object, according to another aspect of the present application, there is provided a computer-readable storage medium storing a program, wherein the program performs the target element segmentation method of a medical image described in any one of the above.
In order to achieve the above object, according to another aspect of the present application, there is provided an electronic device comprising one or more processors and a memory for storing one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the target element segmentation method of a medical image described in any one of the above.
When the target element in the medical image is segmented, the multi-scale feature activation module of the segmentation model fuses and activates the multi-layer encoded data, making full use of the output of each encoding layer. The residual label module calculates residual labels of the target element to represent morphological characteristics such as directional continuity, and the distance map weighting module feeds the residual labels back to each layer's decoding module in a weighted manner, integrating them into the segmentation model, so that the model has stronger discrimination capability and produces more accurate segmentation results when segmenting the target element. This achieves the purpose of segmenting the target element more accurately, realizes the technical effect of improving the accuracy of the target element segmentation result, and thereby solves the problem in the related art that the accuracy of the vessel segmentation result is low due to a poor model structure when segmenting blood vessels in medical images.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a flowchart of a target element segmentation method for a medical image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an overall architecture of a segmentation model provided according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a multi-scale feature activation module provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic illustration of a residual label provided in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of a target element segmentation model of a medical image provided according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device provided according to an embodiment of the present application.
Detailed Description
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The present invention is described below with reference to preferred implementation steps, and fig. 1 is a flowchart of a method for segmenting a target element of a medical image according to an embodiment of the present application, as shown in fig. 1, the method includes the following steps:
step S101, acquiring a medical image to be segmented, and performing enhancement and augmentation processing to obtain input data;
step S102, inputting input data into a segmentation model, and outputting a corresponding segmentation result by the segmentation model, wherein the segmentation model is provided with a multi-scale feature activation module, a residual label module and a distance map weighting module;
and step S103, segmenting the target element in the medical image according to the segmentation result.
When the target elements in the medical image are segmented, the multi-scale feature activation module of the segmentation model fuses and activates the multi-layer encoded data, making full use of the output of each encoding layer. The residual label module calculates residual labels of the target elements to represent morphological characteristics such as directional continuity, and the distance map weighting module feeds the residual labels back to each layer's decoding module in a weighted manner, integrating them into the segmentation model, so that the model has stronger discrimination capability and produces more accurate segmentation results when segmenting the target elements. This achieves the purpose of segmenting the target element more accurately, realizes the technical effect of improving the accuracy of the target element segmentation result, and thereby solves the problem in the related art that the accuracy of the vessel segmentation result is low due to a poor model structure when segmenting blood vessels in medical images.
The execution subject of the above steps may be a medical image segmentation system, which may comprise processing means to perform the data processing operations in the above steps, e.g., steps S101-S103. The segmentation system may be connected to a medical imaging device and receive the medical image generated by the medical imaging device for enhancement and augmentation as input data to the segmentation model.
The segmentation model is built on a UNet backbone network with a 5-layer encoding-decoding structure; a skip connection is arranged between the decoding and encoding modules of each layer. The input of a lower-layer encoding module is the downsampled output of the upper-layer encoding module, the input of an upper-layer decoding module includes the output of the encoding module of the current layer, and the output of a lower-layer decoding module is upsampled. A multi-scale feature activation module, a residual label module, and a distance map weighting module are arranged in the UNet backbone network.
The input of the multi-scale feature activation module is connected with the outputs of the multi-layer encoding modules, and its output is connected with the inputs of the multi-layer decoding modules. The module performs feature activation processing on the output data of the multi-layer encoding modules and sends the processed fusion feature data to the decoding module corresponding to each layer's encoding module.
The multi-scale feature activation module makes full use of the information of each layer on the encoder side and improves the segmentation precision and accuracy of the segmentation model. Specifically, the output of each encoder layer is upsampled to the size of the first layer, the channels are concatenated to generate a fusion feature, and the fusion feature is activated, which amounts to an information screening process. Finally, the screened fusion features are transmitted to the corresponding decoder layer.
The input of the residual label module is connected with the outputs of the multi-layer decoding modules; the module calculates residual labels from the output data of the multi-layer decoding modules and determines the residual loss.
The residual label can be understood as the label obtained by subtracting the voxel values of adjacent slices; it reflects the trend of the target element's organ tissue between adjacent slices, from which morphological information such as continuity of the target element can be determined. The obtained morphological information is input into the distance map weighting module and fused into the segmentation model in a weighted manner, so that the resulting segmentation model attends to the morphological information of the target element, thereby improving the accuracy and precision of target element segmentation.
The input of the distance map weighting module is connected with the output of the residual label module, and its output is connected with the input of each layer's decoding module. The module receives the residual labels output by the residual label module and outputs an approximate distance weighted map as the input of the upsampling of each layer's decoding module.
The distance map is generated from the residual label and can represent the morphological information in the residual label; weighting according to the distance map lets the decoder attend to the morphological information of the target element, further improving the accuracy and segmentation precision for the target element.
Through the cooperation of the multi-scale feature activation module, the residual label module, and the distance map weighting module in the segmentation model, the purpose of segmenting the target element more accurately is achieved, the technical effect of improving the accuracy of the target element segmentation result is realized, and the problem in the related art that the accuracy of the vessel segmentation result is low due to a poor model structure when segmenting blood vessels in medical images is solved.
Optionally, acquiring the medical image to be segmented and performing enhancement and augmentation processing to obtain input data includes: acquiring a medical image, wherein the medical image comprises a region of interest of the target element; enhancing the target element to obtain an enhanced image; performing channel merging on the medical image and the enhanced image to obtain a merged image; resampling, intensity-adjusting, and normalizing the merged image; and performing data augmentation on the processed merged image to obtain a plurality of image feature sets of a preset size as input data.
Before the medical image is input into the segmentation model, enhancement and augmentation processing is required to convert it into a format and size that the UNet segmentation network can accept as input. Image enhancement and channel merging improve the accuracy of feature extraction for target elements in the medical image, while resampling, intensity adjustment, and normalization prepare the extracted image features as input to the UNet. Data augmentation increases the diversity of the medical images and yields higher segmentation accuracy and precision.
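The preprocessing steps above can be sketched in a minimal NumPy example; the function name, percentile clipping window, and patch settings are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def preprocess(image, enhanced, patch_size=(64, 64), n_patches=4, seed=0):
    """Hypothetical sketch: channel-merge the original image with its
    vessel-enhanced version, clip and normalize intensities, then crop
    fixed-size patches as augmented input data (resampling omitted)."""
    merged = np.stack([image, enhanced], axis=0)               # channel merge -> (2, H, W)
    lo, hi = np.percentile(merged, [0.5, 99.5])
    merged = np.clip(merged, lo, hi)                           # intensity adjustment
    merged = (merged - merged.mean()) / (merged.std() + 1e-8)  # normalization
    rng = np.random.default_rng(seed)
    ph, pw = patch_size
    patches = []
    for _ in range(n_patches):                                 # simple crop-based augmentation
        y = rng.integers(0, image.shape[0] - ph + 1)
        x = rng.integers(0, image.shape[1] - pw + 1)
        patches.append(merged[:, y:y + ph, x:x + pw])
    return np.stack(patches)                                   # (n_patches, 2, ph, pw)
```

A real pipeline would add resampling to a common voxel spacing and richer augmentations (flips, rotations, elastic deformation).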
Optionally, inputting the input data into the segmentation model, which outputs a corresponding segmentation result, includes: inputting the image features of the preset size into the segmentation model, wherein the backbone network of the segmentation model is a UNet segmentation network comprising multiple layers of encoding and decoding modules; the decoding module and the encoding module of each layer are connected through a skip connection, the input of a lower-layer encoding module is the downsampled output of the upper-layer encoding module, the input of an upper-layer decoding module includes the output of the encoding module of the same layer, and the output of a lower-layer decoding module is upsampled; inputting the output data of the multi-layer encoding modules into the multi-scale feature activation module for feature activation processing, and sending the processed fusion feature data to the decoding module corresponding to each layer's encoding module; inputting the output data of the multi-layer decoding modules into the residual label module and calculating residual labels; inputting the residual labels into the distance map weighting module, and taking the output of the distance map weighting module as the input of the upsampling of each layer's decoding module; and outputting a final segmentation result through the UNet segmentation network.
Specifically, inputting the output data of the multi-layer encoding modules into the multi-scale feature activation module for feature activation processing, and sending the processed fusion feature data to the decoding module corresponding to each layer's encoding module, includes: in the multi-scale feature activation module, inputting the output data of the multi-layer encoding modules through their respective channels into the squeeze-and-excitation unit, performing feature fusion on the output data of each layer's encoding module, and screening to obtain the fusion features of each channel; and sending the fusion features of the channels to the decoding module corresponding to each layer's encoding module.
As shown in fig. 3, the multi-scale feature activation module includes a squeeze-and-excitation (SE) unit. The outputs of the encoding modules of each layer are upsampled to the size of the first layer and concatenated along the channel dimension to perform feature fusion; the SE unit then screens the fusion features, and the screened fusion features are transmitted to the corresponding decoding modules.
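A minimal NumPy sketch of this fusion-and-screening step, assuming nearest-neighbour upsampling and given SE bottleneck weights (w1, w2 are hypothetical stand-ins for the trained fully connected layers):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multiscale_se_activation(encoder_feats, w1, w2):
    """Sketch of multi-scale feature activation: upsample every encoder
    output (C_i, H_i, W_i) to the first-layer spatial size, concatenate
    along channels, then re-weight channels with a squeeze-and-excitation
    unit (global pool -> FC -> ReLU -> FC -> sigmoid -> scale)."""
    H, W = encoder_feats[0].shape[1:]
    ups = []
    for f in encoder_feats:                        # nearest-neighbour upsampling
        ry, rx = H // f.shape[1], W // f.shape[2]
        ups.append(np.repeat(np.repeat(f, ry, axis=1), rx, axis=2))
    fused = np.concatenate(ups, axis=0)            # channel concatenation -> (C, H, W)
    squeeze = fused.mean(axis=(1, 2))              # global average pool per channel
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # FC-ReLU-FC-sigmoid
    return fused * excite[:, None, None]           # channel-wise screening
```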
Optionally, inputting the output data of the multi-layer decoding modules into the residual label module and calculating the residual label includes: in the residual label module, calculating the residual label from adjacent slices of the element label in the medical image according to the residual label calculation formula:

R_{i,i+1}(v) = L_i(v) - L_{i+1}(v)

where i and i+1 denote two adjacent slices of size H x W in the i-th and (i+1)-th layers, R_{i,i+1} is the residual label, and L_i(v) is the element label value of each voxel v in slice i.
The residual label takes positive and negative values, and the direction from positive to negative can be approximately regarded as the trend of the target element's organ tissue between slices. Since the residual label represents the change in voxel positions between adjacent layers, the number of foreground voxels in the resulting residual label is small, indicating that adjacent layers change continuously and slowly, without sharp changes in voxel count.
The residual labels reflect the trend of the target element's organ tissue between adjacent slices, from which morphological information such as continuity of the target element can be determined. The obtained morphological information is input into the distance map weighting module and fused into the segmentation model in a weighted manner, so that the resulting segmentation model attends to the morphological information of the target element, thereby improving the accuracy and segmentation precision for the target element.
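The residual label computation described above (voxel-wise subtraction of adjacent slices of the label volume) can be sketched as:

```python
import numpy as np

def residual_label(labels):
    """Residual label sketch: subtract each slice of the binary element
    label volume (D, H, W) from its neighbour. Positive/negative values
    indicate the inter-slice trend of the target structure."""
    lab = labels.astype(np.int8)
    return lab[:-1] - lab[1:]          # (D-1, H, W), values in {-1, 0, 1}
```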
Optionally, inputting the residual label into the distance map weighting module and taking the output of the distance map weighting module as the input of the upsampling of each layer's decoding module includes: in the distance map weighting module, performing a distance map calculation on the residual labels to obtain an approximate distance weighted map; and outputting the approximate distance weighted map as the upsampling input of each layer's decoding module, wherein the decoding module multiplies the feature map before upsampling pixel-wise with the approximate distance weighted map and uses the product as the upsampling input.
The foreground voxel positions of the residual label R are close to the vessel edges, so the residual label can be approximated as an edge distance map of the vessel voxels. To make the network more sensitive to edges, a distance map calculation is performed on the residual label R to form an approximate distance weighted map W. On the decoder side, the generated approximate distance weighted map W is multiplied pixel-wise with the feature map before upsampling to achieve edge sensitivity. The formula is as follows:

F'_d = W ⊙ F_d

where d denotes a layer on the decoder side, F_d denotes the feature map before upsampling at that decoder layer, and F'_d denotes the weighted feature map.
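A rough NumPy sketch of this weighting step, under the assumptions of a brute-force Euclidean distance map and a Gaussian decay from the edges (the source does not specify the exact distance-to-weight mapping):

```python
import numpy as np

def approx_distance_weight_map(residual, sigma=2.0):
    """Sketch: treat foreground voxels of the residual label as approximate
    vessel-edge positions, compute the distance from every pixel to the
    nearest one, and decay the weight away from the edges (assumption)."""
    fg = np.argwhere(residual != 0)
    if len(fg) == 0:
        return np.ones(residual.shape, dtype=float)
    ys, xs = np.indices(residual.shape)
    # distance from every pixel to the nearest residual-foreground pixel
    d = np.min(np.hypot(ys[..., None] - fg[:, 0], xs[..., None] - fg[:, 1]), axis=-1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))    # weight highest at the edges

def weight_decoder_features(feat, wmap):
    """Pixel-wise multiplication of a decoder feature map (C, H, W) with
    the approximate distance weighted map before upsampling: F' = W * F."""
    return feat * wmap[None, :, :]
```

In practice a linear-time distance transform (e.g. `scipy.ndimage.distance_transform_edt`) would replace the brute-force minimum.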
Optionally, the method further comprises: calculating a residual loss through the residual label module; calculating the error loss between the segmentation result and the gold standard through a deep supervision module; calculating a connectivity loss from the output of each layer's decoding module; and determining the total loss of the segmentation result from the residual loss, the error loss, and the connectivity loss according to their respective weights.
For the residual loss, a predicted residual label is generated from the probability map predicted by the network; likewise, a gold-standard residual label is generated from the gold standard. A binary cross-entropy loss, whose inputs are the predicted residual label R_hat and the gold-standard residual label R_gt, is used to compute the residual loss. The formula is as follows:

L_res = sum_d alpha_d * sum_c beta_c * BCE(R_hat_{d,c}, R_gt_c)

where d is a layer of the decoder, R_hat and R_gt are the network-predicted residual label and the gold-standard residual label, respectively, and alpha_d is the weight of the residual label generated by each decoder layer, set to [1, 0.75, 0.5, 0.25]. The network prediction has three output channels; beta_c is the weight of each channel, computed as beta_c = N_c / n_c, where N_c is the total number of voxels in the channel and n_c is the number of foreground voxels in the channel. BCE is the binary cross-entropy loss function.
For the connectivity loss L_conn: in order to constrain the connectivity of the blood vessel, the network is forced during training to focus on vessel voxels that break easily and are difficult to segment. Fragile vessels show up in the probability map predicted by the network as voxels whose predicted probability is close to 0.5. Voxels with a probability greater than 0.5 need no attention, because after the probability map is converted into a binary label they are regarded as foreground; vessel rupture occurs where the predicted probability falls below 0.5. First, values in the range 0.4 to 0.5 in the predicted probability map are set to 1 and the remaining values are set to zero, yielding the binary label M_pred of the hard-to-segment vessels predicted by the network. M_pred is multiplied element-wise with the corresponding gold standard to obtain the hard-to-segment gold-standard vessel label M_gt. The continuity loss L_conn is then computed between the predicted hard-vessel binary label M_pred and the hard-to-segment gold-standard vessel label M_gt, using the chosen loss function. λ1 and λ2 are the weights of the residual loss and the continuity loss in the total loss, set to 0.08 and 0.1, respectively.
The final loss function is formulated as follows:

L_total = Loss_dice(pre, GT) + Loss_focal(pre, GT) + λ1 · L_res + λ2 · L_conn

where Loss_dice is the Dice loss function, Loss_focal is the Focal loss function, pre is the network prediction, and GT is the gold standard.
For the error loss L_err: the deep supervision module obtains the error from the output of each layer of the decoding module, and the error is determined in the usual manner for a deeply supervised UNet backbone network.
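The deep-supervision step above can be sketched as follows. This is a minimal illustration assuming binary labels stored as nested lists, max-pooling by 2 to match coarser decoder outputs, and illustrative per-layer weights; the names `downsample2` and `deep_supervision_loss` are not from the patent.

```python
def downsample2(label):
    """Max-pool a binary label by a factor of 2 to match a coarser decoder output."""
    return [[max(label[r][c], label[r][c + 1], label[r + 1][c], label[r + 1][c + 1])
             for c in range(0, len(label[0]) - 1, 2)]
            for r in range(0, len(label) - 1, 2)]

def deep_supervision_loss(outputs, gt, loss_fn, weights=(1, 0.5, 0.25, 0.125)):
    """Weighted sum of losses between each decoder layer's output (finest first)
    and a matched-resolution version of the gold standard."""
    total, cur_gt = 0.0, gt
    for w, out in zip(weights, outputs):
        total += w * loss_fn(out, cur_gt)
        cur_gt = downsample2(cur_gt)
    return total
```

Any per-layer loss (Dice, cross-entropy, or a combination) can be passed in as `loss_fn`.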
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than the order illustrated.
It should be noted that the present application also provides an alternative embodiment, which is described in detail below.
The present embodiment provides a blood vessel segmentation method and a segmentation model. The method mainly concerns the blood vessel segmentation network structure, in particular the structures of the multi-scale feature activation module, the residual loss module and the distance map weighting module, and the way they are connected to the backbone network.
Fig. 2 is a schematic diagram of an overall architecture of a segmentation model provided in an embodiment of the present invention, and as shown in fig. 2, the segmentation model provided in the embodiment mainly includes the following contents:
dynUNet backbone network. The dynUNet backbone network has a 5-layer encoding and decoding structure; each decoding layer is connected to the corresponding encoding layer by a skip connection.
Multi-scale feature activation module (MSFA), shown as 1 in fig. 2. The multi-scale feature activation module is positioned between the encoding layers and decoding layers of the dynUNet; the outputs of encoding layers 1 to 4 of the dynUNet backbone network serve as the input of the multi-scale feature activation module, and the output data of the multi-scale feature activation module serve as the input of decoding layers 1 to 4 of the dynUNet backbone network.
A residual loss module, shown as 2 in fig. 2. The outputs of decoding layers 1 to 4 of the dynUNet backbone network are used to construct residual labels.
Distance map weighting module, shown as 3 in fig. 2. The input of the distance map weighting module is the output of the residual loss module; the output of the distance map weighting module is fed back to decoding layers 1 to 4 of the dynUNet backbone network, is combined with the output of each decoding layer by tensor concatenation, and the combined result is used as the input of the up-sampling.
For the multi-scale feature activation module: fig. 3 is a schematic diagram of the multi-scale feature activation module provided according to an embodiment of the present application, and its framework is shown in fig. 3. As the number of network layers increases, the network captures information at different fields of view: shallow layers (e.g., encoder layers 1 and 2) extract more detail but are noisier, while deep layers (e.g., encoder layers 3 and 4) contribute more abstract and integrated information but perceive detail poorly, so each stage has features useful for vessel segmentation. In order to fully utilize the information of each layer on the encoder side, the feature maps of the layers are first up-sampled to the size of the first encoder layer and concatenated along the channel dimension. Next, the fused features are activated with an SE (squeeze-and-excitation) module, which acts as an information screening step. Finally, the screened fused features are passed to the corresponding decoder layers.
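The upsample-concatenate-activate sequence can be sketched as follows, with plain nested lists standing in for tensors. The function names and the hand-supplied weight matrices `w1`/`w2` are illustrative stand-ins; in a real SE module those weights are learned.

```python
import math

def upsample_nearest(fmap, factor):
    """Nearest-neighbour up-sampling of a 2D feature map, as used to bring
    deeper encoder maps to the first layer's size before concatenation."""
    return [[v for v in row for _ in range(factor)]
            for row in fmap for _ in range(factor)]

def global_avg_pool(fmap):
    """Squeeze step: one scalar per channel (mean over the channel's map)."""
    return sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(channel_maps, w1, w2):
    """Excitation step: FC + ReLU, FC + sigmoid, then channel-wise rescaling."""
    z = [global_avg_pool(m) for m in channel_maps]                        # squeeze
    hidden = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w1]
    gate = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    return [[[v * g for v in row] for row in m]                          # rescale
            for m, g in zip(channel_maps, gate)]
```

The gate vector plays the "information screening" role described above: channels whose pooled statistics are useful receive weights near 1, the rest are suppressed.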
In addition, traditional methods can describe the complicated surface geometry of the blood vessel, including longitudinal and axial curvature, cross-section eccentricity and eccentric direction, and their segmentation results are accurate, but they are time-consuming and inefficient. In a deep learning method, each point of the input image carries only intensity information, and the network can only judge whether a pixel is a foreground point from its intensity value or from the information of its neighboring points. Therefore, a deep learning method cannot incorporate morphological information such as the direction and continuity of the blood vessel into training. To solve this problem, the residual loss is introduced.
For the residual loss module: fig. 4 is a schematic diagram of a residual label provided according to an embodiment of the present application; as shown in fig. 4, the residual label is generated using the cross-sectional CT scan slice shown in the first image. The residual label has positive and negative values, and the direction from positive to negative values can be regarded approximately as the course of the blood vessel between slices. Since the residual label represents the change in position of vessel voxels between adjacent layers, the number of foreground voxels in the resulting residual label is small; this shows that the vessels in adjacent layers change continuously and slowly, so the number of vessel voxels does not change sharply.
If a pair of labels has N slices, each slice of size H × W, then the residual label corresponding to the label is given by:

R_i = y_{i+1} − y_i,  i = 1, …, N − 1

where y_i and y_{i+1} denote two adjacent slices in the i-th and (i+1)-th layers, each of size H × W. The label value of each voxel is 0 or 1. If R_i(x, y) = 0, the voxels at the same position on the adjacent slices belong to the same class, and vice versa.

This embodiment also designs a residual loss: a residual label R_pred is generated from the probability map predicted by the network, and likewise a gold-standard residual label R_gt is generated from the gold standard. A binary cross-entropy loss L_bce, whose inputs are the predicted residual label R_pred and the gold-standard residual label R_gt, is used to compute the residual loss. The formula is as follows:

L_res = Σ_i α_i · Σ_c β_c · L_bce(R_pred(i, c), R_gt(i, c))

where i is a layer of the decoder, and R_pred and R_gt are the residual label predicted by the network and the gold-standard residual label, respectively. α_i is the weight of the residual label generated by each decoder layer, set to [1, 0.75, 0.5, 0.25]. The output of the network prediction has three channels; β_c is the weight of each channel, computed from N_c, the total number of voxels in the channel, and n_c, the number of foreground voxels in the channel. L_bce is the binary cross-entropy loss function.
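A minimal sketch of the residual-label construction and the residual loss: `residual_labels` implements R_i = y_{i+1} − y_i on binary slice stacks, and `bce`/`residual_loss` apply a per-layer weighted binary cross-entropy. The mapping of residual values {−1, 0, 1} into (0, 1) for the cross-entropy, and all function names, are assumptions of this sketch.

```python
import math

def residual_labels(volume):
    """volume: N binary slices (H x W nested lists) -> N-1 residual slices,
    each voxel taking a value in {-1, 0, 1}."""
    return [[[volume[i + 1][r][c] - volume[i][r][c]
              for c in range(len(volume[i][0]))]
             for r in range(len(volume[i]))]
            for i in range(len(volume) - 1)]

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over stacks of residual slices."""
    total, n = 0.0, 0
    for ps, ts in zip(pred, target):
        for prow, trow in zip(ps, ts):
            for p, t in zip(prow, trow):
                p = min(max((p + 1) / 2, eps), 1 - eps)  # map [-1, 1] -> (0, 1)
                t = (t + 1) / 2
                total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
                n += 1
    return total / n

def residual_loss(preds_per_layer, gts_per_layer,
                  layer_weights=(1, 0.75, 0.5, 0.25)):
    """Per-decoder-layer weighted sum of BCE between predicted and gold labels."""
    return sum(w * bce(p, g)
               for w, p, g in zip(layer_weights, preds_per_layer, gts_per_layer))
```

The per-channel weights β_c are omitted here because the patent text only states that they are computed from the total and foreground voxel counts of each channel.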
In addition, in order to constrain the connectivity of the vessel, the network is forced during training to focus on vessel voxels that break easily and are difficult to segment. Fragile vessels show up in the probability map predicted by the network as voxels whose predicted probability is close to 0.5. Voxels with a probability greater than 0.5 need no attention, because after the probability map is converted into a binary label they are regarded as foreground; vessel rupture occurs where the predicted probability falls below 0.5. First, values in the range 0.4 to 0.5 in the predicted probability map are set to 1 and the remaining values are set to zero, yielding the binary label M_pred of the hard-to-segment vessels predicted by the network. M_pred is multiplied element-wise with the corresponding gold standard to obtain the hard-to-segment gold-standard vessel label M_gt. The continuity loss L_conn is then computed between the predicted hard-vessel binary label M_pred and the hard-to-segment gold-standard vessel label M_gt, using the chosen loss function. λ1 and λ2 are the weights of the residual loss and the continuity loss in the total loss, set to 0.08 and 0.1, respectively.
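The hard-voxel labels can be sketched as follows. The 0.4–0.5 band follows the text; `dice_loss` is shown only as one plausible choice of continuity loss, since the patent does not name the exact function here.

```python
def hard_vessel_label(prob_map, low=0.4, high=0.5):
    """1 where the predicted probability lies in [low, high], else 0."""
    return [[1 if low <= p <= high else 0 for p in row] for row in prob_map]

def hard_gt_label(hard_pred, gt):
    """Element-wise product with the gold standard: hard voxels that are vessel."""
    return [[h * g for h, g in zip(hrow, grow)]
            for hrow, grow in zip(hard_pred, gt)]

def dice_loss(a, b, eps=1e-7):
    """1 - Dice overlap between two binary maps (one plausible continuity loss)."""
    inter = sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    total = sum(x for ra in a for x in ra) + sum(y for rb in b for y in rb)
    return 1.0 - (2.0 * inter + eps) / (total + eps)
```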
The final loss function is formulated as follows:

L_total = Loss_dice(pre, GT) + Loss_focal(pre, GT) + λ1 · L_res + λ2 · L_conn

where Loss_dice is the Dice loss function, Loss_focal is the Focal loss function, pre is the network prediction, and GT is the gold standard.
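A sketch of how the stated weights combine the four terms. The Focal loss below is the standard binary form; gamma = 2 is our assumed default, not a value given in the patent.

```python
import math

def focal_loss(prob, gt, gamma=2.0, eps=1e-7):
    """Mean binary Focal loss over a 2D probability map and binary labels."""
    total, n = 0.0, 0
    for prow, grow in zip(prob, gt):
        for p, g in zip(prow, grow):
            p = min(max(p, eps), 1 - eps)
            pt = p if g == 1 else 1 - p          # probability of the true class
            total += -((1 - pt) ** gamma) * math.log(pt)
            n += 1
    return total / n

def total_loss(loss_dice, loss_focal, loss_res, loss_conn,
               lam1=0.08, lam2=0.1):
    """L_total = L_dice + L_focal + lam1 * L_res + lam2 * L_conn."""
    return loss_dice + loss_focal + lam1 * loss_res + lam2 * loss_conn
```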
For the distance map weighting module: the voxel positions of the foreground in the residual label R are close to the edges of the vessel, so the residual label can be approximated as an edge distance map of the vessel voxels. To make the network more sensitive to edges, a distance map is computed from the residual label R, forming an approximate distance weighted map W. On the decoder side, the generated approximate distance weighted map W is multiplied pixel-wise with the feature map before up-sampling, achieving edge sensitivity. The formula is as follows:

F_i' = W ⊙ F_i

where i denotes a layer on the decoder side, F_i denotes the feature map of that decoder layer before up-sampling, and F_i' denotes the weighted feature map.
It should be noted that, before the medical image is input into the segmentation model, the following steps need to be performed:
extracting a region of interest from the original image; performing blood vessel enhancement using Frangi filtering to obtain a blood-vessel-enhanced image; and merging the channels of the original image and the blood-vessel-enhanced image before inputting the merged image into the deep learning network, with the original image as the first channel and the blood-vessel-enhanced image as the second channel.
The image is resampled to (400, 320, 320) and its intensity is clipped to (-1000, 500), after which normalization is performed. Data augmentation includes adding Gaussian noise, foreground cropping, random cropping according to the positive/negative sample ratio, random flipping, rotation, and affine transformation.
A sliding window of size 80 is adopted for training and inference, and the spacing is adjusted to (1, 1).
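The intensity adjustment, normalization and channel merging described above can be sketched as follows (function names are illustrative; resampling and data augmentation are omitted).

```python
def clip_and_normalize(volume, lo=-1000.0, hi=500.0):
    """Clamp intensities to [lo, hi] (here the stated (-1000, 500) window)
    and min-max normalize to [0, 1]."""
    return [[[(min(max(v, lo), hi) - lo) / (hi - lo) for v in row]
             for row in sl] for sl in volume]

def merge_channels(original, enhanced):
    """Two-channel input: channel 0 = original image, channel 1 = Frangi-enhanced."""
    return [original, enhanced]
```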
Then the segmentation model is trained, tested and validated in a deep learning manner, until the total loss meets the requirements.
After the segmentation model outputs the segmentation result, the network output of the segmentation model may be post-processed with a maximum connected component operation, retaining only the largest connected component of the segmentation target. This further improves the accuracy of the segmentation result of the segmentation model.
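The maximum-connected-component post-processing can be sketched for a 2D binary mask as follows (a 3D version would simply add the third axis to the neighbor offsets).

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])   # flood-fill one component
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```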
Fig. 5 is a schematic diagram of a target element segmentation model of a medical image according to an embodiment of the present application, and as shown in fig. 5, an embodiment of the present application further provides a target element segmentation model of a medical image, including: UNet segmentation network 51, depth supervision module 52, multi-scale feature activation module 53, residual label module 54 and distance map weighting module 55, which are as follows:
the UNet segmentation network 51 comprises a plurality of layers of coding modules and decoding modules, and is used for determining the segmentation result of a target element of a medical image, wherein the decoding modules of each layer are connected with the coding modules through a jump connection structure, the coding modules of the lower layer input data obtained by down-sampling the output of the coding modules of the upper layer, the decoding modules of the upper layer input data obtained by up-sampling the output of the coding modules of the current layer, and the decoding modules of the lower layer output data obtained by up-sampling; the input of the multi-scale feature activation module 53 is connected with the output of the multilayer coding module, the output of the multi-scale feature activation module 53 is connected with the input of the multilayer decoding module, and the multi-scale feature activation module is used for inputting the output data of the multilayer coding module to the multi-scale feature activation module for feature activation processing, and sending the processed fusion feature data to the decoding module corresponding to each layer of coding module; the input of the residual label module 54 is connected to the output of the multi-layer decoding module, and is configured to input the output data of the multi-layer decoding module to the residual label module, calculate a residual label, and determine a residual loss; the input of the distance map weighting module 55 is connected with the output of the residual label module 54, the output of the distance map weighting module 55 is connected with the input of each layer decoding module, and the distance map weighting module is used for inputting the residual labels output by the residual label module into the distance map weighting module and outputting an approximate distance weighted map as the input of the upsampling of each layer decoding module; the input of the depth supervision module 52 is connected to the output of each layer of decoding modules for determining the error loss of the segmentation result according to the output of each layer of decoding modules.
When the target elements in the medical images are segmented, the multi-scale feature activation module of the segmentation model is used for fusing and activating the data of the multi-layer codes, and the output of each layer of codes is fully utilized. The residual labels of the target elements are calculated through the residual label module to represent morphological characteristics such as direction continuity and the like, and the residual labels are fed back to each layer of decoding module through the distance map weighting module in a weighting mode to be integrated into the segmentation model, so that the segmentation model has stronger identification capability and more accurate segmentation results when the target elements are segmented. The method achieves the purpose of more accurately segmenting the target element, achieves the technical effect of improving the accuracy of the segmentation result of the target element, and further solves the problem of low accuracy of the segmentation result of the blood vessel caused by poor model structure when the blood vessel in the medical image is segmented in the related technology.
The target element segmentation model of the medical image may be provided in a processor and a memory, the UNet segmentation network 51, the depth supervision module 52, the multi-scale feature activation module 53, the residual label module 54, the distance map weighting module 55, and the like are stored in the memory as program units, and the program units stored in the memory are executed by the processor to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels can be set, and the problem in the related art of low accuracy of the blood vessel segmentation result caused by a poor model structure is solved by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
An embodiment of the present invention provides a computer-readable storage medium on which a program is stored, which, when executed by a processor, implements a method for target element segmentation of a medical image.
The embodiment of the invention provides a processor, which is used for running a program, wherein the program executes a target element segmentation method of a medical image during running.
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 6, an embodiment of the present application provides an electronic device 60, which includes a processor, a memory, and a program stored in the memory and executable on the processor, and when the processor executes the program, the processor implements the steps of the method for segmenting the target element of the medical image:
the device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product adapted to perform a program initialized with any of the method steps described above when executed on a target element segmentation apparatus for medical images.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable medical image object element segmentation apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable medical image object element segmentation apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable medical image object element segmentation apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable medical image object element segmentation apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A method of object element segmentation of a medical image, the method comprising:
acquiring a medical image to be segmented, and performing enhancement and amplification processing as input data;
inputting the input data into a segmentation model, and outputting a corresponding segmentation result by the segmentation model, wherein the segmentation model is provided with a multi-scale feature activation module, a residual label module and a distance map weighting module;
segmenting target elements in the medical image according to the segmentation result;
inputting the input data into a segmentation model, and outputting a corresponding segmentation result by the segmentation model comprises:
inputting image features with preset size into the segmentation model, wherein a backbone network of the segmentation model is a UNet segmentation network, and the UNet segmentation network comprises a multi-layer coding module and a decoding module; the decoding module and the coding module of each layer are connected through a jump connection structure, the coding module of the lower layer inputs data obtained by down sampling of the coding module output of the upper layer, the decoding module of the upper layer inputs data obtained by up sampling of the coding module of the upper layer, and the decoding module of the lower layer outputs data obtained by up sampling;
inputting output data of a multilayer coding module into the multi-scale feature activation module for feature activation processing, and sending processed fusion feature data to a decoding module corresponding to each layer of coding module;
inputting output data of a multi-layer decoding module into the residual label module, and calculating a residual label;
inputting the residual label into the distance map weighting module, and taking the output of the distance map weighting module as the input of the upsampling of each layer of decoding module;
and outputting a final segmentation result through the UNet segmentation network.
2. The method according to claim 1, wherein the medical image to be segmented is acquired and subjected to enhancement and augmentation processing, as input data, comprising:
acquiring the medical image, wherein the medical image comprises a region of interest of the target element;
enhancing the target element to obtain an enhanced image;
channel merging is carried out on the medical image and the enhanced image to obtain a merged image;
carrying out resampling, intensity adjustment and normalization processing on the merged image;
and performing data amplification on the processed combined image to obtain a plurality of image feature sets with preset sizes as the input data.
3. The method of claim 1, wherein inputting output data of a multi-layer coding module into the multi-scale feature activation module for feature activation processing, and sending processed fused feature data to a decoding module corresponding to each layer of coding module comprises:
in the multi-scale feature activation module, the output data of the multi-layer coding modules are input into a compression excitation unit through respective corresponding channels, feature fusion is carried out on the output data of each layer of coding modules, and screening is carried out to obtain fusion features of each channel;
and sending the fusion characteristics of the channels to a decoding module corresponding to each layer of coding module.
4. The method of claim 1, wherein the output data of the multi-layer decoding module is input to the residual label module, and wherein computing the residual label comprises:
in the residual label module, calculating the residual label according to a residual label calculation formula according to the slices of the element labels in the medical image, wherein the residual label calculation formula is as follows:

R_i = y_{i+1} − y_i

in the formula, y_i and y_{i+1} denote two adjacent slices in the i-th and (i+1)-th layers, each slice of size H × W, R_i is the residual label, and the element label value of each voxel in the slice is 0 or 1.
5. The method of claim 1, wherein inputting the residual labels to the distance map weighting module, wherein taking the output of the distance map weighting module as an input for upsampling by each layer decoding module comprises:
in the distance map weighting module, distance map calculation is carried out according to the residual labels to obtain an approximate distance weighted map;
and outputting the approximate distance weighted graph as an up-sampling input of each layer of decoding module, wherein in the decoding module, pixel multiplication is carried out on the approximate distance weighted graph and the feature graph before up-sampling as an up-sampling input.
6. The method of claim 1, further comprising:
calculating a residual loss by the residual tag module;
calculating the error loss of the segmentation result and the gold standard through a depth supervision module;
calculating connectivity loss through the output of each layer of decoding module;
and determining the total loss of the segmentation result as a combination of the residual loss, the error loss and the connectivity loss according to their respective weights.
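The combination in claim 6 amounts to a weighted sum of the three loss terms; the sketch below uses placeholder weight values, since the patent does not specify them here:

```python
def total_loss(residual_loss, error_loss, connectivity_loss,
               w_res=1.0, w_err=1.0, w_conn=1.0):
    """Weighted sum of the three loss terms of claim 6.
    The default weights are placeholders, not the patent's values."""
    return (w_res * residual_loss
            + w_err * error_loss
            + w_conn * connectivity_loss)
```

In training, the three weights would be tuned to balance boundary fidelity (residual), overall accuracy (error), and vessel continuity (connectivity).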
7. A target element segmentation model for medical images, comprising: a UNet segmentation network, a depth supervision module, a multi-scale feature activation module, a residual label module and a distance map weighting module;
the UNet segmentation network comprises multiple layers of coding modules and decoding modules and is used for determining the segmentation result of a target element of a medical image, wherein each layer of decoding module is connected to the corresponding coding module through a skip connection structure, each lower-layer coding module takes as input the down-sampled output of the coding module in the layer above it, and each upper-layer decoding module takes as input the up-sampled output of the decoding module in the layer below it;
the input of the multi-scale feature activation module is connected with the outputs of the multi-layer coding modules and its output is connected with the inputs of the multi-layer decoding modules, and the multi-scale feature activation module is used for performing feature activation processing on the output data of the multi-layer coding modules and sending the processed fused feature data to the decoding module corresponding to each layer of coding module;
the input of the residual label module is connected with the outputs of the multi-layer decoding modules, and the residual label module is used for calculating a residual label from the output data of the multi-layer decoding modules and determining the residual loss;
the input of the distance map weighting module is connected with the output of the residual label module and its output is connected with the input of each layer of decoding module, and the distance map weighting module is used for computing an approximate distance weighted map from the residual labels output by the residual label module and outputting it as the up-sampling input of each layer of decoding module;
and the input of the depth supervision module is connected with the output of each layer of decoding module and is used for determining the error loss of the segmentation result according to the output of each layer of decoding module.
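The encoder/decoder wiring of claim 7 can be traced with a shape-only NumPy sketch (no learned convolutions); the average-pooling down-sampler, nearest-neighbour up-sampler, and channel-concatenation skip connections are illustrative assumptions:

```python
import numpy as np

def downsample(x):
    """2x average pooling over the spatial axes (encoder step)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    """2x nearest-neighbour up-sampling (decoder step)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_shape_flow(image, depth=3):
    """Shape-only sketch of the claimed UNet wiring: each coding module
    feeds both the next (lower) coding module and, via skip connection,
    its matching decoding module; each decoding module up-samples the
    lower layer's output and concatenates the skip features."""
    skips, x = [], image
    for _ in range(depth):            # encoder path: down-sample layer by layer
        skips.append(x)
        x = downsample(x)
    for skip in reversed(skips):      # decoder path: up-sample and fuse skips
        x = np.concatenate([upsample(x), skip], axis=0)
    return x
```

Running a (1, 16, 16) input through three levels restores the original 16×16 resolution while accumulating one skip channel per level, mirroring how the decoders recover spatial detail lost in the encoder.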
8. A computer-readable storage medium, characterized in that the storage medium is used for storing a program, wherein the program, when executed, performs the method of target element segmentation of medical images according to any one of claims 1 to 6.
9. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of target element segmentation of medical images according to any one of claims 1 to 6.
CN202211689087.9A 2022-12-28 2022-12-28 Target element segmentation method, model and electronic equipment for medical image Active CN115700762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211689087.9A CN115700762B (en) 2022-12-28 2022-12-28 Target element segmentation method, model and electronic equipment for medical image

Publications (2)

Publication Number Publication Date
CN115700762A CN115700762A (en) 2023-02-07
CN115700762B (en) 2023-04-07

Family

ID=85121200

Country Status (1)

Country Link
CN (1) CN115700762B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840913B (en) * 2019-01-21 2020-12-29 中南民族大学 Method and system for segmenting tumor in mammary X-ray image
CN111028246A (en) * 2019-12-09 2020-04-17 北京推想科技有限公司 Medical image segmentation method and device, storage medium and electronic equipment
CN113947681A (en) * 2021-10-18 2022-01-18 柏意慧心(杭州)网络科技有限公司 Method, apparatus and medium for segmenting medical images
CN115131556A (en) * 2022-05-27 2022-09-30 吉林大学 Image instance segmentation method based on deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant