CN112541918A - Three-dimensional medical image segmentation method based on self-attention mechanism neural network - Google Patents
- Publication number
- CN112541918A (application CN202011537174.3A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- matrix
- neural network
- attention
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a three-dimensional medical image segmentation method based on a self-attention mechanism neural network, which comprises the following steps: acquiring a medical image, and extracting a convolution feature map of the medical image; performing tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix; obtaining an attention feature map matrix from the first and second feature map matrices; obtaining a self-attention feature map from the attention feature map matrix and the third feature map matrix; connecting the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map; and restoring the global feature map to obtain the segmented image. Global information can be aggregated effectively without a deep encoder, which reduces the number of training parameters and convolution operations, lessens the loss of spatial information, and retains more context information.
Description
Technical Field
The invention relates to the technical field of deep learning and computer vision, in particular to a three-dimensional medical image segmentation method based on a self-attention mechanism neural network.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Medical image segmentation techniques can help physicians analyze a patient's condition more accurately. In recent years, deep learning has made great progress in the field of medical image segmentation, yet existing segmentation models almost all rely on an encoder-decoder architecture, which has two disadvantages: first, excessive local operations generate a large number of parameters and reduce training efficiency; second, too much spatial information is lost, degrading the segmentation result.
The encoder-decoder architecture uses a very deep encoder that accumulates a large number of local operations; the convolution and downsampling operations in existing models are typical local operations. Because the number of feature maps doubles after each downsampling or convolution operation, too many such operations generate a large number of training parameters and thereby reduce the model's efficiency. In addition, convolution and downsampling operations cause a loss of spatial information: the more such operations, the more context information of the image is lost, and extensive context information is crucial for segmenting medical images.
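As a rough illustration of this growth, the parameter count of a single 3D convolution scales with the product of its input and output channel counts, so doubling the channels at each stage roughly quadruples the cost per layer. The channel pairs below are hypothetical encoder stages for illustration, not values taken from this patent:

```python
def conv3d_params(c_in, c_out, k=3):
    """Parameter count of a single k x k x k 3D convolution, plus biases."""
    return c_in * c_out * k ** 3 + c_out

# Hypothetical encoder stages where the channel count doubles at each step:
stages = [(32, 64), (64, 128), (128, 256), (256, 512)]
counts = [conv3d_params(a, b) for a, b in stages]
print(counts)  # each stage costs roughly 4x the previous one
```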
Disclosure of Invention
To solve these problems, the invention provides a three-dimensional medical image segmentation method based on a self-attention mechanism neural network. The overall segmentation method is divided into a down-sampling part, a bottom-layer module and an up-sampling part: the down-sampling part extracts image features, the bottom-layer module fuses global information from feature maps of any size based on a self-attention mechanism, and the up-sampling part restores image detail and precision.
In order to achieve the purpose, the invention adopts the following technical scheme:
In a first aspect, the present invention provides a method for segmenting a three-dimensional medical image based on a self-attention mechanism neural network, including:
acquiring a medical image, and extracting a convolution feature map of the medical image;
performing tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix;
obtaining an attention feature map matrix from the first and second feature map matrices;
obtaining a self-attention feature map from the attention feature map matrix and the third feature map matrix;
connecting the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map;
and restoring the global feature map to obtain the segmented image.
In a second aspect, the present invention provides a three-dimensional medical image segmentation system based on a self-attention mechanism neural network, comprising:
a feature extraction module configured to acquire a medical image and extract a convolution feature map of the medical image;
a tensor conversion module configured to perform tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix;
an attention module configured to derive an attention feature map matrix from the first and second feature map matrices;
a self-attention module configured to obtain a self-attention feature map from the attention feature map matrix and the third feature map matrix;
a global module configured to connect the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map;
and a restoration module configured to restore the global feature map to obtain the segmented image.
In a third aspect, the present invention provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein when the computer instructions are executed by the processor, the method of the first aspect is performed.
In a fourth aspect, the present invention provides a computer readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
the invention reduces convolution operation; the reduction of convolution operation can reduce the number of characteristic graphs on one hand, thereby reducing training parameters and improving the training efficiency of the model, and on the other hand, the reduction of convolution operation can reduce the loss of spatial information and reserve more context information.
The method comprises the steps of performing residual error connection on a self-attention feature map and a convolution feature map in a summation mode to perform feature fusion; on one hand, the feature fusion mode does not generate a large number of feature maps, so that the training parameters of the lower-layer neural network are reduced; on the other hand, the summation jump connection can be regarded as remote residual connection, and the problem of gradient disappearance in the model training process is solved.
The global aggregation block based on the self-attention mechanism can more effectively fuse global information from feature maps of any size, effectively aggregate the global information without a deep encoder, and reduce a large number of training parameters.
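The channel arithmetic behind the summation-based fusion can be checked directly: elementwise summation keeps the channel count, while concatenation doubles it, and the parameter count of the next convolution scales accordingly. The channel numbers here are illustrative, not taken from the patent:

```python
c = 64
sum_channels = c            # summation fusion keeps the channel count
concat_channels = 2 * c     # concatenation fusion doubles it

def conv_params(c_in, c_out, k=3):
    # Parameters of a k x k x k convolution, ignoring biases.
    return c_in * c_out * k ** 3

# The convolution after a concatenation costs twice as much as after a summation:
assert conv_params(concat_channels, 64) == 2 * conv_params(sum_channels, 64)
```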
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a flowchart of a three-dimensional medical image method based on a self-attention mechanism neural network according to embodiment 1 of the present invention;
fig. 2 is an overall architecture diagram of a three-dimensional medical image segmentation method based on a self-attention mechanism neural network according to embodiment 1 of the present invention;
fig. 3 is a schematic diagram of a bottom block module provided in embodiment 1 of the present invention;
fig. 4 is a schematic diagram of a global aggregation block in a bottom block according to embodiment 1 of the present invention.
Detailed Description
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and "comprising", and any variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example 1
As shown in fig. 1, the present embodiment provides a three-dimensional medical image segmentation method based on a self-attention mechanism neural network, including:
S1: acquiring a medical image, and extracting a convolution feature map of the medical image;
S2: performing tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix;
S3: obtaining an attention feature map matrix from the first and second feature map matrices;
S4: obtaining a self-attention feature map from the attention feature map matrix and the third feature map matrix;
S5: connecting the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map;
S6: restoring the global feature map to obtain the segmented image.
In this embodiment, preprocessing of the acquired medical image includes:
(1) expanding the data set using elastic-deformation data augmentation;
(2) randomly cropping the preprocessed medical images into fixed-size image blocks using a random crop method.
In this embodiment, the entire network architecture of the segmentation method is divided into three parts: a down-sampling part, a bottom module (bottom block), and an up-sampling part. The down-sampling part captures contextual information and extracts image features; the up-sampling part combines the corresponding information of the down-sampling part to restore image detail and precision; and the bottom-layer module aggregates global information. Specifically, the method comprises the following steps:
In step S1, the medical image is down-sampled; the down-sampling part has three layers of neural networks to extract image features, as shown in fig. 2. The specific steps are as follows:
S1-1: the first layer of the neural network performs two 3×3×3 convolution operations; the number of channels of the feature map changes from 3 to 32 after the first convolution and to 64 after the second, and a ReLU activation function is used after each convolution.
S1-2: a 2×2×2 max pooling operation with stride 2 is performed on the output feature map of the first layer, and the resulting feature map is input to the second layer.
S1-3: the second layer comprises two 3×3×3 convolution operations, each followed by a ReLU activation function; the number of channels increases from 64 to 128 after the two convolutions.
S1-4: a 2×2×2 max pooling operation with stride 2 is performed on the output feature map of the second layer, and the resulting feature map is input to the third layer.
S1-5: the third layer operates in the same way as the second layer, increasing the number of channels from 128 to 256.
S1-6: the convolution feature map obtained by the third layer is input directly into the bottom block.
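The channel and size bookkeeping of steps S1-1 to S1-6 can be sketched as follows. The input size (3 channels, 8×64×64 voxels) is a hypothetical example, since the patent fixes only the channel counts, and the convolutions are assumed to be padded so that they preserve spatial size:

```python
def encoder_shapes(depth=8, height=64, width=64):
    """Track (channels, D, H, W) through the three-layer down-sampling path."""
    c, d, h, w = 3, depth, height, width
    shapes = []
    c = 32; shapes.append((c, d, h, w))   # layer 1, first conv: 3 -> 32
    c = 64; shapes.append((c, d, h, w))   # layer 1, second conv: 32 -> 64
    d, h, w = d // 2, h // 2, w // 2      # 2x2x2 max pooling, stride 2
    c = 128; shapes.append((c, d, h, w))  # layer 2 convs: 64 -> 128
    d, h, w = d // 2, h // 2, w // 2      # second pooling
    c = 256; shapes.append((c, d, h, w))  # layer 3 convs: 128 -> 256
    return shapes

print(encoder_shapes()[-1])  # (256, 2, 16, 16) is what enters the bottom block
```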
Steps S2-S5 are all operations of the bottom block module. As shown in fig. 3, the bottom block consists of a residual block in which the conventional convolution operation is replaced by a global aggregation block based on the self-attention mechanism. The global aggregation block is shown in fig. 4, and the specific steps are as follows:
S2-1: the convolution feature map obtained by the down-sampling operation is passed through a 1×1×1 convolution to obtain the feature maps K and V shown in fig. 4, and the feature map Q is obtained by a query transform operation; the query transform can be any operation on the C_k feature maps, such as a convolution or pooling operation.
S2-2: tensor conversion is performed on Q, K and V using the Unfold operation to obtain the corresponding matrices; the Unfold operation converts a tensor into a matrix.
S3-1: the matrices of Q and K are combined along the vertical axis to complete one linear transformation, and the resulting weights are normalized with a softmax function to obtain the attention feature map matrix A.
The softmax function is formulated as softmax(x_i) = exp(x_i/√C_k) / Σ_j exp(x_j/√C_k), where C_k is the number of channels of the feature map.
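A minimal sketch of this normalization is given below. The 1/√C_k scaling is an assumption inferred from the mention of C_k and from standard attention practice, since the original formula image is not reproduced in the text:

```python
import math

def scaled_softmax(xs, c_k):
    """Softmax over a row of attention scores, scaled by sqrt(c_k).
    The use of C_k as a scaling factor is an assumption, not taken
    verbatim from the patent's formula."""
    zs = [math.exp(x / math.sqrt(c_k)) for x in xs]
    total = sum(zs)
    return [z / total for z in zs]

weights = scaled_softmax([1.0, 2.0, 3.0], c_k=256)
assert abs(sum(weights) - 1.0) < 1e-9   # each row of A is normalized to sum to 1
assert max(weights) == weights[-1]      # larger scores receive larger weights
```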
S4-1: the attention feature map matrix A is multiplied with the matrix of V obtained by the Unfold operation, and the product is converted back into a tensor by the Fold operation to obtain the self-attention feature map O; Fold is the inverse of Unfold, i.e. it converts a matrix into a tensor, and O is the feature map output by the global aggregation block.
S5-1: a residual connection is applied between the self-attention feature map O and the convolution feature map, as shown in fig. 2, to obtain the global feature map output by the bottom block.
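Steps S2-S5 can be sketched end to end as follows. This is a minimal NumPy sketch under stated assumptions: the 1×1×1 convolutions producing Q, K and V are modelled as random channel-mixing matrices, the query transform is taken to be the identity, and Unfold/Fold are plain reshapes:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_aggregation(feat, seed=0):
    """Sketch of the global aggregation block (steps S2-S5)."""
    c, d, h, w = feat.shape
    rng = np.random.default_rng(seed)
    x = feat.reshape(c, -1)                     # Unfold: tensor -> (C, N) matrix
    wq, wk, wv = (0.1 * rng.standard_normal((c, c)) for _ in range(3))
    q, k, v = wq @ x, wk @ x, wv @ x            # stand-ins for the 1x1x1 convs
    a = softmax(q.T @ k / np.sqrt(c), axis=-1)  # attention feature map matrix A
    o = (a @ v.T).T.reshape(c, d, h, w)         # multiply by V, then Fold back
    return feat + o                             # residual connection (step S5-1)

out = global_aggregation(np.ones((4, 2, 3, 3)))
print(out.shape)  # (4, 2, 3, 3): same shape as the input feature map
```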
In step S6, the global feature map output by the bottom block is up-sampled to obtain the segmented image; the up-sampling part also has three layers of neural networks. The specific steps are as follows:
S6-1: the first layer comprises two 3×3×3 convolution operations, reducing the number of channels from 256 to 128; a ReLU activation function is used after each convolution.
S6-2: a 2×2×2 anti-pooling operation is performed on the output feature map of the first layer, the result is fused with the output feature map of the second layer of the down-sampling part, and the fused feature map is input to the second layer of the up-sampling part.
S6-3: the feature map obtained by the second layer undergoes two 3×3×3 convolutions and one 2×2×2 anti-pooling operation, is fused with the output feature map of the first layer of the down-sampling part, and the result is input to the last layer.
S6-4: the last layer comprises two 3×3×3 convolution operations, reducing the number of channels from 64 to 32; finally a 1×1×1 convolution reduces the number of channels to 3, yielding the segmented image.
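Mirroring the encoder sketch, the bookkeeping of S6-1 to S6-4 can be written out. The spatial sizes continue the same hypothetical 8×64×64 input example, and the 128-to-64 channel reduction in the second layer is inferred from S6-4 starting at 64 channels rather than stated explicitly:

```python
def decoder_shapes(d=2, h=16, w=16):
    """Track (channels, D, H, W) through the three-layer up-sampling path,
    starting from the bottom-block output."""
    c = 128                          # layer 1: two convolutions, 256 -> 128
    d, h, w = 2 * d, 2 * h, 2 * w    # 2x2x2 anti-pooling doubles each dimension
    c = 64                           # layer 2 convolutions after skip fusion (inferred)
    d, h, w = 2 * d, 2 * h, 2 * w    # second anti-pooling
    c = 32                           # layer 3: two convolutions, 64 -> 32
    c = 3                            # final 1x1x1 convolution: 32 -> 3
    return (c, d, h, w)

print(decoder_shapes())  # (3, 8, 64, 64): the output matches the input size
```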
Example 2
This embodiment provides a three-dimensional medical image segmentation system based on a self-attention mechanism neural network, comprising:
a feature extraction module configured to acquire a medical image and extract a convolution feature map of the medical image;
a tensor conversion module configured to perform tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix;
an attention module configured to derive an attention feature map matrix from the first and second feature map matrices;
a self-attention module configured to obtain a self-attention feature map from the attention feature map matrix and the third feature map matrix;
a global module configured to connect the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map;
and a restoration module configured to restore the global feature map to obtain the segmented image.
It should be noted that the above modules correspond to steps S1 to S6 of embodiment 1; the examples and application scenarios realized by the modules are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 1. The modules described above as part of a system may be implemented in a computer system, for example as a set of computer-executable instructions.
In further embodiments, there is also provided:
an electronic device comprising a memory and a processor and computer instructions stored on the memory and executed on the processor, the computer instructions when executed by the processor performing the method of embodiment 1. For brevity, no further description is provided herein.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method described in embodiment 1.
The method of embodiment 1 may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may reside in RAM, flash memory, ROM, PROM, EPROM, registers, or any other storage medium well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, the details are not described again here.
Those of ordinary skill in the art will appreciate that the various illustrative elements, i.e., algorithm steps, described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.
Claims (10)
1. A three-dimensional medical image segmentation method based on a self-attention mechanism neural network, characterized by comprising:
acquiring a medical image, and extracting a convolution feature map of the medical image;
performing tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix;
obtaining an attention feature map matrix from the first feature map matrix and the second feature map matrix;
obtaining a self-attention feature map from the attention feature map matrix and the third feature map matrix;
connecting the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map;
and restoring the global feature map to obtain the segmented image.
2. The method of claim 1, wherein the attention feature map matrix is obtained by stitching and weight-normalizing the first feature map matrix and the second feature map matrix.
3. The self-attention mechanism neural network-based three-dimensional medical image segmentation method of claim 1, wherein the self-attention feature map is obtained by multiplying and matrix-converting an attention feature map matrix and a third feature map matrix.
4. The method for segmenting the three-dimensional medical image based on the self-attention mechanism neural network as claimed in claim 1, wherein a downsampling operation is adopted to extract the convolution feature map of the medical image, the downsampling operation comprising three layers of neural networks, specifically:
performing two convolution operations on the medical image in the first layer of the neural network, and performing one max pooling operation with stride 2 on the resulting output feature map;
in the second layer, performing two convolution operations on the output feature map from the first layer's max pooling operation, and performing one max pooling operation with stride 2 on the resulting output feature map;
and in the third layer, performing two convolution operations on the output feature map from the second layer's max pooling operation to obtain the convolution feature map.
5. The method for segmenting the three-dimensional medical image based on the self-attention mechanism neural network as claimed in claim 4, wherein the segmented image is obtained by restoring the global feature map through an upsampling operation, the upsampling operation comprising three layers of neural networks, specifically:
in the first layer of the neural network, performing two convolution operations and one anti-pooling operation on the global feature map, then fusing the result with the output feature map of the second layer of the downsampling operation;
in the second layer, performing two convolution operations and one anti-pooling operation on the feature map fused in the first layer, then fusing the result with the output feature map of the first layer of the downsampling operation;
and in the third layer, sequentially performing convolution operations of different sizes on the feature map fused in the second layer to obtain the segmented image.
6. The method for segmenting three-dimensional medical images based on the self-attention mechanism neural network as claimed in claim 4 or 5, wherein the ReLU activation function is used once after each convolution operation.
7. The method for segmenting a three-dimensional medical image based on a self-attention mechanism neural network as claimed in claim 4 or 5, wherein the convolution operation is a 3 × 3 × 3 convolution operation; or the pooling operation and the anti-pooling operation are both 2 × 2 × 2 operations; or, in the up-sampling operation, the third layer of the neural network performs two 3 × 3 × 3 convolutions and a 1 × 1 × 1 convolution on the feature map after feature fusion of the second layer to obtain the segmented image.
8. A three-dimensional medical image segmentation system based on a self-attention mechanism neural network, comprising:
a feature extraction module configured to acquire a medical image and extract a convolution feature map of the medical image;
a tensor conversion module configured to perform tensor conversion on the convolution feature map to obtain a first feature map matrix, a second feature map matrix and a third feature map matrix;
an attention module configured to derive an attention feature map matrix from the first feature map matrix and the second feature map matrix;
a self-attention module configured to obtain a self-attention feature map from the attention feature map matrix and the third feature map matrix;
a global module configured to connect the self-attention feature map and the convolution feature map through a residual connection to obtain a global feature map;
and a restoration module configured to restore the global feature map to obtain the segmented image.
9. An electronic device comprising a memory and a processor and computer instructions stored on the memory and executed on the processor, the computer instructions when executed by the processor performing the method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 7.
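The pipeline in claim 8 (tensor conversion into three feature map matrices, an attention matrix from the first two, a self-attention feature map from the attention matrix and the third, then a residual connection) can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the function and weight names (`self_attention_block`, `w1`–`w3`) are hypothetical, and the softmax normalization with `1/sqrt(C)` scaling is an assumption borrowed from standard self-attention, since the claims do not specify how the attention matrix is normalized.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_block(feat, w1, w2, w3):
    """Sketch of the claimed steps for one convolution feature map.

    feat       : (C, D, H, W) convolution feature map.
    w1, w2, w3 : (C, C) projection weights yielding the first, second
                 and third feature map matrices (hypothetical names).
    Returns the (C, D, H, W) global feature map.
    """
    C, D, H, W = feat.shape
    # Tensor conversion: flatten the spatial dims -> (N, C), N = D*H*W.
    x = feat.reshape(C, -1).T
    # First / second / third feature map matrices.
    q, k, v = x @ w1, x @ w2, x @ w3
    # Attention feature map matrix from the first and second matrices.
    attn = softmax(q @ k.T / np.sqrt(C))
    # Self-attention feature map from the attention and third matrices.
    y = attn @ v
    # Residual connection with the convolution feature map.
    return y.T.reshape(C, D, H, W) + feat
```

Note the quadratic cost: `attn` is (N, N) with N = D·H·W, which is why the claims apply this block to a pooled, lower-resolution feature map rather than to the full volume.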
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011537174.3A CN112541918A (en) | 2020-12-23 | 2020-12-23 | Three-dimensional medical image segmentation method based on self-attention mechanism neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112541918A true CN112541918A (en) | 2021-03-23 |
Family
ID=75017636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011537174.3A Pending CN112541918A (en) | 2020-12-23 | 2020-12-23 | Three-dimensional medical image segmentation method based on self-attention mechanism neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112541918A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111402259A (en) * | 2020-03-23 | 2020-07-10 | 杭州健培科技有限公司 | Brain tumor segmentation method based on multi-level structure relation learning network |
Non-Patent Citations (2)
Title |
---|
Özgün Çiçek et al.: "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation", arXiv:1606.06650v1 * |
Zhengyang Wang et al.: "Non-local U-Nets for Biomedical Image Segmentation", arXiv:1812.04103v2 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113393568A (en) * | 2021-06-08 | 2021-09-14 | 先临三维科技股份有限公司 | Training method, device, equipment and medium for neck-edge linear deformation prediction model |
CN113393568B (en) * | 2021-06-08 | 2022-07-29 | Training method, device, equipment and medium for neck-edge linear deformation prediction model |
CN113743450A (en) * | 2021-07-20 | 2021-12-03 | 浙江工业大学 | Hyperspectral image segmentation method based on non-local feature fusion |
CN113706642A (en) * | 2021-08-31 | 2021-11-26 | 北京三快在线科技有限公司 | Image processing method and device |
CN113793345A (en) * | 2021-09-07 | 2021-12-14 | 复旦大学附属华山医院 | Medical image segmentation method and device based on improved attention module |
CN113793345B (en) * | 2021-09-07 | 2023-10-31 | 复旦大学附属华山医院 | Medical image segmentation method and device based on improved attention module |
CN116402780A (en) * | 2023-03-31 | 2023-07-07 | 北京长木谷医疗科技有限公司 | Thoracic vertebra image segmentation method and device based on double self-attention and deep learning |
CN116402780B (en) * | 2023-03-31 | 2024-04-02 | 北京长木谷医疗科技股份有限公司 | Thoracic vertebra image segmentation method and device based on double self-attention and deep learning |
CN116469132A (en) * | 2023-06-20 | 2023-07-21 | 济南瑞泉电子有限公司 | Fall detection method, system, equipment and medium based on double-flow feature extraction |
CN116469132B (en) * | 2023-06-20 | 2023-09-05 | 济南瑞泉电子有限公司 | Fall detection method, system, equipment and medium based on double-flow feature extraction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112541918A (en) | Three-dimensional medical image segmentation method based on self-attention mechanism neural network | |
CN109034162B (en) | Image semantic segmentation method | |
KR20220066945A (en) | Image processing method, apparatus, electronic device and computer readable storage medium | |
CN113888744A (en) | Image semantic segmentation method based on Transformer visual upsampling module | |
WO2016019484A1 (en) | An apparatus and a method for providing super-resolution of a low-resolution image | |
CN116051549B (en) | Method, system, medium and equipment for dividing defects of solar cell | |
CN111192278A (en) | Semantic segmentation method, semantic segmentation device, computer equipment and computer-readable storage medium | |
CN113642585B (en) | Image processing method, apparatus, device, storage medium, and computer program product | |
CN116433914A (en) | Two-dimensional medical image segmentation method and system | |
CN111951167A (en) | Super-resolution image reconstruction method, super-resolution image reconstruction device, computer equipment and storage medium | |
JP6830742B2 (en) | A program for pixel-based image segmentation | |
CN114913094A (en) | Image restoration method, image restoration apparatus, computer device, storage medium, and program product | |
CN115936992A (en) | Garbage image super-resolution method and system of lightweight transform | |
CN115496919A (en) | Hybrid convolution-transformer framework based on window mask strategy and self-supervision method | |
CN110517267B (en) | Image segmentation method and device and storage medium | |
CN113066089B (en) | Real-time image semantic segmentation method based on attention guide mechanism | |
CN111507100A (en) | Convolution self-encoder and word embedding vector compression method based on same | |
CN111274936B (en) | Multispectral image ground object classification method, system, medium and terminal | |
CN115187820A (en) | Light-weight target detection method, device, equipment and storage medium | |
CN111724309B (en) | Image processing method and device, training method of neural network and storage medium | |
CN117058160A (en) | Three-dimensional medical image segmentation method and system based on self-adaptive feature fusion network | |
CN117315241A (en) | Scene image semantic segmentation method based on transformer structure | |
CN114494006A (en) | Training method and device for image reconstruction model, electronic equipment and storage medium | |
CN116433911A (en) | Camouflage object instance segmentation method, device and system based on multi-scale pooling modeling | |
CN114565528A (en) | Remote sensing image noise reduction method and system based on multi-scale and attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210323 ||