WO2021031066A1 - Cartilage image segmentation method and apparatus, readable storage medium, and terminal device - Google Patents
- Publication number: WO2021031066A1
- Application: PCT/CN2019/101339 (CN2019101339W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature map
- cartilage
- module
- convolution
- target
- Prior art date
Classifications
- G06N3/04: Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology
- G06N3/08: Computing arrangements based on biological models; Neural networks; Learning methods
- G06T3/40: Geometric image transformations in the plane of the image; Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T7/11: Image analysis; Segmentation; Edge detection; Region-based segmentation
- G06V10/40: Arrangements for image or video recognition or understanding; Extraction of image or video features
Definitions
- This application belongs to the field of image processing technology, and in particular relates to a method, device, computer-readable storage medium, and terminal equipment for cartilage image segmentation.
- Current cartilage image segmentation methods are mainly based on convolutional neural network models. Due to the particularity of cartilage characteristics, existing segmentation methods based on convolutional neural network models still suffer from low segmentation accuracy.
- The embodiments of the present application provide a cartilage image segmentation method, device, computer-readable storage medium, and terminal equipment, which can solve the problem of low accuracy of cartilage image segmentation.
- In a first aspect, an embodiment of the present application provides a cartilage image segmentation method, including:
- inputting the target cartilage image to be segmented into a preset cartilage image segmentation model, where the cartilage image segmentation model includes a hole (atrous) convolution module, a pyramid hole pooling module connected to the hole convolution module, an attention mechanism module connected to the pyramid hole pooling module, and a fusion module connected to both the hole convolution module and the attention mechanism module;
- performing feature extraction on the target cartilage image through the hole convolution module to obtain a first feature map corresponding to the target cartilage image;
- pooling the first feature map through the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image, and weighting the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
- up-sampling the third feature map through the fusion module, and fusing the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result output by the cartilage image segmentation model.
- In a second aspect, an embodiment of the present application provides a cartilage image segmentation device, including:
- a target image acquisition module, used to acquire the target cartilage image to be segmented;
- a target image input module, used to input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes a hole convolution module, a pyramid hole pooling module connected to the hole convolution module, an attention mechanism module connected to the pyramid hole pooling module, and a fusion module connected to both the hole convolution module and the attention mechanism module;
- a feature extraction module, configured to perform feature extraction on the target cartilage image through the hole convolution module to obtain a first feature map corresponding to the target cartilage image;
- a pooling and weighting processing module, used to pool the first feature map through the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image, and to weight the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
- a result output module, used to up-sample the third feature map through the fusion module and fuse the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
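The modules listed above can be read as a simple data-flow pipeline. The sketch below is a hypothetical stand-in, not the patented implementation: the four callables are placeholders, a 1-D list stands in for a feature map, and an element-wise sum stands in for the fusion step.

```python
def segment_cartilage(image, extract, pool, attend, upsample):
    f1 = extract(image)    # hole convolution module  -> first feature map
    f2 = pool(f1)          # pyramid hole pooling     -> second feature map
    f3 = attend(f2)        # attention weighting      -> third feature map
    f4 = upsample(f3)      # up-sampling in the fusion module -> fourth feature map
    return [a + b for a, b in zip(f4, f1)]  # fuse fourth map with first map

# Identity stand-ins on a toy 1-D "feature map":
result = segment_cartilage(
    [1.0, 2.0, 3.0],
    extract=lambda x: x,
    pool=lambda x: x,
    attend=lambda x: x,
    upsample=lambda x: x,
)
print(result)  # [2.0, 4.0, 6.0]
```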
- In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the cartilage image segmentation method described in the first aspect is realized.
- In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the cartilage image segmentation method described in the first aspect.
- In a fifth aspect, embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to execute the cartilage image segmentation method described in the first aspect.
- Multi-scale image information is extracted from the target cartilage image through the hole convolution module and the pyramid hole pooling module and fused through the fusion module, which effectively retains the detailed information of the image, improves the image boundary segmentation capability, and thereby improves the segmentation accuracy of cartilage images.
- Weighting the image information through the attention mechanism module effectively enhances the segmentation ability and improves the accuracy and precision of cartilage image segmentation.
- FIG. 1 is a schematic flowchart of a cartilage image segmentation method provided by an embodiment of the present application
- FIG. 2 is a structural block diagram of a cartilage image segmentation model provided by an embodiment of the present application.
- FIG. 3 is a schematic structural diagram of a hole convolution module provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of convolution of a channel hole convolution layer provided by an embodiment of the present application.
- FIG. 5 is a schematic flowchart of obtaining a second feature map in an application scenario by the cartilage image segmentation method provided by an embodiment of the present application;
- Fig. 6 is a schematic structural diagram of a pyramid cavity pooling module provided by an embodiment of the present application.
- Fig. 6a is a schematic diagram of convolution with different sampling rates provided by an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of an attention mechanism module provided by an embodiment of the present application.
- FIG. 8 is a schematic flowchart of obtaining a third feature map in an application scenario by the cartilage image segmentation method provided by an embodiment of the present application;
- Figure 9a is a cartilage segmentation diagram of a manually annotated gold standard;
- Figure 9b is a cartilage segmentation diagram produced by the cartilage image segmentation method in an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of a cartilage image segmentation device provided by an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
- Fig. 1 shows a schematic flowchart of a cartilage image segmentation method provided by an embodiment of the present application, and the cartilage image segmentation method includes:
- Step S101: Obtain a target cartilage image to be segmented.
- the execution subject of the embodiments of the present application may be a terminal device, and the terminal device includes, but is not limited to, computing devices such as desktop computers, notebooks, palmtop computers, and cloud servers.
- The target cartilage image to be segmented may be sent to the terminal device, where the target cartilage image may be a magnetic resonance imaging (MRI) image containing cartilage, for example, an MRI image of a knee joint.
- Step S102: Input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes a hole convolution module, a pyramid hole pooling module connected to the hole convolution module, an attention mechanism module connected to the pyramid hole pooling module, and a fusion module connected to both the hole convolution module and the attention mechanism module;
- After the terminal device obtains the target cartilage image, it can call the cartilage image segmentation model shown in FIG. 2 and input the target cartilage image into the cartilage image segmentation model.
- In an embodiment, inputting the target cartilage image into a preset cartilage image segmentation model may include:
- Step a: Obtain the original resolution and original sampling distance corresponding to the target cartilage image;
- Step b: Determine the target sampling distance according to the original resolution, the original sampling distance, and the preset target resolution;
- Step c: Use the target sampling distance to resample the target cartilage image, and input the resampled target cartilage image into a preset cartilage image segmentation model.
- The target sampling distance may be determined from the original resolution, the original sampling distance, and the preset target resolution as Spacing = spacing' × ImageRe' / ImageRe, where Spacing is the target sampling distance, spacing' is the original sampling distance, ImageRe' is the original resolution, and ImageRe is the target resolution (the relation keeps the physical field of view constant).
- The original resolution may be any resolution and the original sampling distance may be any distance; the target resolution may be, for example, 513 × 513 pixels. Resampling the image with the determined target sampling distance crops the target cartilage image to the target resolution, which facilitates feature extraction by the cartilage image segmentation model, improves its segmentation efficiency, and removes limits on the size of the input target cartilage image, which is convenient for users and improves user experience.
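The resampling relation can be sketched as a small helper; the function name and the example numbers are illustrative, and the formula assumes the physical field of view is preserved when the resolution changes.

```python
def target_spacing(orig_resolution, orig_spacing, target_resolution):
    # Keep the physical extent constant: Spacing * ImageRe == spacing' * ImageRe'
    return orig_spacing * orig_resolution / target_resolution

# e.g. an axis of 1024 pixels sampled every 0.3 mm, resampled to 513 pixels:
print(target_spacing(1024, 0.3, 513))  # about 0.599 mm per pixel
```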
- Step S103: Perform feature extraction on the target cartilage image through the hole convolution module to obtain a first feature map corresponding to the target cartilage image;
- The hole convolution module of the cartilage image segmentation model extracts image features of the target cartilage image to obtain the first feature map corresponding to the target cartilage image.
- The hole convolution module is a convolution module based on the Xception network structure, which includes planar hole convolution layers and channel hole convolution layers; the sampling rate of the planar hole convolution layers is 1 or 3, and the sampling rate of the channel hole convolution layers is 6. It should be understood that the convolution kernel size of both the planar and channel hole convolution layers may be 3 × 3.
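For intuition, the span covered by a dilated kernel grows with the sampling rate according to the standard convolution-arithmetic formula k + (k - 1)(r - 1); this is general background, not a formula stated in the patent.

```python
def effective_kernel(k, rate):
    # A k x k kernel with hole (dilation) rate r spans k + (k-1)*(r-1) pixels per axis.
    return k + (k - 1) * (rate - 1)

# Rates used by the module's layers: 1 and 3 (planar), 6 (channel).
for rate in (1, 3, 6):
    print(rate, effective_kernel(3, rate))
```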
- The Xception network structure may include an input unit, an intermediate processing unit, and an output unit.
- The input unit may include a first 3 × 3 convolutional layer and a second 3 × 3 convolutional layer connected in series (that is, convolutional layers with a convolution kernel size of 3 × 3), followed by a first, a second, and a third convolution subunit.
- The intermediate processing unit may include 16 fourth convolution subunits connected in series, and each fourth convolution subunit may include three third planar hole convolution layers connected in series.
- The output unit may include a fifth convolution subunit and a sixth convolution subunit connected in series. The fifth convolution subunit may include a second 1 × 1 convolution layer and, connected in series, a fourth planar hole convolution layer, a fifth planar hole convolution layer, and a second channel hole convolution layer; the sixth convolution subunit may include a sixth planar hole convolution layer, a third channel hole convolution layer, and a seventh planar hole convolution layer. After each convolutional layer, a normalization output layer and a Relu activation layer may be connected in sequence.
- The first, second, and third channel hole convolution layers mainly adopt depth-wise hole convolution, which may be formed by a hole convolution with a 3 × 3 kernel connected in series with a cross-channel convolution with a 1 × 1 kernel; this reduces model parameters and improves convolution efficiency.
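The saving from a depth-wise 3 × 3 hole convolution followed by a 1 × 1 cross-channel convolution can be illustrated by counting weights; the channel counts below are hypothetical, not taken from the patent.

```python
def standard_conv_params(k, c_in, c_out):
    # A dense k x k convolution mixes all input and output channels at once.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # k x k depth-wise convolution (one filter per channel) + 1 x 1 pointwise mix.
    return k * k * c_in + c_in * c_out

print(standard_conv_params(3, 128, 128))        # 147456 weights
print(depthwise_separable_params(3, 128, 128))  # 17536 weights (~8.4x fewer)
```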
- The input unit of the hole convolution module may first perform feature extraction on the target cartilage image and input the extracted feature map A to the intermediate processing unit; the intermediate processing unit may perform feature extraction on feature map A and input the extracted feature map B to the output unit; the output unit may perform further feature extraction on feature map B to obtain the first feature map corresponding to the target cartilage image.
- The process for the input unit to perform feature extraction on the target cartilage image may be as follows. The first 3 × 3 convolutional layer of the input unit may first perform feature extraction on the target cartilage image and input the extracted feature map R to the second 3 × 3 convolutional layer of the input unit; the second 3 × 3 convolutional layer may perform further feature extraction on feature map R and input the extracted feature map T to the first convolution subunit of the input unit.
- The first 1 × 1 convolution layer of the first convolution subunit may perform feature extraction on feature map T to obtain feature map T1. In parallel, the first planar hole convolution layer of the first convolution subunit may perform feature extraction on feature map T to obtain feature map T2 and input it to the second planar hole convolution layer of the first convolution subunit; the second planar hole convolution layer may perform feature extraction on feature map T2 and input the extracted feature map T21 to the first channel hole convolution layer of the first convolution subunit, which performs feature extraction on feature map T21 to obtain feature map S.
- The first convolution subunit may then fuse feature map T1 and feature map S and input the fused feature map L to the second convolution subunit of the input unit.
- The second convolution subunit may perform feature extraction on feature map L and input the extracted feature map H to the third convolution subunit of the input unit, which performs feature extraction on feature map H to obtain the feature map A extracted by the input unit.
- The feature extraction processes performed by the second convolution subunit on feature map L and by the third convolution subunit on feature map H are similar to that performed by the first convolution subunit on feature map T, and the basic principle is the same; for brevity, they are not repeated here.
- The process of the intermediate processing unit performing feature extraction on feature map A may be as follows. The first of the three third planar hole convolution layers in the first fourth convolution subunit may first perform feature extraction on feature map A and input the extracted feature map A1 to the second third planar hole convolution layer, which performs feature extraction on feature map A1 and inputs the extracted feature map A11 to the third of the third planar hole convolution layers; that layer performs feature extraction on feature map A11 to obtain feature map G. The first fourth convolution subunit may then fuse feature map A and feature map G and input the fused feature map K to the second fourth convolution subunit, which performs feature extraction on feature map K and inputs the extracted feature map to the third fourth convolution subunit, and so on, until the sixteenth fourth convolution subunit performs feature extraction to obtain the feature map B extracted by the intermediate processing unit.
- The feature extraction processes of the second through sixteenth fourth convolution subunits are similar to that of the first fourth convolution subunit on feature map A, and the basic principle is the same; for brevity, they are not repeated here.
- The feature extraction process of the output unit on feature map B may be as follows. The fifth convolution subunit of the output unit may first perform feature extraction on feature map B and input the extracted feature map F to the sixth convolution subunit of the output unit; the process by which the fifth convolution subunit extracts features from feature map B is similar to that by which the first convolution subunit of the input unit extracts features from feature map T, and the basic principle is the same, so it is not repeated here.
- The sixth planar hole convolution layer of the sixth convolution subunit may perform feature extraction on feature map F and input the extracted feature map F1 to the third channel hole convolution layer of the sixth convolution subunit; the third channel hole convolution layer may perform feature extraction on feature map F1 and input the extracted feature map F11 to the seventh planar hole convolution layer of the sixth convolution subunit, through which features are extracted from feature map F11 to obtain the first feature map corresponding to the target cartilage image.
- The Xception network structure in the embodiment of the present application is similar to a residual network structure, so the hole convolution module based on the Xception network structure can effectively reduce the gradient attenuation rate, avoid degradation of the network structure, and ensure the accuracy of cartilage image segmentation.
- The hole convolution module uses hole convolution layers with different sampling rates to extract features from the target cartilage image, which enlarges the receptive field, increases the amount of information contained in the feature map, effectively retains the detailed information of the image, and improves the segmentation accuracy of cartilage images.
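A minimal 1-D sketch, assuming zero padding, shows how a hole convolution spaces its taps by the sampling rate and thus widens the receptive field without adding weights; it is an illustration only, not the module's actual 2-D implementation.

```python
def dilated_conv1d(signal, kernel, rate):
    # "Same"-size 1-D hole convolution: taps sit `rate` samples apart.
    k = len(kernel)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + (j - k // 2) * rate  # centered, dilated tap position
            if 0 <= idx < len(signal):     # zero padding at the borders
                acc += w * signal[idx]
        out.append(acc)
    return out

# A unit impulse reveals the spread of the kernel taps (rate 2 -> taps at +/-2):
print(dilated_conv1d([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 2))
```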
- Step S104: Perform pooling processing on the first feature map through the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image, and perform weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
- After the hole convolution module extracts the first feature map corresponding to the target cartilage image, the first feature map may be input to the pyramid hole pooling module, which pools it to obtain the second feature map; the second feature map may then be input to the attention mechanism module, which weights it to obtain the third feature map corresponding to the target cartilage image.
- Using the pyramid hole pooling module to extract image information at multiple scales improves the boundary segmentation capability and the segmentation accuracy of the cartilage image, and using the attention mechanism module to weight the image information effectively improves the accuracy and precision of cartilage image segmentation.
- the pyramid hole pooling module includes a plurality of first convolution branches parallel to each other;
- performing pooling processing on the first feature map by the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image may include:
- Step S501: Perform feature sampling on the first feature map through each of the first convolution branches to obtain a first sampling feature map, a second sampling feature map, a third sampling feature map, and a fourth sampling feature map corresponding to the first feature map;
- Step S502: Splice the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map to obtain a spliced feature map;
- Step S503: Perform average pooling processing on the spliced feature map to obtain a second feature map corresponding to the target cartilage image.
- Each first convolution branch includes a first hole convolution unit, a second hole convolution unit, and a third hole convolution unit with different sampling rates; the first hole convolution unit is connected to the second hole convolution unit, and the second hole convolution unit is connected to the third hole convolution unit.
- In an embodiment, the pyramid hole pooling module may include four first convolution branches parallel to each other, and each first convolution branch may include, connected in series in sequence, a first hole convolution unit, a second hole convolution unit, and a third hole convolution unit. The first hole convolution unit may include a convolution layer with a sampling rate of 6 (as shown in FIG. 6a), a first Relu activation layer, and a first dropout layer; the second hole convolution unit may include a convolution layer with a sampling rate of 12 (as shown in FIG. 6a), a second Relu activation layer, and a second dropout layer; and the third hole convolution unit may include a convolution layer with a sampling rate of 18 (as shown in FIG. 6a).
- The pyramid hole pooling module may further include a splicing layer connected to the convolutional layer with a sampling rate of 18 and an average pooling layer connected to the splicing layer.
- After the pyramid hole pooling module obtains the first feature map extracted by the hole convolution module, it may perform feature sampling on the first feature map through the four parallel first convolution branches to obtain, respectively, the first, second, third, and fourth sampling feature maps corresponding to the first feature map.
- The process by which a first convolution branch samples the first feature map to obtain the first sampling feature map may be as follows: first, the convolutional layer with a sampling rate of 6 may sample the first feature map and input the sampled feature map C to the first Relu activation layer; next, the first Relu activation layer may process sampled feature map C and input the processed sampled feature map C1 to the first dropout layer; the first dropout layer may process sampled feature map C1 and input the processed sampled feature map C2 to the convolutional layer with a sampling rate of 12; that layer may further sample feature map C2 and input the sampled feature map C3 to the second Relu activation layer; the second Relu activation layer may then process sampled feature map C3, after which the result may pass through the second dropout layer and be sampled by the convolutional layer with a sampling rate of 18 to obtain the first sampling feature map.
- The processes by which the other first convolution branches sample the first feature map to obtain the second, third, and fourth sampling feature maps are similar to the process for obtaining the first sampling feature map, and the basic principle is the same; for brevity, they are not repeated here.
- After the first convolution branches obtain the first, second, third, and fourth sampling feature maps, these sampling feature maps are input to the splicing layer of the pyramid hole pooling module, and the splicing layer splices (concatenates) the first, second, third, and fourth sampling feature maps to obtain the spliced feature map.
- The spliced feature map may then be input to the average pooling layer of the pyramid hole pooling module, and the average pooling layer may perform average pooling processing on the spliced feature map to obtain the second feature map corresponding to the target cartilage image.
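The branch, splice, and average-pool flow just described can be sketched with toy stand-ins; the three lambda branches below merely mimic the rate-6/12/18 chains and are arbitrary placeholders.

```python
def pyramid_pool(feature_map, branches):
    # Sample the same map through parallel branches, splice (concatenate) the
    # results, then average-pool the spliced map (here reduced to a global mean).
    sampled = [branch(feature_map) for branch in branches]
    spliced = [v for s in sampled for v in s]
    return sum(spliced) / len(spliced)

# Toy stand-ins for the three serial hole-convolution chains of each branch:
branches = [
    lambda x: [v * 1.0 for v in x],
    lambda x: [v * 2.0 for v in x],
    lambda x: [v * 3.0 for v in x],
]
print(pyramid_pool([1.0, 2.0], branches))  # 3.0
```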
- the attention mechanism module may include multiple second convolution branches parallel to each other, for example, may include three second convolution branches parallel to each other, where:
- Each of the second convolution branches may include a convolution layer with a convolution kernel size of 1 ⁇ 1 and a step size of 2.
- performing weighting processing on the second feature map by the attention mechanism module to obtain a third feature map corresponding to the target cartilage image may include:
- Step S801: Perform convolution processing on the second feature map through each of the second convolution branches to obtain a first convolution feature map, a second convolution feature map, and a third convolution feature map corresponding to the second feature map;
- Step S802: Perform transposition processing on the first convolution feature map, and perform matrix multiplication on the transposed feature map and the second convolution feature map to obtain a fifth feature map;
- Step S803: Perform normalization processing on the fifth feature map, and perform matrix multiplication on the normalized fifth feature map and the third convolution feature map to obtain a weighting coefficient matrix corresponding to the second feature map;
- Step S804: Perform weighting processing on the second feature map through the weighting coefficient matrix to obtain a third feature map corresponding to the target cartilage image.
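Steps S801 to S804 follow the familiar self-attention pattern (transpose, matrix multiply, softmax, matrix multiply, weight). The pure-Python sketch below applies it to small matrices; the final residual sum is an assumption about the weighting step, not a detail stated here.

```python
import math

def matmul(a, b):
    cols = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def transpose(m):
    return [list(r) for r in zip(*m)]

def softmax_rows(m):
    out = []
    for row in m:
        e = [math.exp(v - max(row)) for v in row]  # numerically stable softmax
        s = sum(e)
        out.append([v / s for v in e])
    return out

def attention_weight(q, k, v, feature):
    scores = matmul(transpose(q), k)           # S802: transpose then multiply
    weights = matmul(softmax_rows(scores), v)  # S803: normalize then multiply
    # S804: apply the weighting matrix (assumed here to be a residual sum)
    return [[f + w for f, w in zip(fr, wr)] for fr, wr in zip(feature, weights)]

ident = [[1.0, 0.0], [0.0, 1.0]]
print(attention_weight(ident, ident, ident, ident))
```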
- after the pyramid hole pooling module obtains the second feature map corresponding to the target cartilage image, it can input the second feature map to the attention mechanism module. The attention mechanism module can first perform convolution processing on the second feature map through three parallel 1×1 convolution layers to obtain three convolution feature maps corresponding to the second feature map; that is, the three 1×1 convolution layers perform dimensionality reduction on the second feature map to generate a first convolution feature map, a second convolution feature map, and a third convolution feature map that retain detailed information. The first convolution feature map can then be transposed, and the transposed feature map can be matrix-multiplied with the second convolution feature map to obtain a fifth feature map. Subsequently, the fifth feature map can be normalized, for example through the softmax function, and the normalized fifth feature map can be matrix-multiplied with the third convolution feature map to obtain the weighting coefficient matrix corresponding to the second feature map, that is, the attention of each position in the feature map relative to the other positions. Finally, the second feature map can be weighted by the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image; for example, the weighted third feature map can be obtained by summing the second feature map and the weighting coefficient matrix.
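Steps S801 to S804 amount to a standard self-attention computation, sketched below in NumPy. The 1×1 convolutions are modeled as plain channel projections, and the reduced channel size (8) and the residual sum are assumptions consistent with, but not fixed by, the description:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weighting(feat, reduced=8, seed=0):
    """S801: three parallel 1x1 convolutions (here channel projections);
    S802: transpose + matrix multiply; S803: softmax normalization, then
    multiply with the third map; S804: residual weighting of the input."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                     # spatial positions as columns
    rng = np.random.default_rng(seed)
    wq, wk = rng.standard_normal((2, reduced, c))  # two dimension-reducing branches
    wv = rng.standard_normal((c, c))               # third branch keeps the channels
    q, k, v = wq @ x, wk @ x, wv @ x
    energy = q.T @ k                               # (HW, HW) position affinities
    attn = softmax(energy, axis=-1)                # attention of each position
    weighted = v @ attn.T                          # weighting coefficients applied
    return feat + weighted.reshape(c, h, w)        # assumed residual sum

third_fm = attention_weighting(np.random.rand(16, 8, 8))
print(third_fm.shape)  # (16, 8, 8)
```

Each row of `attn` sums to 1 after the softmax, so every output position is a convex mixture over all positions, which is what lets the module weight each position relative to the others.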
- Step S105 Up-sample the third feature map by the fusion module, and fuse the sampled fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
- up-sampling the third feature map by the fusion module and fusing the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model may include:
- Step d Perform bilinear up-sampling on the third feature map by the fusion module to obtain the fourth feature map;
- Step e Perform convolution processing on the first feature map through the third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map;
- Step f Perform fusion processing on the fourth feature map and the sixth feature map to obtain a fused seventh feature map
- Step g Perform bilinear upsampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
- the bilinear upsampling may be 4 times bilinear upsampling
- the third convolution branch of the fusion module may include a convolution layer with a kernel size of 1×1, wherein the third convolution branch can be connected to the hole convolution module to obtain the first feature map output by the hole convolution module, and can perform convolution processing on the first feature map, thereby obtaining a sixth feature map corresponding to the first feature map.
- the fusion module may further perform fusion processing on the fourth feature map and the sixth feature map, that is, perform layer-wise splicing of the fourth feature map and the sixth feature map, so as to preserve the useful information in the feature map output by the hole convolution module and improve the segmentation accuracy and precision of the cartilage image; finally, the size of the original feature map can be restored by performing 4-fold bilinear upsampling on the fused seventh feature map.
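Steps d through g of the fusion module can be sketched as follows. The 4× bilinear factor follows the text; the channel sizes are illustrative assumptions, the 1×1 convolution is modeled as a channel projection, and the spatial sizes are assumed to line up so that the upsampled fourth feature map matches the first feature map:

```python
import numpy as np

def bilinear_upsample(x, scale):
    """Bilinear upsampling of a (C, H, W) map with edge clamping."""
    c, h, w = x.shape
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5  # sample centers in input coords
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1); y1 = np.clip(y0 + 1, 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1); x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[None, :, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, None, :]
    top = x[:, y0][:, :, x0] * (1 - wx) + x[:, y0][:, :, x1] * wx
    bot = x[:, y1][:, :, x0] * (1 - wx) + x[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy

def fuse(third_fm, first_fm, out_ch=48, seed=0):
    rng = np.random.default_rng(seed)
    fourth = bilinear_upsample(third_fm, 4)              # step d: 4x bilinear upsampling
    c1 = first_fm.shape[0]
    w = rng.standard_normal((out_ch, c1))                # step e: 1x1 conv as projection
    sixth = (w @ first_fm.reshape(c1, -1)).reshape(out_ch, *first_fm.shape[1:])
    seventh = np.concatenate([fourth, sixth], axis=0)    # step f: layer-wise splicing
    return bilinear_upsample(seventh, 4)                 # step g: restore original size

result = fuse(np.random.rand(64, 8, 8), np.random.rand(96, 32, 32))
print(result.shape)  # (112, 128, 128)
```

Splicing (channel concatenation) rather than addition is what lets the low-level detail from the hole convolution module survive into the final upsampled prediction.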
- the cartilage image segmentation model can be obtained through training in the following steps:
- Step h Acquire a first preset number of first training cartilage images
- Step i Expand the first training cartilage image by using a preset expansion method to obtain a second preset number of second training cartilage images, where the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number;
- Step j Use the second training cartilage image and a preset loss function to train the cartilage image segmentation model, and the loss function is:
- B is the number of training cartilage images
- N is the number of pixels of each training cartilage image
- p ij is the probability that the j-th pixel of the i-th training cartilage image belongs to the cartilage
- α = 0.75
- γ = 2.
- a first preset number (such as 440) of first training cartilage images of different resolutions may be acquired from the medical imaging control system Mimics; each first training cartilage image can then be preprocessed: the original resolution and original sampling distance corresponding to each first training cartilage image can be obtained, the corresponding target sampling distance can be determined from the original resolution, the original sampling distance, and the preset target resolution, and each target sampling distance can be used to resample the corresponding first training cartilage image so that all the first training cartilage images are at the same origin and in the same direction. The method for determining the target sampling distance here is the same as the method for determining the target sampling distance described above. Then, the number of first training cartilage images can be expanded by performing affine transformations such as mirroring, stretching, and rotation on each first training cartilage image along the xy plane, to obtain a second preset number of second training cartilage images, where the second training cartilage images can include the first training cartilage images. Finally, the second training cartilage images can be used to train the cartilage image segmentation model under the guidance of the Focal loss method to obtain the optimal model parameters.
- Focal loss can be used to define the loss function during the cartilage image segmentation model training, and the loss function can be minimized based on the Adam batch gradient descent algorithm to obtain the optimal model parameters.
- a dropout layer with a dropout rate of 0.9 can also be used in the training process to improve training efficiency.
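The training objective can be sketched as follows. The text states Focal loss with α = 0.75 and γ = 2 but does not reproduce the formula itself here, so the standard pixel-wise binary focal loss is assumed; `probs` and `labels` are illustrative names:

```python
import math

def focal_term(p, y, alpha=0.75, gamma=2.0, eps=1e-7):
    """Focal loss for a single pixel: p is the predicted probability that the
    pixel belongs to cartilage, y its ground-truth label (1 = cartilage)."""
    p = min(max(p, eps), 1 - eps)  # avoid log(0)
    if y == 1:
        return -alpha * (1 - p) ** gamma * math.log(p)
    return -(1 - alpha) * p ** gamma * math.log(1 - p)

def focal_loss(probs, labels, alpha=0.75, gamma=2.0):
    """Mean focal loss over B training cartilage images of N pixels each;
    probs[i][j] corresponds to p_ij in the text."""
    b, n = len(probs), len(probs[0])
    return sum(focal_term(probs[i][j], labels[i][j], alpha, gamma)
               for i in range(b) for j in range(n)) / (b * n)
```

A confidently correct prediction contributes almost nothing to the loss, while the (1 − p)^γ factor focuses training on hard pixels such as thin cartilage boundaries; α = 0.75 up-weights the sparse cartilage class.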
- Table 1 below shows the comparison results of the cartilage image segmentation method in the embodiment of the present application, the cartilage image segmentation method based on the Deeplab v3 structure, and the cartilage image segmentation method based on the U-net structure, where the Dice similarity coefficient (DSC) is a parameter for evaluating the effect of cartilage segmentation.
- FIG. 9a and FIG. 9b show schematic diagrams comparing the artificial labeling gold standard with the cartilage segmentation results of the cartilage image segmentation method in the embodiment of the present application.
- FIG. 9a is the artificial labeling gold standard.
- FIG. 9b is the cartilage segmentation result of the cartilage image segmentation method in the embodiment of the present application.
- the comparison of FIG. 9a and FIG. 9b shows that the cartilage image segmentation method in the embodiment of the present application can reach the segmentation accuracy of the artificial labeling gold standard.
- multi-scale image information is extracted from the target cartilage image through the hole convolution module and the pyramid hole pooling module, and the multi-scale image information is fused through the fusion module, which can effectively retain the detailed information of the image, improve the image boundary segmentation capability, and improve the segmentation accuracy of cartilage images.
- the weighting of image information through the attention mechanism module can effectively enhance the segmentation ability of cartilage images and improve the accuracy and precision of cartilage image segmentation.
- FIG. 10 shows a structural block diagram of a cartilage image segmentation device provided by an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
- the cartilage image segmentation device includes:
- the target image acquisition module 1001 is used to acquire the target cartilage image to be segmented
- the target image input module 1002 is configured to input the target cartilage image into a preset cartilage image segmentation model, wherein the cartilage image segmentation model includes a cavity convolution module, a pyramid cavity pooling module connected to the cavity convolution module, an attention mechanism module connected to the pyramid cavity pooling module, and a fusion module respectively connected to the cavity convolution module and the attention mechanism module;
- the feature extraction module 1003 is configured to perform feature extraction on the target cartilage image through the cavity convolution module to obtain a first feature map corresponding to the target cartilage image;
- the pooling weighting processing module 1004 is configured to perform pooling processing on the first feature map by the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image, and to perform weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
- the result output module 1005 is configured to up-sample the third feature map through the fusion module, and fuse the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
- the target image input module 1002 includes:
- An original sampling distance obtaining unit configured to obtain the original resolution and original sampling distance corresponding to the target cartilage image
- a target sampling distance determining unit configured to determine a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution
- the image resampling unit is configured to resample the target cartilage image by using the target sampling distance, and input the resampled target cartilage image into a preset cartilage image segmentation model.
- the target sampling distance determining unit is specifically configured to determine the target sampling distance according to the following formula:
- Spacing is the target sampling distance
- spacing' is the original sampling distance
- ImageRe' is the original resolution
- ImageRe is the target resolution
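The determination of the target sampling distance can be illustrated as follows. The formula itself is not reproduced in this text, so the usual relation that preserves the physical field of view (spacing × resolution kept constant along each axis), i.e. spacing = spacing′ × ImageRe′ / ImageRe, is assumed here:

```python
def target_spacing(spacing_orig, image_re_orig, image_re_target):
    """Assumed relation: spacing = spacing' * ImageRe' / ImageRe, so that the
    physical extent spacing * resolution stays constant along the axis."""
    return spacing_orig * image_re_orig / image_re_target

# e.g. a 512-pixel axis sampled at 0.5 mm, resampled to 256 pixels
print(target_spacing(0.5, 512, 256))  # 1.0 (mm per pixel)
```

Under this assumption, halving the resolution doubles the sampling distance, so all resampled images cover the same anatomy regardless of their original resolution.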
- the hole convolution module is a convolution module based on an Xception network structure, wherein the Xception network structure includes a flat hole convolution layer and a channel hole convolution layer, the sampling rate of the flat hole convolution layer is 1 or 3, and the sampling rate of the channel hole convolution layer is 6.
- the pyramid hole pooling module includes a plurality of first convolution branches parallel to each other;
- the pooling weighting processing module 1004 includes:
- the feature sampling unit is configured to perform feature sampling on the first feature map through each of the first convolution branches to obtain the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map corresponding to the first feature map;
- a feature splicing unit configured to splice the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map to obtain a spliced splicing feature map
- the average pooling unit is configured to perform average pooling processing on the stitched feature map to obtain a second feature map corresponding to the target cartilage image.
- the first convolution branch includes a first hole convolution unit, a second hole convolution unit, and a third hole convolution unit with different sampling rates; the first hole convolution unit is connected to the second hole convolution unit, and the second hole convolution unit is connected to the third hole convolution unit.
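A cascaded hole (dilated) convolution branch of this kind can be sketched as follows. The 3×3 kernels and the sampling rates 1, 2, 4 are illustrative assumptions, since the concrete per-unit rates are not given here:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded single-channel hole convolution: the kernel taps are
    spaced `rate` pixels apart, enlarging the receptive field without
    adding parameters."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * rate:i * rate + h, j * rate:j * rate + w]
    return out

def first_conv_branch(x, rates=(1, 2, 4), seed=0):
    """Three hole convolution units in series with different sampling rates
    (rates are assumptions, not taken from the patent)."""
    rng = np.random.default_rng(seed)
    for r in rates:
        x = dilated_conv2d(x, rng.standard_normal((3, 3)), r)
    return x
```

With a kernel whose only nonzero tap is the center, the dilated convolution reduces to the identity for any rate, which is a quick sanity check on the padding arithmetic.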
- the attention mechanism module includes multiple second convolution branches parallel to each other;
- the pooling weighting processing module 1004 includes:
- the first convolution processing unit is configured to perform convolution processing on the second feature map through each of the second convolution branches to obtain the first convolution feature map, the second convolution feature map, and the third convolution feature map corresponding to the second feature map;
- the matrix multiplication unit is configured to perform transposition processing on the first convolution feature map, and perform matrix multiplication processing on the transposed feature map obtained by the transposition and the second convolution feature map to obtain a fifth feature map;
- the normalization processing unit is configured to perform normalization processing on the fifth feature map, and perform matrix multiplication processing on the normalized fifth feature map and the third convolution feature map to obtain the weighting coefficient matrix corresponding to the second feature map;
- the weighting processing unit is configured to perform weighting processing on the second feature map through the weighting coefficient matrix to obtain a third feature map corresponding to the target cartilage image.
- the result output module 1005 includes:
- the first upsampling unit is configured to perform bilinear upsampling on the third feature map by the fusion module to obtain the fourth feature map;
- a second convolution processing unit configured to perform convolution processing on the first feature map through the third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map;
- a fusion processing unit configured to perform fusion processing on the fourth feature map and the sixth feature map to obtain a fused seventh feature map
- the second up-sampling unit is configured to perform bilinear up-sampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
- the cartilage image segmentation device includes:
- a training image acquisition module for acquiring a first preset number of first training cartilage images
- the training image expansion module is used to expand the first training cartilage image by using a preset expansion method to obtain a second preset number of second training cartilage images, where the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number;
- the segmentation model training module is used to train the cartilage image segmentation model by using the second training cartilage image and a preset loss function, and the loss function is:
- B is the number of training cartilage images
- N is the number of pixels of each training cartilage image
- p ij is the probability that the j-th pixel of the i-th training cartilage image belongs to the cartilage
- α = 0.75
- γ = 2.
- FIG. 11 is a schematic structural diagram of a terminal device provided by an embodiment of this application.
- the terminal device 11 of this embodiment includes: at least one processor 1100 (only one is shown in FIG. 11), a memory 1101, and a computer program 1102 stored in the memory 1101 and capable of running on the at least one processor 1100; when the processor 1100 executes the computer program 1102, the steps in any of the foregoing cartilage image segmentation method embodiments are implemented.
- the terminal device 11 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the terminal device may include, but is not limited to, a processor 1100 and a memory 1101.
- FIG. 11 is only an example of the terminal device 11, and does not constitute a limitation on the terminal device. It may include more or fewer components than shown in the figure, or a combination of certain components, or different components. For example, input and output devices may also be included.
- the so-called processor 1100 may be a central processing unit (Central Processing Unit, CPU); the processor 1100 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 1101 may be an internal storage unit of the terminal device 11 in some embodiments, such as a hard disk or memory of the terminal device 11. In other embodiments, the memory 1101 may also be an external storage device of the terminal device 11, for example, a plug-in hard disk equipped on the terminal device 11, a smart media card (SMC), a secure digital (Secure Digital, SD) card, Flash Card, etc. Further, the memory 1101 may also include both an internal storage unit of the terminal device 11 and an external storage device.
- the memory 1101 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 1101 can also be used to temporarily store data that has been output or will be output.
- the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be realized.
- the embodiments of the present application also provide a computer program product; when the computer program product runs on a terminal device, the terminal device is enabled to realize the steps in the foregoing method embodiments.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the computer program can be stored in a computer-readable storage medium. When executed by the processor, the steps of the foregoing method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
- the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunications signal, and a software distribution medium.
- the disclosed apparatus/equipment and method may be implemented in other ways.
- the device/equipment embodiments described above are only illustrative.
- the division of the modules or units is only a logical function division.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
Table 1

| Method | Dice (DSC) | Average surface distance (PG, GP) |
| --- | --- | --- |
| Method of the embodiment of the present application | 77% | (0.69 mm, 0.49 mm) |
| Deeplab v3 | 65% | (2.56 mm, 1.49 mm) |
| U-net | 55% | (2.93 mm, 1.85 mm) |
Claims (15)
- 1. A cartilage image segmentation method, characterized in that it comprises: acquiring a target cartilage image to be segmented; inputting the target cartilage image into a preset cartilage image segmentation model, wherein the cartilage image segmentation model includes a hole convolution module, a pyramid hole pooling module connected to the hole convolution module, an attention mechanism module connected to the pyramid hole pooling module, and a fusion module respectively connected to the hole convolution module and the attention mechanism module; performing feature extraction on the target cartilage image through the hole convolution module to obtain a first feature map corresponding to the target cartilage image; performing pooling processing on the first feature map through the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image, and performing weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image; and up-sampling the third feature map through the fusion module, and fusing the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
- 2. The cartilage image segmentation method according to claim 1, wherein inputting the target cartilage image into a preset cartilage image segmentation model comprises: acquiring the original resolution and original sampling distance corresponding to the target cartilage image; determining a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution; and resampling the target cartilage image by using the target sampling distance, and inputting the resampled target cartilage image into the preset cartilage image segmentation model.
- 3. The cartilage image segmentation method according to claim 2, wherein determining the target sampling distance according to the original resolution, the original sampling distance, and the preset target resolution comprises: determining the target sampling distance according to the following formula, where spacing is the target sampling distance, spacing′ is the original sampling distance, ImageRe′ is the original resolution, and ImageRe is the target resolution.
- 4. The cartilage image segmentation method according to claim 1, wherein the hole convolution module is a convolution module based on an Xception network structure, the Xception network structure includes a flat hole convolution layer and a channel hole convolution layer, the sampling rate of the flat hole convolution layer is 1 or 3, and the sampling rate of the channel hole convolution layer is 6.
- 5. The cartilage image segmentation method according to claim 1, wherein the pyramid hole pooling module includes a plurality of first convolution branches parallel to each other; and performing pooling processing on the first feature map through the pyramid hole pooling module to obtain the second feature map corresponding to the target cartilage image comprises: performing feature sampling on the first feature map through each of the first convolution branches to obtain a first sampling feature map, a second sampling feature map, a third sampling feature map, and a fourth sampling feature map corresponding to the first feature map; splicing the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map to obtain a spliced feature map; and performing average pooling processing on the spliced feature map to obtain the second feature map corresponding to the target cartilage image.
- 6. The cartilage image segmentation method according to claim 5, wherein the first convolution branch includes a first hole convolution unit, a second hole convolution unit, and a third hole convolution unit with different sampling rates; the first hole convolution unit is connected to the second hole convolution unit, and the second hole convolution unit is connected to the third hole convolution unit.
- 7. The cartilage image segmentation method according to claim 1, wherein the attention mechanism module includes a plurality of second convolution branches parallel to each other; and performing weighting processing on the second feature map through the attention mechanism module to obtain the third feature map corresponding to the target cartilage image comprises: performing convolution processing on the second feature map through each of the second convolution branches to obtain a first convolution feature map, a second convolution feature map, and a third convolution feature map corresponding to the second feature map; transposing the first convolution feature map, and performing matrix multiplication on the transposed feature map obtained by the transposition and the second convolution feature map to obtain a fifth feature map; normalizing the fifth feature map, and performing matrix multiplication on the normalized fifth feature map and the third convolution feature map to obtain a weighting coefficient matrix corresponding to the second feature map; and performing weighting processing on the second feature map through the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image.
- 8. The cartilage image segmentation method according to claim 1, wherein up-sampling the third feature map through the fusion module and fusing the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model comprises: performing bilinear up-sampling on the third feature map through the fusion module to obtain the fourth feature map; performing convolution processing on the first feature map through a third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map; fusing the fourth feature map and the sixth feature map to obtain a fused seventh feature map; and performing bilinear up-sampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
- 9. The cartilage image segmentation method according to any one of claims 1 to 8, wherein the cartilage image segmentation model is trained through the following steps: acquiring a first preset number of first training cartilage images; expanding the first training cartilage images by using a preset expansion method to obtain a second preset number of second training cartilage images, wherein the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number; and training the cartilage image segmentation model by using the second training cartilage images and a preset loss function, wherein B is the number of training cartilage images, N is the number of pixels of each training cartilage image, p ij is the probability that the j-th pixel of the i-th training cartilage image belongs to the cartilage, α = 0.75, and γ = 2.
- A cartilage image segmentation device, characterized in that it comprises:
a target image acquisition module, configured to acquire a target cartilage image to be segmented;
a target image input module, configured to input the target cartilage image into a preset cartilage image segmentation model, wherein the cartilage image segmentation model comprises a dilated convolution module, a pyramid dilated pooling module connected to the dilated convolution module, an attention mechanism module connected to the pyramid dilated pooling module, and a fusion module connected to both the dilated convolution module and the attention mechanism module;
a feature extraction module, configured to perform feature extraction on the target cartilage image through the dilated convolution module to obtain a first feature map corresponding to the target cartilage image;
a pooling and weighting processing module, configured to perform pooling processing on the first feature map through the pyramid dilated pooling module to obtain a second feature map corresponding to the target cartilage image, and to perform weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
a result output module, configured to up-sample the third feature map through the fusion module, and to fuse the fourth feature map obtained by the up-sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
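The five modules of the claimed device compose into a single forward pass: backbone features, pyramid pooling, attention weighting, then up-sampling and fusion with the backbone's feature map. The toy sketch below shows only that composition — every stage (identity "backbone", average-based "pyramid pooling", sigmoid gating as "attention", additive "fusion") is an illustrative stand-in, not the patent's actual layers:

```python
import math

def extract_features(img):
    # Stand-in for the dilated-convolution backbone: identity copy.
    return [row[:] for row in img]

def pyramid_pool(feat, scales=(1, 2)):
    # Average-pool at several window scales and sum the results,
    # mimicking the multi-rate pyramid pooling stage.
    h, w = len(feat), len(feat[0])
    out = [[0.0] * w for _ in range(h)]
    for s in scales:
        for i in range(h):
            for j in range(w):
                vals = [feat[y][x]
                        for y in range(i, min(i + s, h))
                        for x in range(j, min(j + s, w))]
                out[i][j] += sum(vals) / len(vals)
    return out

def attention(feat):
    # Sigmoid self-gating as a stand-in for the attention weighting.
    return [[v * (1.0 / (1.0 + math.exp(-v))) for v in row] for row in feat]

def upsample_and_fuse(feat, skip):
    # Spatial sizes already match here, so "up-sampling" is the identity;
    # fusion is element-wise addition with the first feature map.
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(feat, skip)]

def segment(img):
    f1 = extract_features(img)       # first feature map
    f2 = pyramid_pool(f1)            # second feature map
    f3 = attention(f2)               # third feature map
    f4 = upsample_and_fuse(f3, f1)   # fourth feature map fused with f1
    # Threshold to a binary cartilage mask.
    return [[1 if v > 1.0 else 0 for v in row] for row in f4]
```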
- The cartilage image segmentation device according to claim 10, characterized in that the target image input module comprises:
an original sampling distance acquisition unit, configured to acquire the original resolution and the original sampling distance corresponding to the target cartilage image;
a target sampling distance determination unit, configured to determine a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution;
an image resampling unit, configured to resample the target cartilage image using the target sampling distance, and to input the resampled target cartilage image into the preset cartilage image segmentation model.
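The claim does not give the determination formula, but one consistent reading (an assumption, not the patent's stated rule) is that the physical field of view is preserved: resolution × sampling distance must be equal before and after resampling, which fixes the target spacing per axis:

```python
def target_sampling_distance(orig_res, orig_spacing, target_res):
    # Keep the physical extent fixed per axis:
    #   orig_res * orig_spacing == target_res * target_spacing
    # => target_spacing = orig_res * orig_spacing / target_res
    return tuple(r * s / t for r, s, t in zip(orig_res, orig_spacing, target_res))
```

For example, halving a 512×512 image with 0.5 mm spacing to 256×256 doubles the spacing to 1.0 mm per axis.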
- The cartilage image segmentation device according to claim 10, characterized in that the dilated convolution module is a convolution module based on the Xception network structure, wherein the Xception network structure comprises a planar dilated convolution layer and a channel dilated convolution layer, the sampling rate of the planar dilated convolution layer is 1 or 3, and the sampling rate of the channel dilated convolution layer is 6.
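The sampling rate of a dilated (atrous) convolution spaces the kernel taps apart, widening the receptive field without adding weights — a rate of 1 is an ordinary convolution, while rates 3 and 6 skip 2 and 5 samples between taps. A minimal 1-D sketch (illustrative only; the patent's layers are 2-D Xception blocks):

```python
def dilated_conv1d(signal, kernel, rate):
    # Valid-mode 1-D dilated convolution: kernel taps are spaced
    # `rate` samples apart, so a k-tap kernel covers a receptive
    # field of (k - 1) * rate + 1 input samples.
    k = len(kernel)
    span = (k - 1) * rate + 1
    if len(signal) < span:
        return []
    return [sum(kernel[t] * signal[i + t * rate] for t in range(k))
            for i in range(len(signal) - span + 1)]
```

With a 3-tap all-ones kernel, rate 1 sums consecutive triples, while rate 3 sums samples 3 apart, tripling the receptive field with the same number of weights.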
- The cartilage image segmentation device according to any one of claims 10 to 12, characterized in that the cartilage image segmentation device further comprises:
a training image acquisition module, configured to acquire a first preset number of first training cartilage images;
a training image expansion module, configured to expand the first training cartilage images using a preset expansion method to obtain a second preset number of second training cartilage images, wherein the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number;
a segmentation model training module, configured to train the cartilage image segmentation model using the second training cartilage images and a preset loss function, the loss function being:
where B is the number of training cartilage images, N is the number of pixels in each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to cartilage, α=0.75, and γ=2.
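The loss-function formula itself is not reproduced in this text (it appears as an image in the publication), but the listed quantities — B images, N pixels per image, a per-pixel probability p_ij, and the parameters α=0.75, γ=2 — match the standard focal loss. A pure-Python sketch under that assumption (the exact claimed formula may differ):

```python
import math

def focal_loss(probs, labels, alpha=0.75, gamma=2.0):
    # probs[i][j]: predicted probability p_ij that pixel j of image i is cartilage.
    # labels[i][j]: ground-truth label (1 = cartilage, 0 = background).
    # Assumes all images have the same pixel count N.
    B = len(probs)
    N = len(probs[0])
    total = 0.0
    for p_img, y_img in zip(probs, labels):
        for p, y in zip(p_img, y_img):
            p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp for numerical safety
            if y == 1:
                # Down-weight easy positives via the (1 - p)^gamma factor.
                total += -alpha * (1.0 - p) ** gamma * math.log(p)
            else:
                total += -(1.0 - alpha) * p ** gamma * math.log(1.0 - p)
    return total / (B * N)  # mean over all pixels of all images
```

The (1 − p)^γ modulating factor suppresses the contribution of well-classified pixels, which matters here because cartilage occupies a small fraction of each image.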
- A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the cartilage image segmentation method according to any one of claims 1 to 9.
- A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the cartilage image segmentation method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/101339 WO2021031066A1 (en) | 2019-08-19 | 2019-08-19 | Cartilage image segmentation method and apparatus, readable storage medium, and terminal device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/101339 WO2021031066A1 (en) | 2019-08-19 | 2019-08-19 | Cartilage image segmentation method and apparatus, readable storage medium, and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021031066A1 true WO2021031066A1 (en) | 2021-02-25 |
Family
ID=74659582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/101339 WO2021031066A1 (en) | 2019-08-19 | 2019-08-19 | Cartilage image segmentation method and apparatus, readable storage medium, and terminal device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021031066A1 (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113012074A (en) * | 2021-04-21 | 2021-06-22 | 山东新一代信息产业技术研究院有限公司 | Intelligent image processing method suitable for low-illumination environment |
CN113139543A (en) * | 2021-04-28 | 2021-07-20 | 北京百度网讯科技有限公司 | Training method of target object detection model, target object detection method and device |
CN113177938A (en) * | 2021-05-25 | 2021-07-27 | 深圳大学 | Method and device for segmenting brain glioma based on circular convolution kernel and related components |
CN113191222A (en) * | 2021-04-15 | 2021-07-30 | 中国农业大学 | Underwater fish target detection method and device |
CN113283466A (en) * | 2021-04-12 | 2021-08-20 | 开放智能机器(上海)有限公司 | Instrument reading identification method and device and readable storage medium |
CN113313718A (en) * | 2021-05-28 | 2021-08-27 | 华南理工大学 | Acute lumbar vertebra fracture MRI image segmentation system based on deep learning |
CN113326851A (en) * | 2021-05-21 | 2021-08-31 | 中国科学院深圳先进技术研究院 | Image feature extraction method and device, electronic equipment and storage medium |
CN113393371A (en) * | 2021-06-28 | 2021-09-14 | 北京百度网讯科技有限公司 | Image processing method and device and electronic equipment |
CN113449770A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Image detection method, electronic device and storage device |
CN113468967A (en) * | 2021-06-02 | 2021-10-01 | 北京邮电大学 | Lane line detection method, device, equipment and medium based on attention mechanism |
CN113643318A (en) * | 2021-06-30 | 2021-11-12 | 深圳市优必选科技股份有限公司 | Image segmentation method, image segmentation device and terminal equipment |
CN113744280A (en) * | 2021-07-20 | 2021-12-03 | 北京旷视科技有限公司 | Image processing method, apparatus, device and medium |
CN113793345A (en) * | 2021-09-07 | 2021-12-14 | 复旦大学附属华山医院 | Medical image segmentation method and device based on improved attention module |
CN113837993A (en) * | 2021-07-29 | 2021-12-24 | 天津中科智能识别产业技术研究院有限公司 | Lightweight iris image segmentation method and device, electronic equipment and storage medium |
CN113869181A (en) * | 2021-09-24 | 2021-12-31 | 电子科技大学 | Unmanned aerial vehicle target detection method for selecting pooling nuclear structure |
CN113989287A (en) * | 2021-09-10 | 2022-01-28 | 国网吉林省电力有限公司 | Urban road remote sensing image segmentation method and device, electronic equipment and storage medium |
CN114170167A (en) * | 2021-11-29 | 2022-03-11 | 深圳职业技术学院 | Polyp segmentation method and computer device based on attention-guided context correction |
CN114418064A (en) * | 2021-12-27 | 2022-04-29 | 西安天和防务技术股份有限公司 | Target detection method, terminal equipment and storage medium |
CN114445426A (en) * | 2022-01-28 | 2022-05-06 | 深圳大学 | Method and device for segmenting polyp region in endoscope image and related assembly |
CN114758137A (en) * | 2022-06-15 | 2022-07-15 | 深圳瀚维智能医疗科技有限公司 | Ultrasonic image segmentation method and device and computer readable storage medium |
CN114842333A (en) * | 2022-04-14 | 2022-08-02 | 湖南盛鼎科技发展有限责任公司 | Remote sensing image building extraction method, computer equipment and storage medium |
CN115131300A (en) * | 2022-06-15 | 2022-09-30 | 北京长木谷医疗科技有限公司 | Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning |
CN116469132A (en) * | 2023-06-20 | 2023-07-21 | 济南瑞泉电子有限公司 | Fall detection method, system, equipment and medium based on double-flow feature extraction |
CN116612142A (en) * | 2023-07-19 | 2023-08-18 | 青岛市中心医院 | Intelligent lung cancer CT sample data segmentation method and device |
CN116993762A (en) * | 2023-09-26 | 2023-11-03 | 腾讯科技(深圳)有限公司 | Image segmentation method, device, electronic equipment and storage medium |
CN117745745A (en) * | 2024-02-18 | 2024-03-22 | 湖南大学 | CT image segmentation method based on context fusion perception |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170200274A1 (en) * | 2014-05-23 | 2017-07-13 | Watrix Technology | Human-Shape Image Segmentation Method |
CN109389587A (en) * | 2018-09-26 | 2019-02-26 | 上海联影智能医疗科技有限公司 | A kind of medical image analysis system, device and storage medium |
CN110084249A (en) * | 2019-04-24 | 2019-08-02 | 哈尔滨工业大学 | The image significance detection method paid attention to based on pyramid feature |
CN110097550A (en) * | 2019-05-05 | 2019-08-06 | 电子科技大学 | A kind of medical image cutting method and system based on deep learning |
CN110110617A (en) * | 2019-04-22 | 2019-08-09 | 腾讯科技(深圳)有限公司 | Medical image dividing method, device, electronic equipment and storage medium |
CN110598714A (en) * | 2019-08-19 | 2019-12-20 | 中国科学院深圳先进技术研究院 | Cartilage image segmentation method and device, readable storage medium and terminal equipment |
- 2019-08-19 WO PCT/CN2019/101339 patent/WO2021031066A1/en active Application Filing
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283466A (en) * | 2021-04-12 | 2021-08-20 | 开放智能机器(上海)有限公司 | Instrument reading identification method and device and readable storage medium |
CN113191222B (en) * | 2021-04-15 | 2024-05-03 | 中国农业大学 | Underwater fish target detection method and device |
CN113191222A (en) * | 2021-04-15 | 2021-07-30 | 中国农业大学 | Underwater fish target detection method and device |
CN113012074A (en) * | 2021-04-21 | 2021-06-22 | 山东新一代信息产业技术研究院有限公司 | Intelligent image processing method suitable for low-illumination environment |
CN113139543A (en) * | 2021-04-28 | 2021-07-20 | 北京百度网讯科技有限公司 | Training method of target object detection model, target object detection method and device |
CN113139543B (en) * | 2021-04-28 | 2023-09-01 | 北京百度网讯科技有限公司 | Training method of target object detection model, target object detection method and equipment |
CN113449770A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Image detection method, electronic device and storage device |
CN113449770B (en) * | 2021-05-18 | 2024-02-13 | 科大讯飞股份有限公司 | Image detection method, electronic device and storage device |
CN113326851B (en) * | 2021-05-21 | 2023-10-27 | 中国科学院深圳先进技术研究院 | Image feature extraction method and device, electronic equipment and storage medium |
CN113326851A (en) * | 2021-05-21 | 2021-08-31 | 中国科学院深圳先进技术研究院 | Image feature extraction method and device, electronic equipment and storage medium |
CN113177938A (en) * | 2021-05-25 | 2021-07-27 | 深圳大学 | Method and device for segmenting brain glioma based on circular convolution kernel and related components |
CN113177938B (en) * | 2021-05-25 | 2023-04-07 | 深圳大学 | Method and device for segmenting brain glioma based on circular convolution kernel and related components |
CN113313718B (en) * | 2021-05-28 | 2023-02-10 | 华南理工大学 | Acute lumbar vertebra fracture MRI image segmentation system based on deep learning |
CN113313718A (en) * | 2021-05-28 | 2021-08-27 | 华南理工大学 | Acute lumbar vertebra fracture MRI image segmentation system based on deep learning |
CN113468967B (en) * | 2021-06-02 | 2023-08-18 | 北京邮电大学 | Attention mechanism-based lane line detection method, attention mechanism-based lane line detection device, attention mechanism-based lane line detection equipment and attention mechanism-based lane line detection medium |
CN113468967A (en) * | 2021-06-02 | 2021-10-01 | 北京邮电大学 | Lane line detection method, device, equipment and medium based on attention mechanism |
CN113393371B (en) * | 2021-06-28 | 2024-02-27 | 北京百度网讯科技有限公司 | Image processing method and device and electronic equipment |
CN113393371A (en) * | 2021-06-28 | 2021-09-14 | 北京百度网讯科技有限公司 | Image processing method and device and electronic equipment |
CN113643318B (en) * | 2021-06-30 | 2023-11-24 | 深圳市优必选科技股份有限公司 | Image segmentation method, image segmentation device and terminal equipment |
CN113643318A (en) * | 2021-06-30 | 2021-11-12 | 深圳市优必选科技股份有限公司 | Image segmentation method, image segmentation device and terminal equipment |
CN113744280A (en) * | 2021-07-20 | 2021-12-03 | 北京旷视科技有限公司 | Image processing method, apparatus, device and medium |
CN113837993A (en) * | 2021-07-29 | 2021-12-24 | 天津中科智能识别产业技术研究院有限公司 | Lightweight iris image segmentation method and device, electronic equipment and storage medium |
CN113837993B (en) * | 2021-07-29 | 2024-01-30 | 天津中科智能识别产业技术研究院有限公司 | Lightweight iris image segmentation method and device, electronic equipment and storage medium |
CN113793345A (en) * | 2021-09-07 | 2021-12-14 | 复旦大学附属华山医院 | Medical image segmentation method and device based on improved attention module |
CN113793345B (en) * | 2021-09-07 | 2023-10-31 | 复旦大学附属华山医院 | Medical image segmentation method and device based on improved attention module |
CN113989287A (en) * | 2021-09-10 | 2022-01-28 | 国网吉林省电力有限公司 | Urban road remote sensing image segmentation method and device, electronic equipment and storage medium |
CN113869181B (en) * | 2021-09-24 | 2023-05-02 | 电子科技大学 | Unmanned aerial vehicle target detection method for selecting pooling core structure |
CN113869181A (en) * | 2021-09-24 | 2021-12-31 | 电子科技大学 | Unmanned aerial vehicle target detection method for selecting pooling nuclear structure |
CN114170167A (en) * | 2021-11-29 | 2022-03-11 | 深圳职业技术学院 | Polyp segmentation method and computer device based on attention-guided context correction |
CN114418064A (en) * | 2021-12-27 | 2022-04-29 | 西安天和防务技术股份有限公司 | Target detection method, terminal equipment and storage medium |
CN114418064B (en) * | 2021-12-27 | 2023-04-18 | 西安天和防务技术股份有限公司 | Target detection method, terminal equipment and storage medium |
CN114445426A (en) * | 2022-01-28 | 2022-05-06 | 深圳大学 | Method and device for segmenting polyp region in endoscope image and related assembly |
CN114842333B (en) * | 2022-04-14 | 2022-10-28 | 湖南盛鼎科技发展有限责任公司 | Remote sensing image building extraction method, computer equipment and storage medium |
CN114842333A (en) * | 2022-04-14 | 2022-08-02 | 湖南盛鼎科技发展有限责任公司 | Remote sensing image building extraction method, computer equipment and storage medium |
CN115131300B (en) * | 2022-06-15 | 2023-04-07 | 北京长木谷医疗科技有限公司 | Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning |
CN114758137A (en) * | 2022-06-15 | 2022-07-15 | 深圳瀚维智能医疗科技有限公司 | Ultrasonic image segmentation method and device and computer readable storage medium |
CN115131300A (en) * | 2022-06-15 | 2022-09-30 | 北京长木谷医疗科技有限公司 | Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning |
CN114758137B (en) * | 2022-06-15 | 2022-11-01 | 深圳瀚维智能医疗科技有限公司 | Ultrasonic image segmentation method and device and computer readable storage medium |
CN116469132A (en) * | 2023-06-20 | 2023-07-21 | 济南瑞泉电子有限公司 | Fall detection method, system, equipment and medium based on double-flow feature extraction |
CN116469132B (en) * | 2023-06-20 | 2023-09-05 | 济南瑞泉电子有限公司 | Fall detection method, system, equipment and medium based on double-flow feature extraction |
CN116612142A (en) * | 2023-07-19 | 2023-08-18 | 青岛市中心医院 | Intelligent lung cancer CT sample data segmentation method and device |
CN116612142B (en) * | 2023-07-19 | 2023-09-22 | 青岛市中心医院 | Intelligent lung cancer CT sample data segmentation method and device |
CN116993762B (en) * | 2023-09-26 | 2024-01-19 | 腾讯科技(深圳)有限公司 | Image segmentation method, device, electronic equipment and storage medium |
CN116993762A (en) * | 2023-09-26 | 2023-11-03 | 腾讯科技(深圳)有限公司 | Image segmentation method, device, electronic equipment and storage medium |
CN117745745A (en) * | 2024-02-18 | 2024-03-22 | 湖南大学 | CT image segmentation method based on context fusion perception |
CN117745745B (en) * | 2024-02-18 | 2024-05-10 | 湖南大学 | CT image segmentation method based on context fusion perception |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021031066A1 (en) | Cartilage image segmentation method and apparatus, readable storage medium, and terminal device | |
CN110598714B (en) | Cartilage image segmentation method and device, readable storage medium and terminal equipment | |
US11373305B2 (en) | Image processing method and device, computer apparatus, and storage medium | |
WO2020125498A1 (en) | Cardiac magnetic resonance image segmentation method and apparatus, terminal device and storage medium | |
CN111091521B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN111291825B (en) | Focus classification model training method, apparatus, computer device and storage medium | |
WO2023065503A1 (en) | Facial expression classification method and electronic device | |
WO2021120961A1 (en) | Brain addiction structure map evaluation method and apparatus | |
US11430123B2 (en) | Sampling latent variables to generate multiple segmentations of an image | |
WO2020248898A1 (en) | Image processing method, apparatus and device, and storage medium | |
CN114581628B (en) | Cerebral cortex surface reconstruction method and readable storage medium | |
CN112634231A (en) | Image classification method and device, terminal equipment and storage medium | |
CN114782686A (en) | Image segmentation method and device, terminal equipment and storage medium | |
WO2021139351A1 (en) | Image segmentation method, apparatus, medium, and electronic device | |
CN116563533A (en) | Medical image segmentation method and system based on target position priori information | |
CN114742750A (en) | Abnormal cell detection method, abnormal cell detection device, terminal device and readable storage medium | |
CN114693703A (en) | Skin mirror image segmentation model training and skin mirror image recognition method and device | |
CN114240935B (en) | Space-frequency domain feature fusion medical image feature identification method and device | |
US20230343438A1 (en) | Systems and methods for automatic image annotation | |
CN116189209B (en) | Medical document image classification method and device, electronic device and storage medium | |
WO2023044612A1 (en) | Image classification method and apparatus | |
CN116091459A (en) | CT tumor image segmentation method and system based on multi-attention U-shaped network | |
CN115222997A (en) | Testis image classification method based on deep learning | |
CN117635306A (en) | Crop financing risk assessment method, device, equipment and medium | |
CN113239978A (en) | Method and device for correlating medical image preprocessing model and analysis model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19942549 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19942549 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.02.2023) |
|