WO2021031066A1 - Cartilage image segmentation method and apparatus, readable storage medium, and terminal device - Google Patents


Info

Publication number
WO2021031066A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature map
cartilage
module
convolution
target
Application number
PCT/CN2019/101339
Other languages
French (fr)
Chinese (zh)
Inventor
李佳颖
胡庆茂
张晓东
Original Assignee
中国科学院深圳先进技术研究院
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Priority to PCT/CN2019/101339 priority Critical patent/WO2021031066A1/en
Publication of WO2021031066A1 publication Critical patent/WO2021031066A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Definitions

  • This application belongs to the field of image processing technology, and in particular relates to a cartilage image segmentation method and apparatus, a computer-readable storage medium, and a terminal device.
  • Current cartilage image segmentation methods are mainly based on convolutional neural network models. Owing to the particular characteristics of cartilage, existing segmentation methods based on such models still suffer from low segmentation accuracy.
  • The embodiments of the present application provide a cartilage image segmentation method and apparatus, a computer-readable storage medium, and a terminal device, which address the problem of low cartilage image segmentation accuracy.
  • In a first aspect, an embodiment of the present application provides a cartilage image segmentation method, including:
  • wherein the cartilage image segmentation model includes an atrous convolution module (rendered elsewhere in the translation as "hole" or "cavity" convolution), an atrous pyramid pooling module connected to the atrous convolution module, an attention mechanism module connected to the atrous pyramid pooling module, and a fusion module connected to the atrous convolution module and the attention mechanism module, respectively;
  • pooling the first feature map through the atrous pyramid pooling module to obtain a second feature map corresponding to the target cartilage image, and weighting the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
  • In a second aspect, an embodiment of the present application provides a cartilage image segmentation apparatus, including:
  • a target image acquisition module, configured to acquire a target cartilage image to be segmented;
  • a target image input module, configured to input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes an atrous convolution module, an atrous pyramid pooling module connected to the atrous convolution module, an attention mechanism module connected to the atrous pyramid pooling module, and a fusion module connected to the atrous convolution module and the attention mechanism module, respectively;
  • a feature extraction module, configured to perform feature extraction on the target cartilage image through the atrous convolution module to obtain a first feature map corresponding to the target cartilage image;
  • a pooling and weighting processing module, configured to pool the first feature map through the atrous pyramid pooling module to obtain a second feature map corresponding to the target cartilage image, and to weight the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
  • a result output module, configured to up-sample the third feature map through the fusion module and fuse the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
  • In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the cartilage image segmentation method described in the first aspect is realized.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the cartilage image segmentation method described in the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the cartilage image segmentation method described in the first aspect.
  • In the embodiments of the present application, multi-scale image information is extracted from the target cartilage image through the atrous convolution module and the atrous pyramid pooling module, and this multi-scale information is fused through the fusion module, which effectively retains the detailed information of the image, improves the image boundary segmentation capability, and thus improves the segmentation accuracy of cartilage images.
  • Weighting the image information through the attention mechanism module further strengthens the model's ability to segment cartilage and improves both the accuracy and the precision of cartilage image segmentation.
  • FIG. 1 is a schematic flowchart of a cartilage image segmentation method provided by an embodiment of the present application;
  • FIG. 2 is a structural block diagram of a cartilage image segmentation model provided by an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of an atrous convolution module provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of convolution in a channel atrous convolution layer provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of obtaining the second feature map in an application scenario of the cartilage image segmentation method provided by an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of an atrous pyramid pooling module provided by an embodiment of the present application;
  • FIG. 6a is a schematic diagram of convolutions with different sampling rates provided by an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of an attention mechanism module provided by an embodiment of the present application;
  • FIG. 8 is a schematic flowchart of obtaining the third feature map in an application scenario of the cartilage image segmentation method provided by an embodiment of the present application;
  • FIG. 9a is a cartilage segmentation diagram of a manually annotated gold standard;
  • FIG. 9b is a cartilage segmentation diagram produced by the cartilage image segmentation method in an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a cartilage image segmentation apparatus provided by an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 1 shows a schematic flowchart of a cartilage image segmentation method provided by an embodiment of the present application. The cartilage image segmentation method includes:
  • Step S101: Obtain a target cartilage image to be segmented.
  • The execution body of the embodiments of the present application may be a terminal device, including but not limited to computing devices such as desktop computers, notebook computers, palmtop computers, and cloud servers.
  • The target cartilage image to be segmented may be sent to the terminal device, where the target cartilage image may be a magnetic resonance imaging (MRI) image containing cartilage, for example, a knee joint MRI image.
  • Step S102: Input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes an atrous convolution module, an atrous pyramid pooling module connected to the atrous convolution module, an attention mechanism module connected to the atrous pyramid pooling module, and a fusion module connected to the atrous convolution module and the attention mechanism module, respectively.
  • After the terminal device obtains the target cartilage image, it can call the cartilage image segmentation model shown in FIG. 2 and input the target cartilage image into the model.
  • Inputting the target cartilage image into the preset cartilage image segmentation model may include:
  • Step a: Obtain the original resolution and the original sampling distance corresponding to the target cartilage image;
  • Step b: Determine the target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution;
  • Step c: Resample the target cartilage image using the target sampling distance, and input the resampled target cartilage image into the preset cartilage image segmentation model.
  • Determining the target sampling distance according to the original resolution, the original sampling distance, and the preset target resolution may follow:
  • spacing = spacing' × ImageRe' / ImageRe
  • where spacing is the target sampling distance, spacing' is the original sampling distance, ImageRe' is the original resolution, and ImageRe is the target resolution.
  • The original resolution may be any resolution and the original sampling distance may be any distance, while the target resolution may be, for example, 513 × 513 pixels. The determined target sampling distance is used to resample the target cartilage image so as to bring it to the target resolution, which facilitates feature extraction by the cartilage image segmentation model, improves its segmentation efficiency, and relaxes restrictions on the size of the input target cartilage image, making the method easier to use and improving the user experience.
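The resampling relationship above (target sampling distance derived from the original spacing, original resolution, and target resolution) can be sketched in a few lines. The helper name and the example values below are illustrative, not taken from the patent:

```python
def target_spacing(orig_spacing, orig_res, target_res):
    """Target sampling distance so the resampled image covers the same
    physical extent at the target resolution:
    spacing = spacing' * ImageRe' / ImageRe."""
    return orig_spacing * orig_res / target_res

# Illustrative: a 384-pixel-wide MRI slice with 0.5 mm spacing,
# resampled to the 513-pixel target resolution mentioned above.
print(target_spacing(0.5, 384, 513))  # ~0.374 mm per pixel
```

Note that an image already at the target resolution keeps its original spacing, as expected.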
  • Step S103: Perform feature extraction on the target cartilage image through the atrous convolution module to obtain a first feature map corresponding to the target cartilage image.
  • The atrous convolution module of the cartilage image segmentation model can extract image features from the target cartilage image to obtain the first feature map corresponding to the target cartilage image.
  • The atrous convolution module is a convolution module based on the Xception network structure, where the Xception network structure includes planar atrous convolution layers and channel atrous convolution layers.
  • The sampling rate of the planar atrous convolution layers is 1 or 3, and the sampling rate of the channel atrous convolution layers is 6. It should be understood that the convolution kernel size of both the planar and the channel atrous convolution layers may be 3 × 3.
  • The Xception network structure may include an input unit, an intermediate processing unit, and an output unit.
  • The input unit may include a first 3 × 3 convolutional layer and a second 3 × 3 convolutional layer connected in series (that is, convolutional layers with a kernel size of 3 × 3), followed by a first, a second, and a third convolution subunit.
  • The intermediate processing unit may include 16 fourth convolution subunits connected in series, and each fourth convolution subunit may include three third planar atrous convolution layers connected in series.
  • The output unit may include a fifth convolution subunit and a sixth convolution subunit connected in series. The fifth convolution subunit may include a second 1 × 1 convolutional layer and, connected in series, a fourth planar atrous convolution layer, a fifth planar atrous convolution layer, and a second channel atrous convolution layer.
  • The sixth convolution subunit may include a sixth planar atrous convolution layer, a third channel atrous convolution layer, and a seventh planar atrous convolution layer connected in series. The convolution layers (for example, the first channel atrous convolution layer and the second 1 × 1 convolutional layer) may each be followed, in sequence, by a normalized output layer and a ReLU activation layer.
  • The first, second, and third channel atrous convolution layers mainly adopt depth-wise atrous convolution, which may be formed by connecting a 3 × 3 depth-wise atrous convolution with a cross-channel 1 × 1 convolution in series, reducing the number of model parameters and improving convolution efficiency.
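The parameter saving from factoring a convolution into a depth-wise 3 × 3 atrous convolution followed by a cross-channel 1 × 1 convolution can be checked with a quick count. The channel counts below are illustrative, not from the patent:

```python
def standard_conv_params(c_in, c_out, k=3):
    # one k x k kernel per (input channel, output channel) pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # depth-wise: one k x k kernel per input channel;
    # point-wise: a 1 x 1 kernel per (input, output) channel pair
    return c_in * k * k + c_in * c_out

c_in, c_out = 256, 256
print(standard_conv_params(c_in, c_out))        # 589824
print(depthwise_separable_params(c_in, c_out))  # 67840
```

For 256-channel feature maps the factored form needs roughly one ninth of the parameters, which is the efficiency gain the text refers to.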
  • In use, the input unit of the atrous convolution module may first perform feature extraction on the target cartilage image and input the extracted feature map A to the intermediate processing unit; the intermediate processing unit may perform feature extraction on feature map A and input the extracted feature map B to the output unit of the atrous convolution module; the output unit may then perform further feature extraction on feature map B to obtain the first feature map corresponding to the target cartilage image.
  • The process by which the input unit performs feature extraction on the target cartilage image may be as follows: the first 3 × 3 convolutional layer of the input unit first performs feature extraction on the target cartilage image and inputs the extracted feature map R to the second 3 × 3 convolutional layer of the input unit; the second 3 × 3 convolutional layer performs further feature extraction on feature map R and inputs the extracted feature map T to the first convolution subunit of the input unit.
  • The first 1 × 1 convolutional layer of the first convolution subunit performs feature extraction on feature map T to obtain feature map T1. In parallel, the first planar atrous convolution layer of the first convolution subunit also performs feature extraction on feature map T to obtain feature map T2, which is input to the second planar atrous convolution layer of the first convolution subunit.
  • The second planar atrous convolution layer performs feature extraction on feature map T2 and inputs the extracted feature map T21 to the first channel atrous convolution layer of the first convolution subunit, which performs feature extraction on feature map T21 to obtain feature map S.
  • The first convolution subunit may then fuse feature map T1 with feature map S and input the fused feature map L to the second convolution subunit of the input unit.
  • The second convolution subunit performs feature extraction on feature map L and inputs the extracted feature map H to the third convolution subunit, which performs feature extraction on feature map H to obtain the feature map A extracted by the input unit.
  • The feature extraction performed by the second convolution subunit on feature map L and by the third convolution subunit on feature map H is similar in process and identical in principle to that performed by the first convolution subunit on feature map T; for brevity, it is not repeated here.
  • The process by which the intermediate processing unit performs feature extraction on feature map A may be as follows: the first of the three third planar atrous convolution layers in the first fourth convolution subunit performs feature extraction on feature map A and inputs the extracted feature map A1 to the second third planar atrous convolution layer, which performs feature extraction on feature map A1 and inputs the extracted feature map A11 to the third third planar atrous convolution layer, which in turn performs feature extraction on feature map A11 to obtain feature map G. The first fourth convolution subunit then fuses feature map A with feature map G and inputs the fused feature map K to the second fourth convolution subunit, which performs feature extraction on feature map K and passes the result to the third fourth convolution subunit, and so on, until the feature map reaches the sixteenth fourth convolution subunit, whose feature extraction yields the feature map B extracted by the intermediate processing unit.
  • The feature extraction performed by the second, third, ..., and sixteenth fourth convolution subunits is similar in process and identical in principle to that performed by the first fourth convolution subunit on feature map A; for brevity, it is not repeated here.
  • The process by which the output unit performs feature extraction on feature map B may be as follows: the fifth convolution subunit of the output unit first performs feature extraction on feature map B and inputs the extracted feature map F to the sixth convolution subunit of the output unit. The feature extraction performed by the fifth convolution subunit on feature map B is similar in process and identical in principle to that performed by the first convolution subunit of the input unit on feature map T; for brevity, it is not repeated here.
  • The sixth planar atrous convolution layer of the sixth convolution subunit performs feature extraction on feature map F and inputs the extracted feature map F1 to the third channel atrous convolution layer of the sixth convolution subunit; the third channel atrous convolution layer performs feature extraction on feature map F1 and inputs the extracted feature map F11 to the seventh planar atrous convolution layer, through which feature map F11 is extracted to obtain the first feature map corresponding to the target cartilage image.
  • The Xception network structure in the embodiments of the present application is similar to a residual network structure, so the atrous convolution module based on it can effectively reduce the rate of gradient attenuation, avoid degradation of the network, and preserve cartilage image segmentation accuracy.
  • Because the atrous convolution module extracts features from the target cartilage image with atrous convolution layers of different sampling rates, it can enlarge the receptive field, increase the amount of information contained in the feature maps, effectively retain the detailed information of the image, and improve the segmentation accuracy of cartilage images.
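The receptive-field growth from atrous sampling can be made concrete: a k × k kernel with sampling rate r spans the same region as an effective kernel of size k + (k − 1)(r − 1). This is the standard dilated-convolution formula, not a formula stated in the patent:

```python
def effective_kernel(k, rate):
    # span covered by a k-tap kernel whose taps are spaced `rate` apart
    return k + (k - 1) * (rate - 1)

# the sampling rates mentioned in this document for 3 x 3 kernels
for rate in (1, 3, 6, 12, 18):
    print(rate, effective_kernel(3, rate))
```

So a 3 × 3 kernel at rate 6 covers a 13 × 13 region, and at rate 18 a 37 × 37 region, without any extra parameters.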
  • Step S104: Pool the first feature map through the atrous pyramid pooling module to obtain a second feature map corresponding to the target cartilage image, and weight the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image.
  • After the atrous convolution module extracts the first feature map corresponding to the target cartilage image, the first feature map can be input to the atrous pyramid pooling module, which pools the first feature map to obtain the second feature map corresponding to the target cartilage image; the second feature map can then be input to the attention mechanism module, which weights it to obtain the third feature map corresponding to the target cartilage image.
  • Using the atrous pyramid pooling module to extract image information at multiple scales improves the boundary segmentation capability and thus the segmentation accuracy of the cartilage image, and using the attention mechanism module to weight the image information can effectively improve both the accuracy and the precision of cartilage image segmentation.
  • The atrous pyramid pooling module includes a plurality of first convolution branches parallel to each other.
  • Pooling the first feature map through the atrous pyramid pooling module to obtain the second feature map corresponding to the target cartilage image may include:
  • Step S501: Perform feature sampling on the first feature map through each of the first convolution branches to obtain a first sampling feature map, a second sampling feature map, a third sampling feature map, and a fourth sampling feature map corresponding to the first feature map;
  • Step S502: Splice the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map to obtain a spliced feature map;
  • Step S503: Perform average pooling on the spliced feature map to obtain the second feature map corresponding to the target cartilage image.
  • Each first convolution branch includes a first atrous convolution unit, a second atrous convolution unit, and a third atrous convolution unit with different sampling rates; the first atrous convolution unit is connected to the second atrous convolution unit, and the second atrous convolution unit is connected to the third atrous convolution unit.
  • Specifically, the atrous pyramid pooling module may include four first convolution branches parallel to each other, and each first convolution branch may include a first atrous convolution unit, a second atrous convolution unit, and a third atrous convolution unit connected in series in sequence. The first atrous convolution unit may include a convolutional layer with a sampling rate of 6, as shown in FIG. 6a, a first ReLU activation layer, and a first dropout layer; the second atrous convolution unit may include a convolutional layer with a sampling rate of 12, as shown in FIG. 6a, a second ReLU activation layer, and a second dropout layer; and the third atrous convolution unit may include a convolutional layer with a sampling rate of 18, as shown in FIG. 6a.
  • The atrous pyramid pooling module may further include a splicing layer connected to the convolutional layer with a sampling rate of 18, and an average pooling layer connected to the splicing layer.
  • After the atrous pyramid pooling module obtains the first feature map extracted by the atrous convolution module, it can perform feature sampling on the first feature map through the four parallel first convolution branches to obtain, respectively, the first, second, third, and fourth sampling feature maps corresponding to the first feature map.
  • The process by which one first convolution branch performs feature sampling on the first feature map to obtain the first sampling feature map may be as follows: first, the convolutional layer with a sampling rate of 6 performs feature sampling on the first feature map and inputs the sampled feature map C to the first ReLU activation layer; next, the first ReLU activation layer processes feature map C and inputs the processed feature map C1 to the first dropout layer; the first dropout layer processes feature map C1 again and inputs the processed feature map C2 to the convolutional layer with a sampling rate of 12; that layer further samples feature map C2 and inputs the sampled feature map C3 to the second ReLU activation layer; the second ReLU activation layer then processes feature map C3, after which the result passes through the second dropout layer and the convolutional layer with a sampling rate of 18 to yield the first sampling feature map.
  • The processes by which the other first convolution branches perform feature sampling on the first feature map to obtain the second, third, and fourth sampling feature maps are similar to the process above and identical in principle; for brevity, they are not repeated here.
  • After the first convolution branches obtain the first, second, third, and fourth sampling feature maps, all four sampling feature maps are input to the splicing layer of the atrous pyramid pooling module, and the splicing layer can splice them together.
  • Specifically, the first, second, third, and fourth sampling feature maps may be spliced to obtain the spliced feature map, which is then input to the average pooling layer of the atrous pyramid pooling module; the average pooling layer performs average pooling on the spliced feature map to obtain the second feature map corresponding to the target cartilage image.
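A minimal one-dimensional sketch of the multi-rate sampling, splicing, and average pooling described above, in pure Python. The signal, weights, and branch structure are illustrative only, and the ReLU/dropout layers are omitted for brevity:

```python
def atrous_conv1d(x, w, rate):
    """'Same'-padded 1-D convolution whose taps are spaced `rate` apart."""
    out = []
    for i in range(len(x)):
        acc = 0.0
        for t, wt in enumerate(w):
            j = i + (t - len(w) // 2) * rate  # dilated tap position
            if 0 <= j < len(x):
                acc += wt * x[j]
        out.append(acc)
    return out

x = [float(i % 5) for i in range(40)]   # made-up input signal
w = [0.25, 0.5, 0.25]                   # made-up 3-tap kernel
# three parallel branches at the sampling rates used in the text
branches = [atrous_conv1d(x, w, r) for r in (6, 12, 18)]
stacked = branches[0] + branches[1] + branches[2]  # splicing (concatenation)
pooled = sum(stacked) / len(stacked)               # average pooling
```

Each branch sees the same input at a different receptive field; concatenation keeps all scales side by side before pooling.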
  • The attention mechanism module may include multiple second convolution branches parallel to each other, for example, three second convolution branches parallel to each other, where each second convolution branch may include a convolutional layer with a kernel size of 1 × 1 and a stride of 2.
  • Weighting the second feature map through the attention mechanism module to obtain the third feature map corresponding to the target cartilage image may include:
  • Step S801: Perform convolution on the second feature map through each of the second convolution branches to obtain a first convolution feature map, a second convolution feature map, and a third convolution feature map corresponding to the second feature map;
  • Step S802: Transpose the first convolution feature map and matrix-multiply the transposed feature map with the second convolution feature map to obtain a fifth feature map;
  • Step S803: Normalize the fifth feature map and matrix-multiply the normalized fifth feature map with the third convolution feature map to obtain the weighting coefficient matrix corresponding to the second feature map;
  • Step S804: Weight the second feature map with the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image.
  • After the atrous pyramid pooling module obtains the second feature map corresponding to the target cartilage image, it can input the second feature map to the attention mechanism module. The attention mechanism module may first convolve the second feature map through the three parallel 1 × 1 convolutional layers to obtain three convolution feature maps corresponding to the second feature map; that is, the three 1 × 1 convolutional layers perform dimensionality reduction on the second feature map to generate a first, a second, and a third convolution feature map that retain the detailed information. The first convolution feature map can then be transposed, and the transposed feature map can be matrix-multiplied with the second convolution feature map to obtain the fifth feature map. Subsequently, the fifth feature map can be normalized, for example through the softmax function, and the normalized fifth feature map can be matrix-multiplied with the third convolution feature map to obtain the weighting coefficient matrix corresponding to the second feature map, that is, the attention of each position in the feature map relative to the other positions. Finally, the second feature map can be weighted by the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image; for example, the weighted third feature map may be obtained by summing the weighting result with the second feature map itself.
  • Step S105 Up-sampling the third feature map by the fusion module, and fusing the sampled fourth feature map with the first feature map to obtain the target output by the cartilage image segmentation model Cartilage segmentation result of cartilage image.
  • the third feature map is upsampled by the fusion module, and the fourth feature map obtained by sampling is fused with the first feature map to obtain the cartilage
  • the cartilage segmentation result of the target cartilage image output by the image segmentation model may include:
  • Step d Perform bilinear up-sampling on the third feature map by the fusion module to obtain the fourth feature map;
  • Step e Perform convolution processing on the first feature map through the third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map;
  • Step f Perform fusion processing on the fourth feature map and the sixth feature map to obtain a fused seventh feature map
  • Step g Perform bilinear upsampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
  • the bilinear up-sampling may be 4-fold bilinear up-sampling.
  • the third convolution branch of the fusion module may include a convolutional layer with a kernel size of 1 × 1, wherein the third convolution branch can be connected to the hole convolution module to obtain the first feature map output by the hole convolution module and perform convolution processing on it, thereby obtaining a sixth feature map corresponding to the first feature map.
  • the fusion module may further perform fusion processing on the fourth feature map and the sixth feature map, that is, splice the fourth feature map and the sixth feature map layer by layer, so as to preserve the useful information in the feature map output by the hole convolution module and improve the segmentation accuracy and precision of the cartilage image; finally, the original feature map size can be restored by performing 4-fold bilinear up-sampling on the fused seventh feature map.
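Steps d–g above can be sketched compactly. Nearest-neighbour repetition stands in for the 4-fold bilinear up-sampling to keep the example short, and the channel counts are hypothetical:

```python
import numpy as np

def fuse(third_feat, first_feat, w1x1, scale=4):
    """third_feat: (C3, h, w) output of the attention mechanism module;
    first_feat: (C1, H, W) output of the hole convolution module, H = scale*h;
    w1x1: (C6, C1) weights of the 1x1 convolution of the third branch."""
    # step d: upsample the third feature map (nearest repeat as a stand-in
    # for the bilinear upsampling described above)
    fourth = third_feat.repeat(scale, axis=1).repeat(scale, axis=2)
    # step e: 1x1 convolution on the first feature map -> sixth feature map
    c1, H, W = first_feat.shape
    sixth = (w1x1 @ first_feat.reshape(c1, -1)).reshape(-1, H, W)
    # step f: splice the fourth and sixth feature maps along the channel axis
    seventh = np.concatenate([fourth, sixth], axis=0)
    # step g would apply a second 4-fold upsampling to restore the input size
    return seventh
```

The channel-wise splice is what preserves the low-level detail of the hole convolution module's output alongside the attention-weighted high-level features.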
  • the cartilage image segmentation model can be obtained through training in the following steps:
  • Step h: Acquire a first preset number of first training cartilage images;
  • Step i: Expand the first training cartilage images by a preset expansion method to obtain a second preset number of second training cartilage images, where the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number;
  • Step j: Train the cartilage image segmentation model using the second training cartilage images and a preset loss function, where the loss function is:
  • loss = −(1/B) · Σ_{i=1..B} Σ_{j=1..N} α(1 − p_ij)^γ · log(p_ij)
  • where B is the number of training cartilage images, N is the number of pixels of each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to the cartilage, α is 0.75, and γ is 2.
  • a first preset number (such as 440) of first training cartilage images of different resolutions may be acquired from the medical imaging control system Mimics; then, each first training cartilage image can be preprocessed: the original resolution and original sampling distance corresponding to each first training cartilage image are obtained, the corresponding target sampling distance is determined from the original resolution, the original sampling distance, and the preset target resolution, and each target sampling distance is used to resample the corresponding first training cartilage image so that all first training cartilage images have the same origin and the same orientation, where the method for determining the target sampling distance is the same as that described above; then, the number of first training cartilage images can be expanded by applying symmetric, stretching, and rotating affine transformations to each first training cartilage image in the xy plane, to obtain a second preset number of second training cartilage images, where the second training cartilage images can include the first training cartilage images; finally, the second training cartilage images can be used to train the cartilage image segmentation model under the guidance of the focal loss, to obtain the optimal model parameters.
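A minimal sketch of the expansion step above, using flips and 90° rotations in the xy plane as simple stand-ins for the symmetric, stretching, and rotation transforms; the transform set and random sampling here are assumptions for illustration:

```python
import numpy as np

def expand_training_set(images, target_count, seed=0):
    """Grow a list of 2-D training images to target_count by appending
    randomly transformed copies; the original images are kept."""
    rng = np.random.default_rng(seed)
    out = list(images)
    while len(out) < target_count:
        img = images[rng.integers(len(images))]
        op = rng.integers(3)
        if op == 0:
            aug = img[:, ::-1]    # symmetric flip along the x axis
        elif op == 1:
            aug = img[::-1, :]    # symmetric flip along the y axis
        else:
            aug = np.rot90(img)   # rotation in the xy plane
        out.append(aug)
    return out
```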
  • Focal loss can be used to define the loss function during the cartilage image segmentation model training, and the loss function can be minimized based on the Adam batch gradient descent algorithm to obtain the optimal model parameters.
  • a dropout layer with a dropout rate of 0.9 can also be used in the training process to improve training efficiency.
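The focal loss described above can be sketched as follows, using α = 0.75 and γ = 2 as stated. The patent's exact normalization over images and pixels is not reproduced here, so averaging over all pixels is an assumption:

```python
import numpy as np

def focal_loss(p, y, alpha=0.75, gamma=2.0, eps=1e-7):
    """p: (B, N) predicted probability that each pixel belongs to cartilage;
    y: (B, N) binary ground-truth labels (1 = cartilage)."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    # (1 - pt)**gamma down-weights well-classified pixels, focusing
    # training on hard examples such as thin cartilage boundaries
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))
```

Confident predictions contribute almost nothing, while misclassified cartilage pixels dominate the gradient, which is why focal loss suits the heavy class imbalance between cartilage and background.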
  • Table 1 below shows the comparison results of the cartilage image segmentation method in the embodiment of the present application, the cartilage image segmentation method based on the Deeplab v3 structure, and the cartilage image segmentation method based on the U-net structure, where the Dice similarity coefficient (DSC) is a parameter for evaluating the effect of cartilage segmentation.
  • FIG. 9a and FIG. 9b show a comparison of the cartilage segmentation results of the manually labeled gold standard and the cartilage image segmentation method in the embodiment of the present application, where FIG. 9a is the manually labeled gold standard and FIG. 9b is the result of the cartilage image segmentation method in the embodiment of the present application. FIG. 9a and FIG. 9b show that the cartilage image segmentation method in the embodiment of the present application can reach the segmentation accuracy of the manually labeled gold standard.
  • multi-scale image information is extracted from the target cartilage image through the hole convolution module and the pyramid hole pooling module, and the multi-scale image information is fused through the fusion module, which can effectively retain the detailed information of the image, improve the image boundary segmentation capability, and improve the segmentation accuracy of cartilage images.
  • the weighting of image information through the attention mechanism module can effectively enhance the segmentation ability of cartilage images and improve the accuracy and precision of cartilage image segmentation.
  • FIG. 10 shows a structural block diagram of a cartilage image segmentation device provided by an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the cartilage image segmentation device includes:
  • the target image acquisition module 1001 is used to acquire the target cartilage image to be segmented
  • the target image input module 1002 is configured to input the target cartilage image into a preset cartilage image segmentation model, wherein the cartilage image segmentation model includes a cavity convolution module, a pyramid cavity pooling module connected to the cavity convolution module, an attention mechanism module connected to the pyramid cavity pooling module, and a fusion module respectively connected to the cavity convolution module and the attention mechanism module;
  • the feature extraction module 1003 is configured to perform feature extraction on the target cartilage image through the cavity convolution module to obtain a first feature map corresponding to the target cartilage image;
  • Pooling weighting processing module 1004 configured to perform pooling processing on the first feature map by the pyramid hole pooling module to obtain a second feature map corresponding to the target cartilage image, and pass the attention mechanism module Weighting the second feature map to obtain a third feature map corresponding to the target cartilage image;
  • the result output module 1005 is configured to up-sample the third feature map through the fusion module, and fuse the fourth feature map obtained by sampling with the first feature map to obtain the cartilage image segmentation model output The result of cartilage segmentation of the target cartilage image.
  • the target image input module 1002 includes:
  • An original sampling distance obtaining unit configured to obtain the original resolution and original sampling distance corresponding to the target cartilage image
  • a target sampling distance determining unit configured to determine a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution
  • the image resampling unit is configured to resample the target cartilage image by using the target sampling distance, and input the resampled target cartilage image into a preset cartilage image segmentation model.
  • the target sampling distance determining unit is specifically configured to determine the target sampling distance according to the following formula: spacing = spacing′ × ImageRe′ / ImageRe, where spacing is the target sampling distance, spacing′ is the original sampling distance, ImageRe′ is the original resolution, and ImageRe is the target resolution.
  • the hole convolution module is a convolution module based on an Xception network structure, wherein the Xception network structure includes a flat hole convolution layer and a channel hole convolution layer, the sampling rate of the flat hole convolution layer is 1 or 3, and the sampling rate of the channel hole convolution layer is 6.
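The effect of a sampling rate (dilation) can be illustrated with a 1-D convolution whose taps are spaced `rate` samples apart: a rate-6 kernel of size 3 covers a 13-sample span with no extra parameters. The 1-D setting is a simplification of the 2-D layers described above:

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Valid' 1-D convolution with holes: consecutive kernel taps are
    spaced `rate` samples apart, enlarging the receptive field."""
    k = len(kernel)
    span = (k - 1) * rate + 1      # receptive field of one output sample
    out = np.array([sum(kernel[j] * x[i + j * rate] for j in range(k))
                    for i in range(len(x) - span + 1)])
    return out, span
```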
  • the pyramid hole pooling module includes a plurality of first convolution branches parallel to each other;
  • the pooling weighting processing module 1004 includes:
  • the feature sampling unit is configured to perform feature sampling on the first feature map through each of the first convolution branches to obtain the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map corresponding to the first feature map;
  • a feature splicing unit configured to splice the first sampling feature map, the second sampling feature map, the third sampling feature map, and the fourth sampling feature map to obtain a spliced splicing feature map
  • the average pooling unit is configured to perform average pooling processing on the stitched feature map to obtain a second feature map corresponding to the target cartilage image.
  • the first convolution branch includes a first hole convolution unit, a second hole convolution unit, and a third hole convolution unit with different sampling rates, where the first hole convolution unit is connected to the second hole convolution unit, and the second hole convolution unit is connected to the third hole convolution unit.
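Chaining hole convolution units grows the receptive field additively: each unit contributes (kernel − 1) × rate along one axis. A quick sketch, where the 3×3 kernel and the rates (1, 3, 6) are illustrative assumptions rather than the patent's stated values:

```python
def chained_receptive_field(kernel_size, rates):
    """Receptive field (along one axis) of serially connected hole
    convolution units with the given sampling rates."""
    rf = 1
    for r in rates:
        rf += (kernel_size - 1) * r
    return rf
```

With kernel size 3 and rates (1, 3, 6), one branch already covers a 21-pixel span, which is how the pyramid captures multi-scale context cheaply.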
  • the attention mechanism module includes multiple second convolution branches parallel to each other;
  • the pooling weighting processing module 1004 includes:
  • the first convolution processing unit is configured to perform convolution processing on the second feature map through each of the second convolution branches to obtain the first convolution feature map, the second convolution feature map, and the third convolution feature map corresponding to the second feature map;
  • the matrix multiplication unit is configured to perform transposition processing on the first convolution feature map, and perform matrix multiplication processing on the transposed feature map obtained by the transposition and the second convolution feature map to obtain a fifth feature map;
  • the normalization processing unit is configured to perform normalization processing on the fifth feature map, and perform matrix multiplication processing on the normalized fifth feature map and the third convolution feature map to obtain the weighting coefficient matrix corresponding to the second feature map;
  • the weighting processing unit is configured to perform weighting processing on the second feature map through the weighting coefficient matrix to obtain a third feature map corresponding to the target cartilage image.
  • the result output module 1005 includes:
  • the first upsampling unit is configured to perform bilinear upsampling on the third feature map by the fusion module to obtain the fourth feature map;
  • a second convolution processing unit configured to perform convolution processing on the first feature map through the third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map;
  • a fusion processing unit configured to perform fusion processing on the fourth feature map and the sixth feature map to obtain a fused seventh feature map
  • the second up-sampling unit is configured to perform bilinear up-sampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
  • the cartilage image segmentation device includes:
  • a training image acquisition module for acquiring a first preset number of first training cartilage images
  • the training image expansion module is used to expand the first training cartilage image by using a preset expansion method to obtain a second preset number of second training cartilage images, where the second training cartilage image includes the first training cartilage Images, the second preset number is greater than the first preset number;
  • the segmentation model training module is used to train the cartilage image segmentation model by using the second training cartilage image and a preset loss function, and the loss function is:
  • loss = −(1/B) · Σ_{i=1..B} Σ_{j=1..N} α(1 − p_ij)^γ · log(p_ij)
  • where B is the number of training cartilage images, N is the number of pixels of each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to the cartilage, α is 0.75, and γ is 2.
  • FIG. 11 is a schematic structural diagram of a terminal device provided by an embodiment of this application.
  • the terminal device 11 of this embodiment includes: at least one processor 1100 (only one is shown in FIG. 11), a memory 1101, and a computer program 1102 stored in the memory 1101 and runnable on the at least one processor 1100; when the processor 1100 executes the computer program 1102, the steps in any of the foregoing cartilage image segmentation method embodiments are implemented.
  • the terminal device 11 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 1100 and a memory 1101.
  • FIG. 11 is only an example of the terminal device 11, and does not constitute a limitation on the terminal device. It may include more or fewer components than shown in the figure, or a combination of certain components, or different components. For example, input and output devices may also be included.
  • the processor 1100 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 1101 may be an internal storage unit of the terminal device 11 in some embodiments, such as a hard disk or memory of the terminal device 11. In other embodiments, the memory 1101 may also be an external storage device of the terminal device 11, for example, a plug-in hard disk equipped on the terminal device 11, a smart media card (SMC), a secure digital (Secure Digital, SD) card, Flash Card, etc. Further, the memory 1101 may also include both an internal storage unit of the terminal device 11 and an external storage device.
  • the memory 1101 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 1101 can also be used to temporarily store data that has been output or will be output.
  • the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be realized.
  • the embodiments of the present application also provide a computer program product; when the computer program product runs on a terminal device, the terminal device can implement the steps in the foregoing method embodiments.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium. When executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunications signal, and a software distribution medium.
  • the disclosed apparatus/equipment and method may be implemented in other ways.
  • the device/equipment embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.


Abstract

The present application is applicable to the technical field of image processing, and in particular, to a cartilage image segmentation method and apparatus. The method comprises: obtaining a target cartilage image to be segmented; inputting the target cartilage image to a preset cartilage image segmentation model, the cartilage image segmentation model comprising an atrous convolution module, an atrous pyramid pooling module connected to the atrous convolution module, an attention mechanism module connected to the atrous pyramid pooling module, and a fusion module separately connected to the atrous convolution module and the attention mechanism module; extracting features of the target cartilage image by means of the atrous convolution module to obtain a first feature map; pooling the first feature map by means of the atrous pyramid pooling module to obtain a second feature map, and weighting the second feature map by means of the attention mechanism module to obtain a third feature map; and up-sampling the third feature map by means of the fusion module, and fusing a fourth feature map obtained by sampling with the first feature map to obtain a cartilage segmentation result.

Description

Cartilage image segmentation method, device, readable storage medium and terminal device

Technical Field

This application belongs to the field of image processing technology, and in particular relates to a cartilage image segmentation method, a device, a computer-readable storage medium, and a terminal device.

Background

In the medical field, it is often necessary to segment cartilage images to obtain the segmented cartilage, so that parameters such as cartilage thickness and volume can be calculated to facilitate assessment of the cartilage condition and thus the diagnosis of cartilage diseases. Current cartilage image segmentation methods are mainly segmentation methods based on convolutional neural network models; due to the particularity of cartilage characteristics, the existing segmentation methods based on convolutional neural network models still suffer from low segmentation accuracy.
Technical Problem

The embodiments of the present application provide a cartilage image segmentation method, a device, a computer-readable storage medium, and a terminal device, which can solve the problem of low accuracy of existing cartilage image segmentation.

Technical Solutions
In a first aspect, an embodiment of the present application provides a cartilage image segmentation method, including:
acquiring a target cartilage image to be segmented;
inputting the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes a cavity convolution module, a pyramid cavity pooling module connected to the cavity convolution module, an attention mechanism module connected to the pyramid cavity pooling module, and a fusion module respectively connected to the cavity convolution module and the attention mechanism module;
performing feature extraction on the target cartilage image through the cavity convolution module to obtain a first feature map corresponding to the target cartilage image;
performing pooling processing on the first feature map through the pyramid cavity pooling module to obtain a second feature map corresponding to the target cartilage image, and performing weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
up-sampling the third feature map through the fusion module, and fusing the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
In a second aspect, an embodiment of the present application provides a cartilage image segmentation device, including:
a target image acquisition module, used to acquire a target cartilage image to be segmented;
a target image input module, used to input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes a cavity convolution module, a pyramid cavity pooling module connected to the cavity convolution module, an attention mechanism module connected to the pyramid cavity pooling module, and a fusion module respectively connected to the cavity convolution module and the attention mechanism module;
a feature extraction module, used to perform feature extraction on the target cartilage image through the cavity convolution module to obtain a first feature map corresponding to the target cartilage image;
a pooling weighting processing module, used to perform pooling processing on the first feature map through the pyramid cavity pooling module to obtain a second feature map corresponding to the target cartilage image, and to perform weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
a result output module, used to up-sample the third feature map through the fusion module, and to fuse the fourth feature map obtained by sampling with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the cartilage image segmentation method described in the first aspect is implemented.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the cartilage image segmentation method described in the first aspect is implemented.
In a fifth aspect, an embodiment of the present application provides a computer program product; when the computer program product runs on a terminal device, the terminal device is caused to execute the cartilage image segmentation method described in the first aspect.
Beneficial Effects

In the embodiments of the present application, multi-scale image information is extracted from the target cartilage image through the cavity convolution module and the pyramid cavity pooling module, and the multi-scale image information is fused through the fusion module, which can effectively retain the detailed information of the image, improve the image boundary segmentation capability, and improve the segmentation accuracy of cartilage images. In addition, weighting the image information through the attention mechanism module can effectively enhance the segmentation ability for cartilage images and improve the accuracy and precision of cartilage image segmentation.
Description of the Drawings

FIG. 1 is a schematic flowchart of a cartilage image segmentation method provided by an embodiment of the present application;
FIG. 2 is a structural block diagram of a cartilage image segmentation model provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a cavity convolution module provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of convolution of a channel cavity convolution layer provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of obtaining a second feature map in an application scenario of the cartilage image segmentation method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a pyramid cavity pooling module provided by an embodiment of the present application;
FIG. 6a is a schematic diagram of convolution with different sampling rates provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an attention mechanism module provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of obtaining a third feature map in an application scenario of the cartilage image segmentation method provided by an embodiment of the present application;
FIG. 9a is a cartilage segmentation diagram of the manually labeled gold standard;
FIG. 9b is a cartilage segmentation diagram obtained by the cartilage image segmentation method in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a cartilage image segmentation device provided by an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
Embodiments of the Invention

In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted to prevent unnecessary details from obscuring the description of this application.
It should be understood that, when used in the specification and the appended claims of this application, the term "comprising" indicates the existence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the existence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
In addition, in the description of the specification of this application and the appended claims, the terms "first", "second", "third", etc. are only used to distinguish the descriptions, and cannot be understood as indicating or implying relative importance.
图1示出了本申请实施例提供的软骨图像分割方法的示意性流程图,所述软骨图像分割方法包括:Fig. 1 shows a schematic flowchart of a cartilage image segmentation method provided by an embodiment of the present application, and the cartilage image segmentation method includes:
步骤S101、获取待分割的目标软骨图像;Step S101: Obtain a target cartilage image to be segmented;
本申请实施例的执行主体可为终端设备,所述终端设备包括但不限于:桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。在需要进行软骨分割时,可以将待分割的目标软骨图像发送至所述终端设备,其中,所述目标软骨图像可以为包含软骨的磁共振成像(MRI)图像,例如,可以为膝关节的MRI图像。The execution subject of the embodiments of the present application may be a terminal device, and the terminal device includes, but is not limited to, computing devices such as desktop computers, notebooks, palmtop computers, and cloud servers. When cartilage segmentation is required, the target cartilage image to be segmented may be sent to the terminal device, where the target cartilage image may be a magnetic resonance imaging (MRI) image containing cartilage, for example, an MRI of a knee joint image.
Step S102: Input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes a dilated convolution module, a pyramid dilated pooling module connected to the dilated convolution module, an attention mechanism module connected to the pyramid dilated pooling module, and a fusion module connected to both the dilated convolution module and the attention mechanism module.
Specifically, after the terminal device obtains the target cartilage image, it can invoke the cartilage image segmentation model shown in Fig. 2 and input the target cartilage image into the cartilage image segmentation model.
In a possible implementation, inputting the target cartilage image into the preset cartilage image segmentation model may include:
Step a: Obtain the original resolution and original sampling distance corresponding to the target cartilage image.
Step b: Determine a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution.
Step c: Resample the target cartilage image using the target sampling distance, and input the resampled target cartilage image into the preset cartilage image segmentation model.
Determining the target sampling distance according to the original resolution, the original sampling distance, and the preset target resolution includes:
determining the target sampling distance according to the following formula:
spacing = (spacing′ × ImageRe′) / ImageRe
where spacing is the target sampling distance, spacing′ is the original sampling distance, ImageRe′ is the original resolution, and ImageRe is the target resolution.
For steps a to c above, the original resolution may be any resolution, and the original sampling distance may be any distance; the target resolution may be 513×513 pixels. By determining the target sampling distance and resampling the image, the target cartilage image is cropped to the target resolution, which facilitates feature extraction by the cartilage image segmentation model, improves the segmentation efficiency of the model, and relaxes the size restriction on the target cartilage image, thereby making the method more convenient for users and improving the user experience.
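The resampling rule in steps a to c keeps the physical field of view fixed while changing the pixel grid, which is exactly what the formula expresses. A minimal sketch (function name and the example numbers are illustrative, not from the patent):

```python
def target_spacing(orig_spacing, orig_res, target_res):
    """Compute the resampling spacing that maps an image of orig_res
    pixels at orig_spacing (e.g. mm per pixel) onto target_res pixels
    while preserving the physical extent:
        spacing * target_res == orig_spacing * orig_res
    """
    return orig_spacing * orig_res / target_res

# Example: a 460-pixel-wide slice at 0.7 mm/pixel resampled to 513 pixels
# needs a slightly finer spacing than the original.
s = target_spacing(0.7, 460, 513)
```

Resampling every input with the spacing returned here yields images of the fixed 513×513 target resolution regardless of the scanner's native resolution.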
Step S103: Perform feature extraction on the target cartilage image through the dilated convolution module to obtain a first feature map corresponding to the target cartilage image.
It should be understood that after the terminal device inputs the target cartilage image into the cartilage image segmentation model, the dilated convolution module of the cartilage image segmentation model can extract image features from the target cartilage image, thereby obtaining the first feature map corresponding to the target cartilage image.
In a possible implementation, as shown in Fig. 3, the dilated convolution module is a convolution module based on the Xception network structure, where the Xception network structure includes planar dilated convolution layers and channel dilated convolution layers. The sampling rate (dilation rate) of the planar dilated convolution layers is 1 or 3, and the sampling rate of the channel dilated convolution layers is 6. It should be understood that the convolution kernel size of both the planar dilated convolution layers and the channel dilated convolution layers may be 3×3.
Specifically, the Xception network structure may include an input unit, an intermediate processing unit, and an output unit. The input unit may include, in series, a first 3×3 convolution layer (i.e., a convolution layer with a 3×3 kernel; similar expressions below have the same meaning), a second 3×3 convolution layer, a first convolution subunit, a second convolution subunit, and a third convolution subunit, where each of the first, second, and third convolution subunits may include a first 1×1 convolution layer together with, in series, a first planar dilated convolution layer (shown as planar convolution in Fig. 3), a second planar dilated convolution layer (shown as planar convolution in Fig. 3), and a first channel dilated convolution layer (shown as dilated convolution in Fig. 3). The intermediate processing unit may include 16 fourth convolution subunits in series, and each fourth convolution subunit may include three third planar dilated convolution layers in series. The output unit may include a fifth convolution subunit and a sixth convolution subunit in series; the fifth convolution subunit may include a second 1×1 convolution layer together with, in series, a fourth planar dilated convolution layer, a fifth planar dilated convolution layer, and a second channel dilated convolution layer, and the sixth convolution subunit may include, in series, a sixth planar dilated convolution layer, a third channel dilated convolution layer, and a seventh planar dilated convolution layer.
It should be noted that a normalization layer and a ReLU activation layer may additionally be connected, in sequence, after each of the first 1×1 convolution layer, the second 1×1 convolution layer, and the first channel dilated convolution layer. As shown in Fig. 4, the first, second, and third channel dilated convolution layers mainly adopt depthwise dilated convolution; specifically, each may be composed of a dilated convolution with a 3×3 kernel followed in series by a cross-channel convolution with a 1×1 kernel, which reduces the model parameters and improves convolution efficiency.
As shown in Fig. 3, after the terminal device inputs the target cartilage image into the cartilage image segmentation model, the input unit of the dilated convolution module may first perform feature extraction on the target cartilage image and input the extracted feature map A into the intermediate processing unit of the dilated convolution module. The intermediate processing unit may then perform feature extraction on feature map A and input the extracted feature map B into the output unit of the dilated convolution module, which may further perform feature extraction on feature map B to obtain the first feature map corresponding to the target cartilage image.
Specifically, the feature extraction performed by the input unit on the target cartilage image may proceed as follows. The first 3×3 convolution layer of the input unit may first perform feature extraction on the target cartilage image and input the extracted feature map R into the second 3×3 convolution layer of the input unit; the second 3×3 convolution layer may then perform further feature extraction on feature map R and input the extracted feature map T into the first convolution subunit of the input unit. Here, the first 1×1 convolution layer of the first convolution subunit may perform feature extraction on feature map T to obtain feature map T1, while the first planar dilated convolution layer of the first convolution subunit may also perform feature extraction on feature map T to obtain feature map T2 and input it into the second planar dilated convolution layer of the first convolution subunit. The second planar dilated convolution layer may perform feature extraction on feature map T2 and input the extracted feature map T21 into the first channel dilated convolution layer of the first convolution subunit, which may perform feature extraction on feature map T21 to obtain feature map S. Subsequently, the first convolution subunit may fuse feature map T1 and feature map S and input the fused feature map L into the second convolution subunit of the input unit. The second convolution subunit may perform feature extraction on feature map L and input the extracted feature map H into the third convolution subunit of the input unit, which may perform feature extraction on feature map H to obtain the feature map A extracted by the input unit. Here, the feature extraction performed by the second convolution subunit on feature map L and by the third convolution subunit on feature map H is similar to, and based on the same principle as, the feature extraction performed by the first convolution subunit on feature map T; for brevity, it is not repeated here.
Correspondingly, the feature extraction performed by the intermediate processing unit on feature map A may proceed as follows. The first of the third planar dilated convolution layers of the first fourth convolution subunit may first perform feature extraction on feature map A and input the extracted feature map A1 into the second of the third planar dilated convolution layers of that subunit; the second layer may perform feature extraction on feature map A1 and input the extracted feature map A11 into the third of the third planar dilated convolution layers of that subunit, which may perform feature extraction on feature map A11 to obtain feature map G. The first fourth convolution subunit may then fuse feature map A and feature map G and input the fused feature map K into the second fourth convolution subunit, which may perform feature extraction on feature map K and input the extracted feature map into the third fourth convolution subunit, and so on, until the feature map is input into the sixteenth fourth convolution subunit, which performs feature extraction on it to obtain the feature map B extracted by the intermediate processing unit. Here, the feature extraction performed by the second through sixteenth fourth convolution subunits is similar to, and based on the same principle as, the feature extraction performed by the first fourth convolution subunit on feature map A; for brevity, it is not repeated here.
Correspondingly, the feature extraction performed by the output unit on feature map B may proceed as follows. The fifth convolution subunit of the output unit may first perform feature extraction on feature map B and input the extracted feature map F into the sixth convolution subunit of the output unit; here, the feature extraction performed by the fifth convolution subunit on feature map B is similar to, and based on the same principle as, the feature extraction performed by the first convolution subunit of the input unit on feature map T, and is not repeated for brevity. The sixth planar dilated convolution layer of the sixth convolution subunit may then perform feature extraction on feature map F and input the extracted feature map F1 into the third channel dilated convolution layer of the sixth convolution subunit, which may perform feature extraction on feature map F1 and input the extracted feature map F11 into the seventh planar dilated convolution layer of the sixth convolution subunit; feature extraction may then be performed on feature map F11 through the seventh planar dilated convolution layer to obtain the first feature map corresponding to the target cartilage image.
It should be noted that the Xception network structure in the embodiments of the present application is a structure similar to a residual network, so the dilated convolution module based on the Xception network structure can effectively reduce the gradient attenuation rate and avoid degradation of the network structure, thereby ensuring the accuracy of cartilage image segmentation. In addition, using dilated convolution layers with different sampling rates in the dilated convolution module to extract features from the target cartilage image enlarges the receptive field, increases the amount of information contained in the feature map, effectively retains the detailed information of the image, and improves the segmentation precision of cartilage images.
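The receptive-field enlargement mentioned above follows directly from how dilation spreads the kernel taps: a k×k kernel with dilation rate r covers k + (k − 1)(r − 1) pixels without adding parameters. A short sketch (the function name is illustrative):

```python
def effective_kernel(k, rate):
    """Effective spatial extent, in pixels, of a k x k convolution
    whose taps are dilated by `rate`: k + (k - 1) * (rate - 1)."""
    return k + (k - 1) * (rate - 1)

# 3x3 kernels at the sampling rates used in this module:
# rate 1 covers 3 pixels, rate 3 covers 7, rate 6 covers 13.
for rate in (1, 3, 6):
    print(rate, effective_kernel(3, rate))
```

This is why stacking dilated layers with different rates lets the module see large context while keeping the parameter count of ordinary 3×3 convolutions.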
Step S104: Perform pooling processing on the first feature map through the pyramid dilated pooling module to obtain a second feature map corresponding to the target cartilage image, and perform weighting processing on the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image.
It should be understood that the dilated convolution module, the pyramid dilated pooling module, and the attention mechanism module of the cartilage image segmentation model are connected in series in sequence. Therefore, after the dilated convolution module extracts the first feature map corresponding to the target cartilage image, the first feature map can be input into the pyramid dilated pooling module, which performs pooling processing on it to obtain the corresponding second feature map; the second feature map can then be input into the attention mechanism module, which performs weighting processing on it to obtain the third feature map corresponding to the target cartilage image. Extracting image information at multiple scales through the pyramid dilated pooling module improves the boundary segmentation capability and the segmentation precision of cartilage images, while weighting the image information through the attention mechanism module effectively improves both the segmentation accuracy and the segmentation precision of cartilage images.
In a possible implementation, the pyramid dilated pooling module includes a plurality of first convolution branches parallel to each other.
Specifically, as shown in Fig. 5, performing pooling processing on the first feature map through the pyramid dilated pooling module to obtain the second feature map corresponding to the target cartilage image may include:
Step S501: Perform feature sampling on the first feature map through each of the first convolution branches to obtain a first sampled feature map, a second sampled feature map, a third sampled feature map, and a fourth sampled feature map corresponding to the first feature map.
Step S502: Concatenate the first sampled feature map, the second sampled feature map, the third sampled feature map, and the fourth sampled feature map to obtain a concatenated feature map.
Step S503: Perform average pooling on the concatenated feature map to obtain the second feature map corresponding to the target cartilage image.
It should be noted that each first convolution branch includes a first dilated convolution unit, a second dilated convolution unit, and a third dilated convolution unit with different sampling rates; the first dilated convolution unit is connected to the second dilated convolution unit, and the second dilated convolution unit is connected to the third dilated convolution unit.
Specifically, as shown in Fig. 6, the pyramid dilated pooling module may include four first convolution branches parallel to each other, and each first convolution branch may include a first dilated convolution unit, a second dilated convolution unit, and a third dilated convolution unit connected in series, where the first dilated convolution unit may include a convolution layer with a sampling rate of 6 as shown in Fig. 6a, a first ReLU activation layer, and a first dropout layer; the second dilated convolution unit may include a convolution layer with a sampling rate of 12 as shown in Fig. 6a, a second ReLU activation layer, and a second dropout layer; and the third dilated convolution unit may include a convolution layer with a sampling rate of 18 as shown in Fig. 6a. Optionally, the pyramid dilated pooling module may further include a concatenation layer connected to the convolution layer with a sampling rate of 18 and an average pooling layer connected to the concatenation layer.
It should be understood that after the pyramid dilated pooling module obtains the first feature map extracted by the dilated convolution module, it can perform feature sampling on the first feature map through the four parallel first convolution branches to obtain the first, second, third, and fourth sampled feature maps corresponding to the first feature map. Here, the process by which a first convolution branch performs feature sampling on the first feature map to obtain the first sampled feature map may be as follows: first, the convolution layer with a sampling rate of 6 may perform feature sampling on the first feature map and input the resulting sampled feature map C into the first ReLU activation layer; next, the first ReLU activation layer may process sampled feature map C and input the processed sampled feature map C1 into the first dropout layer; the first dropout layer may then process sampled feature map C1 and input the processed sampled feature map C2 into the convolution layer with a sampling rate of 12, which may perform further feature sampling on it and input the resulting sampled feature map C3 into the second ReLU activation layer; the second ReLU activation layer may then process sampled feature map C3 and input the processed sampled feature map C4 into the second dropout layer, which processes it and inputs the processed sampled feature map C5 into the convolution layer with a sampling rate of 18; finally, that layer may perform feature sampling on sampled feature map C5 to obtain the first sampled feature map corresponding to the first feature map. The processes by which the other first convolution branches perform feature sampling on the first feature map to obtain the second, third, and fourth sampled feature maps are similar to the process of obtaining the first sampled feature map described above and are based on the same principle; for brevity, they are not repeated here.
It should be noted that after the first convolution branches obtain the first, second, third, and fourth sampled feature maps, the four sampled feature maps can be input into the concatenation layer of the pyramid dilated pooling module, which concatenates them (for example, by summing and concatenating the first, second, third, and fourth sampled feature maps) to obtain the concatenated feature map. The concatenated feature map can then be input into the average pooling layer of the pyramid dilated pooling module, which performs average pooling on it, thereby obtaining the second feature map corresponding to the target cartilage image.
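The branch-concatenate-pool pipeline of steps S501 to S503 can be sketched in NumPy. This is a minimal single-channel illustration under stated assumptions: the kernel is an identity placeholder rather than learned weights, and the dropout layers are omitted since they are inactive at inference time.

```python
import numpy as np

def dilated_conv2d(x, w, rate):
    """'Same'-padded 2D convolution of a single-channel map x with a
    3x3 kernel w whose taps are dilated by `rate` (zero padding)."""
    k = w.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += w[i, j] * xp[i * rate: i * rate + x.shape[0],
                                j * rate: j * rate + x.shape[1]]
    return out

def pyramid_branch(x, rates=(6, 12, 18)):
    """One first convolution branch: three chained dilated convolutions
    at sampling rates 6, 12, 18, each followed by ReLU (dropout omitted).
    The kernel here is an identity placeholder for illustration."""
    w = np.zeros((3, 3))
    w[1, 1] = 1.0
    for rate in rates:
        x = np.maximum(dilated_conv2d(x, w, rate), 0.0)  # conv + ReLU
    return x

x = np.random.rand(64, 64)                    # first feature map
branches = [pyramid_branch(x) for _ in range(4)]  # 4 parallel branches
stacked = np.stack(branches, axis=0)          # S502: concatenation
pooled = stacked.mean(axis=0)                 # S503: average pooling
```

With learned kernels each branch would respond to structure at a different scale; the concatenation and pooling then merge those multi-scale responses into the second feature map.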
In a possible implementation, as shown in Fig. 7, the attention mechanism module may include a plurality of second convolution branches parallel to each other, for example, three second convolution branches parallel to each other, where each second convolution branch may include a convolution layer with a kernel size of 1×1 and a stride of 2.
Specifically, as shown in Fig. 8, performing weighting processing on the second feature map through the attention mechanism module to obtain the third feature map corresponding to the target cartilage image may include:
Step S801: Perform convolution processing on the second feature map through each of the second convolution branches to obtain a first convolution feature map, a second convolution feature map, and a third convolution feature map corresponding to the second feature map.
Step S802: Transpose the first convolution feature map, and perform matrix multiplication between the transposed feature map and the second convolution feature map to obtain a fifth feature map.
Step S803: Normalize the fifth feature map, and perform matrix multiplication between the normalized fifth feature map and the third convolution feature map to obtain a weighting coefficient matrix corresponding to the second feature map.
Step S804: Perform weighting processing on the second feature map through the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image.
For steps S801 to S804, after the pyramid dilated pooling module obtains the second feature map corresponding to the target cartilage image, the second feature map can be input into the attention mechanism module. The attention mechanism module may first perform convolution processing on the second feature map through three parallel 1×1 convolution layers to obtain three convolution feature maps corresponding to the second feature map; that is, the three 1×1 convolution layers perform dimensionality reduction on the second feature map to generate the first, second, and third convolution feature maps while retaining detailed information. The first convolution feature map may then be transposed, and the transposed feature map may be matrix-multiplied with the second convolution feature map to obtain the fifth feature map. Subsequently, the fifth feature map may be normalized, for example through the softmax function, and the normalized fifth feature map may be matrix-multiplied with the third convolution feature map to obtain the weighting coefficient matrix corresponding to the second feature map, i.e., the attention of each position in the feature map relative to the other positions. Finally, the second feature map may be weighted through the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image, for example by summing the second feature map and the weighting coefficient matrix on the basis of a preset coefficient to obtain the weighted third feature map.
Step S105: Upsample the third feature map through the fusion module, and fuse the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
In a possible implementation, upsampling the third feature map through the fusion module and fusing the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model may include:
Step d: Perform bilinear upsampling on the third feature map through the fusion module to obtain the fourth feature map.
Step e: Perform convolution processing on the first feature map through a third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map.
Step f: Perform fusion processing on the fourth feature map and the sixth feature map to obtain a fused seventh feature map.
Step g: Perform bilinear upsampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
For steps d to g above, it should be understood that the bilinear upsampling may be 4-fold bilinear upsampling, and the third convolution branch of the fusion module may include a convolution layer with a kernel size of 1×1, where the third convolution branch may be connected to the dilated convolution module to obtain the first feature map output by the dilated convolution module and perform convolution processing on it, thereby obtaining the sixth feature map corresponding to the first feature map. The fusion module may then further fuse the fourth feature map and the sixth feature map, that is, concatenate them along the channel dimension, so as to retain the useful information in the feature map output by the dilated convolution module and improve the segmentation precision and accuracy of the cartilage image. Finally, 4-fold bilinear upsampling may be performed on the fused seventh feature map to restore the size of the original feature map.
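The decoder path of steps d to g can be sketched as follows. Bilinear upsampling is implemented here with `np.interp` along rows and then columns (align-corners behaviour), and random arrays stand in for the feature maps; the 1×1 projection of step e is reduced to a no-op placeholder.

```python
import numpy as np

def bilinear_upsample(x, factor=4):
    """Bilinear upsampling of a single-channel feature map by `factor`,
    interpolating rows and then columns (align-corners sampling grid)."""
    h, w = x.shape
    rows = np.linspace(0.0, h - 1, h * factor)
    cols = np.linspace(0.0, w - 1, w * factor)
    # Interpolate along the row axis, then along the column axis.
    tmp = np.stack([np.interp(rows, np.arange(h), x[:, j])
                    for j in range(w)], axis=1)
    return np.stack([np.interp(cols, np.arange(w), tmp[i])
                     for i in range(h * factor)], axis=0)

third = np.random.rand(8, 8)                  # attention output
fourth = bilinear_upsample(third, 4)          # step d: 4x upsample
sixth = np.random.rand(32, 32)                # step e: projected encoder map
seventh = np.stack([fourth, sixth], axis=0)   # step f: channel concatenation
result = np.stack([bilinear_upsample(c, 4) for c in seventh])  # step g
```

The second 4-fold upsampling in step g restores the spatial size of the model input, so the output can be read as a per-pixel cartilage map.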
It should be noted that, in the embodiments of the present application, the cartilage image segmentation model may be obtained by training through the following steps:
Step h: acquire a first preset number of first training cartilage images;
Step i: expand the first training cartilage images using a preset expansion method to obtain a second preset number of second training cartilage images, where the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number;
Step j: train the cartilage image segmentation model using the second training cartilage images and a preset loss function, the loss function being:
loss = -(1/(B·N)) Σ_{i=1}^{B} Σ_{j=1}^{N} α(1 - p_ij)^γ log(p_ij)
where B is the number of training cartilage images, N is the number of pixels in each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to cartilage, α = 0.75, and γ = 2.
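The published text renders the loss formula only as an image; the sketch below implements one focal-loss form that is consistent with the quantities defined here (B images, N pixels, per-pixel cartilage probability p_ij, α = 0.75, γ = 2), treating every pixel as a cartilage pixel. The exact expression in the original figure may differ:

```python
import math

def focal_loss(probs, alpha=0.75, gamma=2.0):
    """Focal loss averaged over B training images of N pixels each.

    probs[i][j] is the predicted probability that pixel j of image i
    belongs to cartilage; each pixel is treated as a cartilage
    (positive) pixel, matching the symbols defined in the text.
    """
    b = len(probs)
    total = 0.0
    for image in probs:
        n = len(image)
        for p in image:
            # the (1 - p)^gamma factor down-weights easy, confident pixels
            total += alpha * (1.0 - p) ** gamma * math.log(p) / n
    return -total / b

# A confident prediction yields a much smaller loss than an uncertain one
print(focal_loss([[0.9, 0.95]]), focal_loss([[0.5, 0.6]]))
```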
Regarding steps h to j above, when training the cartilage image segmentation model, a first preset number (e.g., 440) of first training cartilage images of different resolutions may first be acquired from the Mimics medical image control system. Each first training cartilage image may then be preprocessed: the original resolution and original sampling distance corresponding to each image are obtained, the corresponding target sampling distance is determined from the original resolution, the original sampling distance, and a preset target resolution, and each first training cartilage image is resampled at its target sampling distance so that all first training cartilage images share the same origin and the same orientation; the target sampling distance here is determined in the same way as described above. Next, the number of first training cartilage images may be expanded by applying symmetry, stretching, and rotational affine transformations to each image along the xy plane, yielding a second preset number of second training cartilage images, where the second training cartilage images may include the first training cartilage images. Finally, the cartilage image segmentation model may be trained on the second training cartilage images under the guidance of the focal loss method to obtain the optimal model parameters; that is, the focal loss defines the loss function during training, and the loss function is minimized with the Adam batch gradient descent algorithm to obtain the optimal model parameters.
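The resampling step above keeps the physical field of view fixed while changing the pixel grid; assuming the relation spacing × resolution = constant per axis, the target sampling distance can be computed as follows (illustrative values, not from the patent):

```python
def target_spacing(orig_spacing, orig_res, target_res):
    """Target sampling distance that keeps the physical field of view
    unchanged: spacing * resolution is held constant along each axis."""
    return orig_spacing * orig_res / target_res

# e.g. an image with 0.4 mm spacing and 384 pixels resampled to 512 pixels
print(target_spacing(0.4, 384, 512))
```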
It should be understood that a dropout layer with a dropout rate of 0.9 may also be used during training to improve training efficiency.
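For illustration, an inverted-dropout layer such as the one mentioned above can be sketched in a few lines. This is a generic implementation, not the patent's code; common frameworks interpret the rate as the probability of dropping an activation:

```python
import random

def dropout(values, rate, training=True, rng=random.Random(0)):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1 - rate); at
    inference time the layer is the identity."""
    if not training:
        return list(values)
    keep = 1.0 - rate
    return [v / keep if rng.random() >= rate else 0.0 for v in values]

# Survivors of a rate-0.5 layer are rescaled to 2.0
print(dropout([1.0] * 10, 0.5))
```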
Table 1 below compares the cartilage image segmentation results of the method in the embodiments of the present application with those of a cartilage image segmentation method based on the Deeplab v3 structure and one based on the U-net structure. Here, Dice denotes the Dice similarity coefficient (DSC), a parameter for evaluating the cartilage segmentation effect: the larger the Dice, the higher the segmentation precision, and the smaller the average surface distance, the higher the segmentation precision. As the comparison results in Table 1 show, the cartilage image segmentation method in the embodiments of the present application is clearly superior to the methods based on the Deeplab v3 structure and the U-net structure.
Table 1
Method                                Dice (DSC)    Average surface distance (PG, GP)
Method of the present embodiments     77%           (0.69 mm, 0.49 mm)
Deeplab v3                            65%           (2.56 mm, 1.49 mm)
U-net                                 55%           (2.93 mm, 1.85 mm)
In addition, FIG. 9a and FIG. 9b compare the manually labeled gold standard with the cartilage segmentation result of the cartilage image segmentation method in the embodiments of the present application, where FIG. 9a shows the manually labeled gold standard and FIG. 9b shows the segmentation result of the method herein. As can be seen from FIG. 9a and FIG. 9b, the cartilage image segmentation method in the embodiments of the present application can reach the segmentation precision of the manually labeled gold standard.
In the embodiments of the present application, multi-scale image information is extracted from the target cartilage image through the atrous convolution module and the pyramid atrous pooling module, and the multi-scale information is fused through the fusion module, which effectively preserves the detailed information of the image, strengthens boundary segmentation, and improves the segmentation precision of cartilage images. In addition, weighting the image information through the attention mechanism module effectively enhances the segmentation capability for cartilage images and improves both the accuracy and the precision of cartilage image segmentation.
Corresponding to the cartilage image segmentation method described in the above embodiments, FIG. 10 shows a structural block diagram of a cartilage image segmentation apparatus provided by an embodiment of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
Referring to FIG. 10, the cartilage image segmentation apparatus includes:
a target image acquisition module 1001, configured to acquire a target cartilage image to be segmented;
a target image input module 1002, configured to input the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes an atrous convolution module, a pyramid atrous pooling module connected to the atrous convolution module, an attention mechanism module connected to the pyramid atrous pooling module, and a fusion module connected to both the atrous convolution module and the attention mechanism module;
a feature extraction module 1003, configured to perform feature extraction on the target cartilage image through the atrous convolution module to obtain a first feature map corresponding to the target cartilage image;
a pooling and weighting processing module 1004, configured to pool the first feature map through the pyramid atrous pooling module to obtain a second feature map corresponding to the target cartilage image, and to weight the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
a result output module 1005, configured to upsample the third feature map through the fusion module and fuse the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
In a possible implementation, the target image input module 1002 includes:
an original sampling distance acquisition unit, configured to acquire the original resolution and the original sampling distance corresponding to the target cartilage image;
a target sampling distance determination unit, configured to determine a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution;
an image resampling unit, configured to resample the target cartilage image at the target sampling distance and input the resampled target cartilage image into the preset cartilage image segmentation model.
Optionally, the target sampling distance determination unit is specifically configured to determine the target sampling distance according to the following formula:
spacing = spacing′ × ImageRe′ / ImageRe
where spacing is the target sampling distance, spacing′ is the original sampling distance, ImageRe′ is the original resolution, and ImageRe is the target resolution.
In a possible implementation, the atrous convolution module is a convolution module based on the Xception network structure, where the Xception network structure includes planar atrous convolution layers and channel atrous convolution layers; the sampling rate of the planar atrous convolution layers is 1 or 3, and the sampling rate of the channel atrous convolution layers is 6.
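For reference, the effect of the sampling rate (dilation rate) can be seen in a minimal 1-D atrous convolution: with kernel size k and rate r, the effective receptive field spans k + (k - 1)(r - 1) samples without adding parameters. This is a generic sketch, not the Xception implementation:

```python
def dilated_conv1d(signal, kernel, rate):
    """Valid-mode 1-D atrous (dilated) convolution: kernel taps are
    spaced `rate` samples apart, enlarging the receptive field."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective kernel size
    out = []
    for start in range(len(signal) - span + 1):
        out.append(sum(kernel[t] * signal[start + t * rate] for t in range(k)))
    return out

# rate=1 reduces to an ordinary convolution; rate=3 spans 7 samples
print(dilated_conv1d([1, 2, 3, 4, 5, 6, 7], [1, 1, 1], 3))
```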
Optionally, the pyramid atrous pooling module includes a plurality of mutually parallel first convolution branches;
the pooling and weighting processing module 1004 includes:
a feature sampling unit, configured to perform feature sampling on the first feature map through each of the first convolution branches to obtain a first sampled feature map, a second sampled feature map, a third sampled feature map, and a fourth sampled feature map corresponding to the first feature map;
a feature concatenation unit, configured to concatenate the first sampled feature map, the second sampled feature map, the third sampled feature map, and the fourth sampled feature map to obtain a concatenated feature map;
a mean pooling unit, configured to perform mean pooling on the concatenated feature map to obtain the second feature map corresponding to the target cartilage image.
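The concatenation and mean-pooling operations performed by these units can be sketched as follows, with toy feature maps represented as nested lists (a real implementation would operate on tensors; the 2×2 pooling window is an assumption for illustration):

```python
def concat_channels(*feature_maps):
    """Channel-wise concatenation of same-sized feature maps."""
    out = []
    for fm in feature_maps:
        out.extend(fm)
    return out

def avg_pool2x2(channel):
    """2x2 average pooling with stride 2 on one channel (even sizes assumed)."""
    h, w = len(channel), len(channel[0])
    return [[(channel[i][j] + channel[i][j + 1]
              + channel[i + 1][j] + channel[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

a = [[[1, 1], [1, 1]]]               # one 2x2 channel per branch
b = [[[3, 3], [3, 3]]]
merged = concat_channels(a, b, a, b)  # four branch outputs -> 4 channels
pooled = [avg_pool2x2(c) for c in merged]
print(len(merged), pooled[1])
```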
It should be noted that the first convolution branch includes a first atrous convolution unit, a second atrous convolution unit, and a third atrous convolution unit with different sampling rates, where the first atrous convolution unit is connected to the second atrous convolution unit, and the second atrous convolution unit is connected to the third atrous convolution unit.
In a possible implementation, the attention mechanism module includes a plurality of mutually parallel second convolution branches;
the pooling and weighting processing module 1004 includes:
a first convolution processing unit, configured to perform convolution processing on the second feature map through each of the second convolution branches to obtain a first convolution feature map, a second convolution feature map, and a third convolution feature map corresponding to the second feature map;
a matrix multiplication unit, configured to transpose the first convolution feature map and to matrix-multiply the transposed feature map with the second convolution feature map to obtain a fifth feature map;
a normalization processing unit, configured to normalize the fifth feature map and to matrix-multiply the normalized fifth feature map with the third convolution feature map to obtain a weighting coefficient matrix corresponding to the second feature map;
a weighting processing unit, configured to weight the second feature map with the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image.
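The transpose, matrix-multiplication, normalization, and weighting sequence performed by these units corresponds to a standard self-attention computation. A minimal sketch on toy matrices follows, with row-wise softmax assumed as the normalization (the text does not fix the normalization function), and q, k, v standing in for the first, second, and third convolution feature maps:

```python
import math

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(a):
    return [list(row) for row in zip(*a)]

def softmax_rows(a):
    out = []
    for row in a:
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([v / s for v in exps])
    return out

def attention_weights(q, k, v):
    """Transpose q, multiply with k (fifth feature map), normalize
    row-wise, then multiply with v to obtain the weighting matrix."""
    energy = matmul(transpose(q), k)  # fifth feature map
    attn = softmax_rows(energy)       # normalization step
    return matmul(attn, v)            # weighting coefficient matrix

w = attention_weights([[1.0, 0.0]], [[1.0, 0.0]], [[2.0], [4.0]])
print(w)
```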
Optionally, the result output module 1005 includes:
a first upsampling unit, configured to perform bilinear upsampling on the third feature map through the fusion module to obtain the fourth feature map;
a second convolution processing unit, configured to perform convolution processing on the first feature map through the third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map;
a fusion processing unit, configured to fuse the fourth feature map and the sixth feature map to obtain a fused seventh feature map;
a second upsampling unit, configured to perform bilinear upsampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
In a possible implementation, the cartilage image segmentation apparatus includes:
a training image acquisition module, configured to acquire a first preset number of first training cartilage images;
a training image expansion module, configured to expand the first training cartilage images using a preset expansion method to obtain a second preset number of second training cartilage images, where the second training cartilage images include the first training cartilage images, and the second preset number is greater than the first preset number;
a segmentation model training module, configured to train the cartilage image segmentation model using the second training cartilage images and a preset loss function, the loss function being:
loss = -(1/(B·N)) Σ_{i=1}^{B} Σ_{j=1}^{N} α(1 - p_ij)^γ log(p_ij)
where B is the number of training cartilage images, N is the number of pixels in each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to cartilage, α = 0.75, and γ = 2.
It should be noted that, since the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiments and are not repeated here.
FIG. 11 is a schematic structural diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 11, the terminal device 11 of this embodiment includes: at least one processor 1100 (only one is shown in FIG. 11), a memory 1101, and a computer program 1102 stored in the memory 1101 and executable on the at least one processor 1100. When the processor 1100 executes the computer program 1102, the steps in any of the foregoing cartilage image segmentation method embodiments are implemented.
The terminal device 11 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 1100 and the memory 1101. Those skilled in the art will understand that FIG. 11 is merely an example of the terminal device 11 and does not constitute a limitation on the terminal device, which may include more or fewer components than shown, a combination of certain components, or different components; for example, it may also include input and output devices.
The processor 1100 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1101 may, in some embodiments, be an internal storage unit of the terminal device 11, such as a hard disk or memory of the terminal device 11. In other embodiments, the memory 1101 may also be an external storage device of the terminal device 11, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 11. Further, the memory 1101 may include both an internal storage unit of the terminal device 11 and an external storage device. The memory 1101 is configured to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 1101 may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
An embodiment of the present application provides a computer program product which, when run on a terminal device, enables the terminal device to implement the steps in the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may be completed by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments can be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electric carrier signals or telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the apparatus/device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (15)

  1. A cartilage image segmentation method, characterized by comprising:
    acquiring a target cartilage image to be segmented;
    inputting the target cartilage image into a preset cartilage image segmentation model, where the cartilage image segmentation model includes an atrous convolution module, a pyramid atrous pooling module connected to the atrous convolution module, an attention mechanism module connected to the pyramid atrous pooling module, and a fusion module connected to both the atrous convolution module and the attention mechanism module;
    performing feature extraction on the target cartilage image through the atrous convolution module to obtain a first feature map corresponding to the target cartilage image;
    pooling the first feature map through the pyramid atrous pooling module to obtain a second feature map corresponding to the target cartilage image, and weighting the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image;
    upsampling the third feature map through the fusion module, and fusing the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
  2. The cartilage image segmentation method according to claim 1, wherein inputting the target cartilage image into the preset cartilage image segmentation model comprises:
    acquiring the original resolution and the original sampling distance corresponding to the target cartilage image;
    determining a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution;
    resampling the target cartilage image at the target sampling distance, and inputting the resampled target cartilage image into the preset cartilage image segmentation model.
  3. The cartilage image segmentation method according to claim 2, wherein determining the target sampling distance according to the original resolution, the original sampling distance, and the preset target resolution comprises:
    determining the target sampling distance according to the following formula:
    spacing = spacing′ × ImageRe′ / ImageRe
    where spacing is the target sampling distance, spacing′ is the original sampling distance, ImageRe′ is the original resolution, and ImageRe is the target resolution.
  4. The cartilage image segmentation method according to claim 1, wherein the atrous convolution module is a convolution module based on an Xception network structure, the Xception network structure including planar atrous convolution layers and channel atrous convolution layers, the sampling rate of the planar atrous convolution layers being 1 or 3, and the sampling rate of the channel atrous convolution layers being 6.
  5. The cartilage image segmentation method according to claim 1, wherein the pyramid atrous pooling module includes a plurality of mutually parallel first convolution branches;
    wherein pooling the first feature map through the pyramid atrous pooling module to obtain the second feature map corresponding to the target cartilage image comprises:
    performing feature sampling on the first feature map through each of the first convolution branches to obtain a first sampled feature map, a second sampled feature map, a third sampled feature map, and a fourth sampled feature map corresponding to the first feature map;
    concatenating the first sampled feature map, the second sampled feature map, the third sampled feature map, and the fourth sampled feature map to obtain a concatenated feature map;
    performing mean pooling on the concatenated feature map to obtain the second feature map corresponding to the target cartilage image.
  6. The cartilage image segmentation method according to claim 5, wherein the first convolution branch includes a first atrous convolution unit, a second atrous convolution unit, and a third atrous convolution unit with different sampling rates, the first atrous convolution unit being connected to the second atrous convolution unit, and the second atrous convolution unit being connected to the third atrous convolution unit.
  7. The cartilage image segmentation method according to claim 1, wherein the attention mechanism module comprises a plurality of mutually parallel second convolution branches;
    the weighting of the second feature map by the attention mechanism module to obtain the third feature map corresponding to the target cartilage image comprises:
    convolving the second feature map through each of the second convolution branches to obtain a first convolution feature map, a second convolution feature map, and a third convolution feature map corresponding to the second feature map;
    transposing the first convolution feature map, and matrix-multiplying the resulting transposed feature map with the second convolution feature map to obtain a fifth feature map;
    normalizing the fifth feature map, and matrix-multiplying the normalized fifth feature map with the third convolution feature map to obtain a weighting coefficient matrix corresponding to the second feature map; and
    weighting the second feature map with the weighting coefficient matrix to obtain the third feature map corresponding to the target cartilage image.
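The sequence in claim 7 — three parallel convolution branches, a transpose-and-matmul forming the fifth (pixel-affinity) map, softmax normalization, and a second matmul forming the weighting coefficients — has the shape of standard self-attention. A toy sketch under stated assumptions: `Wq`, `Wk`, `Wv` stand in for the three 1x1 convolution branches, and the final element-wise multiply is one plausible reading of the claimed weighting step (the claim does not specify the operation).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(feat2, Wq, Wk, Wv):
    """feat2: (C, N) -- the second feature map flattened over its N pixels."""
    f1 = Wq @ feat2                 # first convolution feature map
    f2 = Wk @ feat2                 # second convolution feature map
    f3 = Wv @ feat2                 # third convolution feature map
    fifth = f1.T @ f2               # transpose, then matrix multiply -> (N, N)
    attn = softmax(fifth, axis=-1)  # normalization of the fifth feature map
    coeff = f3 @ attn               # weighting coefficient matrix, (C, N)
    return feat2 * coeff            # element-wise weighting (assumption)
```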
  8. The cartilage image segmentation method according to claim 1, wherein the upsampling of the third feature map by the fusion module and the fusing of the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model comprise:
    performing bilinear upsampling on the third feature map through the fusion module to obtain the fourth feature map;
    convolving the first feature map through a third convolution branch of the fusion module to obtain a sixth feature map corresponding to the first feature map;
    fusing the fourth feature map and the sixth feature map to obtain a fused seventh feature map; and
    performing bilinear upsampling on the seventh feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
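The fusion path of claim 8 can be sketched as follows. Hedged assumptions: the scale factors are illustrative, `first` stands in for the already-convolved sixth feature map, and channel-wise concatenation is one common reading of "fusing" (the claim does not name the operation).

```python
import numpy as np

def bilinear_upsample(feat, scale):
    """Bilinear interpolation of a (C, H, W) map (align_corners=False style)."""
    C, H, W = feat.shape
    ys = np.clip((np.arange(H * scale) + 0.5) / scale - 0.5, 0, H - 1)
    xs = np.clip((np.arange(W * scale) + 0.5) / scale - 0.5, 0, W - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1); wy = ys - y0
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, W - 1); wx = xs - x0
    top = feat[:, y0][:, :, x0] * (1 - wx) + feat[:, y0][:, :, x1] * wx
    bot = feat[:, y1][:, :, x0] * (1 - wx) + feat[:, y1][:, :, x1] * wx
    return top * (1 - wy)[:, None] + bot * wy[:, None]

third = np.ones((2, 4, 4))                         # third feature map (toy)
first = np.ones((2, 8, 8))                         # stands in for the sixth map
fourth = bilinear_upsample(third, 2)               # (2, 8, 8)
seventh = np.concatenate([fourth, first], axis=0)  # channel-wise fusion (assumption)
result = bilinear_upsample(seventh, 2)             # final upsampling, (4, 16, 16)
```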
  9. The cartilage image segmentation method according to any one of claims 1 to 8, wherein the cartilage image segmentation model is trained through the following steps:
    acquiring a first preset number of first training cartilage images;
    expanding the first training cartilage images using a preset expansion method to obtain a second preset number of second training cartilage images, the second training cartilage images including the first training cartilage images, and the second preset number being greater than the first preset number; and
    training the cartilage image segmentation model using the second training cartilage images and a preset loss function, the loss function being:
    Figure PCTCN2019101339-appb-100002
    wherein B is the number of training cartilage images, N is the number of pixels in each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to cartilage, α = 0.75, and γ = 2.
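The loss formula itself is only an image placeholder in this record, but α = 0.75 and γ = 2 are the standard parameters of the α-balanced focal loss, so a sketch assuming that form (averaged over the B·N pixels, with a background term weighted by 1 − α) would be:

```python
import numpy as np

def focal_loss(p, y, alpha=0.75, gamma=2.0, eps=1e-7):
    """p: (B, N) predicted cartilage probabilities, y: (B, N) binary labels.
    Alpha-balanced focal loss; the exact published formula is not visible
    in this record, so this form is an assumption."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))
```

The (1 − pt)^γ factor down-weights pixels the model already classifies confidently, which suits the heavy background/cartilage imbalance in these images.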
  10. A cartilage image segmentation apparatus, comprising:
    a target image acquisition module, configured to acquire a target cartilage image to be segmented;
    a target image input module, configured to input the target cartilage image into a preset cartilage image segmentation model, wherein the cartilage image segmentation model comprises a dilated convolution module, a pyramid dilated pooling module connected to the dilated convolution module, an attention mechanism module connected to the pyramid dilated pooling module, and a fusion module connected to both the dilated convolution module and the attention mechanism module;
    a feature extraction module, configured to perform feature extraction on the target cartilage image through the dilated convolution module to obtain a first feature map corresponding to the target cartilage image;
    a pooling and weighting module, configured to pool the first feature map through the pyramid dilated pooling module to obtain a second feature map corresponding to the target cartilage image, and to weight the second feature map through the attention mechanism module to obtain a third feature map corresponding to the target cartilage image; and
    a result output module, configured to upsample the third feature map through the fusion module, and to fuse the resulting fourth feature map with the first feature map to obtain the cartilage segmentation result of the target cartilage image output by the cartilage image segmentation model.
  11. The cartilage image segmentation apparatus according to claim 10, wherein the target image input module comprises:
    an original sampling distance acquisition unit, configured to acquire an original resolution and an original sampling distance corresponding to the target cartilage image;
    a target sampling distance determination unit, configured to determine a target sampling distance according to the original resolution, the original sampling distance, and a preset target resolution; and
    an image resampling unit, configured to resample the target cartilage image using the target sampling distance, and to input the resampled target cartilage image into the preset cartilage image segmentation model.
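Claim 11 derives the target sampling distance from the original resolution, the original sampling distance, and the target resolution. The claim does not spell out the rule, but the usual choice keeps the physical field of view fixed (resolution × spacing is the physical extent), which gives this one-liner:

```python
def target_spacing(orig_res, orig_spacing, target_res):
    """Per-axis target sampling distance that preserves the physical field
    of view: orig_res * orig_spacing == target_res * target_spacing.
    The exact rule used by the patent is an assumption here."""
    return tuple(r * s / t for r, s, t in zip(orig_res, orig_spacing, target_res))

# halving the resolution doubles the spacing; the 153.6 mm extent is unchanged
print(target_spacing((512, 512), (0.3, 0.3), (256, 256)))  # (0.6, 0.6)
```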
  12. The cartilage image segmentation apparatus according to claim 10, wherein the dilated convolution module is a convolution module based on the Xception network structure, the Xception network structure comprising a planar dilated convolution layer and a channel dilated convolution layer, the planar dilated convolution layer having a dilation rate of 1 or 3, and the channel dilated convolution layer having a dilation rate of 6.
  13. The cartilage image segmentation apparatus according to any one of claims 10 to 12, wherein the cartilage image segmentation apparatus comprises:
    a training image acquisition module, configured to acquire a first preset number of first training cartilage images;
    a training image expansion module, configured to expand the first training cartilage images using a preset expansion method to obtain a second preset number of second training cartilage images, the second training cartilage images including the first training cartilage images, and the second preset number being greater than the first preset number; and
    a segmentation model training module, configured to train the cartilage image segmentation model using the second training cartilage images and a preset loss function, the loss function being:
    Figure PCTCN2019101339-appb-100003
    wherein B is the number of training cartilage images, N is the number of pixels in each training cartilage image, p_ij is the probability that the j-th pixel of the i-th training cartilage image belongs to cartilage, α = 0.75, and γ = 2.
  14. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the cartilage image segmentation method according to any one of claims 1 to 9.
  15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the cartilage image segmentation method according to any one of claims 1 to 9.
PCT/CN2019/101339 2019-08-19 2019-08-19 Cartilage image segmentation method and apparatus, readable storage medium, and terminal device WO2021031066A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/101339 WO2021031066A1 (en) 2019-08-19 2019-08-19 Cartilage image segmentation method and apparatus, readable storage medium, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/101339 WO2021031066A1 (en) 2019-08-19 2019-08-19 Cartilage image segmentation method and apparatus, readable storage medium, and terminal device

Publications (1)

Publication Number Publication Date
WO2021031066A1 true WO2021031066A1 (en) 2021-02-25

Family

ID=74659582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101339 WO2021031066A1 (en) 2019-08-19 2019-08-19 Cartilage image segmentation method and apparatus, readable storage medium, and terminal device

Country Status (1)

Country Link
WO (1) WO2021031066A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170200274A1 (en) * 2014-05-23 2017-07-13 Watrix Technology Human-Shape Image Segmentation Method
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN110084249A (en) * 2019-04-24 2019-08-02 哈尔滨工业大学 The image significance detection method paid attention to based on pyramid feature
CN110097550A (en) * 2019-05-05 2019-08-06 电子科技大学 A kind of medical image cutting method and system based on deep learning
CN110110617A (en) * 2019-04-22 2019-08-09 腾讯科技(深圳)有限公司 Medical image dividing method, device, electronic equipment and storage medium
CN110598714A (en) * 2019-08-19 2019-12-20 中国科学院深圳先进技术研究院 Cartilage image segmentation method and device, readable storage medium and terminal equipment

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283466A (en) * 2021-04-12 2021-08-20 开放智能机器(上海)有限公司 Instrument reading identification method and device and readable storage medium
CN113191222B (en) * 2021-04-15 2024-05-03 中国农业大学 Underwater fish target detection method and device
CN113191222A (en) * 2021-04-15 2021-07-30 中国农业大学 Underwater fish target detection method and device
CN113012074A (en) * 2021-04-21 2021-06-22 山东新一代信息产业技术研究院有限公司 Intelligent image processing method suitable for low-illumination environment
CN113139543A (en) * 2021-04-28 2021-07-20 北京百度网讯科技有限公司 Training method of target object detection model, target object detection method and device
CN113139543B (en) * 2021-04-28 2023-09-01 北京百度网讯科技有限公司 Training method of target object detection model, target object detection method and equipment
CN113449770A (en) * 2021-05-18 2021-09-28 科大讯飞股份有限公司 Image detection method, electronic device and storage device
CN113449770B (en) * 2021-05-18 2024-02-13 科大讯飞股份有限公司 Image detection method, electronic device and storage device
CN113326851B (en) * 2021-05-21 2023-10-27 中国科学院深圳先进技术研究院 Image feature extraction method and device, electronic equipment and storage medium
CN113326851A (en) * 2021-05-21 2021-08-31 中国科学院深圳先进技术研究院 Image feature extraction method and device, electronic equipment and storage medium
CN113177938A (en) * 2021-05-25 2021-07-27 深圳大学 Method and device for segmenting brain glioma based on circular convolution kernel and related components
CN113177938B (en) * 2021-05-25 2023-04-07 深圳大学 Method and device for segmenting brain glioma based on circular convolution kernel and related components
CN113313718B (en) * 2021-05-28 2023-02-10 华南理工大学 Acute lumbar vertebra fracture MRI image segmentation system based on deep learning
CN113313718A (en) * 2021-05-28 2021-08-27 华南理工大学 Acute lumbar vertebra fracture MRI image segmentation system based on deep learning
CN113468967B (en) * 2021-06-02 2023-08-18 北京邮电大学 Attention mechanism-based lane line detection method, attention mechanism-based lane line detection device, attention mechanism-based lane line detection equipment and attention mechanism-based lane line detection medium
CN113468967A (en) * 2021-06-02 2021-10-01 北京邮电大学 Lane line detection method, device, equipment and medium based on attention mechanism
CN113393371B (en) * 2021-06-28 2024-02-27 北京百度网讯科技有限公司 Image processing method and device and electronic equipment
CN113393371A (en) * 2021-06-28 2021-09-14 北京百度网讯科技有限公司 Image processing method and device and electronic equipment
CN113643318B (en) * 2021-06-30 2023-11-24 深圳市优必选科技股份有限公司 Image segmentation method, image segmentation device and terminal equipment
CN113643318A (en) * 2021-06-30 2021-11-12 深圳市优必选科技股份有限公司 Image segmentation method, image segmentation device and terminal equipment
CN113744280A (en) * 2021-07-20 2021-12-03 北京旷视科技有限公司 Image processing method, apparatus, device and medium
CN113837993A (en) * 2021-07-29 2021-12-24 天津中科智能识别产业技术研究院有限公司 Lightweight iris image segmentation method and device, electronic equipment and storage medium
CN113837993B (en) * 2021-07-29 2024-01-30 天津中科智能识别产业技术研究院有限公司 Lightweight iris image segmentation method and device, electronic equipment and storage medium
CN113793345A (en) * 2021-09-07 2021-12-14 复旦大学附属华山医院 Medical image segmentation method and device based on improved attention module
CN113793345B (en) * 2021-09-07 2023-10-31 复旦大学附属华山医院 Medical image segmentation method and device based on improved attention module
CN113989287A (en) * 2021-09-10 2022-01-28 国网吉林省电力有限公司 Urban road remote sensing image segmentation method and device, electronic equipment and storage medium
CN113869181B (en) * 2021-09-24 2023-05-02 电子科技大学 Unmanned aerial vehicle target detection method for selecting pooling core structure
CN113869181A (en) * 2021-09-24 2021-12-31 电子科技大学 Unmanned aerial vehicle target detection method for selecting pooling nuclear structure
CN114170167A (en) * 2021-11-29 2022-03-11 深圳职业技术学院 Polyp segmentation method and computer device based on attention-guided context correction
CN114418064A (en) * 2021-12-27 2022-04-29 西安天和防务技术股份有限公司 Target detection method, terminal equipment and storage medium
CN114418064B (en) * 2021-12-27 2023-04-18 西安天和防务技术股份有限公司 Target detection method, terminal equipment and storage medium
CN114445426A (en) * 2022-01-28 2022-05-06 深圳大学 Method and device for segmenting polyp region in endoscope image and related assembly
CN114842333B (en) * 2022-04-14 2022-10-28 湖南盛鼎科技发展有限责任公司 Remote sensing image building extraction method, computer equipment and storage medium
CN114842333A (en) * 2022-04-14 2022-08-02 湖南盛鼎科技发展有限责任公司 Remote sensing image building extraction method, computer equipment and storage medium
CN115131300B (en) * 2022-06-15 2023-04-07 北京长木谷医疗科技有限公司 Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning
CN114758137A (en) * 2022-06-15 2022-07-15 深圳瀚维智能医疗科技有限公司 Ultrasonic image segmentation method and device and computer readable storage medium
CN115131300A (en) * 2022-06-15 2022-09-30 北京长木谷医疗科技有限公司 Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning
CN114758137B (en) * 2022-06-15 2022-11-01 深圳瀚维智能医疗科技有限公司 Ultrasonic image segmentation method and device and computer readable storage medium
CN116469132A (en) * 2023-06-20 2023-07-21 济南瑞泉电子有限公司 Fall detection method, system, equipment and medium based on double-flow feature extraction
CN116469132B (en) * 2023-06-20 2023-09-05 济南瑞泉电子有限公司 Fall detection method, system, equipment and medium based on double-flow feature extraction
CN116612142A (en) * 2023-07-19 2023-08-18 青岛市中心医院 Intelligent lung cancer CT sample data segmentation method and device
CN116612142B (en) * 2023-07-19 2023-09-22 青岛市中心医院 Intelligent lung cancer CT sample data segmentation method and device
CN116993762B (en) * 2023-09-26 2024-01-19 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium
CN116993762A (en) * 2023-09-26 2023-11-03 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium
CN117745745A (en) * 2024-02-18 2024-03-22 湖南大学 CT image segmentation method based on context fusion perception
CN117745745B (en) * 2024-02-18 2024-05-10 湖南大学 CT image segmentation method based on context fusion perception

Similar Documents

Publication Publication Date Title
WO2021031066A1 (en) Cartilage image segmentation method and apparatus, readable storage medium, and terminal device
CN110598714B (en) Cartilage image segmentation method and device, readable storage medium and terminal equipment
US11373305B2 (en) Image processing method and device, computer apparatus, and storage medium
WO2020125498A1 (en) Cardiac magnetic resonance image segmentation method and apparatus, terminal device and storage medium
CN111091521B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111291825B (en) Focus classification model training method, apparatus, computer device and storage medium
WO2023065503A1 (en) Facial expression classification method and electronic device
WO2021120961A1 (en) Brain addiction structure map evaluation method and apparatus
US11430123B2 (en) Sampling latent variables to generate multiple segmentations of an image
WO2020248898A1 (en) Image processing method, apparatus and device, and storage medium
CN114581628B (en) Cerebral cortex surface reconstruction method and readable storage medium
CN112634231A (en) Image classification method and device, terminal equipment and storage medium
CN114782686A (en) Image segmentation method and device, terminal equipment and storage medium
WO2021139351A1 (en) Image segmentation method, apparatus, medium, and electronic device
CN116563533A (en) Medical image segmentation method and system based on target position priori information
CN114742750A (en) Abnormal cell detection method, abnormal cell detection device, terminal device and readable storage medium
CN114693703A (en) Skin mirror image segmentation model training and skin mirror image recognition method and device
CN114240935B (en) Space-frequency domain feature fusion medical image feature identification method and device
US20230343438A1 (en) Systems and methods for automatic image annotation
CN116189209B (en) Medical document image classification method and device, electronic device and storage medium
WO2023044612A1 (en) Image classification method and apparatus
CN116091459A (en) CT tumor image segmentation method and system based on multi-attention U-shaped network
CN115222997A (en) Testis image classification method based on deep learning
CN117635306A (en) Crop financing risk assessment method, device, equipment and medium
CN113239978A (en) Method and device for correlating medical image preprocessing model and analysis model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19942549

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.02.2023)