CN111968122B - Textile material CT image segmentation method and device based on convolutional neural network


Info

Publication number: CN111968122B
Authority: CN (China)
Prior art keywords: image, segmentation, textile material, trained, decoder
Legal status: Active
Application number: CN202010878338.2A
Other languages: Chinese (zh)
Other versions: CN111968122A
Inventors: 张锦华 (Zhang Jinhua), 须颖 (Xu Ying)
Current Assignee: Guangdong University of Technology
Original Assignee: Guangdong University of Technology
Application filed by Guangdong University of Technology
Priority to CN202010878338.2A
Publication of CN111968122A
Application granted
Publication of CN111968122B


Classifications

    • G06T7/10 Segmentation; Edge detection
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30124 Fabrics; Textile; Paper
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses a textile material CT image segmentation method and device based on a convolutional neural network. First, a segmentation model comprising an encoder and a decoder is constructed. Second, a CT image of the textile material to be segmented is acquired and input into the trained segmentation model: the encoder performs feature extraction on the input image to obtain encoding feature maps and inputs the encoding feature maps of a plurality of levels into the corresponding levels of the decoder for feature fusion, yielding a first fused feature map; the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels and the deep level into a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result. This solves the technical problems of the existing textile material CT image segmentation methods: high segmentation difficulty, a cumbersome, time-consuming and labor-intensive segmentation process, heavy dependence of the result on the operator's experience and knowledge, and poor reproducibility of the segmentation result.

Description

Textile material CT image segmentation method and device based on convolutional neural network
Technical Field
The application relates to the technical field of image segmentation, in particular to a textile material CT image segmentation method and device based on a convolutional neural network.
Background
Image processing is an important step in analyzing industrial CT images: it makes the images more intuitive and clear and improves detection efficiency and accuracy. Image segmentation is a key link in image processing, and segmentation of CT images mainly involves three related problems: varying noise, uncertainty in pixel gray-scale classification, and gray-scale non-uniformity. CT images of textile materials exhibit uneven gray levels and blurred edges, which pose great challenges to image segmentation. The existing textile CT image segmentation methods mainly rely on manual segmentation. Manual segmentation achieves high precision, but it is difficult and cumbersome, wastes time and labor, depends to a great extent on the operator's experience and knowledge, and produces results that are hard to reproduce.
Disclosure of Invention
The application provides a textile material CT image segmentation method and device based on a convolutional neural network, to solve the technical problems of the existing manual segmentation approach: high segmentation difficulty, a cumbersome, time-consuming and labor-intensive process, heavy dependence of the result on the operator's experience and knowledge, and poor reproducibility of the segmentation result.
In view of this, the first aspect of the present application provides a textile material CT image segmentation method based on a convolutional neural network, comprising:
constructing a segmentation model, wherein the segmentation model comprises an encoder and a decoder, lateral connections exist between a plurality of levels of the encoder and the decoder, and connections exist between the intermediate levels and the deep level of the decoder;
acquiring a CT image of the textile material to be segmented and inputting it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps, the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
Optionally, the training process of the segmentation model includes:
acquiring a CT image of the textile material to be trained, wherein the CT image of the textile material to be trained is annotated with a segmentation labeling result;
inputting the CT image of the textile material to be trained into the constructed segmentation model, and outputting a segmentation prediction result of the CT image of the textile material to be trained;
calculating a loss value between the segmentation prediction result and the segmentation labeling result of the CT image of the textile material to be trained based on a preset loss function;
and updating parameters to be trained of the segmentation model according to the loss value until the segmentation model reaches a convergence condition, so as to obtain the trained segmentation model.
Optionally, the formula of the preset loss function is:
[formula shown as an image in the original publication]
where y is the segmentation prediction result, ŷ is the segmentation labeling result, w_n is a constant, and ε is a constant.
Optionally, before inputting the CT image of the textile material to be trained into the constructed segmentation model, the method further includes:
and carrying out normalization processing on the CT image of the textile material to be trained.
Optionally, after the normalizing the CT image of the textile material to be trained, before inputting the CT image of the textile material to be trained to the constructed segmentation model, the method further includes:
and carrying out data enhancement on the normalized CT image of the textile material to be trained.
Optionally, the encoder is composed of a plurality of convolution layers and a plurality of downsampling layers, and the decoder is composed of a plurality of convolution layers, a plurality of batch normalization layers and a plurality of upsampling layers.
A second aspect of the present application provides a textile material CT image segmentation apparatus based on a convolutional neural network, comprising:
a building unit, configured to build a segmentation model, where the segmentation model includes an encoder and a decoder, where a plurality of levels of the encoder and the decoder are connected laterally, and where a connection exists between an intermediate level and a deep level of the decoder;
a segmentation unit, configured to acquire a CT image of the textile material to be segmented and input it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps, the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
Optionally, the apparatus further comprises:
an acquisition unit, configured to acquire CT images of the textile material to be trained, wherein the CT images of the textile material to be trained are annotated with segmentation labeling results;
the output unit is used for inputting the CT image of the textile material to be trained into the constructed segmentation model and outputting a segmentation prediction result of the CT image of the textile material to be trained;
a calculating unit, configured to calculate a loss value between the segmentation prediction result and the segmentation labeling result of the CT image of the textile material to be trained based on a preset loss function;
and the updating unit is used for updating the parameters to be trained of the segmentation model according to the loss value until the segmentation model reaches a convergence condition, so as to obtain the trained segmentation model.
Optionally, the apparatus further comprises:
and the normalization unit is used for performing normalization processing on the CT image of the textile material to be trained.
Optionally, the apparatus further comprises:
and the data enhancement unit is used for carrying out data enhancement on the normalized CT image of the textile material to be trained.
From the above technical solutions, it can be seen that the application has the following advantages:
The application provides a textile material CT image segmentation method based on a convolutional neural network, comprising: constructing a segmentation model that includes an encoder and a decoder, with lateral connections between a plurality of levels of the encoder and the decoder and connections between the intermediate levels and the deep level of the decoder; acquiring a CT image of the textile material to be segmented and inputting it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps, the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
In this method, the constructed segmentation model automatically segments the CT image of the textile material to be segmented end to end, without manual processing; this yields high segmentation efficiency and high segmentation precision, and the segmentation result is reproducible. In the constructed segmentation model, the lateral connections between the levels of the encoder and the decoder fuse the encoding features output by the encoder levels with the corresponding decoder levels, so the detailed feature information in the encoder is retained and the segmentation precision is improved. The connections between the intermediate levels and the deep level of the decoder let the decoder fuse the decoding feature maps of the intermediate levels with the decoding feature map of the deep level, combining image features of shallower and deeper layers and enhancing the semantic and detail information of the corresponding decoding levels, which further improves segmentation precision. This solves the technical problems of the existing manual segmentation of textile CT images: high segmentation difficulty, a cumbersome, time-consuming and labor-intensive process, heavy dependence of the result on the operator's experience and knowledge, and poor reproducibility of the segmentation result.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a method for segmenting CT images of textile materials based on convolutional neural network according to an embodiment of the present application;
fig. 2 is another flow chart of a method for segmenting CT images of textile materials based on convolutional neural network according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a segmentation model according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a segmentation prediction of a CT image of a textile material according to an embodiment of the present disclosure;
FIG. 5 is a schematic view of expert segmentation results of a CT image of a textile material according to embodiments of the present application;
fig. 6 is a schematic structural diagram of a textile CT image segmentation apparatus based on convolutional neural network according to an embodiment of the present application.
Detailed Description
The application provides a textile material CT image segmentation method and device based on a convolutional neural network, to solve the technical problems of the existing manual segmentation approach: high segmentation difficulty, a cumbersome, time-consuming and labor-intensive process, heavy dependence of the result on the operator's experience and knowledge, and poor reproducibility of the segmentation result.
To make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
For ease of understanding, referring to fig. 1, an embodiment of a method for CT image segmentation of textile materials based on convolutional neural network provided in the present application includes:
and 101, constructing a segmentation model.
To avoid manual image segmentation, a segmentation model is constructed in this embodiment for automatic image segmentation. The segmentation model comprises an encoder and a decoder; lateral connections exist between a plurality of levels of the encoder and the decoder, and connections exist between the intermediate levels and the deep level of the decoder.
Step 102, acquiring a CT image of the textile material to be segmented and inputting it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps; the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map; the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
A CT image of the textile material to be segmented is acquired; the CT image of the textile material to be segmented may be, for example, a weft stretch yarn CT image. The acquired CT image is input into the trained segmentation model for image segmentation, with the encoder extracting the context information of the input image and the decoder accurately locating the parts to be segmented. The levels of the encoder in the segmentation model extract features from the CT image of the textile material to be segmented in series to obtain encoding feature maps, and the decoder decodes the encoding feature maps output by the encoder. To retain the detailed feature information of the encoding process to the greatest extent, better help the decoder restore the details of the image, and improve image segmentation precision, a plurality of levels of the encoder and the decoder are laterally connected: the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, and the decoder decodes the first fused feature map to obtain decoding feature maps. To further improve segmentation precision, the intermediate levels and the deep level of the decoder are connected in this embodiment, so that the decoder fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, thereby fusing image features of shallower and deeper layers and enhancing the semantic and detail information of the corresponding decoding levels. The decoder then performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
In the textile material CT image segmentation method based on a convolutional neural network of this embodiment, the constructed segmentation model automatically segments the CT image of the textile material to be segmented end to end, without manual processing; this yields high segmentation efficiency and high segmentation precision, and the segmentation result is reproducible. In the constructed segmentation model, the lateral connections between the levels of the encoder and the decoder fuse the encoding features output by the encoder levels with the corresponding decoder levels, so the detailed feature information in the encoder is retained and the segmentation precision is improved. The connections between the intermediate levels and the deep level of the decoder let the decoder fuse the decoding feature maps of the intermediate levels with the decoding feature map of the deep level, combining image features of shallower and deeper layers and enhancing the semantic and detail information of the corresponding decoding levels, which further improves segmentation precision. This solves the technical problems of the existing manual segmentation of textile CT images: high segmentation difficulty, a cumbersome, time-consuming and labor-intensive process, heavy dependence of the result on the operator's experience and knowledge, and poor reproducibility of the segmentation result.
The above is one embodiment of a method for segmenting CT images of textile materials based on convolutional neural networks provided in the embodiments of the present application, and the following is another embodiment of a method for segmenting CT images of textile materials based on convolutional neural networks provided in the embodiments of the present application.
For ease of understanding, referring to fig. 2, another embodiment of a method for CT image segmentation of textile materials based on convolutional neural network provided in the present application includes:
step 201, constructing a segmentation model.
To avoid manual image segmentation, a segmentation model is constructed in this embodiment for automatic image segmentation. Referring to fig. 3, the segmentation model in this embodiment includes an encoder and a decoder.
The encoder in the segmentation model preferably consists of 4 convolution modules and 4 downsampling layers. The first convolution module C1 consists of two convolution layers, each followed by an activation function, preferably a ReLU activation function. The second convolution module C2, the third convolution module C3 and the fourth convolution module C4 each consist of three convolution layers, each followed by an activation function, preferably a ReLU activation function. The convolution layers within the same convolution module have the same number of convolution kernels, while different convolution modules have different numbers; the numbers of convolution kernels of the convolution layers in the 4 encoder convolution modules are 64, 128, 256 and 512 in sequence. Each downsampling layer is a maximum pooling layer (max pool), and max pooling is used for downsampling between the convolution modules.
The decoder in the segmentation model comprises 5 convolution modules and 4 up-sampling modules. The fifth convolution module A5 consists of two convolution layers, each followed by an activation function, preferably a ReLU activation function. The remaining convolution modules, i.e. the first convolution module A1, the second convolution module A2, the third convolution module A3 and the fourth convolution module A4, each consist of three convolution layers, each followed by an activation function, preferably a ReLU activation function. The convolution layers within the same convolution module have the same number of convolution kernels, while different convolution modules have different numbers; the numbers of convolution kernels of the convolution layers in the 5 decoder convolution modules are 1024, 512, 256, 128 and 64 in sequence. Each of the 4 up-sampling modules consists of a batch normalization layer, an up-sampling layer and a 2×2 convolution layer, so its structure can be abbreviated as bn-up-conv2×2; the numbers of convolution kernels of the 4 up-sampling modules are 512, 256, 128 and 64 in sequence.
Referring to fig. 3, in this embodiment the first convolution module C1 in the encoder is laterally connected to the first convolution module A1 in the decoder, the second convolution module C2 to the second convolution module A2, the third convolution module C3 to the third convolution module A3, and the fourth convolution module C4 to the fourth convolution module A4. Considering that the size of the encoding feature map at each encoder level may not match the size of the feature map at the corresponding decoder level, the encoding feature map of each encoder level can be cropped to the same size as the feature map of the corresponding decoder level before the feature maps are fused. In this embodiment, the three intermediate levels, i.e. the fourth convolution module A4, the third convolution module A3 and the second convolution module A2 in the decoder, are preferably connected with the deep level of the first convolution module A1, so as to fuse the feature maps of the shallower levels with the feature map of the deep level, as shown in the sketch below.
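For concreteness, the following is a minimal PyTorch sketch of such an encoder-decoder, not the patented implementation itself. The translated module numbering is ambiguous, so the sketch makes several assumptions: a standard U-Net-style pairing of encoder and decoder levels, a single-channel grayscale CT input, padded 3×3 convolutions (which makes the crop step a no-op), and intermediate decoder maps projected by 1×1 convolutions, up-sampled to full resolution and concatenated with the last decoder map to form the second fused feature map. All class and function names are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, n_convs):
    """n_convs 3x3 conv layers, each followed by ReLU. Padding keeps the
    spatial size, so the crop step described above becomes a no-op here."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class UpModule(nn.Module):
    """The bn-up-conv2x2 up-sampling module: batch normalization,
    2x up-sampling, then a 2x2 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_ch)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=2, padding='same')
    def forward(self, x):
        return self.conv(F.interpolate(self.bn(x), scale_factor=2))

class TextileSegNet(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        # encoder: C1 has two conv layers, C2-C4 three each (64/128/256/512)
        self.c1 = conv_block(in_ch, 64, 2)
        self.c2 = conv_block(64, 128, 3)
        self.c3 = conv_block(128, 256, 3)
        self.c4 = conv_block(256, 512, 3)
        self.pool = nn.MaxPool2d(2)  # max pooling between convolution modules
        # decoder modules (1024/512/256/128/64); the last one has two convs
        self.a1 = conv_block(512, 1024, 3)
        self.up1, self.a2 = UpModule(1024, 512), conv_block(1024, 512, 3)
        self.up2, self.a3 = UpModule(512, 256), conv_block(512, 256, 3)
        self.up3, self.a4 = UpModule(256, 128), conv_block(256, 128, 3)
        self.up4, self.a5 = UpModule(128, 64), conv_block(128, 64, 2)
        # intermediate-to-deep fusion: project the intermediate decoder maps
        # to 64 channels before fusing them with the last decoder map
        self.p2, self.p3, self.p4 = (nn.Conv2d(c, 64, 1) for c in (512, 256, 128))
        self.head = nn.Conv2d(4 * 64, 1, 1)  # final 1x1 convolution

    def forward(self, x):  # H and W must be divisible by 16
        e1 = self.c1(x)
        e2 = self.c2(self.pool(e1))
        e3 = self.c3(self.pool(e2))
        e4 = self.c4(self.pool(e3))
        d1 = self.a1(self.pool(e4))
        # lateral connections: concatenate each encoder map with the
        # up-sampled decoder map of the corresponding level
        d2 = self.a2(torch.cat([e4, self.up1(d1)], dim=1))
        d3 = self.a3(torch.cat([e3, self.up2(d2)], dim=1))
        d4 = self.a4(torch.cat([e2, self.up3(d3)], dim=1))
        d5 = self.a5(torch.cat([e1, self.up4(d4)], dim=1))
        # second fused feature map: intermediate decoder maps up-sampled to
        # full resolution and concatenated with the last decoder map
        size = d5.shape[-2:]
        fused = torch.cat([d5,
                           F.interpolate(self.p2(d2), size=size),
                           F.interpolate(self.p3(d3), size=size),
                           F.interpolate(self.p4(d4), size=size)], dim=1)
        return torch.sigmoid(self.head(fused))  # per-pixel probability map
```

Under these assumptions, an input tensor of shape 1×1×256×256 produces a probability map of the same spatial size, which can then be thresholded into a segmentation mask.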
Step 202, acquiring a CT image of the textile material to be trained, wherein the CT image of the textile material to be trained is annotated with a segmentation labeling result.
A preset number of CT images of the textile material to be trained can be acquired from a database. Because the scanner or the operating conditions may differ when the CT images of the textile material to be trained are acquired, the acquired images lack consistency; therefore, the CT images of the textile material to be trained can be normalized after acquisition.
To train the segmentation model more fully, data enhancement can be applied to the CT images of the textile material to be trained so as to expand their number; translation, rotation, flipping or scaling can be used for this purpose. Translation shifts the image in the image plane; rotation turns the image clockwise or counter-clockwise by a certain angle; flipping mirrors the image along the horizontal or vertical direction; scaling enlarges or reduces the image by a certain ratio. A sketch of both preprocessing steps follows.
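The following is a minimal sketch of the normalization and data-enhancement steps, assuming NumPy/SciPy and min-max normalization; the text does not specify the normalization scheme or the transform parameter ranges, so both are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def normalize(ct_slice):
    """Min-max normalization to [0, 1]; the exact scheme is an assumption,
    the text only says the images are normalized for consistency."""
    lo, hi = float(ct_slice.min()), float(ct_slice.max())
    return (ct_slice.astype(np.float32) - lo) / (hi - lo + 1e-8)

def _fit(a, shape):
    """Center-crop or zero-pad a 2D array back to the given shape."""
    out = np.zeros(shape, dtype=a.dtype)
    sy = max(0, (a.shape[0] - shape[0]) // 2)
    sx = max(0, (a.shape[1] - shape[1]) // 2)
    ty = max(0, (shape[0] - a.shape[0]) // 2)
    tx = max(0, (shape[1] - a.shape[1]) // 2)
    h, w = min(shape[0], a.shape[0]), min(shape[1], a.shape[1])
    out[ty:ty + h, tx:tx + w] = a[sy:sy + h, sx:sx + w]
    return out

def augment(image, mask, rng=None):
    """Randomly apply one of the four transforms named in the text;
    the parameter ranges here are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    choice = int(rng.integers(4))
    if choice == 0:    # translation in the image plane
        dy, dx = (int(v) for v in rng.integers(-10, 11, size=2))
        f = lambda a, o: ndimage.shift(a, (dy, dx), order=o, mode='nearest')
    elif choice == 1:  # clockwise or counter-clockwise rotation
        ang = float(rng.uniform(-15, 15))
        f = lambda a, o: ndimage.rotate(a, ang, reshape=False, order=o, mode='nearest')
    elif choice == 2:  # flip along the horizontal or vertical direction
        ax = int(rng.integers(2))
        f = lambda a, o: np.flip(a, axis=ax).copy()
    else:              # scale up or down by a small factor
        z = float(rng.uniform(0.9, 1.1))
        f = lambda a, o: _fit(ndimage.zoom(a, z, order=o), a.shape)
    # linear interpolation for the image, nearest for the binary mask
    return f(image, 1), f(mask, 0)
```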
Step 203, inputting the CT image of the textile material to be trained into the constructed segmentation model, and outputting the segmentation prediction result of the CT image of the textile material to be trained.
Step 204, calculating the loss value between the segmentation prediction result and the segmentation labeling result of the CT image of the textile material to be trained based on a preset loss function.
The CT image of the textile material to be trained is input into the constructed segmentation model for image segmentation, and the segmentation prediction result of the CT image of the textile material to be trained is output. The loss value between the segmentation prediction result and the segmentation labeling result is then calculated based on the preset loss function f_loss, whose formula is:
[formula shown as an image in the original publication]
where y is the segmentation prediction result, ŷ is the segmentation labeling result, w_n is a constant, preferably 0.5, and ε is a constant, preferably 0.05.
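Since the patent's formula is available only as an image, the following is purely an illustrative stand-in built from the quantities named above: a weighted combination of binary cross-entropy and a Dice term, with w_n as the weighting constant and ε as the smoothing constant. It should not be read as the patent's actual f_loss.

```python
import torch
import torch.nn.functional as F

def example_loss(y_pred, y_true, w_n=0.5, eps=0.05):
    """Illustration only, NOT the patent's formula: weighted BCE + Dice,
    using the constants w_n and eps named in the text."""
    bce = F.binary_cross_entropy(y_pred, y_true)       # pixel-wise term
    inter = (y_pred * y_true).sum()
    dice = (2 * inter + eps) / (y_pred.sum() + y_true.sum() + eps)
    return w_n * bce + (1 - w_n) * (1 - dice)          # region overlap term
```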
Step 205, updating the parameters to be trained of the segmentation model according to the loss value until the segmentation model reaches a convergence condition, so as to obtain the trained segmentation model.
The parameters to be trained of the segmentation model are updated through back-propagation according to the loss value. An Adam optimizer can be used during training, the momentum hyperparameter can be set to 0.9, the initial learning rate can be set to 0.0001, and the learning rate can be adjusted during training by cosine annealing. Training continues until the segmentation model reaches the convergence condition; specifically, when the loss value falls below a preset threshold, or the number of training iterations reaches a preset count, the segmentation model has converged, training stops, and the trained segmentation model is obtained. A sketch of such a training loop follows.
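A minimal training-loop sketch under the stated hyperparameters (Adam with beta1 = 0.9, initial learning rate 0.0001, cosine-annealed learning rate, stop on a loss threshold or a preset iteration count); the epoch count, the threshold value and the loss_fn argument are assumptions or placeholders.

```python
import torch

def train(model, loader, loss_fn, max_epochs=200, loss_threshold=0.01):
    # Adam with momentum (beta1) 0.9 and initial learning rate 0.0001,
    # with the learning rate adjusted by cosine annealing, as in the text
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=max_epochs)
    for epoch in range(max_epochs):
        total = 0.0
        for image, mask in loader:
            opt.zero_grad()
            loss = loss_fn(model(image), mask)
            loss.backward()  # back-propagate to update the parameters to be trained
            opt.step()
            total += loss.item()
        sched.step()
        # convergence condition: average loss below a preset threshold,
        # or the preset number of training iterations reached
        if total / max(len(loader), 1) < loss_threshold:
            break
    return model
```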
Step 206, acquiring a CT image of the textile material to be segmented and inputting it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps; the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map; the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
After the CT image of the textile material to be segmented is acquired, it can be normalized, and the normalized CT image is input into the trained segmentation model for image segmentation, with the encoder extracting the context information of the input image and the decoder accurately locating the parts to be segmented. The levels of the encoder in the segmentation model extract features from the CT image of the textile material to be segmented in series to obtain encoding feature maps, and the decoder decodes the encoding feature maps output by the encoder. To retain the detailed feature information of the encoding process to the greatest extent, better help the decoder restore the details of the image, and improve image segmentation precision, a plurality of levels of the encoder and the decoder are laterally connected: the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, and the decoder decodes the first fused feature map to obtain decoding feature maps. To further improve segmentation precision, the intermediate levels and the deep level of the decoder are connected in this embodiment, so that the decoder fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, thereby fusing image features of shallower and deeper layers and enhancing the semantic and detail information of the corresponding decoding levels. The decoder then performs a 1×1 convolution operation on the second fused feature map, followed by a sigmoid activation function, to obtain the segmentation result of the CT image of the textile material to be segmented. The segmentation prediction result obtained by the segmentation model is shown in fig. 4, and the expert segmentation result of the same CT image is shown in fig. 5; comparing fig. 4 and fig. 5, the prediction obtained by the segmentation model is basically consistent with the expert segmentation result, indicating that the segmentation model of the present application achieves high segmentation accuracy. A minimal sketch of this inference step follows.
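A minimal inference sketch, reusing the normalize helper assumed earlier; the binarization threshold of 0.5 is an assumption, since the text stops at the sigmoid output.

```python
import torch

@torch.no_grad()
def segment(model, ct_slice, threshold=0.5):
    """Normalize a CT slice, run the trained model, and binarize the
    probability map. `normalize` is the helper sketched above."""
    model.eval()
    x = torch.from_numpy(normalize(ct_slice)).float()[None, None]  # 1x1xHxW
    prob = model(x)  # the model ends with a 1x1 convolution and a sigmoid
    return (prob.squeeze(0).squeeze(0) > threshold).numpy().astype('uint8')
```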
The above is an embodiment of a method for segmenting a CT image of a textile material based on a convolutional neural network provided in the embodiments of the present application, and the following is an embodiment of a device for segmenting a CT image of a textile material based on a convolutional neural network provided in the embodiments of the present application.
For ease of understanding, referring to fig. 6, an embodiment of a textile CT image segmentation apparatus based on convolutional neural network provided in the present application includes:
a construction unit 301, configured to construct a segmentation model comprising an encoder and a decoder, with lateral connections between a plurality of levels of the encoder and the decoder, and connections between the intermediate levels and the deep level of the decoder.
The segmentation unit 302 is configured to acquire a CT image of the textile material to be segmented and input it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps, the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented.
As a further improvement, the apparatus further comprises:
the acquiring unit 303 is configured to acquire a CT image of the textile material to be trained, where the CT image of the textile material to be trained is labeled with a segmentation labeling result.
The output unit 304 is configured to input the CT image of the textile material to be trained into the constructed segmentation model, and output a segmentation prediction result of the CT image of the textile material to be trained;
the calculating unit 305 is configured to calculate a loss value of the segmentation prediction result and the segmentation labeling result of the CT image of the textile material to be trained based on a preset loss function.
And the updating unit 306 is configured to update the to-be-trained parameter of the segmentation model according to the loss value until the segmentation model reaches the convergence condition, thereby obtaining a trained segmentation model.
As a further improvement, the apparatus further comprises:
the normalization unit 307 is configured to normalize the CT image of the textile material to be trained.
As a further improvement, the apparatus further comprises:
the data enhancement unit 308 is configured to perform data enhancement on the normalized CT image of the textile material to be trained.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described here again.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. A textile material CT image segmentation method based on a convolutional neural network, characterized by comprising:
constructing a segmentation model, wherein the segmentation model comprises an encoder and a decoder, lateral connections exist between a plurality of levels of the encoder and the decoder, and connections exist between the intermediate levels and the deep level of the decoder;
acquiring a CT image of the textile material to be segmented and inputting it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps, the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented;
the training process of the segmentation model comprises the following steps:
acquiring a CT image of the textile material to be trained, wherein the CT image of the textile material to be trained is annotated with a segmentation labeling result;
inputting the CT image of the textile material to be trained into the constructed segmentation model, and outputting a segmentation prediction result of the CT image of the textile material to be trained;
calculating a loss value between the segmentation prediction result and the segmentation labeling result of the CT image of the textile material to be trained based on a preset loss function;
updating parameters to be trained of the segmentation model according to the loss value until the segmentation model reaches a convergence condition, so as to obtain the trained segmentation model;
the formula of the preset loss function is:
[formula shown as an image in the original publication]
where y is the segmentation prediction result, ŷ is the segmentation labeling result, w_n is a constant, and ε is a constant.
2. The method for segmenting a CT image of a textile material based on a convolutional neural network according to claim 1, wherein before inputting the CT image of the textile material to be trained into the constructed segmentation model, further comprising:
and carrying out normalization processing on the CT image of the textile material to be trained.
3. The method for segmenting a CT image of a textile material based on a convolutional neural network according to claim 2, wherein after normalizing the CT image of the textile material to be trained, before inputting the CT image of the textile material to be trained into the constructed segmentation model, further comprises:
and carrying out data enhancement on the normalized CT image of the textile material to be trained.
4. The method of claim 1, wherein the encoder is comprised of a plurality of convolutional layers and a plurality of downsampling layers, and the decoder is comprised of a plurality of convolutional layers, a plurality of batch normalization layers, and a plurality of upsampling layers.
5. A textile material CT image segmentation apparatus based on a convolutional neural network, comprising:
a building unit, configured to build a segmentation model, where the segmentation model includes an encoder and a decoder, where a plurality of levels of the encoder and the decoder are connected laterally, and where a connection exists between an intermediate level and a deep level of the decoder;
a segmentation unit, configured to acquire a CT image of the textile material to be segmented and input it into the trained segmentation model, so that a plurality of levels of the encoder extract features from the CT image in series to obtain encoding feature maps, the encoder inputs the encoding feature maps of these levels into the corresponding levels of the decoder for feature fusion to obtain a first fused feature map, the decoder decodes the first fused feature map to obtain decoding feature maps, fuses the decoding feature maps of the intermediate levels with the decoding feature map of the deep level to obtain a second fused feature map, and performs a convolution operation on the second fused feature map to obtain the segmentation result of the CT image of the textile material to be segmented;
an acquisition unit, configured to acquire CT images of the textile material to be trained, wherein the CT images of the textile material to be trained are annotated with segmentation labeling results;
the output unit is used for inputting the CT image of the textile material to be trained into the constructed segmentation model and outputting a segmentation prediction result of the CT image of the textile material to be trained;
a calculating unit, configured to calculate a loss value between the segmentation prediction result and the segmentation labeling result of the CT image of the textile material to be trained based on a preset loss function;
the updating unit is used for updating the parameters to be trained of the segmentation model according to the loss value until the segmentation model reaches a convergence condition, so as to obtain the trained segmentation model;
the formula of the preset loss function is:
[formula shown as an image in the original publication]
where y is the segmentation prediction result, ŷ is the segmentation labeling result, w_n is a constant, and ε is a constant.
6. The convolutional neural network-based textile material CT image segmentation device of claim 5, further comprising:
and the normalization unit is used for performing normalization processing on the CT image of the textile material to be trained.
7. The convolutional neural network-based textile material CT image segmentation device of claim 6, further comprising:
and the data enhancement unit is used for carrying out data enhancement on the normalized CT image of the textile material to be trained.
CN202010878338.2A (priority date 2020-08-27, filing date 2020-08-27): Textile material CT image segmentation method and device based on convolutional neural network. Status: Active. Granted as CN111968122B.

Priority Applications (1)

Application Number: CN202010878338.2A (granted as CN111968122B); Priority Date: 2020-08-27; Filing Date: 2020-08-27; Title: Textile material CT image segmentation method and device based on convolutional neural network

Applications Claiming Priority (1)

Application Number: CN202010878338.2A (granted as CN111968122B); Priority Date: 2020-08-27; Filing Date: 2020-08-27; Title: Textile material CT image segmentation method and device based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN111968122A CN111968122A (en) 2020-11-20
CN111968122B true CN111968122B (en) 2023-07-28

Family

ID: 73399323

Family Applications (1)

Application Number: CN202010878338.2A; Title: Textile material CT image segmentation method and device based on convolutional neural network; Priority Date: 2020-08-27; Filing Date: 2020-08-27; Status: Active; Granted as: CN111968122B

Country Status (1)

Country: CN; Link: CN111968122B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614091A (en) * 2020-12-10 2021-04-06 清华大学 Ultrasonic multi-section data detection method for congenital heart disease
CN114219807B (en) * 2022-02-22 2022-07-12 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020119679A1 (en) * 2018-12-14 2020-06-18 深圳先进技术研究院 Three-dimensional left atrium segmentation method and apparatus, terminal device, and storage medium
CN109886971A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image partition method and system based on convolutional neural networks
CN110428427A (en) * 2019-07-16 2019-11-08 东南大学 Semi-supervised arteria renalis dividing method based on intensive biasing networks and self-encoding encoder
CN111311629A (en) * 2020-02-21 2020-06-19 京东方科技集团股份有限公司 Image processing method, image processing device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Building segmentation in remote sensing images based on a multi-scale feature fusion model; 徐胜军 (Xu Shengjun), 欧阳朴衍 (Ouyang Puyan), 郭学源 (Guo Xueyuan), Khan Taha Muthar; Computer Measurement & Control (计算机测量与控制), (07), pp. 220-225 *

Also Published As

Publication number Publication date
CN111968122A (en) 2020-11-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant