CN115880491A - Crack image segmentation method based on residual error network and back-and-forth sampling - Google Patents


Info

Publication number: CN115880491A
Application number: CN202211620981.0A
Authority: CN (China)
Prior art keywords: convolution operation, decoding, operation group, layer, coding
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 厉涛, 谢永华
Current assignee: Nanjing University of Information Science and Technology
Original assignee: Nanjing University of Information Science and Technology
Application filed by Nanjing University of Information Science and Technology

Landscapes

  • Image Analysis (AREA)
Abstract

The invention discloses a crack image segmentation method based on a residual error network and round-trip sampling, comprising the following steps: inputting an image to be segmented into an encoder network to obtain a deepest-layer feature mapping F1; inputting the feature mapping F1 into a round-trip sampling module to obtain a feature mapping F2; inputting the feature mapping F2 into a decoder network to obtain a feature mapping F3; and inputting the feature mapping F3 into a prediction network to obtain a crack segmentation image. The convolution layers in the first four convolution operation groups of the encoder and decoder networks are replaced with residual error network modules, and the convolution layers in the fifth convolution operation group of each network are replaced with a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module. Compared with the prior art, the method improves network performance, alleviates the dilution of deep semantic information, and markedly improves the crack segmentation effect.

Description

Crack image segmentation method based on residual error network and back-and-forth sampling
Technical Field
The invention belongs to the technical field of crack detection, and particularly relates to a crack image segmentation method based on a residual error network and round-trip sampling.
Background
The crack detection problem can also be viewed as a pixel-level semantic segmentation task. Because an image carries more position information in the shallow convolution layers and more semantic information in the deep convolution layers, features at different scales need to be fused. The typical U-Net adopts skip connections for feature fusion and achieves a good training effect, but this causes the deep semantic features to be diluted layer by layer, so that the network degrades into a shallow network and loses the advantages of a deep network.
Deep learning is a branch of machine learning inspired by the structure of the human brain, and many neural network algorithms have been proposed for target detection and image classification tasks. Classical neural networks such as the fully convolutional network (FCN) and convolutional neural networks (CNNs) have obvious advantages in semantic segmentation over conventional detection methods. Nevertheless, training a neural network requires a large amount of sample data to obtain an optimal result, and as the number of network layers increases, the problem of gradient vanishing or explosion also arises during training.
Disclosure of Invention
In order to improve sample utilization and solve problems such as gradient vanishing, the structure of the neural network needs to be optimized to achieve the best training result.
Purpose: to solve the above problems and further optimize the network structure, the invention provides a crack image segmentation method based on a residual error network and round-trip sampling, which helps alleviate the dilution of deep semantic information, improves network performance, and optimizes the crack image segmentation effect.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, a crack image segmentation method based on a residual error network and round-trip sampling is provided, which includes:
s1, inputting an image to be segmented into an encoder network to obtain a deepest feature mapping F1;
wherein the encoder network comprises a first coding convolution operation group, a second coding convolution operation group, a third coding convolution operation group, a fourth coding convolution operation group and a fifth coding convolution operation group connected in sequence by maximum pooling layers;
the first coding convolution operation group, the second coding convolution operation group, the third coding convolution operation group and the fourth coding convolution operation group are all composed of two residual error network modules; each residual error network module comprises a convolution layer, an activation layer and a batch normalization layer;
the fifth coding convolution operation group consists of two combined modules, and each combined module is a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module;
s2, inputting the feature mapping F1 into a back-and-forth sampling module to obtain a feature mapping F2; the round-trip sampling module consists of two convolution operation groups and a down-sampling layer, wherein each convolution operation group consists of an up-sampling layer and two convolution layers;
step S3, inputting the feature mapping F2 into a decoder network to obtain a feature mapping F3,
wherein the decoder network comprises a fifth decoding convolution operation group, a fourth decoding convolution operation group, a third decoding convolution operation group, a second decoding convolution operation group and a first decoding convolution operation group connected in sequence by up-sampling layers;
the first decoding convolution operation group, the second decoding convolution operation group, the third decoding convolution operation group and the fourth decoding convolution operation group are all composed of two residual error network modules; each residual error network module comprises a convolution layer, an activation layer and a batch normalization layer;
the fifth decoding convolution operation group consists of two combined modules, and each combined module is a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module;
and S4, inputting the feature mapping F3 into a prediction network to obtain a crack segmentation image.
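The round-trip sampling module of step S2 can be sketched in PyTorch as follows. The bilinear up-sampling mode, the ReLU activations, and the 4x pooling that returns the features to their input resolution are assumptions not fixed by the text; the text fixes only the grouping (an up-sampling layer plus two convolution layers, twice, followed by one down-sampling layer).

```python
# Minimal PyTorch sketch of the round-trip sampling module: two groups of
# (2x up-sampling + two 3x3 convolutions) followed by a single down-sampling
# layer. The 4x max pooling that undoes the two 2x up-samplings, the bilinear
# mode, and the ReLU activations are assumptions, not taken from the patent.
import torch
import torch.nn as nn


class RoundTripSampling(nn.Module):
    def __init__(self, channels: int):
        super().__init__()

        def conv_group() -> nn.Sequential:
            # one convolution operation group: up-sampling layer + two conv layers
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )

        self.group1 = conv_group()
        self.group2 = conv_group()
        self.down = nn.MaxPool2d(kernel_size=4)  # assumed single down-sampling layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # F1 -> (up, conv, conv) -> (up, conv, conv) -> down -> F2
        return self.down(self.group2(self.group1(x)))
```

With these assumptions the module preserves the spatial size of F1, so F2 can be fed directly into the fifth decoding convolution operation group.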
In some embodiments, step S1 comprises:
the image to be segmented passes through the first coding convolution operation group to obtain a first coding feature map;
the first coding feature map passes through a maximum pooling layer and is input into the second coding convolution operation group to obtain a second coding feature map;
the second coding feature map passes through a maximum pooling layer and is input into the third coding convolution operation group to obtain a third coding feature map;
the third coding feature map passes through a maximum pooling layer and is input into the fourth coding convolution operation group to obtain a fourth coding feature map;
and the fourth coding feature map passes through a maximum pooling layer and is input into the fifth coding convolution operation group to obtain the feature mapping F1.
In some embodiments, step S3 comprises:
the feature mapping F2 passes through the fifth decoding convolution operation group to obtain a fifth decoding feature map;
the fifth decoding feature map is up-sampled, fused with the fourth coding feature map, and input into the fourth decoding convolution operation group to obtain a fourth decoding feature map;
the fourth decoding feature map is up-sampled, fused with the third coding feature map, and input into the third decoding convolution operation group to obtain a third decoding feature map;
the third decoding feature map is up-sampled, fused with the second coding feature map, and input into the second decoding convolution operation group to obtain a second decoding feature map;
and the second decoding feature map is up-sampled, fused with the first coding feature map, and input into the first decoding convolution operation group to obtain the feature mapping F3.
In some embodiments, the fusing is performed using an optimized feature fusion unit that includes a fusion block, a batch normalization layer, and a convolution layer.
In some embodiments, in step S4, the prediction network uses a Sigmoid function.
In some embodiments, each residual error network module comprises two 3×3 convolution layers, two activation layers, three batch normalization layers, and one 1×1 convolution layer;
the input features of the residual error network module are processed in sequence by the first 3×3 convolution layer, the first batch normalization layer, the first activation layer, the second 3×3 convolution layer and the second batch normalization layer to obtain first features;
the input features of the residual error network module are also processed in sequence by the 1×1 convolution layer and the third batch normalization layer to obtain second features;
and the first features and the second features are fused and then processed by the second activation layer to obtain the output features of the residual error network module.
In some embodiments, the entire codec network adopts an improved cross-entropy loss function L_ce:

[loss formula shown as an image in the original filing]

where y is the true value of the image, ŷ is the predicted value of the image, β is a weighting coefficient, and γ is an adjustable focusing parameter.
In a second aspect, the invention provides a fracture image segmentation device based on a residual error network and round-trip sampling, comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to the first aspect.
In a third aspect, the present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
In a fourth aspect, the present invention provides a computer device comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention provides a crack image segmentation method based on a residual error network and round-trip sampling. A codec network based on a residual error network and round-trip sampling provides a new model for the semantic segmentation task, retains the structural advantages of the codec, and alleviates the dilution of deep semantic information.
(2) The invention introduces the residual error network module and atrous spatial pyramid pooling, which solves the gradient-vanishing problem and improves network performance, while also improving the network's ability to process context information.
(3) The method uses an optimized feature fusion unit and an improved cross-entropy loss function, which improves the final crack detection effect, particularly the segmentation of crack edges.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention.
Fig. 2 is a diagram of a codec network according to an embodiment of the present invention.
Fig. 3 is a structure diagram of a ResBlock residual network module in the embodiment of the present invention.
Fig. 4 is a feature fusion unit structure optimized in the embodiment of the present invention.
Detailed Description
The invention is further explained by the following embodiments in conjunction with the drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
In the description of the present invention, "several" means one or more, "a plurality" means two or more, and terms such as above, below and exceeding are understood as excluding the stated number, while terms such as at least, at most and within are understood as including the stated number. Where "first" and "second" are used only to distinguish technical features, they are not to be understood as indicating or implying relative importance, the number of technical features indicated, or the precedence of the technical features indicated.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Example 1
As shown in fig. 1, a crack image segmentation method based on a residual error network and round-trip sampling includes:
s1, inputting an image to be segmented into an encoder network to obtain a deepest feature mapping F1;
wherein the encoder network comprises a first coding convolution operation group, a second coding convolution operation group, a third coding convolution operation group, a fourth coding convolution operation group and a fifth coding convolution operation group connected in sequence by maximum pooling layers;
the first coding convolution operation group, the second coding convolution operation group, the third coding convolution operation group and the fourth coding convolution operation group are all composed of two residual error network modules; each residual error network module comprises a convolution layer, an activation layer and a batch normalization layer;
the fifth coding convolution operation group consists of two combined modules, and each combined module is a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module;
s2, inputting the feature mapping F1 into a back-and-forth sampling module to obtain a feature mapping F2; the round-trip sampling module consists of two convolution operation groups and a down-sampling layer, wherein each convolution operation group consists of an up-sampling layer and two convolution layers;
step S3, inputting the feature mapping F2 into a decoder network to obtain a feature mapping F3,
wherein the decoder network comprises a fifth decoding convolution operation group, a fourth decoding convolution operation group, a third decoding convolution operation group, a second decoding convolution operation group and a first decoding convolution operation group connected in sequence by up-sampling layers;
the first decoding convolution operation group, the second decoding convolution operation group, the third decoding convolution operation group and the fourth decoding convolution operation group are all composed of two residual error network modules; each residual error network module comprises a convolution layer, an activation layer and a batch normalization layer;
the fifth decoding convolution operation group consists of two combined modules, and each combined module is a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module;
and S4, inputting the feature mapping F3 into a prediction network to obtain a crack segmentation image.
In some embodiments, as shown in fig. 2, step S1 comprises:
the image to be segmented passes through the first coding convolution operation group to obtain a first coding feature map;
the first coding feature map passes through a maximum pooling layer and is input into the second coding convolution operation group to obtain a second coding feature map;
the second coding feature map passes through a maximum pooling layer and is input into the third coding convolution operation group to obtain a third coding feature map;
the third coding feature map passes through a maximum pooling layer and is input into the fourth coding convolution operation group to obtain a fourth coding feature map;
and the fourth coding feature map passes through a maximum pooling layer and is input into the fifth coding convolution operation group to obtain the feature mapping F1.
In some embodiments, as shown in fig. 2, step S3 comprises:
the feature mapping F2 passes through the fifth decoding convolution operation group to obtain a fifth decoding feature map;
the fifth decoding feature map is up-sampled, fused with the fourth coding feature map, and input into the fourth decoding convolution operation group to obtain a fourth decoding feature map;
the fourth decoding feature map is up-sampled, fused with the third coding feature map, and input into the third decoding convolution operation group to obtain a third decoding feature map;
the third decoding feature map is up-sampled, fused with the second coding feature map, and input into the second decoding convolution operation group to obtain a second decoding feature map;
and the second decoding feature map is up-sampled, fused with the first coding feature map, and input into the first decoding convolution operation group to obtain the feature mapping F3.
In some embodiments, as shown in fig. 3, each residual error network module includes two 3×3 convolution layers, two activation layers, three batch normalization layers, and one 1×1 convolution layer;
the input features of the residual error network module are processed in sequence by the first 3×3 convolution layer, the first batch normalization layer, the first activation layer, the second 3×3 convolution layer and the second batch normalization layer to obtain first features;
the input features of the residual error network module are also processed in sequence by the 1×1 convolution layer and the third batch normalization layer to obtain second features;
and the first features and the second features are fused and then processed by the second activation layer to obtain the output features of the residual error network module.
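The ResBlock of fig. 3 can be sketched in PyTorch as follows. The choice of ReLU as the activation and the channel counts are assumptions; the layer ordering follows the description above.

```python
# Minimal PyTorch sketch of the ResBlock: the main branch applies
# 3x3 conv -> BN -> activation -> 3x3 conv -> BN, the shortcut branch applies
# 1x1 conv -> BN, and the two branches are summed before a final activation.
# ReLU and the channel widths are assumptions, not taken from the patent.
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(out_channels)   # first batch normalization layer
        self.act1 = nn.ReLU(inplace=True)         # first activation layer
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channels)   # second batch normalization layer
        self.conv3 = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.bn3 = nn.BatchNorm2d(out_channels)   # third batch normalization layer
        self.act2 = nn.ReLU(inplace=True)         # second activation layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        first = self.bn2(self.conv2(self.act1(self.bn1(self.conv1(x)))))  # first features
        second = self.bn3(self.conv3(x))                                  # second features
        return self.act2(first + second)  # fuse, then apply the second activation
```

The 1×1 shortcut convolution lets the block change the channel count while still carrying the input forward, which is what counters the gradient-vanishing problem described above.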
In some embodiments, the fusion is performed using an optimized feature fusion unit, as shown in fig. 4, which includes a fusion block, a batch normalization layer, and a convolution layer.
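The optimized feature fusion unit of fig. 4 can be sketched as below. Realizing the fusion block as channel concatenation, and the 3×3 kernel of the trailing convolution, are assumptions; the text fixes only the sequence fusion block, batch normalization layer, convolution layer.

```python
# Minimal PyTorch sketch of the optimized feature fusion unit: a fusion block
# (assumed here to be channel concatenation), a batch normalization layer,
# and a convolution layer. The 3x3 kernel size is also an assumption.
import torch
import torch.nn as nn


class FeatureFusionUnit(nn.Module):
    def __init__(self, decoder_channels: int, encoder_channels: int, out_channels: int):
        super().__init__()
        fused = decoder_channels + encoder_channels
        self.bn = nn.BatchNorm2d(fused)
        self.conv = nn.Conv2d(fused, out_channels, kernel_size=3, padding=1)

    def forward(self, decoder_feat: torch.Tensor, encoder_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([decoder_feat, encoder_feat], dim=1)  # fusion block
        return self.conv(self.bn(x))
```

Normalizing the concatenated features before the convolution keeps the decoder and encoder branches on a comparable scale before they are mixed.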
In some embodiments, in step S4, the prediction network uses a Sigmoid function.
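Steps S1-S4 can be sketched end to end as follows. To keep the sketch short, plain double-convolution groups stand in for the residual/ASPP operation groups, the round-trip sampling module is reduced to an up, up, down sequence, skip fusion is plain channel concatenation, and all channel widths are illustrative assumptions.

```python
# End-to-end PyTorch sketch of steps S1-S4. Double-convolution groups stand
# in for the residual/ASPP operation groups, and the channel widths, the
# bilinear up-sampling, and the 4x pooling in the round-trip module are
# assumptions, not values taken from the patent.
import torch
import torch.nn as nn


def conv_group(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class CrackSegNet(nn.Module):
    def __init__(self, in_ch: int = 3, base: int = 16):
        super().__init__()
        c = [base, base * 2, base * 4, base * 8, base * 16]
        # encoder: five coding convolution operation groups joined by max pooling
        self.enc = nn.ModuleList([conv_group(i, o) for i, o in zip([in_ch] + c[:-1], c)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # round-trip sampling module, reduced here to up, conv, up, conv, down
        self.round_trip = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(c[4], c[4], 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(c[4], c[4], 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(4),
        )
        # decoder: fifth through first decoding convolution operation groups
        self.dec5 = conv_group(c[4], c[4])
        self.dec4 = conv_group(c[4] + c[3], c[3])
        self.dec3 = conv_group(c[3] + c[2], c[2])
        self.dec2 = conv_group(c[2] + c[1], c[1])
        self.dec1 = conv_group(c[1] + c[0], c[0])
        self.head = nn.Conv2d(c[0], 1, 1)  # prediction network, followed by Sigmoid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc[0](x)
        e2 = self.enc[1](self.pool(e1))
        e3 = self.enc[2](self.pool(e2))
        e4 = self.enc[3](self.pool(e3))
        f1 = self.enc[4](self.pool(e4))               # S1: deepest feature mapping F1
        f2 = self.round_trip(f1)                      # S2: feature mapping F2
        d5 = self.dec5(f2)
        d4 = self.dec4(torch.cat([self.up(d5), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up(d3), e2], dim=1))
        f3 = self.dec1(torch.cat([self.up(d2), e1], dim=1))  # S3: feature mapping F3
        return torch.sigmoid(self.head(f3))           # S4: crack segmentation map
```

A 64×64 RGB input yields a 64×64 single-channel probability map, matching the S1-S4 data flow described above.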
In some embodiments, as a further optimization of the codec network based on the residual error network and round-trip sampling, the loss function used by the network is replaced with an improved cross-entropy loss function.
The entire codec network adopts the improved cross-entropy loss function L_ce:

[loss formula shown as an image in the original filing]

where y is the true value of the image, ŷ is the predicted value of the image, β is a weighting coefficient, and γ is an adjustable focusing parameter.
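Since the formula itself appears only as an image in the filing, its exact form is not recoverable from the text. A common loss matching the symbols named above (weighting coefficient β, adjustable focusing parameter γ) is the class-balanced focal cross-entropy sketched below; this specific form is an assumption, not the patent's formula.

```python
# NumPy sketch of a class-balanced focal cross-entropy loss consistent with
# the symbols named in the text (true value y, prediction y_hat, weighting
# coefficient beta, adjustable focusing parameter gamma). The exact form is
# an assumption; the patent's own formula is shown only as an image.
import numpy as np


def balanced_focal_ce(y: np.ndarray, y_hat: np.ndarray,
                      beta: float = 0.75, gamma: float = 2.0,
                      eps: float = 1e-7) -> float:
    """y: ground-truth mask in {0, 1}; y_hat: predicted probabilities in (0, 1)."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    pos = -beta * y * (1.0 - y_hat) ** gamma * np.log(y_hat)                 # crack pixels
    neg = -(1.0 - beta) * (1.0 - y) * y_hat ** gamma * np.log(1.0 - y_hat)   # background
    return float(np.mean(pos + neg))
```

With γ = 0 and β = 0.5 this reduces to half the ordinary binary cross-entropy; raising γ down-weights easy pixels, and β > 0.5 counteracts the dominance of background pixels in crack images.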
The invention replaces the convolution layers in the first four layers of convolution operation groups of the encoder network and the decoder network with the residual error network module. Wherein each residual network module comprises a convolutional layer, an activation layer and a batch normalization layer.
The convolution layers in the fifth-layer convolution operation groups of the encoder network and the decoder network are replaced with the combination of the residual error network module and the atrous spatial pyramid pooling (ASPP) module.
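The combined module used in the fifth-layer operation groups can be sketched as follows. The dilation rates (1, 2, 4), the ReLU activations, and the ordering (residual part first, pyramid part second) are assumptions; the text fixes only that each combined module pairs a residual module with an atrous spatial pyramid pooling (ASPP) module.

```python
# Minimal PyTorch sketch of the combined fifth-layer module: a residual block
# (3x3 main path + 1x1 shortcut) followed by atrous spatial pyramid pooling
# (parallel dilated 3x3 convolutions, concatenated and projected by 1x1 conv).
# The dilation rates, activations, and block ordering are assumptions.
import torch
import torch.nn as nn


class ResidualASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        # residual part
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.shortcut = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.BatchNorm2d(out_ch))
        self.act = nn.ReLU(inplace=True)
        # ASPP part: parallel dilated convolutions widen the receptive field
        self.branches = nn.ModuleList(
            [nn.Conv2d(out_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        res = self.act(self.main(x) + self.shortcut(x))
        pyramid = torch.cat([b(res) for b in self.branches], dim=1)
        return self.project(pyramid)
```

Because each dilated branch uses padding equal to its dilation rate, every branch preserves the spatial size, which is what allows the concatenation step.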
The invention uses an optimized feature fusion unit instead of a fusion unit in the decoding network.
The present invention provides a crack image segmentation method based on a residual error network and round-trip sampling, and there are many methods and approaches for implementing this technical solution. The above description is only a preferred embodiment of the invention; it should be noted that those skilled in the art may make a number of improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.
Example 2
In a second aspect, the present embodiment provides a fracture image segmentation apparatus based on a residual error network and round-trip sampling, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method of embodiment 1.
Example 3
In a third aspect, the present embodiment provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of embodiment 1.
Example 4
In a fourth aspect, the present embodiment provides a computer device, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (10)

1. A crack image segmentation method based on residual error network and round-trip sampling is characterized by comprising the following steps:
s1, inputting an image to be segmented into an encoder network to obtain a deepest feature mapping F1;
wherein the encoder network comprises a first coding convolution operation group, a second coding convolution operation group, a third coding convolution operation group, a fourth coding convolution operation group and a fifth coding convolution operation group connected in sequence by maximum pooling layers; the first coding convolution operation group, the second coding convolution operation group, the third coding convolution operation group and the fourth coding convolution operation group are each composed of two residual error network modules; each residual error network module comprises a convolution layer, an activation layer and a batch normalization layer; the fifth coding convolution operation group consists of two combined modules, and each combined module is a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module;
s2, inputting the feature mapping F1 into a back-and-forth sampling module to obtain a feature mapping F2; the round-trip sampling module consists of two convolution operation groups and a down-sampling layer, wherein each convolution operation group consists of an up-sampling layer and two convolution layers;
step S3, inputting the feature mapping F2 into a decoder network to obtain a feature mapping F3,
wherein the decoder network comprises a fifth decoding convolution operation group, a fourth decoding convolution operation group, a third decoding convolution operation group, a second decoding convolution operation group and a first decoding convolution operation group connected in sequence by up-sampling layers; the first decoding convolution operation group, the second decoding convolution operation group, the third decoding convolution operation group and the fourth decoding convolution operation group are each composed of two residual error network modules; each residual error network module comprises a convolution layer, an activation layer and a batch normalization layer; the fifth decoding convolution operation group consists of two combined modules, and each combined module is a combination of a residual error network module and an atrous spatial pyramid pooling (ASPP) module;
and S4, inputting the feature mapping F3 into a prediction network to obtain a crack segmentation image.
2. The method for segmenting the crack image based on the residual error network and the round-trip sampling according to claim 1, wherein the step S1 comprises:
the image to be segmented passes through the first coding convolution operation group to obtain a first coding feature map;
the first coding feature map passes through a maximum pooling layer and is input into the second coding convolution operation group to obtain a second coding feature map;
the second coding feature map passes through a maximum pooling layer and is input into the third coding convolution operation group to obtain a third coding feature map;
the third coding feature map passes through a maximum pooling layer and is input into the fourth coding convolution operation group to obtain a fourth coding feature map;
and the fourth coding feature map passes through a maximum pooling layer and is input into the fifth coding convolution operation group to obtain the feature mapping F1.
3. The crack image segmentation method based on the residual error network and round-trip sampling according to claim 2, wherein the step S3 comprises:
the feature mapping F2 passes through the fifth decoding convolution operation group to obtain a fifth decoding feature map;
the fifth decoding feature map is up-sampled, fused with the fourth coding feature map, and input into the fourth decoding convolution operation group to obtain a fourth decoding feature map;
the fourth decoding feature map is up-sampled, fused with the third coding feature map, and input into the third decoding convolution operation group to obtain a third decoding feature map;
the third decoding feature map is up-sampled, fused with the second coding feature map, and input into the second decoding convolution operation group to obtain a second decoding feature map;
and the second decoding feature map is up-sampled, fused with the first coding feature map, and input into the first decoding convolution operation group to obtain the feature mapping F3.
4. The crack image segmentation method based on the residual error network and round-trip sampling according to claim 3, wherein the fusing is performed using an optimized feature fusion unit, and the feature fusion unit comprises a fusion block, a batch normalization layer and a convolution layer.
5. The crack image segmentation method based on the residual error network and round-trip sampling according to claim 1, wherein in step S4, the prediction network uses a Sigmoid function.
6. The crack image segmentation method based on a residual network and round-trip sampling according to claim 1, wherein each residual network module comprises two 3×3 convolution layers, two activation layers, three batch normalization layers and one 1×1 convolution layer;
the input features of the residual network module are processed in sequence by the first 3×3 convolution layer, the first normalization layer, the first activation layer, the second 3×3 convolution layer and the second normalization layer to obtain first features;
the input features of the residual network module are processed in sequence by the 1×1 convolution layer and the third normalization layer to obtain second features;
and the first features and the second features are fused and then processed by the second activation layer to obtain the output features of the residual network module.
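The layer inventory in claim 6 maps directly onto a projection-shortcut residual block. A PyTorch sketch; the activation type (ReLU) and the element-wise addition used for fusing the two paths are assumptions, since the claim names neither:

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """Residual module per claim 6: two 3x3 convs, two activation layers,
    three batch normalization layers and one 1x1 conv on the shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # main path: 3x3 conv -> BN -> activation -> 3x3 conv -> BN
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # shortcut path: 1x1 conv -> BN (matches channel counts)
        self.conv3 = nn.Conv2d(in_ch, out_ch, 1)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)  # activation type assumed

    def forward(self, x):
        first = self.bn2(self.conv2(self.act(self.bn1(self.conv1(x)))))
        second = self.bn3(self.conv3(x))
        return self.act(first + second)   # fuse, then second activation layer

out = ResidualModule(64, 128)(torch.randn(2, 64, 32, 32))
```

The 1×1 shortcut convolution lets the block change channel count while still adding the input back in, which is what distinguishes this projection variant from an identity shortcut.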
7. The crack image segmentation method based on a residual network and round-trip sampling according to claim 1, wherein the whole encoder-decoder network adopts a cross entropy loss function L_ce:
L_ce = -β(1 - ŷ)^γ · y·log(ŷ) - (1 - β)·ŷ^γ · (1 - y)·log(1 - ŷ)
where y is the true value of the image, ŷ is the predicted value of the image, β is the weighting coefficient, and γ is the adjustable focusing parameter.
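A weighted binary cross entropy with a weighting coefficient β and an adjustable focusing parameter γ matches the standard class-balanced focal loss; the exact functional form below is an assumption based on that standard formulation, and the default values of β and γ are illustrative:

```python
import numpy as np

def focal_bce(y, y_hat, beta=0.75, gamma=2.0, eps=1e-7):
    """Class-balanced focal binary cross entropy (assumed form):
      L = -beta * (1 - y_hat)**gamma * y * log(y_hat)
          - (1 - beta) * y_hat**gamma * (1 - y) * log(1 - y_hat)
    averaged over pixels. y: ground truth in {0, 1}; y_hat: predicted
    probability from the Sigmoid prediction network."""
    y_hat = np.clip(y_hat, eps, 1 - eps)  # avoid log(0)
    pos = -beta * (1 - y_hat) ** gamma * y * np.log(y_hat)
    neg = -(1 - beta) * y_hat ** gamma * (1 - y) * np.log(1 - y_hat)
    return float(np.mean(pos + neg))

loss = focal_bce(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
```

The focusing term (1 - ŷ)^γ down-weights pixels the network already classifies confidently, which is useful for cracks, where foreground pixels are a small minority of the image.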
8. A crack image segmentation device based on residual error network and round-trip sampling is characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 7.
9. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 7.
CN202211620981.0A 2022-12-16 2022-12-16 Crack image segmentation method based on residual error network and back-and-forth sampling Pending CN115880491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211620981.0A CN115880491A (en) 2022-12-16 2022-12-16 Crack image segmentation method based on residual error network and back-and-forth sampling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211620981.0A CN115880491A (en) 2022-12-16 2022-12-16 Crack image segmentation method based on residual error network and back-and-forth sampling

Publications (1)

Publication Number Publication Date
CN115880491A true CN115880491A (en) 2023-03-31

Family

ID=85755012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211620981.0A Pending CN115880491A (en) 2022-12-16 2022-12-16 Crack image segmentation method based on residual error network and back-and-forth sampling

Country Status (1)

Country Link
CN (1) CN115880491A (en)

Similar Documents

Publication Publication Date Title
KR102419136B1 (en) Image processing apparatus and method using multiple-channel feature map
Liu et al. FDDWNet: a lightweight convolutional neural network for real-time semantic segmentation
WO2022116856A1 (en) Model structure, model training method, and image enhancement method and device
CN111091130A (en) Real-time image semantic segmentation method and system based on lightweight convolutional neural network
CN109344893B (en) Image classification method based on mobile terminal
EP3766021B1 (en) Cluster compression for compressing weights in neural networks
CN112634296A (en) RGB-D image semantic segmentation method and terminal for guiding edge information distillation through door mechanism
CN114581300A (en) Image super-resolution reconstruction method and device
CN114239861A (en) Model compression method and system based on multi-teacher combined guidance quantification
CN113870286A (en) Foreground segmentation method based on multi-level feature and mask fusion
CN113822287B (en) Image processing method, system, device and medium
CN116703947A (en) Image semantic segmentation method based on attention mechanism and knowledge distillation
CN116935292B (en) Short video scene classification method and system based on self-attention model
CN115082306A (en) Image super-resolution method based on blueprint separable residual error network
CN113971732A (en) Small target detection method and device, readable storage medium and electronic equipment
CN117671271A (en) Model training method, image segmentation method, device, equipment and medium
WO2023174256A1 (en) Data compression method and related device
CN115880491A (en) Crack image segmentation method based on residual error network and back-and-forth sampling
CN116310324A (en) Pyramid cross-layer fusion decoder based on semantic segmentation
CN116246110A (en) Image classification method based on improved capsule network
CN110378466A (en) Quantization method and system based on neural network difference
CN114501031B (en) Compression coding and decompression method and device
CN115115835A (en) Image semantic segmentation method, device, equipment, storage medium and program product
CN113570036A (en) Hardware accelerator architecture supporting dynamic neural network sparse model
CN115409150A (en) Data compression method, data decompression method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination