CN113658200A - Edge perception image semantic segmentation method based on self-adaptive feature fusion
- Publication number: CN113658200A (application CN202110864679.9A)
- Authority: CN (China)
- Prior art keywords: semantic, edge, feature, fusion
- Prior art date: 2021-07-29
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/12 — Image analysis; Segmentation; Edge-based segmentation
- G06F 18/24 — Pattern recognition; Classification techniques
- G06F 18/253 — Pattern recognition; Fusion techniques of extracted features
- G06N 3/045 — Neural networks; Architecture; Combinations of networks
- G06N 3/048 — Neural networks; Architecture; Activation functions
- G06N 3/08 — Neural networks; Learning methods
Abstract
The invention provides an edge perception image semantic segmentation method based on self-adaptive feature fusion: a new semantic segmentation method built on a residual network. The model is a two-branch network comprising an edge branch and a semantic branch, where the edge branch is led out of the shallow part of the semantic branch and the semantic branch adopts an encoder-decoder structure. In the edge branch, the added multi-scale cross-fusion operation extracts multi-scale image features by stacking atrous convolutions with different dilation rates, and the cross fusion between sub-branches further improves the robustness of these multi-scale features. In the semantic branch, deep and shallow features are fused through a spatial attention mechanism, so that the rich spatial information contained in the shallow features is retained while the large amount of noise they carry is filtered out. Finally, the semantic-branch features and the edge-branch features are fused to further refine the segmentation effect.
Description
Technical Field
The invention relates to the technical field of semantic segmentation, and in particular to an edge perception image semantic segmentation method based on self-adaptive feature fusion.
Background
Semantic segmentation refers to the pixel-level automatic identification and segmentation of image content, and is now widely applied in many fields such as land-use planning, earthquake monitoring, vegetation classification and environmental pollution monitoring. For example, by analyzing remote sensing images of the atmosphere, the distribution of atmospheric pollutants can be determined in order to monitor air pollution. How to segment images accurately has long been a hotspot and a difficulty of research at home and abroad.
In recent years, with the rapid development of deep learning, research on semantic segmentation has made breakthrough progress, and semantic segmentation models based on convolutional neural networks have improved significantly in computational efficiency, accuracy and other respects. The currently popular semantic segmentation models are the fully convolutional network (FCN), U-Net and the DeepLab series. Although these advanced models achieve good segmentation, two main problems remain: (1) the edge features of the image are not fully extracted, so the segmentation performance of the model in edge regions is poor; (2) the shallow feature maps are not filtered before the shallow and deep feature maps are fused, which introduces a large amount of noise information.
Disclosure of Invention
Aiming at the defects of the prior art and the problems of existing semantic segmentation models, the invention provides an edge perception image semantic segmentation method based on self-adaptive feature fusion, which segments images so as to achieve a better image segmentation effect.
In order to achieve this technical effect, the invention provides an edge perception image semantic segmentation method based on self-adaptive feature fusion, which comprises the following steps:
Step 1: build a data set: collect N images and apply pixel-level class labels to each image, so that each sample in the data set comprises one image and its pixel-level labeling result;
Step 2: build an edge perception image semantic segmentation model based on self-adaptive feature fusion; the model is a two-branch network with a ResNet backbone, comprising an edge branch and a semantic branch;
Step 3: divide the data set into a training set and a test set, train the model with the training set, and verify the trained model with the test set;
Step 4: apply the trained semantic segmentation model to image segmentation to obtain the segmentation result of an image.
The second step comprises the following steps:
Step 1: use a ResNet network model as the downsampling stage of the semantic branch, lead an edge branch out of that downsampling stage, and perform a multi-scale cross-fusion operation on the edge features in the edge branch;
Step 2: upsample the output features of the semantic-branch downsampling stage, fusing deep and shallow features during the upsampling stage to obtain semantic features containing rich spatial information;
Step 3: fuse the edge feature output by the edge branch with the semantic feature output by the semantic branch to obtain the fused feature F′.
Step 1 comprises the following steps:
Step 1.1: take the shallow features output by the semantic-branch downsampling stage as the input features of the edge branch, and apply convolution;
Step 1.2: process the edge features with atrous (dilated) convolution layers with dilation rates of 7, 5 and 3 respectively, obtaining three features F1, F2 and F3;
Step 1.3: apply convolution to feature F1 to obtain a new feature F1′;
Step 1.4: fuse features F1′ and F2 by concatenation to obtain a new feature F2′;
Step 1.5: fuse features F2′ and F3 by concatenation to obtain a new feature F3′;
Step 1.6: fuse features F1′, F2′ and F3′ by concatenation to obtain a new feature F′;
Step 1.7: apply convolution and upsampling to F′ in turn to obtain the final edge feature.
Step 2 comprises the following steps:
Step 2.1: obtain four features M1, M2, M3 and M4 from the downsampling stage of the semantic branch;
Step 2.2: apply convolution to feature M4 and multiply the result with feature M3 to obtain a new feature M3′;
Step 2.3: fuse features M3′ and M4 by concatenation and apply two convolution layers to obtain the output feature M3″;
Step 2.4: apply convolution to the output feature M3″ and multiply the result with feature M2 to obtain a new feature M2′;
Step 2.5: fuse features M2′ and M3″ by concatenation and apply two convolution layers to obtain the output feature M2″;
Step 2.6: apply convolution to the output feature M2″ and multiply the result with feature M1 to obtain a new feature M1′;
Step 2.7: fuse features M1′ and M2″ by concatenation and apply two convolution layers to obtain the semantic feature output by the semantic branch.
Step 3 comprises the following steps:
Step 3.1: fuse the edge feature and the semantic feature by concatenation, then apply convolution to obtain a new feature W′;
Step 3.2: apply global pooling, convolution and Sigmoid activation to feature W′ in turn to obtain a new feature W1′;
Step 3.3: multiply features W′ and W1′ to obtain a new feature W2′;
Step 3.4: fuse features W′ and W2′ by addition to obtain the fused feature F′.
The beneficial effects of the invention are as follows:
The invention provides an edge perception image semantic segmentation method based on self-adaptive feature fusion, a new semantic segmentation method built on a residual network (ResNet). The model constructed by this method is a two-branch network comprising an edge branch and a semantic branch, where the edge branch is led out of the shallow part of the semantic branch and the semantic branch adopts an encoder-decoder structure. In the edge branch, the added multi-scale cross-fusion operation extracts multi-scale image features by stacking atrous convolutions with different dilation rates, and the cross fusion between sub-branches further improves the robustness of these multi-scale features. In the semantic branch, deep and shallow features are fused through a spatial attention mechanism, so that the rich spatial information contained in the shallow features is retained while the large amount of noise they carry is filtered out. Finally, the semantic-branch features and the edge-branch features are fused to further optimize the segmentation effect.
Drawings
FIG. 1 is a flow chart of the edge perception image semantic segmentation method based on self-adaptive feature fusion in the present invention;
FIG. 2 is a schematic diagram of the construction of a semantic segmentation model in the present invention;
FIG. 3 is a schematic diagram of the multi-scale cross-fusion operation of the present invention;
FIG. 4 is a schematic diagram of the fusion of deep and shallow features in the present invention;
FIG. 5 is a schematic diagram of edge feature and semantic feature fusion in the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
As shown in FIG. 1, the edge perception image semantic segmentation method based on self-adaptive feature fusion comprises:
Step 1: build a data set: collect N images and apply pixel-level class labels to each image, so that each sample in the data set comprises one image and its pixel-level labeling result;
the data set adopted in this embodiment is an ISPRS Vaihingen data set, which includes six categories: impervious surfaces, buildings, low-rise vegetation, trees, automobiles, backgrounds; the data set contained a total of 33 images with an average size of 2494 × 2046 and a spatial resolution of 9 cm.
Data preprocessing: to further improve the segmentation accuracy of the model, the training data set is augmented with methods such as random flipping, random cropping and random scaling, sketched below.
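A minimal sketch of these augmentations applied jointly to an image tensor (3, H, W) and its pixel-level label tensor (1, H, W), assuming PyTorch/torchvision. The crop size, the scale range, and reading "random inversion" as a random horizontal flip are assumptions not fixed by the patent.

```python
import random
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

def augment(image, label, crop=512, scales=(0.5, 2.0)):
    # random horizontal flip of image and label together
    if random.random() < 0.5:
        image, label = TF.hflip(image), TF.hflip(label)
    # random scaling; nearest-neighbour for the label keeps class ids intact
    s = random.uniform(*scales)
    h, w = int(image.shape[-2] * s), int(image.shape[-1] * s)
    image = TF.resize(image, [h, w])
    label = TF.resize(label, [h, w], interpolation=InterpolationMode.NEAREST)
    # random crop to a fixed training size
    top = random.randint(0, max(h - crop, 0))
    left = random.randint(0, max(w - crop, 0))
    image = TF.crop(image, top, left, crop, crop)
    label = TF.crop(label, top, left, crop, crop)
    return image, label
```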
A schematic diagram of the semantic segmentation model is shown in FIG. 2. The overall network comprises a semantic branch and an edge branch. The semantic branch is mainly used to extract semantic features, and a spatial attention fusion module (SA) is inserted into it to fuse shallow semantic features with deep semantic features; the edge branch is mainly used to extract edge features, and a multi-scale cross-fusion module (MSB) is inserted into it to fully mine edge features; finally, a channel adaptation module (CA) is introduced to fuse the semantic features and the edge features. In FIG. 2: MSB: multi-scale cross-fusion module; SA: spatial attention fusion module; CA: channel adaptation module; Mul: multiplication; Add: addition; UpSample: upsampling; Concat: concatenation; Conv: convolution; BN: batch normalization; ReLU: activation; Output: output features.
Step 2: build the edge perception image semantic segmentation model based on self-adaptive feature fusion; the model is a two-branch network with a ResNet backbone, comprising an edge branch and a semantic branch. This step comprises:
Step 1: use a ResNet network model as the downsampling stage of the semantic branch, lead an edge branch out of that downsampling stage, and perform a multi-scale cross-fusion operation on the edge features in the edge branch to extract finer edge details. As shown in FIG. 3, the edge features are convolved by three atrous convolution layers with different dilation rates to obtain three features at different scales, and these three features are then cross-fused (rather than simply concatenated or added) so that edge information is extracted more fully. This step comprises the following sub-steps, with a code sketch after the list:
Step 1.1: take the shallow features output by the semantic-branch downsampling stage as the input features of the edge branch, and apply convolution;
Step 1.2: process the edge features with atrous (dilated) convolution layers with dilation rates of 7, 5 and 3 respectively, obtaining three features F1, F2 and F3;
Step 1.3: apply convolution to feature F1 to obtain a new feature F1′;
Step 1.4: fuse features F1′ and F2 by concatenation to obtain a new feature F2′;
Step 1.5: fuse features F2′ and F3 by concatenation to obtain a new feature F3′;
Step 1.6: fuse features F1′, F2′ and F3′ by concatenation to obtain a new feature F′;
Step 1.7: apply convolution and upsampling to F′ in turn to obtain the final edge feature.
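The following is a minimal PyTorch sketch of this multi-scale cross-fusion operation (steps 1.1–1.7). The patent fixes the dilation rates (7, 5, 3) and the concatenate-then-fuse order; the channel widths, the 3×3 kernels, and the 1×1 reduction convolutions applied after each concatenation are assumptions added here so the module is self-consistent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(c_in, c_out, k=3, dilation=1):
    # Conv + BN + ReLU block, matching the Conv/BN/ReLU legend of FIG. 2
    pad = dilation * (k // 2)
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=pad, dilation=dilation, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class MultiScaleCrossFusion(nn.Module):
    def __init__(self, c_in, c=64):
        super().__init__()
        self.stem = conv_bn_relu(c_in, c)              # step 1.1: conv on shallow input
        self.branch7 = conv_bn_relu(c, c, dilation=7)  # step 1.2: F1
        self.branch5 = conv_bn_relu(c, c, dilation=5)  # step 1.2: F2
        self.branch3 = conv_bn_relu(c, c, dilation=3)  # step 1.2: F3
        self.refine1 = conv_bn_relu(c, c)              # step 1.3: F1'
        self.fuse2 = conv_bn_relu(2 * c, c, k=1)       # step 1.4: F2' (1x1 assumed)
        self.fuse3 = conv_bn_relu(2 * c, c, k=1)       # step 1.5: F3' (1x1 assumed)
        self.fuse_all = conv_bn_relu(3 * c, c, k=1)    # step 1.6: F'  (1x1 assumed)
        self.head = conv_bn_relu(c, c)                 # step 1.7: final conv

    def forward(self, x, out_size):
        x = self.stem(x)
        f1, f2, f3 = self.branch7(x), self.branch5(x), self.branch3(x)
        f1p = self.refine1(f1)
        f2p = self.fuse2(torch.cat([f1p, f2], dim=1))  # cross fusion, not plain concat
        f3p = self.fuse3(torch.cat([f2p, f3], dim=1))
        fp = self.fuse_all(torch.cat([f1p, f2p, f3p], dim=1))
        # step 1.7: convolve then upsample to the target resolution
        return F.interpolate(self.head(fp), size=out_size,
                             mode='bilinear', align_corners=False)
```

Letting each fused result feed the next concatenation (F1′ → F2′ → F3′) is the cross fusion depicted in FIG. 3, as opposed to fusing the three dilated branches in a single concatenation.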
Step 2: the ResNet network forms the downsampling stage of the semantic branch; its output features are upsampled, and deep and shallow features are fused during the upsampling stage to obtain semantic features containing rich spatial information, as shown in FIG. 4. This step comprises the following sub-steps, with a code sketch after the list:
Step 2.1: obtain four features M1, M2, M3 and M4 from the downsampling stage of the semantic branch;
Step 2.2: apply convolution to feature M4 and multiply the result with feature M3 to obtain a new feature M3′;
Step 2.3: fuse features M3′ and M4 by concatenation and apply two convolution layers to obtain the output feature M3″;
Step 2.4: apply convolution to the output feature M3″ and multiply the result with feature M2 to obtain a new feature M2′;
Step 2.5: fuse features M2′ and M3″ by concatenation and apply two convolution layers to obtain the output feature M2″;
Step 2.6: apply convolution to the output feature M2″ and multiply the result with feature M1 to obtain a new feature M1′;
Step 2.7: fuse features M1′ and M2″ by concatenation and apply two convolution layers to obtain the semantic feature output by the semantic branch.
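Below is a minimal sketch of one spatial-attention fusion step (steps 2.2–2.3) between a deep feature and the next shallower one; chaining three such blocks over M4 → M3 → M2 → M1 produces the semantic-branch output of step 2.7. The bilinear upsampling of the deep feature (needed because M4 is spatially smaller than M3) and the channel widths are assumptions; the patent specifies only conv → multiply → concatenate → two convolutions. The conv_bn_relu helper and the torch/nn/F imports from the MSB sketch above are reused.

```python
class SpatialAttentionFusion(nn.Module):
    """One decoder step: gate the shallow feature with the deep one, then fuse."""

    def __init__(self, c_deep, c_shallow, c_out):
        super().__init__()
        self.gate = conv_bn_relu(c_deep, c_shallow)   # step 2.2: conv on the deep feature
        self.fuse = nn.Sequential(                    # step 2.3: two convolution layers
            conv_bn_relu(c_shallow + c_deep, c_out),
            conv_bn_relu(c_out, c_out),
        )

    def forward(self, deep, shallow):
        # bring the deep feature to the shallow feature's resolution (assumed)
        deep = F.interpolate(deep, size=shallow.shape[2:],
                             mode='bilinear', align_corners=False)
        gated = self.gate(deep) * shallow             # step 2.2: M3' = conv(M4) * M3
        return self.fuse(torch.cat([gated, deep], dim=1))  # step 2.3: M3''
```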
Step 3: fuse the edge feature output by the edge branch with the semantic feature output by the semantic branch to obtain the fused feature F′ and a finer segmentation result, as shown in FIG. 5. This step comprises the following sub-steps, with a code sketch after the list:
Step 3.1: fuse the edge feature and the semantic feature by concatenation, then apply convolution to obtain a new feature W′;
Step 3.2: apply global pooling, convolution and Sigmoid activation to feature W′ in turn to obtain a new feature W1′;
Step 3.3: multiply features W′ and W1′ to obtain a new feature W2′;
Step 3.4: fuse features W′ and W2′ by addition to obtain the fused feature F′; the features output by this step are a set of pixel data, which are visualized and converted into an image to obtain the segmented image;
Step 3: divide the data set into a training set and a test set, where the training set is used to train the model and the test set is used to evaluate its final performance. To verify the trained model with the test set, input the test images into the trained semantic segmentation model, compare the obtained semantic segmentation results with the test-set labels, and compute the segmentation accuracy; visualize the segmentation result of each input image to show the segmentation effect explicitly;
Step 4: apply the trained semantic segmentation model to image segmentation to obtain the segmentation result of an image.
To verify the effect of the invention, it is compared with FCN, PSPNet, Dilated FCN, RotEqNet and DeepLab v3+, which all perform well. The experimental data is the ISPRS Vaihingen data set, which is widely used in semantic segmentation. Table 1 gives the results of the comparative experiments.
Because the resolution of the images in the ISPRS Vaihingen data set is too large for a whole image to be input into the model at once, the images are cropped into 512 × 512 sub-blocks with a sliding window during training. The model is built with the PyTorch framework and trained on a GeForce RTX 2080 Ti. The Adam optimizer is used with an initial learning rate of 0.001; to improve the stability of training and to jump out of local minima, a cosine-annealing learning-rate schedule is adopted, with the three annealing cycles lasting 10, 20 and 40 epochs respectively. Finally, the model with the highest accuracy on the validation set is selected as the final result. A minimal sketch of this training configuration follows.
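In the sketch below, CosineAnnealingWarmRestarts with T_0 = 10 and T_mult = 2 reproduces exactly the 10/20/40-epoch annealing cycles mentioned above; the stand-in model, loss and random batch are placeholders for the real two-branch network and the sliding-window data loader, which the patent does not spell out.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 6, kernel_size=1)     # stand-in for the two-branch network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial lr 0.001
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, T_mult=2)           # annealing cycles of 10, 20, 40 epochs

for epoch in range(70):                    # 10 + 20 + 40
    # stand-in batch; real training iterates over 512x512 sliding-window crops
    images = torch.randn(2, 3, 512, 512)
    labels = torch.randint(0, 6, (2, 512, 512))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                       # advance the cosine schedule per epoch
```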
Table 1: comparative experiment results
As can be seen from Table 1, the proposed edge perception image semantic segmentation method based on self-adaptive feature fusion, verified on the ISPRS Vaihingen data set, achieves higher segmentation accuracy than the other networks, which demonstrates the effectiveness of the method.
Claims (5)
1. An edge perception image semantic segmentation method based on self-adaptive feature fusion, characterized by comprising the following steps:
Step 1: building a data set: collecting N images and applying pixel-level class labels to each image, so that each sample in the data set comprises one image and its pixel-level labeling result;
Step 2: building an edge perception image semantic segmentation model based on self-adaptive feature fusion, wherein the model is a two-branch network with a ResNet backbone, comprising an edge branch and a semantic branch;
Step 3: dividing the data set into a training set and a test set, training the model with the training set, and verifying the trained model with the test set;
Step 4: applying the trained semantic segmentation model to image segmentation to obtain the segmentation result of an image.
2. The edge perception image semantic segmentation method based on self-adaptive feature fusion as claimed in claim 1, wherein the second step comprises:
Step 1: using a ResNet network model as the downsampling stage of the semantic branch, leading an edge branch out of that downsampling stage, and performing a multi-scale cross-fusion operation on the edge features in the edge branch;
Step 2: upsampling the output features of the semantic-branch downsampling stage, and fusing deep and shallow features during the upsampling stage to obtain semantic features containing rich spatial information;
Step 3: fusing the edge feature output by the edge branch with the semantic feature output by the semantic branch to obtain the fused feature F′.
3. The edge perception image semantic segmentation method based on self-adaptive feature fusion as claimed in claim 1, wherein step 1 comprises:
Step 1.1: taking the shallow features output by the semantic-branch downsampling stage as the input features of the edge branch, and applying convolution;
Step 1.2: processing the edge features with atrous convolution layers with dilation rates of 7, 5 and 3 respectively to obtain three features F1, F2 and F3;
Step 1.3: applying convolution to feature F1 to obtain a new feature F1′;
Step 1.4: fusing features F1′ and F2 by concatenation to obtain a new feature F2′;
Step 1.5: fusing features F2′ and F3 by concatenation to obtain a new feature F3′;
Step 1.6: fusing features F1′, F2′ and F3′ by concatenation to obtain a new feature F′;
Step 1.7: applying convolution and upsampling to F′ in turn to obtain the final edge feature.
4. The edge perception image semantic segmentation method based on self-adaptive feature fusion as claimed in claim 1, wherein step 2 comprises:
Step 2.1: obtaining four features M1, M2, M3 and M4 from the downsampling stage of the semantic branch;
Step 2.2: applying convolution to feature M4 and multiplying the result with feature M3 to obtain a new feature M3′;
Step 2.3: fusing features M3′ and M4 by concatenation and applying two convolution layers to obtain the output feature M3″;
Step 2.4: applying convolution to the output feature M3″ and multiplying the result with feature M2 to obtain a new feature M2′;
Step 2.5: fusing features M2′ and M3″ by concatenation and applying two convolution layers to obtain the output feature M2″;
Step 2.6: applying convolution to the output feature M2″ and multiplying the result with feature M1 to obtain a new feature M1′;
Step 2.7: fusing features M1′ and M2″ by concatenation and applying two convolution layers to obtain the semantic feature output by the semantic branch.
5. The edge perception image semantic segmentation method based on self-adaptive feature fusion as claimed in claim 1, wherein step 3 comprises:
Step 3.1: fusing the edge feature and the semantic feature by concatenation, then applying convolution to obtain a new feature W′;
Step 3.2: applying global pooling, convolution and Sigmoid activation to feature W′ in turn to obtain a new feature W1′;
Step 3.3: multiplying features W′ and W1′ to obtain a new feature W2′;
Step 3.4: fusing features W′ and W2′ by addition to obtain the fused feature F′.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110864679.9A CN113658200B (en) | 2021-07-29 | 2021-07-29 | Edge perception image semantic segmentation method based on self-adaptive feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110864679.9A CN113658200B (en) | 2021-07-29 | 2021-07-29 | Edge perception image semantic segmentation method based on self-adaptive feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113658200A | 2021-11-16
CN113658200B | 2024-01-02
Family
ID=78490847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110864679.9A | Edge perception image semantic segmentation method based on self-adaptive feature fusion | 2021-07-29 | 2021-07-29 (granted as CN113658200B, Active)
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113658200B (en) |
- 2021-07-29: application CN202110864679.9A filed in CN; granted as patent CN113658200B (status: Active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170132769A1 (en) * | 2015-11-05 | 2017-05-11 | Google Inc. | Edge-Aware Bilateral Image Processing |
WO2018076212A1 (en) * | 2016-10-26 | 2018-05-03 | 中国科学院自动化研究所 | De-convolutional neural network-based scene semantic segmentation method |
US20200082541A1 (en) * | 2018-09-11 | 2020-03-12 | Apple Inc. | Robust Use of Semantic Segmentation for Depth and Disparity Estimation |
CN110263833A (en) * | 2019-06-03 | 2019-09-20 | 韩慧慧 | Based on coding-decoding structure image, semantic dividing method |
CN111127493A (en) * | 2019-11-12 | 2020-05-08 | 中国矿业大学 | Remote sensing image semantic segmentation method based on attention multi-scale feature fusion |
CN111797779A (en) * | 2020-07-08 | 2020-10-20 | 兰州交通大学 | Remote sensing image semantic segmentation method based on regional attention multi-scale feature fusion |
CN112634296A (en) * | 2020-10-12 | 2021-04-09 | 深圳大学 | RGB-D image semantic segmentation method and terminal for guiding edge information distillation through door mechanism |
CN112541503A (en) * | 2020-12-11 | 2021-03-23 | 南京邮电大学 | Real-time semantic segmentation method based on context attention mechanism and information fusion |
CN113033570A (en) * | 2021-03-29 | 2021-06-25 | 同济大学 | Image semantic segmentation method for improving fusion of void volume and multilevel characteristic information |
CN113159051A (en) * | 2021-04-27 | 2021-07-23 | 长春理工大学 | Remote sensing image lightweight semantic segmentation method based on edge decoupling |
CN113034505A (en) * | 2021-04-30 | 2021-06-25 | 杭州师范大学 | Glandular cell image segmentation method and device based on edge perception network |
Non-Patent Citations (2)

Title
---
岑仕杰; 何元烈; 陈小聪: "Monocular depth estimation combining attention and unsupervised deep learning" (结合注意力与无监督深度学习的单目深度估计), Journal of Guangdong University of Technology (广东工业大学学报), no. 04, pp. 39-45 *
董子昊: "Application of multi-class edge-aware methods in image segmentation" (多类别的边缘感知方法在图像分割中的应用), Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), vol. 31, no. 7, pp. 1075-1085 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113449727A (en) * | 2021-07-19 | 2021-09-28 | 中国电子科技集团公司第二十八研究所 | Camouflage target detection and identification method based on deep neural network |
CN113936204A (en) * | 2021-11-22 | 2022-01-14 | 安徽师范大学 | High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network |
CN113936204B (en) * | 2021-11-22 | 2023-04-07 | 安徽师范大学 | High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network |
CN114565628A (en) * | 2022-03-23 | 2022-05-31 | 中南大学 | Image segmentation method and system based on boundary perception attention |
CN114565628B (en) * | 2022-03-23 | 2022-09-13 | 中南大学 | Image segmentation method and system based on boundary perception attention |
CN114463187A (en) * | 2022-04-14 | 2022-05-10 | 合肥高维数据技术有限公司 | Image semantic segmentation method and system based on aggregation edge features |
CN114463187B (en) * | 2022-04-14 | 2022-06-17 | 合肥高维数据技术有限公司 | Image semantic segmentation method and system based on aggregation edge features |
CN114648668A (en) * | 2022-05-18 | 2022-06-21 | 浙江大华技术股份有限公司 | Method and apparatus for classifying attributes of target object, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113658200B (en) | 2024-01-02 |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant