CN113096136A - Panoramic segmentation method based on deep learning - Google Patents
Panoramic segmentation method based on deep learning
- Publication number
- CN113096136A (application CN202110337987.6A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- semantic
- sub
- network
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11 — Image analysis; Segmentation, edge detection; Region-based segmentation
- G06N3/045 — Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
- G06N3/08 — Neural networks; Learning methods
- G06V10/44 — Extraction of image or video features; Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06T2207/20081 — Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training; Learning
Abstract
The invention discloses a panorama segmentation technique based on deep learning, with good universality and generalization ability in the panorama segmentation direction. The sub-networks of a panorama segmentation network differ greatly yet are closely connected. On the one hand, semantic segmentation performs pixel-level segmentation of the image background categories and attends chiefly to the semantic information of the scene, while instance segmentation focuses on segmenting the individual instances in the image and, in character, on its structural information. The invention therefore designs an attention module matched to the characteristics of each sub-network, so that each sub-network can better focus on its own segmentation objects. On the other hand, the background of an image is often semantically related to the foreground, and reasonable use of the contextual semantics of background and foreground can improve the sub-networks' segmentation. The invention therefore designs a semantic-assisted instance segmentation module so that feature information is better exchanged between the sub-networks and they promote each other. The method has good universality and can readily be applied to a variety of panorama segmentation networks.
Description
Technical Field
The invention belongs to the field of computer vision, and relates to an image segmentation technology that performs pixel-level segmentation of images for scene analysis.
Background
Image segmentation is a major research hotspot in the field of computer vision. It aims to divide an image into several regions according to characteristics such as color, shape and semantics. Before deep learning, image segmentation mostly relied on traditional image processing methods such as thresholding, region growing and edge detection. With the rise and rapid development of neural networks, the field of image segmentation has made great progress in many respects. Image segmentation under deep learning mainly comprises semantic segmentation, instance segmentation and panorama segmentation; a schematic comparison of the three is shown in fig. 1, with semantic segmentation on the left, instance segmentation in the middle and panorama segmentation on the right.
The main task of image semantic segmentation is to predict a category for every pixel in an image, realizing pixel-level segmentation. Instance segmentation combines semantic segmentation with object detection: it produces pixel-level segmentation of instance objects and, in addition to classifying each pixel, assigns it a corresponding instance ID. In general, semantic segmentation focuses on segmenting the image background, while instance segmentation focuses on segmenting foreground instances. To unify the two tasks, scholars recently proposed a new segmentation task: panorama segmentation. Panorama segmentation integrates semantic segmentation and instance segmentation; its main task is to predict a semantic category for every pixel of a scene image (stuff) and to assign an instance identification number to the pixels belonging to instance targets (things), achieving a more comprehensive scene understanding. Panorama segmentation provides rich semantic information and fine scene image segmentation, and is a key technology for future fields such as automatic driving and biomedicine. However, because panorama segmentation is more complicated than semantic segmentation or instance segmentation alone, it has not yet seen industrial application.
Because semantic segmentation and instance segmentation are different visual tasks that differ considerably in input data, network structure, training strategy and so on, panorama segmentation typically uses two sub-networks, one for each task, and then fuses their results in a post-processing step to obtain the final panorama segmentation result. The segmentation quality of each sub-network therefore directly affects the panorama segmentation, and this design also introduces a large amount of redundant computation. In a scene image the foreground and background are often closely related, so how to use the information shared between the two sub-networks to promote each other and reduce unnecessary computation is an important research topic in panorama segmentation. The present invention improves the performance of a panorama segmentation network by improving its sub-networks. It has good universality and can be conveniently combined with various panorama segmentation networks.
Disclosure of Invention
To effectively improve the performance of the panorama segmentation sub-networks, the invention designs a semantic attention module and an example attention module, matched to the characteristics of the respective sub-networks, to enhance their segmentation ability. In addition, a semantic-assisted instance module is designed according to the correlation between semantic segmentation and instance segmentation, strengthening the transfer of feature information between the networks.
The technical scheme adopted by the invention is as follows:
Step 1: ResNet-50 and FPN networks are used as the backbone network for panorama segmentation feature extraction, extracting the feature maps C1, C2, C3, C4 and C5 that carry multi-scale feature information.
Step 2: C2-C5 from step 1 are sent, as shared features, into the semantic segmentation sub-network and the instance segmentation sub-network.
Step 3 (one of the core contents of the patent): in the semantic segmentation sub-network, the shared features pass through a semantic attention module and are then up-sampled to obtain the semantic segmentation feature map. The semantic attention module is shown in fig. 2.
Step 4 (one of the core contents of the patent): in the instance segmentation sub-network, the shared features pass through the example attention module and then through an RPN network to obtain the instance candidate anchor frames. The example attention module is shown in fig. 3.
Step 5 (one of the core contents of the patent): the semantic segmentation feature map from step 3 and the instance candidate anchor frames from step 4 pass through the semantic-assisted instance module, which fuses semantic information into the instance features. The semantic-assisted instance segmentation module is shown in fig. 4.
Step 6: semantic segmentation and instance segmentation are performed from the respective sub-network feature maps, and the results are fused to obtain the final panorama segmentation result. The overall structure of the network is shown in fig. 5.
Compared with the prior art, the invention effectively strengthens the panorama segmentation sub-tasks and exploits their internal relationship so that the sub-networks promote each other, thereby improving the panorama segmentation result. The method has good universality and is applicable to various panorama segmentation networks.
Drawings
FIG. 1: schematic diagram of semantic segmentation, instance segmentation and panorama segmentation.
FIG. 2: schematic diagram of the semantic attention module of the invention.
FIG. 3: schematic diagram of the example attention module of the invention.
FIG. 4: schematic diagram of the semantic-assisted instance segmentation module of the invention.
FIG. 5: overall structure of the panorama segmentation network of the invention.
FIG. 6: panorama segmentation results of the invention.
FIG. 7: comparison of the invention with mainstream panorama segmentation algorithms on the COCO dataset.
FIG. 8: comparison of the invention with mainstream panorama segmentation algorithms on the Cityscapes dataset.
Detailed Description
The invention is further described below with reference to the accompanying drawings and tables.
First, the network extracts features from the input image using the shared feature module of the panorama segmentation network, composed of a ResNet-50 and an FPN. The ResNet-50 comprises five stages, denoted res1, res2, res3, res4 and res5, whose output feature layers are 1/2, 1/4, 1/8, 1/16 and 1/32 of the original image size, respectively; we denote these outputs C1, C2, C3, C4 and C5. They are fed into a conventional FPN to obtain shared features of different scales for the network (denoted P1-P5 in the detailed steps below). The shared features are then sent into the semantic segmentation sub-network and the instance segmentation sub-network for the respective sub-task segmentation.
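A minimal sketch of such a shared feature module, assuming PyTorch and torchvision; the class name `SharedFeatures` and the use of torchvision's `FeaturePyramidNetwork` are our illustrative choices, not taken from the patent:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from torchvision.ops import FeaturePyramidNetwork

class SharedFeatures(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        r = resnet50()
        # res1 is the stem (conv1 halves the resolution, maxpool halves it again);
        # res2-res5 are the four residual stages
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.stages = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])
        # FPN fuses the C2-C5 stage outputs into same-width shared features
        self.fpn = FeaturePyramidNetwork([256, 512, 1024, 2048], out_channels)

    def forward(self, x):
        x = self.stem(x)                  # 1/4 resolution after the stem
        feats = {}
        for i, stage in enumerate(self.stages):
            x = stage(x)
            feats[f"c{i + 2}"] = x        # C2 (1/4) down to C5 (1/32)
        return self.fpn(feats)            # multi-scale shared features
```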
In a semantic segmentation sub-network, the shared features first go through a semantic attention module. The implementation of the semantic attention module is as follows:
As shown in fig. 2, for an input feature map A ∈ R^(C×H×W), the module first reduces the channel dimension of the feature map to 1 through a 1×1 convolution, and a reshape operation then maps the features into a one-dimensional vector B, in which each element represents the information of the corresponding pixel of the original feature map. From this, the correlation coefficient between any two points of the feature map can be obtained:

C = B^T B   (1)

where B = [b_1, b_2, …, b_n] represents the feature intensity of each pixel of the feature map, and c_ij represents the correlation between pixels b_i and b_j; the greater the correlation, the stronger the enhancement effect on the features. Finally, the correlations are mapped back onto each pixel, completing the semantic attention mechanism, as shown in formula (2):

S_i = Σ_j c_ij b_j   (2)

S is added to the original features to obtain the final semantic segmentation feature map.
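A minimal sketch of the semantic attention computation of formulas (1) and (2), assuming PyTorch; the class name and the broadcast-add of S onto the input are our reading of the description, not code from the patent:

```python
import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv2d(channels, 1, kernel_size=1)   # C channels -> 1

    def forward(self, a):                         # a: (N, C, H, W)
        n, _, h, w = a.shape
        b = self.reduce(a).view(n, 1, h * w)      # vector B, one value per pixel
        c = b.transpose(1, 2) @ b                 # C = B^T B, (N, HW, HW), formula (1)
        s = (b @ c.transpose(1, 2)).view(n, 1, h, w)  # S_i = sum_j c_ij b_j, formula (2)
        return a + s                              # add S back onto the original features
```

The patent does not mention normalizing the correlation matrix; practical implementations often apply a softmax over each row of C before the aggregation.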
The instance segmentation sub-network adopts a Mask R-CNN network as its base network, to which the example attention module and the semantic-assisted instance segmentation module are added for enhancement. The example attention module is implemented as follows:
as shown in fig. 3. Example attention module aims at learning interrelationships between different features, without focusing on detailed information inside the features. Thus for an input feature layer A ∈ RC*H*WThe example attention module first performs a Global Average Pooling (GAP) operation on the feature map to reduce each feature map to 1 × 1, reducing the amount of network computation.
Then, the correlation among different characteristic layers is learned through two 1 × 1 convolution kernels, and a ReLU layer is added after the first 1 × 1 convolution kernel so as to enhance the network nonlinear learning capability.
C=Conv(B) (4)
D=Conv(ReLU(C)) (5)
The first convolution operation reduces the dimension of the feature vector B by 16 times, and the second convolution operation reduces the feature dimension to the original dimension. After two convolution operations, the size of each element in D represents the sum of weights contributed to the element by other feature layers, and the weights are obtained by network learning. And finally multiplying the learned weight by the original feature map to obtain a final example segmentation feature map.
Sc=DcAc (6)
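A minimal sketch of the example attention module of formulas (3)-(6), assuming PyTorch; the class name is ours, and the reduction factor of 16 follows the description above:

```python
import torch
import torch.nn as nn

class InstanceAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)                        # B = GAP(A), formula (3)
        self.fc1 = nn.Conv2d(channels, channels // reduction, 1)  # C = Conv(B), formula (4)
        self.fc2 = nn.Conv2d(channels // reduction, channels, 1)  # D = Conv(ReLU(C)), formula (5)

    def forward(self, a):                        # a: (N, C, H, W)
        b = self.gap(a)                          # per-channel descriptor, (N, C, 1, 1)
        d = self.fc2(torch.relu(self.fc1(b)))    # learned per-channel weights
        return d * a                             # S_c = D_c * A_c, formula (6)
```

In structure this is close to a squeeze-and-excitation block, except that the patent describes no gating non-linearity (e.g. sigmoid) on the weights D.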
The semantic-assisted instance segmentation module is implemented as follows:
As shown in fig. 4, training the semantic segmentation sub-network yields a feature map containing rich semantic information. The semantic-assisted instance segmentation module first reduces the semantic branch feature map to a single channel through a 1×1 convolution, giving the semantic segmentation features a stronger, more compact expression. This feature is then concatenated with the RPN output feature layer, and a 1×1 convolution fuses the semantic feature information into the instance segmentation features.
Finally, the segmentation results of the two sub-networks are fused to obtain the final panorama segmentation result.
The specific method comprises the following steps:
(1) and the ResNet-50 backbone network performs feature extraction on the input image to obtain five feature layers of C1, C2, C3, C4 and C5. And taking C2-C5 as network input characteristics.
(2) And sending the characteristic diagrams C2-C5 into an FPN network to obtain characteristic diagrams fusing multi-scale information, and marking the characteristic diagrams as P1-P5.
(3) And the P1-P5 are used as shared features and sent into a semantic segmentation sub-network, and a semantic segmentation feature map is obtained through a semantic attention module and an up-sampling process. And performing semantic segmentation on the image according to the semantic segmentation feature map.
(4) P1-P5 are sent into the instance segmentation sub-network as a shared feature, and instance candidate anchor frames are obtained through an instance attention module and an RPN.
(5) And (4) fusing the semantic segmentation features in the step (3) with the instance segmentation features in the step (4) by using a semantic auxiliary instance segmentation module to obtain instance features with fused semantic information.
(6) And (5) performing Mask branch Mask generation and Box and Class prediction according to the example characteristics in the step (5). An instance segmentation result is generated.
(7) And (4) fusing the semantic segmentation result in the step (3) and the example segmentation result in the step (6) to obtain a panoramic segmentation result.
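As a sketch of how steps (1)-(7) might compose end to end, assuming PyTorch; the RPN, the Mask R-CNN heads and the panorama fusion post-processing are passed in as opaque callables, and every name here is illustrative rather than from the patent:

```python
def panoptic_forward(image, backbone, sem_attn, sem_head, inst_attn, rpn,
                     sem_assist, mask_head, fuse_results):
    shared = backbone(image)                      # steps (1)-(2): shared features P1-P5
    p = list(shared.values())[0]                  # e.g. the highest-resolution level
    sem_feat = sem_attn(p)                        # step (3): semantic attention
    sem_logits = sem_head(sem_feat)               # per-pixel class scores
    inst_feat = inst_attn(p)                      # step (4): example attention
    proposals = rpn(inst_feat)                    # instance candidate anchor frames
    fused = sem_assist(sem_feat, inst_feat)       # step (5): semantic-assisted fusion
    masks, boxes, classes = mask_head(fused, proposals)      # step (6): Mask/Box/Class
    return fuse_results(sem_logits, masks, boxes, classes)   # step (7): final fusion
```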
The invention designs the semantic attention module and the example attention module according to the characteristics of semantic segmentation and instance segmentation. The objects handled by the semantic segmentation sub-network are the background stuff of the image: they have no fixed form and are generally uncountable regions such as sky, lawn and road surface, characterized by strong dependence on spatial position and rich semantic information. The spatial position of a pixel in the image and its contextual semantics therefore greatly influence semantic segmentation. The semantic attention module encodes a wider range of semantic information into the local receptive field to capture the spatial dependency between any two positions, so that positions with similar characteristics reinforce each other. The objects handled by the instance segmentation sub-branch are the foreground targets of the image, and instance segmentation pays more attention to extracting the structural features of the image during training. In deep learning, a convolutional neural network typically produces a set of multi-channel feature maps at each layer of feature learning, where each channel represents the network's response to a certain feature of the image. The example attention module therefore establishes dependencies among the channels to capture the interrelation of different structural features, enhancing the sub-network's ability to segment each instance object.
Addressing the correlation between semantic segmentation and instance segmentation, the invention designs the semantic-assisted instance segmentation module so that the subtasks promote each other. In a scene, background and foreground are often closely related: a specific target object appears far more readily in some semantic scenes than in others. Reasonably applied, the semantic information of the scene therefore provides good guidance for detecting and segmenting target objects. The semantic segmentation feature map contains rich scene semantics, and by fusing it with the instance segmentation feature map, the semantic-assisted instance segmentation module lets instance segmentation acquire context semantics and determine instances more accurately. Fig. 6 shows the panorama segmentation results of the invention: the first column is the original image, the second the ground-truth labels, and the third the panorama segmentation result of the invention.
Comparisons with current mainstream panorama segmentation algorithms on the COCO and Cityscapes datasets are shown in fig. 7 and fig. 8, where PQ is the panorama segmentation quality; a higher value indicates a better segmentation result. According to the tabulated results, the panorama segmentation method of the invention achieves higher accuracy.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except combinations where mutually exclusive features or/and steps are present.
Claims (3)
1. A panorama segmentation method based on deep learning, characterized by comprising the following steps:
step 1: using ResNet-50 and an FPN network as the backbone network for panorama segmentation feature extraction, and extracting the feature maps P1, P2, P3, P4 and P5 as shared features;
step 2: sending the shared features from step 1 into a semantic segmentation sub-network for semantic segmentation;
step 3: sending the shared features from step 1 into an RPN network for instance anchor frame prediction;
step 4: fusing the semantic segmentation feature map obtained in step 2 with the feature map obtained in step 3 through a semantic-assisted instance module to obtain an instance segmentation feature map;
step 5: carrying out instance segmentation according to the instance segmentation feature map from step 4;
step 6: fusing the segmentation results of step 2 and step 5 to obtain the panorama segmentation result.
2. The method of claim 1, wherein in the semantic segmentation sub-network of step 2 the feature weights are first optimized by a semantic attention module.
3. The method of claim 1, wherein in the instance segmentation sub-network of step 3 the feature weights are first optimized by an example attention module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110337987.6A | 2021-03-30 | 2021-03-30 | Panoramic segmentation method based on deep learning
Publications (1)
Publication Number | Publication Date |
---|---|
CN113096136A | 2021-07-09
Family
ID=76670836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110337987.6A | Panoramic segmentation method based on deep learning | 2021-03-30 | 2021-03-30
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113096136A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200082219A1 (en) * | 2018-09-07 | 2020-03-12 | Toyota Research Institute, Inc. | Fusing predictions for end-to-end panoptic segmentation |
CN110008808A (en) * | 2018-12-29 | 2019-07-12 | 北京迈格威科技有限公司 | Panorama dividing method, device and system and storage medium |
US20200210721A1 (en) * | 2019-01-02 | 2020-07-02 | Zoox, Inc. | Hierarchical machine-learning network architecture |
CN110276765A (en) * | 2019-06-21 | 2019-09-24 | 北京交通大学 | Image panorama dividing method based on multi-task learning deep neural network |
CN111259809A (en) * | 2020-01-17 | 2020-06-09 | 五邑大学 | Unmanned aerial vehicle coastline floating garbage inspection system based on DANet |
CN111242954A (en) * | 2020-01-20 | 2020-06-05 | 浙江大学 | Panorama segmentation method with bidirectional connection and shielding processing |
CN111428726A (en) * | 2020-06-10 | 2020-07-17 | 中山大学 | Panorama segmentation method, system, equipment and storage medium based on graph neural network |
CN111862140A (en) * | 2020-06-11 | 2020-10-30 | 中山大学 | Panoramic segmentation network and method based on collaborative module level search |
Non-Patent Citations (1)
Title |
---|
LI Xinwei et al.: "A region segmentation method for intersection surveillance images", Computer Applications and Software *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113902692A (en) * | 2021-09-26 | 2022-01-07 | 北京医准智能科技有限公司 | Blood vessel segmentation method, device and computer readable medium |
CN114708317A (en) * | 2022-05-24 | 2022-07-05 | 北京中科慧眼科技有限公司 | Matching cost matrix generation method and system based on binocular stereo matching |
CN115908442A (en) * | 2023-01-06 | 2023-04-04 | 山东巍然智能科技有限公司 | Image panorama segmentation method for unmanned aerial vehicle ocean monitoring and model building method |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210709