CN113139450A - Camouflage target detection method based on edge detection - Google Patents

Camouflage target detection method based on edge detection

Info

Publication number
CN113139450A
CN113139450A (application CN202110409358.XA)
Authority
CN
China
Prior art keywords
detection
target
feature
camouflage
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110409358.XA
Other languages
Chinese (zh)
Inventor
胡晓
向俊将
钟小容
欧嘉敏
黄奕秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202110409358.XA priority Critical patent/CN113139450A/en
Publication of CN113139450A publication Critical patent/CN113139450A/en
Pending legal-status Critical Current

Classifications

    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06N3/045 Combinations of networks (neural network architectures)
    • G06N3/08 Learning methods
    • G06T7/13 Edge detection
    • G06V10/464 Salient features using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V2201/07 Target detection


Abstract

The invention discloses a camouflaged target detection method based on edge detection, comprising the following steps: hierarchically label the camouflaged image to be detected, input the labelled image into a constructed and trained camouflaged target detection network model, and complete the detection of the camouflaged target with that model. The detection network model comprises a backbone network and RF, EF, SA and PDC modules. The backbone network extracts a multi-scale feature map from the camouflaged image to be detected and preserves the information of the different feature layers with a dense connection strategy; the densely connected multi-scale features are fed into the RF module, which enlarges the receptive field. Because the method additionally enhances edge features, it can further highlight the camouflaged salient target and refine its edge features, improving the ability to detect camouflaged targets and broadening the method's applicable scenarios.

Description

Camouflage target detection method based on edge detection
Technical Field
The invention relates to the technical field of camouflaged target detection, and in particular to a camouflaged target detection method based on edge detection.
Background
Camouflage lets an animal or object blend into its surroundings, whether to ambush prey or to hide from predators. Because a camouflaged target is very similar to the surrounding background, neither general object detection algorithms nor salient object detection algorithms can detect it, for the following reasons. 1. Object detection algorithms are usually trained on the COCO dataset: the more samples the training set contains, the more robust the algorithm and the stronger its detection and recognition ability. However, only a few targets in the COCO dataset closely resemble their surroundings, so the model lacks the guidance of such label information, and a standard object detector has essentially no ability to detect camouflaged targets. 2. A typical salient object detection algorithm is designed to find one or more conspicuous objects in an image, whereas camouflaged target detection must find a target that can only be spotted by careful discrimination; the two tasks therefore differ in both design and principle. 3. When pooling, downsampling and similar operations in a convolutional neural network shrink the image, some pixels are lost, and with them part of the edge features; losing these edges fuses the target with the background, so the algorithm cannot separate the target from its environment. In summary, the ability of existing algorithms to detect camouflaged targets remains to be improved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a camouflaged target detection method based on edge detection that improves the ability to detect camouflaged targets.
The purpose of the invention is realized by the following technical scheme:
A camouflaged target detection method based on edge detection comprises the following steps: hierarchically label the camouflaged image to be detected, input the labelled image into a constructed and trained camouflaged target detection network model, and complete the detection of the camouflaged target with that model. The camouflaged target detection network model comprises a backbone network and RF, EF, SA and PDC modules. The backbone network extracts a multi-scale feature map from the camouflaged image to be detected and preserves the information of the different feature layers with a dense connection strategy; the densely connected multi-scale features are fed into the RF module, which enlarges the receptive field; the EF module extracts the edge features and detection features of the camouflaged image to be detected and fuses and outputs the edge features E and detection features S; the SA module suppresses interference from irrelevant detection features and enhances the middle feature layer L3 of the backbone network; finally, the PDC module aggregates the features of different layers to complete the detection of the camouflaged target.
Preferably, the hierarchical labeling of the camouflaged image to be detected comprises: labeling camouflaged target images of each type hierarchically according to the category, bounding box and attributes of the camouflaged target images to be detected.
Preferably, constructing and training the camouflaged target detection network model comprises:
S11, hierarchically labeling the pre-collected camouflaged target images of each type to obtain a camouflaged image dataset, and dividing the dataset into a training set and a test set;
S12, constructing the camouflaged target detection network model;
S13, training the constructed model with the training set;
S14, testing the trained model with the test set.
Preferably, the backbone network comprises L1, L2, L3, L4-1, L4-2, L5-1 and L5-2, wherein L4-1/L4-2 and L5-1/L5-2 are parallel structures.
Preferably, the RF module comprises five parallel branch structures. Each branch first applies a convolution of size (1, 1); the first three parallel branches then apply dilated convolutions with dilation coefficients Dk = k for k = 3, 5, 7 and are concatenated with the fourth parallel branch; the concatenated result passes through another convolution of size (1, 1) and is added to the fifth parallel branch to obtain the final output.
Preferably, the EF module is configured to divide four layers of features in the backbone network into edge features E and detection features S, fuse and output the edge features E and detection features S, construct a loss between the detection features S and the detection label values and a loss between the edge features E and the edge label values, and input the detection features S to the SA module;
wherein the edge features E comprise E1, E2, E3 and E4, and the detection features S comprise S1, S2, S3 and S4.
Preferably, the formula for fusing the edge features E and the detection features S is:

Ei+1 = E + g(Ei, S)    (1)

where g is defined by equation (2), which appears only as an unrendered image in the original document; per the description, it is composed of a convolution Conv(·), a channel- and size-adjusting function f(·), element-wise multiplication and a cumulative product over the detection features.
Preferably, the SA module convolves the detection feature S1 with a Gaussian convolution kernel to generate a Max feature and a Min feature, subtracts the Min feature from the Max feature to obtain an Attention feature, normalizes the Attention feature and multiplies it with the Input to obtain an enhanced camouflaged target image, wherein the Max feature is the maximum of S1 after the Gaussian convolution and the Min feature the minimum.
Preferably, the PDC module is configured to aggregate the features of different layers and extract the information of each feature map.
Preferably, the loss function for training the constructed camouflaged target detection network model with the training set is the cross-entropy loss Lce, and the total loss function L of the camouflaged target detection network model is:

L = λ1·Loss-E + λ2·Loss-EF + λ3·Loss-S,    (3)
Loss-E = Lce(E, GE),    (4)
Loss-EF = Lce(S1, GS),    (5)
Loss-S = Lce(S2, GS);    (6)

wherein Loss-E supervises the edges of the camouflaged target, Loss-EF and Loss-S directly supervise the camouflaged target, and λ1, λ2, λ3 are weight-balancing factors of the respective losses; E denotes the predicted edge map of the camouflaged target, Si the predicted saliency map of the camouflaged target, GS the saliency label of the camouflaged target, and GE its edge label.
Compared with the prior art, the invention has the following advantages:
the invention realizes the detection of the disguised target by utilizing the deep learning technology. Due to the idea of additionally enhancing the edge characteristics, the significance target of the camouflage can be further highlighted and the edge characteristics can be refined, so that the capability of detecting the camouflage target can be improved, and the use scene of the invention is expanded. The invention is a detection model obtained by training on a large-scale data set, and has better robustness and universality.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram analyzing how edge detection improves the accuracy of camouflaged target detection in the present invention.
Fig. 2 is a structural diagram of a disguised object detection network model of the present invention.
Fig. 3 is a structural diagram of an RF module of the present invention.
Fig. 4 is a structural diagram of an EF module of the present invention.
Fig. 5 is a structural diagram of an SA module of the present invention.
Fig. 6 is a structural view of a PDC module of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
In fig. 1, (a) is a sharp binary image, (b) the edge of the sharp binary image, (c) a blurred binary image, (d) the edge of the blurred binary image, and (e) the edge-supplemented image. The color of the object (a butterfly) is similar to the background color, and the texture of the object (the butterfly's wings) is similar to the background texture (leaves), so the object can be considered "camouflaged". Fig. 1 (c) is the result of a salient object detection algorithm; it shows that the ability of salient object detection to find camouflaged targets needs improvement. Supplementing it with the edge image (fig. 1 (b)) yields fig. 1 (e), in which the camouflaged target (the butterfly) is essentially detected. Enhancing the target edge thus not only refines the edge features but also effectively re-emphasizes the target. The invention therefore proposes a camouflaged target detection method based on edge detection, which improves overall detection ability and accuracy, enabling wider use in practical camouflage-related scenarios (such as the military and biological fields) and raising the efficiency of the relevant practitioners. The invention mainly applies deep learning: an edge detection module is added to the neural network, and edge information is used effectively by combining edge features with detection features to separate the camouflaged target from the background environment.
Referring to figs. 2-6, a camouflaged target detection method based on edge detection comprises: hierarchically labeling the camouflaged image to be detected, inputting the labelled image into a constructed and trained camouflaged target detection network model, and completing the detection of the camouflaged target with that model. The camouflaged target detection network model comprises a backbone network and RF, EF, SA and PDC modules. The backbone network extracts a multi-scale feature map of the camouflaged image to be detected and preserves the information of different feature layers with a dense connection strategy; the densely connected multi-scale features are input into the RF module, which enlarges the receptive field; the EF module extracts the edge features and detection features of the camouflaged image and fuses and outputs the edge features E and detection features S; the SA module suppresses interference from irrelevant detection features and enhances the middle feature layer L3 of the backbone network; finally, the PDC module aggregates the features of different layers to complete the detection of the camouflaged target.
In this embodiment, the camouflaged image to be detected is labeled hierarchically: camouflaged target images of each type are labeled according to the category, bounding box and attributes of the camouflaged target images to be detected.
In this embodiment, constructing and training a camouflage target detection network model includes:
S11, hierarchically label the pre-collected camouflaged target images of each type to obtain a camouflaged image dataset, and divide the dataset into a training set and a test set in the ratio 7:3; all pre-collected camouflaged target images are labeled hierarchically in the order image category -> bounding box -> attributes.
S12, constructing a camouflage target detection network model;
S13, train the constructed camouflaged target detection network model with the training set. In this embodiment, the loss function for training is the cross-entropy loss Lce, and the total loss function L of the camouflaged target detection network model is:

L = λ1·Loss-E + λ2·Loss-EF + λ3·Loss-S,    (3)
Loss-E = Lce(E, GE),    (4)
Loss-EF = Lce(S1, GS),    (5)
Loss-S = Lce(S2, GS);    (6)

wherein Loss-E supervises the edges of the camouflaged target, Loss-EF and Loss-S directly supervise the camouflaged target, and λ1, λ2, λ3 are weight-balancing factors of the respective losses; E denotes the predicted edge map of the camouflaged target, Si the predicted saliency map of the camouflaged target, GS the saliency label of the camouflaged target, and GE its edge label.
And S14, testing the trained camouflage target detection network model by using the test set.
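The data preparation of step S11 can be sketched as follows. This is a minimal illustration under assumptions: the record fields (`image`, `category`, `bbox`, `attributes`) and file names are hypothetical stand-ins for the patent's "category -> bounding box -> attributes" hierarchy, not names from the patent.

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=0):
    """Shuffle a list of labelled samples and split it train_ratio : (1 - train_ratio),
    i.e. 7:3 by default, as in step S11."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical hierarchical records: category -> bounding box -> attributes.
samples = [{"image": f"img_{i:04d}.png",   # illustrative file name
            "category": "insect",           # level 1: image category
            "bbox": [12, 8, 64, 48],        # level 2: bounding box (x, y, w, h)
            "attributes": ["occluded"]}     # level 3: attributes
           for i in range(10)]
train, test = split_dataset(samples)
```

Fixing the shuffle seed keeps the split reproducible across training runs.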
In this embodiment, the backbone network comprises L1, L2, L3, L4-1, L4-2, L5-1 and L5-2, where L4-1/L4-2 and L5-1/L5-2 are parallel structures whose network structures are identical but whose weights are not shared. A backbone pretrained on the ImageNet dataset generally already has detection and segmentation ability; the most common networks such as VGG and ResNet may be adopted, without specific limitation here.
In this embodiment, the structure of the RF module is shown in fig. 3. The RF module comprises five parallel branch structures; each branch first applies a convolution of size (1, 1), the first three parallel branches then apply dilated convolutions with dilation coefficients Dk = k for k = 3, 5, 7 and are concatenated with the fourth parallel branch, and the concatenated result passes through another convolution of size (1, 1) and is added to the fifth parallel branch to obtain the final output. The RF module takes the densely connected features as its input and imitates the receptive field of human vision, making the model more robust (better transferability), enlarging the search range of the neural network and capturing global information.
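The effect of the three dilated branches can be illustrated with a bare-bones dilated convolution (a naive loop, not the patented module itself): a 3x3 kernel with dilation d covers a (2d + 1) x (2d + 1) window, so d = 3, 5, 7 yields 7x7, 11x11 and 15x15 receptive fields at no extra parameter cost.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2D dilated convolution of a single-channel map (naive sketch)."""
    k = kernel.shape[0]
    span = dilation * (k - 1) + 1            # pixels covered per side
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = (patch * kernel).sum()
    return out

# Receptive field per RF branch: dilation * (kernel_size - 1) + 1
fields = [d * (3 - 1) + 1 for d in (3, 5, 7)]   # [7, 11, 15]
```

The spatial size is preserved by the padding, which is what allows the three branch outputs to be concatenated channel-wise as the module describes.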
In this embodiment, as shown in fig. 4, the EF module first divides the four layers of features output by the RF module into edge features E (comprising E1, E2, E3, E4) and detection features S (S1, S2, S3, S4); the edge feature of S1 is E1, that of S2 is E2, and so on. Then, to sharpen the boundary of the detected target in the detection result, the EF module fuses the edge features with the detection features. In this embodiment, the formula for fusing the edge features E and the detection features S is:

Ei+1 = E + g(Ei, S)    (1)

where g is defined by equation (2), which appears only as an unrendered image in the original document. After optimization by the EF module, a loss is constructed between the detection features S and the detection label values, and between the edge features E and the edge label values; the label values are ground truth produced together with the dataset during dataset construction. Meanwhile, the detection features S optimized by the EF module are fed to the SA module as its input (i.e., the attention parameter). In equation (2), Conv(·) denotes a convolution operation, f(·) a function that adjusts feature channels and size to enable feature multiplication, ⊗ element-wise multiplication, and ∏ a cumulative product.
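Since equation (2) is rendered only as an image, the exact form of g is not recoverable here; the following is therefore only a guessed data-flow sketch of one fusion step, assuming g is a 1x1 convolution of the element-wise product of E_i and a size-matched S. The names `conv1x1`, `resize_like` and `fuse_step` are illustrative, not from the patent.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution as a per-pixel channel mix: weights (C_out, C_in), x (C_in, H, W)."""
    return np.einsum("oc,chw->ohw", w, x)

def resize_like(s, e):
    """Crude stand-in for f(.): tile/crop s's channels to match e's channel count."""
    reps = -(-e.shape[0] // s.shape[0])          # ceil division
    return np.tile(s, (reps, 1, 1))[: e.shape[0]]

def fuse_step(E, E_i, S, w):
    """One step of Ei+1 = E + g(Ei, S), with g sketched as a 1x1 convolution of the
    element-wise product of E_i and the adjusted detection feature (an assumption)."""
    return E + conv1x1(E_i * resize_like(S, E_i), w)
```

With zero convolution weights the step is the identity on E, which makes the residual structure of equation (1) easy to verify.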
In this embodiment, as shown in fig. 5, the SA module convolves the detection feature S1 with a Gaussian convolution kernel to generate a Max feature and a Min feature, subtracts the Min feature from the Max feature to obtain the Attention feature, normalizes the Attention feature and multiplies it with the Input to obtain an enhanced camouflaged target image; the Max feature is the maximum of S1 after the Gaussian convolution, and the Min feature the minimum. The SA module is a search attention mechanism that weights the input features, improving and enhancing the ability of the features to detect the target.
In this embodiment, the PDC module aggregates the features of different layers and extracts the information of each feature map. Its structure is shown in fig. 6. The PDC has three inputs, Input1, Input2 and Input3; Input1 has three parallel branch structures and Input2 has two. The first branch of Input1 is convolved and multiplied with the first branch of Input2; the second branch of Input1 is concatenated with that result (the product of the first branches of Input1 and Input2); the third branch of Input1 is convolved and multiplied with the first branch of Input2 and Input3. Finally, the branches are concatenated into aggregate features, which are convolved to obtain the final output. Connecting the branch structures in this way aggregates three layers of features to mine the image further. Because each feature map carries different features, concatenation-based aggregation helps extract the information of each feature map, while multiplication reduces the differences between features.
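A shape-level sketch of the PDC data flow, under the simplifying assumptions that "convolved and multiplied" reduces to element-wise products and the final convolution to a channel-averaging mix; the branch wiring follows the text loosely and is purely illustrative.

```python
import numpy as np

def pdc_sketch(in1, in2, in3, out_channels=1):
    """Aggregate three feature maps of shape (C, H, W): multiply branch pairs to
    reduce feature discrepancy, concatenate along the channel axis, then mix the
    channels (a stand-in for the final convolution)."""
    a = in1 * in2                          # Input1 branch 1 x Input2 branch 1
    b = np.concatenate([in1, a], axis=0)   # Input1 branch 2 spliced with that product
    c = in1 * in2 * in3                    # Input1 branch 3 x Input2 x Input3
    agg = np.concatenate([b, c], axis=0)   # splice and summarise the branches
    return agg.mean(axis=0, keepdims=True).repeat(out_channels, axis=0)
```

Multiplication keeps only responses shared across layers, while concatenation preserves each layer's distinct information before the final mixing step.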
The above-mentioned embodiments are preferred embodiments of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions that do not depart from the technical spirit of the present invention are included in the scope of the present invention.

Claims (10)

1. A camouflaged target detection method based on edge detection, characterized by comprising the following steps:
hierarchically labeling the camouflaged image to be detected, inputting the labelled image into a constructed and trained camouflaged target detection network model, and completing the detection of the camouflaged target by the camouflaged target detection network model; the camouflaged target detection network model comprises a backbone network and RF, EF, SA and PDC modules;
extracting, by the backbone network, a multi-scale feature map of the camouflaged image to be detected, and preserving the information of different feature layers with a dense connection strategy; inputting the densely connected multi-scale features into the RF module and enlarging the receptive field with the RF module; extracting, by the EF module, the edge features and detection features of the camouflaged image to be detected, and fusing and outputting the edge features E and detection features S; suppressing, by the SA module, interference from irrelevant detection features and enhancing the middle feature layer L3 of the backbone network; and finally, aggregating, by the PDC module, the features of different layers to complete the detection of the camouflaged target.
2. The camouflaged target detection method based on edge detection according to claim 1, characterized in that the hierarchical labeling of the camouflaged image to be detected comprises: labeling camouflaged target images of each type hierarchically according to the category, bounding box and attributes of the camouflaged target images to be detected.
3. The camouflaged target detection method based on edge detection according to claim 1, characterized in that constructing and training the camouflaged target detection network model comprises:
S11, hierarchically labeling the pre-collected camouflaged target images of each type to obtain a camouflaged image dataset, and dividing the dataset into a training set and a test set;
S12, constructing the camouflaged target detection network model;
S13, training the constructed model with the training set;
S14, testing the trained model with the test set.
4. The camouflaged target detection method based on edge detection according to claim 3, characterized in that the backbone network comprises L1, L2, L3, L4-1, L4-2, L5-1 and L5-2, wherein L4-1/L4-2 and L5-1/L5-2 are parallel structures.
5. The camouflaged target detection method based on edge detection, characterized in that the RF module comprises five parallel branch structures; each branch first applies a convolution of size (1, 1), the first three parallel branches then apply dilated convolutions with dilation coefficients Dk = k for k = 3, 5, 7 and are concatenated with the fourth parallel branch, and the concatenated result passes through another convolution of size (1, 1) and is added to the fifth parallel branch to obtain the final output.
6. The camouflaged target detection method based on edge detection according to claim 5, characterized in that the EF module is configured to divide four layers of features in the backbone network into edge features E and detection features S, fuse and output the edge features E and detection features S, construct a loss between the detection features S and the detection label values and a loss between the edge features E and the edge label values, and input the detection features S to the SA module;
wherein the edge features E comprise E1, E2, E3 and E4, and the detection features S comprise S1, S2, S3 and S4.
7. The camouflaged target detection method based on edge detection according to claim 6, characterized in that the formula for fusing the edge features E and the detection features S is:

Ei+1 = E + g(Ei, S)    (1)

where g is defined by equation (2), which appears only as an unrendered image in the original document.
8. The camouflaged target detection method based on edge detection according to claim 7, characterized in that the SA module convolves the detection feature S1 with a Gaussian convolution kernel to generate a Max feature and a Min feature, subtracts the Min feature from the Max feature to obtain an Attention feature, normalizes the Attention feature and multiplies it with the Input to obtain an enhanced camouflaged target image, wherein the Max feature is the maximum of S1 after the Gaussian convolution and the Min feature the minimum.
9. The method for detecting the disguised object based on the edge detection as claimed in claim 8, wherein the PDC module is configured to aggregate features of different layers and extract information of each feature map.
10. The camouflaged target detection method based on edge detection according to claim 9, characterized in that the loss function for training the constructed camouflaged target detection network model with the training set is the cross-entropy loss Lce, and the total loss function L of the camouflaged target detection network model is:

L = λ1·Loss-E + λ2·Loss-EF + λ3·Loss-S,    (3)
Loss-E = Lce(E, GE),    (4)
Loss-EF = Lce(S1, GS),    (5)
Loss-S = Lce(S2, GS);    (6)

wherein Loss-E supervises the edges of the camouflaged target, Loss-EF and Loss-S directly supervise the camouflaged target, and λ1, λ2, λ3 are weight-balancing factors of the respective losses; E denotes the predicted edge map of the camouflaged target, Si the predicted saliency map of the camouflaged target, GS the saliency label of the camouflaged target, and GE its edge label.
CN202110409358.XA 2021-04-16 2021-04-16 Camouflage target detection method based on edge detection Pending CN113139450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110409358.XA CN113139450A (en) 2021-04-16 2021-04-16 Camouflage target detection method based on edge detection


Publications (1)

Publication Number Publication Date
CN113139450A true CN113139450A (en) 2021-07-20

Family

ID=76813141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110409358.XA Pending CN113139450A (en) 2021-04-16 2021-04-16 Camouflage target detection method based on edge detection

Country Status (1)

Country Link
CN (1) CN113139450A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069197A1 (en) * 2010-09-16 2012-03-22 Stephen Michael Maloney Method and process of making camouflage patterns
CN112097686A (en) * 2020-08-10 2020-12-18 安徽农业大学 Camouflage object detection method based on binary fringe projection
US20200410669A1 (en) * 2019-06-27 2020-12-31 Board Of Regents Of The University Of Nebraska Animal Detection Based on Detection and Association of Parts


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dengping Fan et al.: "Camouflaged Object Detection", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
HENRY一个理工BOY: "A Detailed Explanation of Deep-Learning-Based Camouflaged Target Detection" (in Chinese), https://zhuanlan.zhihu.com/p/349798764 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581752A (en) * 2022-05-09 2022-06-03 华北理工大学 Camouflage target detection method based on context sensing and boundary refining
CN114581752B (en) * 2022-05-09 2022-07-15 华北理工大学 Camouflage target detection method based on context awareness and boundary refinement


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210720