CN113449666A - Remote sensing image multi-scale target detection method based on data fusion and feature selection

Remote sensing image multi-scale target detection method based on data fusion and feature selection

Info

Publication number
CN113449666A
Authority
CN
China
Prior art keywords
image
fused
training
remote sensing
data fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110766948.8A
Other languages
Chinese (zh)
Inventor
陈杰
刘方亮
赵杰
东野升效
秦登达
孙庚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Zhuoyuan Data Technology Co ltd
Central South University
Original Assignee
Shandong Zhuoyuan Data Technology Co ltd
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Zhuoyuan Data Technology Co ltd, Central South University
Priority to CN202110766948.8A
Publication of CN113449666A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a remote sensing image multi-scale target detection method based on data fusion and feature selection, comprising the following steps. Step S1: acquire a training set and count the number of bounding boxes of each category over the whole training set. Step S2: fill the differences in the bounding-box counts of the categories through data fusion enhancement, so that the number of bounding boxes of each category in the training set is balanced, obtaining the final training set for model training. Step S3: train a FoveaBox network on the training set processed in step S2, recognizing targets of different sizes on feature maps of different scales. Combining multi-scale feature expression and selection with data fusion enhancement copes with the relatively complex backgrounds of remote sensing images, reduces the influence of class imbalance, better fits the scenarios in which remote sensing images are used, improves the performance of the model, and improves the recognition of targets of different scales in remote sensing images.

Description

Remote sensing image multi-scale target detection method based on data fusion and feature selection
Technical Field
The invention relates to the technical field of remote sensing image target detection, in particular to a remote sensing image multi-scale target detection method based on data fusion and feature selection.
Background
Remote sensing image target detection algorithms based on deep learning suffer from sample imbalance. At the same time, remote sensing target detection tasks must handle detection objects of very different sizes, and a model's ability to generalize across ground-object scales is limited. Sample imbalance and the diversity of scales in remote sensing images therefore pose great challenges for the target detection task.
To mitigate the negative effects of sample imbalance, Pang J et al. propose Libra R-CNN, a simple and effective framework for balanced learning in object detection. It integrates three new components: IoU-balanced sampling, a balanced feature pyramid, and a balanced L1 loss, which reduce imbalance at the sample, feature, and objective levels, respectively. Shrivastava A et al. propose online hard example mining (OHEM), which selects hard-to-predict samples as training samples, alleviating the poor detection caused by imbalanced sample counts. Cao Y et al. propose prime sample attention (PISA), a simple and effective sampling and learning strategy that directs the emphasis of training toward the important samples, which is more effective than training focused uniformly on the original samples. The Scale Normalization for Image Pyramids (SNIP) training scheme selectively back-propagates the gradients of target instances of different sizes according to changes in image scale.
Optical remote sensing images contain a large number of targets at different scales as well as small samples, and multi-scale feature fusion can effectively improve the detection of small targets and of targets at differing scales. The SSD algorithm regresses target boxes on several feature maps of different sizes from VGG16 to make predictions at different scales, and its prediction of small targets is superior to that of other single-stage networks. The FPN network adds a top-down structure with feature-layer fusion on top of the Faster R-CNN framework, which effectively extracts feature information at different scales of a picture. Since remote sensing images contain targets of many scales, this multi-scale fusion structure performs excellently in remote sensing target detection and has become the most common multi-scale feature extraction network. The subsequent YOLO v2 and YOLO v3 also add multi-scale improvements to raise detection precision. The RetinaNet model adopts FPN as its feature extraction network and proposes the Focal Loss function to reduce the influence of positive/negative sample imbalance on precision, and the PANet model adds a bottom-up fusion structure on the basis of FPN.
Although great progress has been made in remote sensing image target detection, the following defects remain:
1. The above network-level remedies for sample imbalance are not universally applicable to all target detection frameworks.
2. Although the commonly used multi-scale methods can take target information of different scales into account on remote sensing images, the feature layers of different scales do not each reflect the target information of the matching scale well; because these methods recognize targets of all sizes on every feature map, targets with weak feature information are recognized poorly.
In summary, a remote sensing image multi-scale target detection method based on data fusion and feature selection is urgently needed to solve the problems in the prior art.
Disclosure of Invention
The invention aims to provide a remote sensing image multi-scale target detection method based on data fusion and feature selection, to address the great challenge that sample imbalance and the diversity of remote sensing image scales pose to the target detection task. The specific technical scheme is as follows:
a remote sensing image multi-scale target detection method based on data fusion and feature selection comprises the following steps:
step S1: acquiring a training set, and counting the number of bounding boxes of each category in the whole training set;
step S2: filling the differences in the bounding-box counts of the categories in the training set through data fusion enhancement, so that the number of bounding boxes of each category in the training set is balanced, and obtaining a final training set for model training;
step S3: training a FoveaBox network on the training set processed in step S2, and recognizing targets of different sizes on feature maps of different scales.
Preferably, in the above technical solution, in the step S1, the remote sensing image target detection data set is proportionally divided into a training set and a test set.
Preferably, in the above technical solution, the data fusion enhancement in step S2 specifically includes:
step S2.1: taking an image to be enhanced from a category that needs enhancement, then taking a background image that does not contain any target, and obtaining the maximum height h and the maximum width w of the image to be enhanced and the background image;
step S2.2: creating a fusion image with the height h, the width w, the channel number 3 and the pixel value 0;
step S2.3: fusing the fused image with the image to be enhanced to obtain a fused image into which the image to be enhanced is fused, and fusing that fused image with the background image to obtain a final fused image.
Preferably, in the above technical solution, in step S2.3, the fused image and the image to be enhanced are fused according to formula (1) to obtain a fused image fused with the image to be enhanced, and the fused image fused with the image to be enhanced is fused with the background image according to formula (2) to obtain a final fused image;
m = v ⊕ I        (1)
V = θ × m ⊕ (1 - θ) × P_k        (2)
where ⊕ denotes addition of matrices at corresponding coordinates and × denotes multiplication; the coefficient θ is a manually set constant in [0, 1]; I denotes the image to be enhanced; v denotes the fused image before fusion; m denotes the fused image into which the image to be enhanced has been fused; P_k denotes the background image; and V denotes the final fused image.
Preferably, in the above technical solution, the step S3 specifically includes:
step S3.1: training the training set obtained in the step S2 by adopting a FoveaBox network, and obtaining a plurality of feature maps with different scales by using the training images in the training set through an FPN feature extraction network;
step S3.2: each feature map selects, for prediction, the targets whose bounding-box areas fall within its own range, so that feature maps of different scales predict targets of different sizes, and one target may, according to its area, be predicted on more than one feature map;
step S3.3: the feature maps are trained through a classification branch and a bounding-box branch; the classification branch is trained with the Focal Loss function and the bounding-box branch with the Smooth L1 Loss function; after training is finished, the classification branch and the bounding-box branch of the model predict, respectively, the class and the bounding box of each target in the feature maps to obtain a final output result.
Preferably, in the above technical solution, in training the bounding-box branch, a region smaller than the real bounding box is used for training.
Preferably, in step S3.1, 5 feature maps with different sizes are obtained from the training pictures in the training set through the FPN feature extraction network.
Preferably, in the above technical solution, in step S3.2, the 5 feature maps of different scales are respectively selected to predict targets within the (1,64), (32,128), (64,256), (128,512), (256,2048) bounding-box pixel area ranges.
The technical scheme of the invention has the following beneficial effects:
To address the problem of imbalanced sample counts, a data enhancement strategy based on image fusion is provided: image fusion enhancement (that is, data fusion enhancement) increases the diversity of the sample library, reduces the influence of class imbalance, and makes the trained model generalize better. Second, a FoveaBox is adopted for the feature expression and selection of multi-scale target detection on optical remote sensing images: a feature map of the appropriate scale is used to recognize targets of the corresponding scale, while a target of a given scale is also allowed to be predicted on adjacent feature maps. Combining this multi-scale feature expression and selection with the data fusion enhancement method copes with the relatively complex backgrounds of remote sensing images, reduces the influence of class imbalance, better fits the scenarios in which remote sensing images are used, improves the performance of the model, and improves the recognition of targets of different scales in remote sensing images.
The image fusion enhancement provided by the invention handles sample imbalance at the data level, so it can be applied to any target detection framework, solving the problem that existing methods are not universally applicable to all frameworks. In addition, the multi-scale network with feature expression and selection predicts targets of different sizes on feature maps of different scales, making full use of the target features contained in the different feature layers for better recognition, and solving the problem that common multi-scale methods recognize targets of all sizes on every feature map, which leaves targets with weak feature information poorly recognized.
In addition to the objects, features, and advantages described above, the invention has other objects, features, and advantages. The present invention will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow diagram of the method of the present invention;
FIG. 2a is an original image before image fusion;
FIG. 2b is background image one before image fusion;
FIG. 2c is background image two before image fusion;
FIG. 2d is background image three before image fusion;
FIG. 3a is a graph of the effect of FIG. 2a without fusion;
FIG. 3b is a graph showing the effect of FIG. 2a fused with FIG. 2b;
FIG. 3c is a graph of the effect of FIG. 2a fused with FIG. 2c;
FIG. 3d is a graph showing the effect of FIG. 2a fused with FIG. 2d;
FIG. 4 is a schematic diagram of candidate box prediction based on a target center region;
FIG. 5a is the ground-truth annotation of scene one in the test-set result visualization;
FIG. 5b is the ground-truth annotation of scene two in the test-set result visualization;
FIG. 5c is the ground-truth annotation of scene three in the test-set result visualization;
FIG. 5d is the ground-truth annotation of scene four in the test-set result visualization;
FIG. 6a is the result of FIG. 5a using FPN network prediction;
FIG. 6b is the result of FIG. 5b using FPN network prediction;
FIG. 6c is the result of FIG. 5c using FPN network prediction;
FIG. 6d is the result of FIG. 5d using FPN network prediction;
FIG. 7a is the results of FIG. 5a using RetinaNet prediction;
FIG. 7b is the result of FIG. 5b predicted using RetinaNet;
FIG. 7c is the results of FIG. 5c using RetinaNet prediction;
FIG. 7d is the results of FIG. 5d using RetinaNet prediction;
FIG. 8a is the result of FIG. 5a using FoveaBox prediction;
FIG. 8b is the result of FIG. 5b using FoveaBox prediction;
FIG. 8c is the result of FIG. 5c using FoveaBox prediction;
FIG. 8d is the result of FIG. 5d using FoveaBox prediction;
FIG. 9a is the predicted result of FIG. 5a using the combination of FoveaBox and data fusion enhancement;
FIG. 9b is the predicted result of FIG. 5b using the combination of FoveaBox and data fusion enhancement;
FIG. 9c is the predicted result of FIG. 5c using the combination of FoveaBox and data fusion enhancement;
FIG. 9d is the predicted result of FIG. 5d using the combination of FoveaBox and data fusion enhancement.
Detailed Description
In order that the invention may be more fully understood, a more detailed description of the invention follows, together with its preferred embodiments. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Example 1:
referring to fig. 1, a remote sensing image multi-scale target detection method based on data fusion and feature selection includes the following steps:
step S1: acquiring a training set, and counting the number of bounding boxes of each category in the whole training set;
preferably, in step S1, the remote sensing image target detection data set is divided into a training set and a test set according to a ratio of 8:2, where the training set is used for the processing in step S1, and the test set is used for the subsequent precision test.
Step S2: filling the differences in the bounding-box counts of the categories in the training set through data fusion enhancement, so that the number of bounding boxes of each category in the training set is balanced (namely, the counts are consistent), and obtaining a final training set for model training;
referring to fig. 2a to fig. 3d, the data fusion enhancement in step S2 specifically includes:
step S2.1: taking an image to be enhanced I, belonging to a category that needs enhancement, and a background image P_k that does not contain any target, and obtaining the maximum height h and the maximum width w of the two images;
namely: h = max(h_I, h_k), w = max(w_I, w_k), where h_I and w_I respectively denote the height and width of the image to be enhanced, and h_k and w_k respectively denote the height and width of the background image;
step S2.2: creating a fused image v with height h, width w, 3 channels, and pixel value 0, namely: v = zeros(h, w, 3);
step S2.3: fusing the fused image and the image to be enhanced according to the formula (1) to obtain a fused image m fused with the image to be enhanced, and fusing the fused image fused with the image to be enhanced and the background image according to the formula (2) to obtain a final fused image V;
m = v ⊕ I        (1)
V = θ × m ⊕ (1 - θ) × P_k        (2)
where ⊕ denotes addition of matrices at corresponding coordinates and × denotes multiplication; the coefficient θ is a manually set constant in [0, 1]; I denotes the image to be enhanced; v denotes the fused image before fusion; m denotes the fused image into which the image to be enhanced has been fused; P_k denotes the background image; and V denotes the final fused image.
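A NumPy sketch of steps S2.1 to S2.3 is given below, following the reconstruction of formulas (1) and (2) above; the exact placement of the weight θ is an assumption, and all function and variable names are illustrative:

```python
import numpy as np

def fuse_images(I, P_k, theta=0.5):
    """Data fusion enhancement: blend an image I containing targets of the class
    to be enhanced with a target-free background image P_k (steps S2.1 to S2.3)."""
    # Step S2.1: maximum height and width of the two images.
    h = max(I.shape[0], P_k.shape[0])
    w = max(I.shape[1], P_k.shape[1])

    # Step S2.2: zero canvas v of height h, width w, 3 channels.
    v = np.zeros((h, w, 3), dtype=np.float32)

    # Formula (1): m = v (+) I -- add I onto the canvas at corresponding coordinates.
    m = v.copy()
    m[:I.shape[0], :I.shape[1]] += I

    # Formula (2): V = theta * m (+) (1 - theta) * P_k -- weighted blend with the background.
    V = theta * m
    V[:P_k.shape[0], :P_k.shape[1]] += (1 - theta) * P_k
    return np.clip(V, 0, 255).astype(np.uint8)
```

Because I keeps its coordinates inside the fused canvas in this sketch, the bounding-box annotations of I carry over to the fused sample unchanged.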
Step S3: training a FoveaBox network on the training set processed in step S2, and recognizing targets of different sizes on feature maps of different scales.
Preferably, the step S3 specifically includes:
step S3.1: training the training set obtained in the step S2 by using a FoveaBox network, and obtaining a plurality of feature maps with different scales by using the training images in the training set through an FPN feature extraction network (namely an FPN feature pyramid network);
step S3.2: each feature map selects, for prediction, the targets whose bounding-box areas fall within its own range, so that feature maps of different scales predict targets of different sizes, and one target may, according to its area, be predicted on more than one feature map;
step S3.3: the feature maps are trained through a classification branch and a bounding-box branch; the classification branch is trained with the Focal Loss function and the bounding-box branch with the Smooth L1 Loss function; after training is finished, the classification branch and the bounding-box branch of the model predict, respectively, the class and the bounding box of each target in the feature maps to obtain a final output result.
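For reference, hedged sketches of the two losses named here, in their standard published formulations (the patent gives no code; the hyperparameter values are the usual defaults, not values from the patent):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal Loss for the classification branch; `targets` is a 0/1 float tensor."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def smooth_l1_loss(pred, target, beta=1.0):
    """Smooth L1 Loss for the bounding-box branch: quadratic near zero, linear beyond beta."""
    diff = (pred - target).abs()
    return torch.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta).mean()
```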
Preferably, in training the bounding-box branch, a region smaller than the real bounding box is used for training; as shown in FIG. 4, the bright box is the supervision bounding box used during training and the dark box is the real labeled bounding box. From the region smaller than the real bounding box, the model can learn more representative features, which improves its robustness.
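A minimal sketch of this shrunk supervision region (FoveaBox's "fovea"); the shrink factor sigma is an assumed hyperparameter, with a value around 0.4 in the FoveaBox paper:

```python
def fovea_region(x1, y1, x2, y2, sigma=0.4):
    """Shrink a real labeled bounding box around its center; only points falling
    inside the shrunk region supervise the bounding-box branch during training."""
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = x2 - x1, y2 - y1
    return (cx - sigma * w / 2.0, cy - sigma * h / 2.0,
            cx + sigma * w / 2.0, cy + sigma * h / 2.0)
```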
Preferably, in step S3.1, 5 feature maps with different sizes are obtained from the training pictures in the training set through the FPN feature extraction network.
Preferably, in this embodiment, in step S3.2, the 5 feature maps of different scales are respectively selected to predict targets within the (1,64), (32,128), (64,256), (128,512), (256,2048) bounding-box pixel area ranges.
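A sketch of the resulting scale-selection rule follows; interpreting each range, as FoveaBox does, as bounds on the square root of the box's pixel area is an assumption. The overlap between adjacent ranges is what allows one target to be predicted on more than one feature map:

```python
# One valid scale range per feature map (FPN levels from finest to coarsest).
SCALE_RANGES = [(1, 64), (32, 128), (64, 256), (128, 512), (256, 2048)]

def assign_feature_maps(box):
    """Return the indices of the feature maps responsible for predicting `box`."""
    x1, y1, x2, y2 = box
    scale = ((x2 - x1) * (y2 - y1)) ** 0.5   # square root of the pixel area
    return [i for i, (lo, hi) in enumerate(SCALE_RANGES) if lo <= scale <= hi]
```

For example, a 100 x 100 pixel box (scale 100) falls in both (32,128) and (64,256) and is assigned to the second and third feature maps.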
Referring to FIGS. 5a to 9d, this embodiment further compares the method of this embodiment with the commonly used FPN, RetinaNet, and FoveaBox networks on four scenes.
After dividing the NWPU VHR-10 dataset into training and test sets, the test accuracy of the method of this embodiment is compared on the test set with that of the FPN, RetinaNet, and FoveaBox methods; the specific comparison results are shown in Table 1.
TABLE 1 test accuracy contrast table
As can be seen from FIGS. 5a to 9d and Table 1, the average precision of the combination of the FoveaBox network and data fusion enhancement (the FoveaBox & fusion entry in Table 1) is 6.84%, 7.17%, and 4.78% higher than that of FPN, RetinaNet, and FoveaBox, respectively (FPN, RetinaNet, and FoveaBox are all commonly used multi-scale networks), and after image fusion enhancement and feature selection and expression are applied, the accuracy of the ship, baseball field, tennis court, basketball court, harbor, bridge, and vehicle categories is all improved.
As can be seen from the visualization results in FIGS. 5a to 9d, the multi-scale selection and expression model with image fusion enhancement (that is, data fusion enhancement) recognizes the airplane, ship, harbor, and vehicle categories better than the other multi-scale networks, and in particular reduces the false detection rate on tennis courts, vehicles, and other categories. For the image containing ships along the bank, FPN, RetinaNet, and FoveaBox have difficulty distinguishing the bank from the ships, while the method combining FoveaBox with data fusion enhancement recognizes targets against this complex background more accurately.
The results show that image fusion enhancement (that is, data fusion enhancement) can reduce the class imbalance in the training data; its effectiveness can be seen by comparing the FoveaBox method without data fusion enhancement against the FoveaBox method combined with image fusion enhancement, and it improves the performance and robustness of the model.
In conclusion, image fusion enhances the diversity of sample scenes, and the overall results of multi-scale feature selection and expression on remote sensing images show that targets of different scales and sizes in optical remote sensing images can be predicted reasonably.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A remote sensing image multi-scale target detection method based on data fusion and feature selection is characterized by comprising the following steps:
step S1: acquiring a training set, and counting the number of bounding boxes of each category in the whole training set;
step S2: filling the differences in the bounding-box counts of the categories in the training set through data fusion enhancement, so that the number of bounding boxes of each category in the training set is balanced, and obtaining a final training set for model training;
step S3: training a FoveaBox network on the training set processed in step S2, and recognizing targets of different sizes on feature maps of different scales.
2. The method for detecting the multi-scale target of the remote sensing image based on the data fusion and the feature selection as claimed in claim 1, wherein in the step S1, the target detection data set of the remote sensing image is proportionally divided into a training set and a test set.
3. The method for detecting the remote sensing image multi-scale target based on the data fusion and the feature selection as claimed in claim 1, wherein the data fusion enhancement in the step S2 is specifically as follows:
step S2.1: taking an image to be enhanced from a category that needs enhancement, taking a background image that does not contain any target, and obtaining the maximum height h and the maximum width w of the image to be enhanced and the background image;
step S2.2: creating a fusion image with the height h, the width w, the channel number 3 and the pixel value 0;
step S2.3: fusing the fused image with the image to be enhanced to obtain a fused image into which the image to be enhanced is fused, and fusing that fused image with the background image to obtain a final fused image.
4. The method for detecting the remote sensing image multi-scale target based on the data fusion and the feature selection according to claim 3, wherein in step S2.3, the fused image and the image to be enhanced are fused according to formula (1) to obtain a fused image fused with the image to be enhanced, and the fused image fused with the image to be enhanced is fused with the background image according to formula (2) to obtain a final fused image;
m = v ⊕ I        (1)
V = θ × m ⊕ (1 - θ) × P_k        (2)
where ⊕ denotes addition of matrices at corresponding coordinates and × denotes multiplication; the coefficient θ is a manually set constant in [0, 1]; I denotes the image to be enhanced; v denotes the fused image before fusion; m denotes the fused image into which the image to be enhanced has been fused; P_k denotes the background image; and V denotes the final fused image.
5. The method for detecting the remote sensing image multi-scale target based on the data fusion and the feature selection according to claim 1, wherein the step S3 specifically comprises:
step S3.1: training the training set obtained in the step S2 by using a FoveaBox network, and obtaining a plurality of feature maps with different scales by using the training pictures in the training set through an FPN feature extraction network;
step S3.2: each feature map selects, for prediction, the targets whose bounding-box areas fall within its own range, so that feature maps of different scales predict targets of different sizes, and one target may, according to its area, be predicted on more than one feature map;
step S3.3: the feature maps are trained through a classification branch and a bounding-box branch; the classification branch is trained with the Focal Loss function and the bounding-box branch with the Smooth L1 Loss function; after the training is finished, the classification branch and the bounding-box branch of the model predict, respectively, the class and the bounding box of each target in the feature map to obtain a final output result.
6. The method for detecting the remote sensing image multi-scale target based on the data fusion and the feature selection as claimed in claim 5, characterized in that in the process of the bounding box branch training, an area smaller than a real bounding box is adopted for training.
7. The method for detecting the remote sensing image multi-scale target based on the data fusion and the feature selection as claimed in claim 5, wherein in step S3.1, 5 feature maps with different scales are obtained from training pictures in a training set through an FPN feature extraction network.
8. The method for detecting the multi-scale target of the remote sensing image based on the data fusion and the feature selection as claimed in claim 7, wherein in step S3.2, the 5 feature maps of different scales are respectively selected to predict targets within the (1,64), (32,128), (64,256), (128,512), (256,2048) bounding-box pixel area ranges.
CN202110766948.8A 2021-07-07 2021-07-07 Remote sensing image multi-scale target detection method based on data fusion and feature selection Pending CN113449666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110766948.8A CN113449666A (en) 2021-07-07 2021-07-07 Remote sensing image multi-scale target detection method based on data fusion and feature selection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110766948.8A CN113449666A (en) 2021-07-07 2021-07-07 Remote sensing image multi-scale target detection method based on data fusion and feature selection

Publications (1)

Publication Number Publication Date
CN113449666A 2021-09-28

Family

ID=77815312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110766948.8A Pending CN113449666A (en) 2021-07-07 2021-07-07 Remote sensing image multi-scale target detection method based on data fusion and feature selection

Country Status (1)

Country Link
CN (1) CN113449666A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368776A (en) * 2020-03-13 2020-07-03 长安大学 High-resolution remote sensing image classification method based on deep ensemble learning
CN111563473A (en) * 2020-05-18 2020-08-21 电子科技大学 Remote sensing ship identification method based on dense feature fusion and pixel level attention
CN111738114A (en) * 2020-06-10 2020-10-02 杭州电子科技大学 Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN111832615A (en) * 2020-06-04 2020-10-27 中国科学院空天信息创新研究院 Sample expansion method and system based on foreground and background feature fusion
US20210027056A1 (en) * 2019-07-23 2021-01-28 Regents Of The University Of Minnesota Remote-sensing-based detection of soybean aphid induced stress in soybean
CN112348765A (en) * 2020-10-23 2021-02-09 深圳市优必选科技股份有限公司 Data enhancement method and device, computer readable storage medium and terminal equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210027056A1 (en) * 2019-07-23 2021-01-28 Regents Of The University Of Minnesota Remote-sensing-based detection of soybean aphid induced stress in soybean
CN111368776A (en) * 2020-03-13 2020-07-03 长安大学 High-resolution remote sensing image classification method based on deep ensemble learning
CN111563473A (en) * 2020-05-18 2020-08-21 电子科技大学 Remote sensing ship identification method based on dense feature fusion and pixel level attention
CN111832615A (en) * 2020-06-04 2020-10-27 中国科学院空天信息创新研究院 Sample expansion method and system based on foreground and background feature fusion
CN111738114A (en) * 2020-06-10 2020-10-02 杭州电子科技大学 Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN112348765A (en) * 2020-10-23 2021-02-09 深圳市优必选科技股份有限公司 Data enhancement method and device, computer readable storage medium and terminal equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAO KONG ET AL.: "FoveaBox: Beyound Anchor-Based Object Detection", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 29, 23 June 2020 (2020-06-23), pages 1-10 *
陆保国 et al.: "光学遥感影像飞机目标识别与分类方法" [Aircraft target recognition and classification method for optical remote sensing imagery], 指挥信息系统与技术 [Command Information System and Technology], vol. 11, no. 5, 31 October 2020 (2020-10-31)

Similar Documents

Publication Publication Date Title
CN111126472B (en) SSD (solid State disk) -based improved target detection method
CN111126202B (en) Optical remote sensing image target detection method based on void feature pyramid network
CN108288075A (en) A kind of lightweight small target detecting method improving SSD
CN109376603A (en) A kind of video frequency identifying method, device, computer equipment and storage medium
CN106504233A (en) Image electric power widget recognition methodss and system are patrolled and examined based on the unmanned plane of Faster R CNN
CN111353544B (en) Improved Mixed Pooling-YOLOV 3-based target detection method
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN110738160A (en) human face quality evaluation method combining with human face detection
CN109934081A (en) A kind of pedestrian's attribute recognition approach, device and storage medium based on deep neural network
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
CN115457415A (en) Target detection method and device based on YOLO-X model, electronic equipment and storage medium
CN114897802A (en) Metal surface defect detection method based on improved fast RCNN algorithm
CN114331946A (en) Image data processing method, device and medium
CN111967399A (en) Improved fast RCNN behavior identification method
CN112819008B (en) Method, device, medium and electronic equipment for optimizing instance detection network
CN113962900A (en) Method, device, equipment and medium for detecting infrared dim target under complex background
CN113095404A (en) X-ray contraband detection method based on front and back background partial convolution neural network
CN117495718A (en) Multi-scale self-adaptive remote sensing image defogging method
Funt et al. Does colour really matter? Evaluation via object classification
CN110033443B (en) Display panel defect detection method
CN114494893B (en) Remote sensing image feature extraction method based on semantic reuse context feature pyramid
CN113449666A (en) Remote sensing image multi-scale target detection method based on data fusion and feature selection
Zhao et al. Object detector based on enhanced multi-scale feature fusion pyramid network
CN116385876A (en) Optical remote sensing image ground object detection method based on YOLOX
CN116310688A (en) Target detection model based on cascade fusion, and construction method, device and application thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210928