CN114078197A - Small sample target detection method and device based on support sample characteristic enhancement - Google Patents

Small sample target detection method and device based on support sample characteristic enhancement

Info

Publication number
CN114078197A
CN114078197A (application CN202111303534.8A)
Authority
CN
China
Prior art keywords
class
training
sample
support
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111303534.8A
Other languages
Chinese (zh)
Inventor
王好谦
王颢涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202111303534.8A priority Critical patent/CN114078197A/en
Publication of CN114078197A publication Critical patent/CN114078197A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a small sample target detection method and device based on support sample characteristic enhancement. The method comprises the following steps: a model initialization stage; a first meta-training stage, in which a picture to be detected and support samples are sampled from the base class data and the model is trained according to its meta-training mode; a second meta-training stage, in which all base class data are input into the model as support samples, the mean of the class feature vectors of all support samples in each class is computed and stored, this mean is used as an auxiliary supervision signal for class feature vector computation, and the first-stage training is repeated; a meta-test stage, in which samples are drawn from the base class data according to the number of new class labels to construct a balanced data set, the picture to be detected and support samples are sampled from it, and the second-stage training is repeated; and a new class target detection stage. Compared with traditional small sample target detection methods, the method obtains higher-quality support sample features and improves detection accuracy on the new classes.

Description

Small sample target detection method and device based on support sample characteristic enhancement
Technical Field
The invention relates to the field of computer vision and image processing, in particular to a small sample target detection method and device based on support sample characteristic enhancement.
Background
An object detection task is defined as follows: given an input image of arbitrary size, output the specific class and position of every object in the image that belongs to one or more predefined classes. Object detection is one of the most basic and important problems in computer vision: on one hand, it is a foundation for many downstream visual tasks, such as single/multi-object tracking, instance segmentation, and keypoint detection; on the other hand, it is widely applied in scenarios such as industrial quality inspection, security, and autonomous driving. Therefore, target detection has long been a major research problem in the field of computer vision.
Currently, mainstream target detection models mainly adopt a fully supervised training mode, i.e., a deep neural network is trained on a large amount of labeled data. Training data often contain tens of millions of labels, which helps the detection model fully learn the features of each class; if a class has little training data, the final model's detection capability for that class drops markedly. However, collecting and labeling sufficient training data is costly in time, labor, and money, and for some rare object classes it is even impossible to collect adequate training data. This greatly limits the application of target detection models in real scenes.
The background section may contain information related to the problem or environment of the present invention and does not necessarily describe the prior art; accordingly, inclusion of material in the background is not an admission of prior art by the applicant.
Disclosure of Invention
In order to improve the detection accuracy of the existing small sample target detection, the invention provides a small sample target detection method and device based on support sample characteristic enhancement.
Therefore, the small sample target detection method based on the support sample characteristic enhancement provided by the invention specifically comprises the following steps:
s1, model initialization stage: constructing any target detection model based on meta-learning;
s2, meta-training first stage: sampling a picture to be detected and a support sample in base class data, and training according to an original meta-training mode of a model selected in model initialization;
s3, meta-training the second stage: inputting all base class data serving as support samples into a model, calculating and storing the mean value of class feature vectors of all the support samples in each class, then taking the mean value as an auxiliary supervision signal for class feature vector calculation, and repeating the training of the first stage of training;
s4, meta-test stage: sampling from the base class data according to the number of the new class labels to construct a balanced data set with all classes containing the same number of labels, then sampling the picture to be detected and the supporting sample from the balanced data set, and repeating the training of the second stage of training;
s5, a new target detection stage: and taking the new class label as a support sample, and carrying out target detection on the input image to obtain the position and the specific class of the target belonging to the new class in the input image.
Further, in step S1, MetaYOLO is selected as the target detection model, and the MetaYOLO is composed of a feature extractor, a re-weighting module, and a prediction layer.
Further, in step S2, in each training batch the picture to be detected and the support samples are sampled. A support sample is obtained by sampling, for each base class, a bounding box belonging to that class and taking the image containing it; a 0-1 mask channel indicating the bounding box position is then added to the image's original RGB channels. The picture to be detected is input to the feature extractor, and the support samples are input to the re-weighting module.
Further, in step S2, let the batch size be B, the number of base classes be N, the number of pictures in the training set be D, and the total number of bounding box labels contained in the D pictures be A. B pictures are randomly sampled from the D pictures and input to the feature extractor as images to be detected. Then, for each of the N base classes, a bounding box belonging to that class is randomly sampled from the A bounding box labels, and the picture containing it is retrieved; a mask channel is added to that picture's RGB channels, with value 1 in the region covered by the bounding box and 0 elsewhere. The N four-channel pictures are input to the re-weighting module as support samples, and the N outputs of the re-weighting module are used to weight the features of each picture to be detected, yielding N feature maps. These N feature maps are input to the prediction layer, which judges whether the selected picture to be detected contains an object of the same class as the corresponding support sample.
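As a concrete illustration of the support sample construction described above, the following sketch appends a 0-1 mask channel marking the bounding box to an RGB image; the function name and array layout are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def make_support_sample(image_rgb, bbox):
    """Append a 0-1 mask channel marking a bounding box to an RGB image.

    image_rgb: (H, W, 3) array; bbox: (x1, y1, x2, y2) in pixel coordinates.
    Returns a (H, W, 4) four-channel support sample.
    """
    h, w, _ = image_rgb.shape
    mask = np.zeros((h, w, 1), dtype=image_rgb.dtype)
    x1, y1, x2, y2 = bbox
    mask[y1:y2, x1:x2, 0] = 1  # 1 inside the box, 0 elsewhere
    return np.concatenate([image_rgb, mask], axis=-1)
```

One such four-channel sample per base class is then fed to the re-weighting module.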
Further, in step S3, for each base class, all of its bounding box labels are traversed and input to the re-weighting module as support samples to obtain the mean of the class feature vectors of all support samples of that class. The first stage of meta-training is then repeated, but a Smooth L1 loss is added between the newly predicted class feature vector and the previously stored mean, encouraging the class feature vector computed from a single support sample to approximate the mean computed from many support samples.
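The auxiliary supervision above uses a Smooth L1 loss between a single-sample class vector and the stored per-class mean. A minimal NumPy sketch of Smooth L1 follows; the beta threshold is the commonly used form of this loss and is an assumption here, not a value stated in the patent.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic below beta, linear above."""
    d = np.abs(np.asarray(pred, dtype=float) - np.asarray(target, dtype=float))
    loss = np.where(d < beta, 0.5 * d * d / beta, d - 0.5 * beta)
    return loss.mean()
```

The quadratic region keeps gradients small near the mean, while the linear region limits the influence of outlier support samples.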
Further, in step S4, an equal number of labels is sampled from each base class according to the number of bounding box labels contained in each new class, forming a balanced data set, and the second stage of meta-training is then performed on this balanced data set.
Further, in step S4, let the total number of new classes be N' and the number of labels contained in each new class be K. First, K bounding boxes are sampled for each base class from the A base class bounding box labels, yielding (N + N') × K bounding boxes in total, where N is the total number of base classes. The pictures containing these bounding boxes are collected to construct a balanced data set, and the second stage of meta-training is repeated on it.
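The balanced-set construction can be sketched as follows. The dictionary schema and function name are illustrative assumptions; each novel class is assumed to already hold exactly K labels, so only the base classes are subsampled.

```python
import random

def build_balanced_dataset(base_boxes, novel_boxes, k, seed=0):
    """Sample k boxes per base class; keep all k labels of each novel class.

    base_boxes / novel_boxes: dict mapping class name -> list of box records.
    Returns a dict where every class has exactly k entries.
    """
    rng = random.Random(seed)
    balanced = {}
    for cls, boxes in base_boxes.items():
        balanced[cls] = rng.sample(boxes, k)   # without replacement
    for cls, boxes in novel_boxes.items():
        balanced[cls] = list(boxes)            # novel classes contribute all k labels
    return balanced
```

With N base and N' novel classes this yields (N + N') × K boxes, matching the count given above.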
Further, in step S5, the new class support samples are input to the re-weighting module to compute a class feature vector for each new class; the picture to be detected is input to the feature extractor, the output feature map is re-weighted with each new class's feature vector, and the weighted feature maps are input to the subsequent detection layer to detect the new classes.
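The channel-wise re-weighting of the feature map by the class feature vectors can be sketched as below; the (C, H, W) tensor layout and function name are illustrative assumptions.

```python
import numpy as np

def reweight_features(feature_map, class_vectors):
    """Produce one channel-wise weighted copy of the feature map per class.

    feature_map: (C, H, W) backbone output; class_vectors: (N, C), one
    vector per class from the re-weighting module. Returns (N, C, H, W).
    """
    f = np.asarray(feature_map, dtype=float)
    w = np.asarray(class_vectors, dtype=float)
    # Broadcast each class vector over the spatial dimensions.
    return w[:, :, None, None] * f[None, :, :, :]
```

Each of the N weighted maps is then passed to the prediction layer to decide whether the image contains that class.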
The small sample target detection device based on the support sample characteristic enhancement specifically comprises a central processing unit and a memory, wherein a computer program capable of being executed by the central processing unit is stored in the memory, and the central processing unit can realize the small sample target detection method based on the support sample characteristic enhancement by executing the computer program.
The computer-readable storage medium provided by the invention stores a computer program capable of being executed by a central processing unit, and the central processing unit can realize the small sample target detection method based on the support sample characteristic enhancement by executing the computer program.
Compared with the prior art, the invention has the following beneficial effects:
compared with traditional fully supervised target detection models, the invention achieves better detection performance when the objects to be detected have only a small number of labeled training samples. Through support sample enhancement, the features computed from a small number of support samples are made to approximate those computed from a sufficient number of support samples, alleviating the degradation of class feature quality caused by data scarcity.
Drawings
FIG. 1 is a flow chart of a small sample target detection method based on support sample feature enhancement according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a training flow of a small sample target detection method when a MetaYOLO target detection model is adopted in the embodiment of the present invention.
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described with reference to the accompanying drawings.
Small sample target detection aims at training a detection model with a very small number of labels (e.g., only 1-10 labels per class). Mainstream small sample target detection mainly relies on meta-learning, whose training aim is to teach the model how to learn. On an existing labeled data set, a meta-learning-based small sample detection model constructs few-shot learning scenarios during training: in each training batch, one sample (the support sample) is drawn for each class to be detected, and the images to be detected are classified and localized according to the information contained in these samples. Through this training paradigm, the target detection model learns to mine, from a very small number of samples, the essential information of the corresponding classes needed for detection and classification. Thus, when facing a scene where each class has only a very small amount of labeled data, the model can be quickly fine-tuned to convergence with that small amount of label information and adapted to detect the new classes. For ease of description, the classes contained in the existing data set are called base classes, and the classes to be detected that truly contain only a small number of labels are called new classes.
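The episodic sampling described above, one support box per class alongside a query image, can be sketched as follows; the annotation schema (class name mapped to (image_id, bbox) pairs) is an illustrative assumption.

```python
import random

def sample_episode(dataset, classes, rng=None):
    """Build one meta-training episode: a query image plus one support
    annotation per class to be detected.

    dataset: dict mapping class name -> list of (image_id, bbox) annotations.
    """
    rng = rng or random.Random(0)
    # One support sample per class, as in the meta-learning paradigm above.
    support = {c: rng.choice(dataset[c]) for c in classes}
    # The query image is drawn the same way ordinary detectors sample batches.
    all_images = sorted({img for anns in dataset.values() for img, _ in anns})
    query = rng.choice(all_images)
    return query, support
```

Repeating such episodes over the base classes is what lets the model later adapt to new classes from equally tiny support sets.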
The embodiment of the invention improves on the meta-learning approach introduced above to solve the technical problem addressed by the invention. As shown in fig. 1, an embodiment of the present invention provides a small sample target detection method based on support sample characteristic enhancement, including the following steps:
s1, model initialization stage: constructing any target detection model (such as MetaYOLO) based on Meta learning, wherein the MetaYOLO consists of a feature extractor, a re-weighting module and a prediction layer, the small sample target detection model based on Meta learning is not limited to Meta YOLO, and can be a small sample target detection model based on Meta learning with any structure, such as AttentionRPN, ONCE, FA and the like, as long as the model is based on Meta learning, branches for processing a picture to be detected and branches for processing a support sample respectively exist, and each image a in a training set is cut into a plurality of sub-images and is rearranged in space to generate a reconstructed image b with the same size as the image a.
S2, first meta-training stage: sample a picture to be detected and support samples from the base class data, and train according to the original meta-training mode of the model selected in model initialization. Specifically, as shown in fig. 2, when the meta-learning-based target detection model is MetaYOLO, the picture to be detected and the support samples are sampled in each training batch. The picture to be detected is sampled in the same way as for a general target detection model. A support sample is obtained by sampling, for each base class, a bounding box belonging to that class and taking the image containing it, then adding a 0-1 mask to the image's original RGB channels to indicate the bounding box position. In the training batch, the picture to be detected is input to the feature extractor and the support samples are input to the re-weighting module. For example, let the batch size be B, the number of base classes be N, the number of pictures in the training set be D, and the total number of bounding box labels in the D pictures be A. B pictures are randomly sampled from the D pictures and input to the feature extractor as images to be detected. Then, for each of the N base classes, a bounding box belonging to that class is randomly sampled from the A bounding box labels and the picture containing it is retrieved; a mask channel is added to its RGB channels, with value 1 in the region covered by the bounding box and 0 elsewhere. The N four-channel pictures are input to the re-weighting module as support samples, whose N outputs are used to weight the features of each picture to be detected, yielding N feature maps. These N feature maps are input to the prediction layer to judge whether the selected picture to be detected contains an object of the same class as the corresponding support sample.
S3, second meta-training stage: input all base class data into the model as support samples, compute and store the mean of the class feature vectors of all support samples in each class, then use this mean as an auxiliary supervision signal for class feature vector computation, and repeat the first-stage training. Specifically, as shown in fig. 2, when the meta-learning-based target detection model is MetaYOLO, for each base class, all of its bounding box labels are traversed and input to the re-weighting module as support samples to obtain the mean of the class feature vectors of all support samples of that class. The first-stage training is then repeated, but an auxiliary supervision signal is added at the output of the re-weighting module: a Smooth L1 loss between the newly predicted class feature vector and the previously stored mean, encouraging the class feature vector computed from a single support sample to approximate that computed from many support samples. For example, first, A four-channel pictures are constructed for the A bounding box labels in the manner described in the first stage. These A pictures are then divided into N groups according to the classes of their bounding boxes and input to the re-weighting module as support samples, and the mean of the re-weighting module's outputs over all support samples of each of the N classes is recorded. Finally, the first-stage training is repeated (with the number of training rounds reduced as appropriate), with the previously recorded means serving as supervision signals at the output of the re-weighting module; accordingly, a Smooth L1 term is added to the loss function to reduce the difference between the re-weighting module's output and the recorded mean.
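Recording the per-class mean of the re-weighting module's outputs, as described above, amounts to a group-by-class average over the A support vectors. A minimal sketch follows; names and array shapes are illustrative assumptions.

```python
import numpy as np

def class_vector_means(vectors, labels, num_classes):
    """Average the class feature vectors of all support samples per class.

    vectors: (A, dim) array of re-weighting module outputs;
    labels: length-A class indices in [0, num_classes).
    Returns (num_classes, dim) array of stored means.
    """
    vectors = np.asarray(vectors, dtype=float)
    labels = np.asarray(labels)
    means = np.zeros((num_classes, vectors.shape[1]))
    for c in range(num_classes):
        means[c] = vectors[labels == c].mean(axis=0)
    return means
```

These stored means are the targets of the Smooth L1 auxiliary loss in the repeated first-stage training.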
S4, meta-test stage: sample from the base class data according to the number of new class labels to construct a balanced data set in which all classes contain an equal number of labels, then sample the picture to be detected and the support samples from the balanced data set and repeat the second stage of meta-training. Specifically, an equal number of labels is sampled from each base class according to the number of bounding box labels contained in each new class, constructing a balanced data set on which the second-stage training is performed; the support sample means in the auxiliary supervision signal are the same as in the second stage, and this supervision signal acts only on the base class training data. For example, let the total number of new classes be N' and the number of labels in each new class be K. First, K bounding boxes are sampled for each base class from the A base class bounding box labels, yielding (N + N') × K bounding boxes in total. The pictures containing these bounding boxes are collected to construct a balanced data set, and the second stage of meta-training is repeated on it (with the number of training rounds reduced as appropriate). In the sampled training pictures, all bounding boxes not belonging to the balanced data set are treated as background. The re-weighting module output means used in this round are those from the second stage of meta-training and therefore do not include means for the new classes, so the Smooth L1 loss acts only on base class support samples.
S5, new class target detection stage: take the new class labels as support samples and perform target detection on the input image to obtain the position and specific class of each target belonging to a new class. Specifically, when the meta-learning-based target detection model is MetaYOLO, the new class support samples are input to the re-weighting module to compute a class feature vector for each new class; the picture to be detected is input to the feature extractor, the output feature map is re-weighted with each new class's feature vector, and the weighted feature maps are input to the subsequent detection layer to detect the new classes.
The embodiment of the invention provides a small sample target detection device based on support sample characteristic enhancement, which comprises a central processing unit and a memory, wherein a computer program capable of being executed by the central processing unit is stored in the memory, and the central processing unit can realize the small sample target detection method based on support sample characteristic enhancement by executing the computer program.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present invention, and it should not be understood that the scope of the present invention is limited thereby. It should be noted that those skilled in the art should recognize that they may make equivalent variations to the embodiments of the present invention without departing from the spirit and scope of the present invention.

Claims (10)

1. A small sample target detection method based on support sample characteristic enhancement is characterized by comprising the following steps:
s1, model initialization stage: constructing any target detection model based on meta-learning;
s2, meta-training first stage: sampling a picture to be detected and a support sample in base class data, and training according to an original meta-training mode of a model selected in model initialization;
s3, meta-training the second stage: inputting all base class data serving as support samples into a model, calculating and storing the mean value of class feature vectors of all the support samples in each class, then taking the mean value as an auxiliary supervision signal for class feature vector calculation, and repeating the training of the first stage of training;
s4, meta-test stage: sampling from the base class data according to the number of the new class labels to construct a balanced data set with all classes containing the same number of labels, then sampling the picture to be detected and the supporting sample from the balanced data set, and repeating the training of the second stage of training;
s5, a new target detection stage: and taking the new class label as a support sample, and carrying out target detection on the input image to obtain the position and the specific class of the target belonging to the new class in the input image.
2. The method of claim 1, wherein in step S1, MetaYOLO is selected as the target detection model, and MetaYOLO comprises a feature extractor, a re-weighting module, and a prediction layer.
3. The small sample target detection method according to claim 2, wherein in step S2, in each training batch the picture to be detected and the support samples are sampled; a support sample is obtained by sampling, for each base class, a bounding box belonging to that class and taking the image containing it, then adding a 0-1 mask channel to the image's original RGB channels to indicate the bounding box position; the picture to be detected is input to the feature extractor and the support samples are input to the re-weighting module.
4. The small sample target detection method according to claim 3, wherein in step S2, the batch size is B, the number of base classes is N, the number of pictures in the training set is D, and the total number of bounding box labels contained in the D pictures is A; B pictures are randomly sampled from the D pictures and input to the feature extractor as images to be detected; then, for each of the N base classes, a bounding box belonging to that class is randomly sampled from the A bounding box labels and the picture containing it is retrieved; a mask channel is added to that picture's RGB channels, with value 1 in the region covered by the bounding box and 0 elsewhere; the N four-channel pictures are input to the re-weighting module as support samples, the N outputs of the re-weighting module weight the corresponding features of each picture to be detected to obtain N feature maps, and the N feature maps are input to the prediction layer to judge whether the selected picture to be detected contains an object of the same class as the corresponding support sample.
5. The method for detecting small sample targets as claimed in claim 2, wherein in step S3, for each base class, all the bounding box labels are traversed and input as support samples into the re-weighting module to obtain the mean value of the class feature vectors of all the support samples of that class, and then the training in the first stage of meta-training is repeated, but a Smooth L1 loss function is added between the newly predicted class feature vector and the previously stored class feature vector to promote the class feature vector calculated from a single support sample to approximate the mean value of the class feature vectors calculated from multiple support samples.
6. The method for detecting small sample objects as claimed in claim 2, wherein in step S4, an equal number of labels are sampled from each base class according to the number of bounding box labels included in each new class, so as to form a balanced data set, and then the second stage of meta-training is performed on the balanced data set.
7. The small sample target detection method according to claim 6, wherein in step S4, the total number of new classes is N' and the number of labels contained in each new class is K; first, K bounding boxes are sampled for each base class from the A base class bounding box labels, yielding (N + N') × K bounding boxes in total, where N is the total number of base classes; the pictures containing these bounding boxes are collected to construct a balanced data set, and the training of the second stage of meta-training is then repeated on the balanced data set.
8. The small sample target detection method according to claim 2, wherein in step S5, the new class support samples are input to the re-weighting module and a class feature vector of each new class is obtained through calculation; the picture to be detected is input to the feature extractor, the output feature map is re-weighted using the class feature vectors of the new classes, and the weighted feature maps are input to the subsequent detection layer for detecting the new classes.
9. The small sample object detection device based on the support sample characteristic enhancement is characterized by specifically comprising a central processing unit and a memory, wherein the memory stores a computer program capable of being executed by the central processing unit, and the central processing unit can realize the small sample object detection method based on the support sample characteristic enhancement according to any one of claims 1 to 8 by executing the computer program.
10. A computer-readable storage medium storing a computer program executable by a central processing unit, wherein the central processing unit is capable of implementing the small sample target detection method based on support sample feature enhancement according to any one of claims 1 to 8 by executing the computer program.
CN202111303534.8A 2021-11-05 2021-11-05 Small sample target detection method and device based on support sample characteristic enhancement Pending CN114078197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111303534.8A CN114078197A (en) 2021-11-05 2021-11-05 Small sample target detection method and device based on support sample characteristic enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111303534.8A CN114078197A (en) 2021-11-05 2021-11-05 Small sample target detection method and device based on support sample characteristic enhancement

Publications (1)

Publication Number Publication Date
CN114078197A true CN114078197A (en) 2022-02-22

Family

ID=80283640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111303534.8A Pending CN114078197A (en) 2021-11-05 2021-11-05 Small sample target detection method and device based on support sample characteristic enhancement

Country Status (1)

Country Link
CN (1) CN114078197A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898145A (en) * 2022-05-05 2022-08-12 Shanghai Artificial Intelligence Innovation Center Mining method and device for implicit new class instance and electronic equipment
CN114898145B (en) * 2022-05-05 2024-06-07 Shanghai Artificial Intelligence Innovation Center Method and device for mining implicit new class instance and electronic equipment
CN114861842A (en) * 2022-07-08 2022-08-05 Institute of Automation, Chinese Academy of Sciences Few-sample target detection method and device and electronic equipment
CN114861842B (en) * 2022-07-08 2022-10-28 Institute of Automation, Chinese Academy of Sciences Few-sample target detection method and device and electronic equipment
CN117409250A (en) * 2023-10-27 2024-01-16 Beijing Information Science and Technology University Small sample target detection method, device and medium
CN117409250B (en) * 2023-10-27 2024-04-30 Beijing Information Science and Technology University Small sample target detection method, device and medium

Similar Documents

Publication Publication Date Title
CN112396002B (en) SE-YOLOv 3-based lightweight remote sensing target detection method
CN110991311B (en) Target detection method based on dense connection deep network
CN114529825B (en) Target detection model, method and application for fire fighting access occupied target detection
CN109086811B (en) Multi-label image classification method and device and electronic equipment
CN113688933B (en) Classification network training method, classification method and device and electronic equipment
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN114078197A (en) Small sample target detection method and device based on support sample characteristic enhancement
CN111767962B (en) One-stage target detection method, system and device based on generation countermeasure network
CN114169381A (en) Image annotation method and device, terminal equipment and storage medium
CN114155474A (en) Damage identification technology based on video semantic segmentation algorithm
CN116823793A (en) Device defect detection method, device, electronic device and readable storage medium
CN114821022A (en) Credible target detection method integrating subjective logic and uncertainty distribution modeling
CN117237733A (en) Breast cancer full-slice image classification method combining self-supervision and weak supervision learning
CN115457415A (en) Target detection method and device based on YOLO-X model, electronic equipment and storage medium
CN114565803A (en) Method, device and mechanical equipment for extracting difficult sample
CN111583321A (en) Image processing apparatus, method and medium
CN116958809A (en) Remote sensing small sample target detection method for feature library migration
CN111882551B (en) Pathological image cell counting method, system and device
CN112801960A (en) Image processing method and device, storage medium and electronic equipment
CN113033397A (en) Target tracking method, device, equipment, medium and program product
Wilson et al. An efficient non-parametric background modeling technique with cuda heterogeneous parallel architecture
CN116912290B (en) Memory-enhanced method for detecting small moving targets of difficult and easy videos
CN117437465B (en) Improved soft-NMS target detection method based on unbalanced data
CN115205555B (en) Method for determining similar images, training method, information determining method and equipment
CN117095244B (en) Infrared target identification method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination