CN115035288A - Gradient optimizing method and system for generalized few-sample target detection - Google Patents
Gradient optimizing method and system for generalized few-sample target detection
- Publication number
- CN115035288A (application CN202210954945.1A)
- Authority
- CN
- China
- Prior art keywords
- gradient
- training
- defects
- small
- generalized
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a gradient optimization method and system for generalized few-sample target detection, belonging to the technical field of automotive interior detection, and comprising: S1, obtaining an interior panel image and preprocessing it; S2, performing forward propagation on the interior panel image through a Faster RCNN network; S3, performing back propagation on the interior panel image through the Faster RCNN network; S4, calculating a loss function; S5, training the Faster RCNN network model; and S6, evaluating the results using accuracy and recall. The invention improves the gradient calculation strategy of Faster RCNN so that the gradient directions of both the base set and the novel set are taken into account and catastrophic forgetting is less likely to occur. The gradient optimization strategy of this application is suitable not only for the Faster RCNN network but also for gradient optimization of other networks; it does not affect the parameter count or inference time of the network and accelerates convergence.
Description
Technical Field
The invention belongs to the technical field of automotive interior detection, and particularly relates to a gradient optimization method and system for generalized few-sample target detection.
Background
Various defects inevitably occur in the production of automotive interior panels. Some defects are very small and some are not obvious; manual inspection by workers is inefficient and costly and, most importantly, has an extremely high miss rate, which ultimately increases customer complaints after sale. Detection in this field based on artificial intelligence and deep learning is therefore becoming mainstream.
Defects fall into many types, and in real application scenarios the type distribution naturally forms a long-tail effect: a few defect types account for most of the total defects, while some defects occur only rarely yet must still be detected. This poses a serious challenge to the capability of a deep learning model.
Deep learning object detection generally requires enormous amounts of data so that a model can automatically learn the features of a dataset and detect known types. Few-shot object detection (FSOD) aims to learn to quickly detect new targets from an existing large dataset (the base set) and a small dataset of new classes (the novel set) containing only a few samples that are not present in the base set. At present, most researchers use Faster RCNN as the basic detection framework; however, because it is not tailored to data-sparse scenarios, the detection performance on the novel set is often unsatisfactory. Moreover, when incrementally trained on the novel set, most models suffer catastrophic forgetting on the base set. The task of detecting the novel set while avoiding catastrophic forgetting on the base set is called generalized few-shot object detection (GFSOD).
Object detection based on deep neural networks currently applied on automotive interior panel production lines mainly falls into two branches: two-stage detection and single-stage detection. Single-stage detection mainly includes the YOLO series, while Faster RCNN is the most typical two-stage detector. Although two-stage detection is slower, its detection rate is higher, so it has become the basic framework used by most researchers studying the FSOD problem. TFA proposes a transfer-learning approach: the network backbone is frozen and only a detection head capable of outputting novel classes is fine-tuned, so that the model leverages the large data advantage of the base set as much as possible when predicting the novel set. Retentive RCNN argues that completely freezing the backbone of Faster RCNN prevents the backbone from learning region proposal capability for the novel classes and thus reduces learning performance. Retentive RCNN therefore sets up parallel base and novel branches in both the backbone and the head network, freezes only the base branch, and finally merges the two branches; this strengthens the learning capacity of the network while fully protecting the detection performance on the base set, so catastrophic forgetting is less likely to occur.
In addition, the lifelong learning task faces a problem of learning from few samples similar to that of the FSOD field, so its network-design ideas can be borrowed; this patent refers to the A-GEM method of Arslan Chaudhry et al. to optimize the gradient strategy during back-propagation.
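For context, the following is a minimal sketch of the A-GEM projection referenced above; it is an illustration of the cited idea, not code from Chaudhry et al., and the function name and the use of flattened gradient vectors are assumptions. When the current gradient g conflicts with a reference gradient g_ref computed on stored samples of earlier tasks, g is projected so that it no longer has a component pointing against g_ref.

```python
import torch

def agem_project(g: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """Project the flattened gradient g as in A-GEM when it conflicts with g_ref."""
    dot = torch.dot(g, g_ref)
    if dot >= 0:          # no conflict between current and reference gradients
        return g
    # remove the component of g that points against g_ref
    return g - (dot / torch.dot(g_ref, g_ref)) * g_ref
```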
Objective defects of the prior art:
In the field of automotive interior panel inspection, existing generalized few-sample detection still performs unsatisfactorily on novel classes, while detection performance on base classes still degrades to varying degrees, and models that perform well on novel classes often suffer irrecoverable forgetting of the base classes; continuous improvement is therefore required for such detection methods to become truly practical.
Meanwhile, because of the takt-time requirements of an industrial production line, the detection rate must be improved without reducing the model's prediction speed too much. This is difficult to achieve with methods proposed in academic papers, and most generalized few-sample detection methods with a high detection rate have relatively low model efficiency.
Disclosure of Invention
The invention provides a gradient optimization method and system for generalized few-sample target detection, aiming at solving the above technical problems in the prior art. Although the method is built on Faster RCNN, it can be transplanted to various networks as a gradient optimization strategy; it does not affect the parameter count or inference time of the network and accelerates convergence.
The first purpose of the invention is to provide a gradient optimization method for generalized few-sample target detection, comprising the following steps:
S1, obtaining an interior panel image and preprocessing it; specifically:
first performing manual labeling, then determining a base set and a small dataset according to the number of labels of each category, the base set having more labels than the small dataset; finally, dividing the base set and the small dataset into a training set, a validation set and a test set respectively;
S2, performing forward propagation on the interior panel image through a Faster RCNN network;
S3, performing back propagation on the interior panel image through the Faster RCNN network; specifically:
during each training iteration, a mini-batch of base set data and a mini-batch of small dataset data are taken respectively, and their gradients g_b and g_n are calculated; when the angle between the two is acute, ordinary back propagation is carried out; otherwise, gradient g_b is orthogonally projected with respect to gradient g_n to obtain a new gradient g_b2, gradient g_n is orthogonally projected with respect to gradient g_b to obtain a new gradient g_n2, the new gradients g_b2 and g_n2 are averaged to obtain the final gradient g_2, and this new gradient is then back-propagated;
S4, calculating a loss function, L = L_rpn + L_cls + L_box;
where L is the total loss, L_rpn is the region proposal network loss, L_box is the bounding-box regression loss, and L_cls is the classification loss;
S5, training the Faster RCNN network model; specifically:
training with the training set, repeatedly evaluating model performance with the validation set, and adjusting the hyper-parameters of each training run;
S6, evaluating the results using accuracy and recall;
accuracy = number of correctly extracted defects / total number of extracted defects;
recall = number of correctly extracted defects / total number of defects in the sample.
Preferably, the manual labeling is: labeling all defects in the dataset with their category and a rectangular box according to predefined defect types.
Preferably, the defects of the base set include impurities, white spots, bump deformation, scratches and missing prints; the defects of the small dataset include craters, misalignment and corrosion points.
Preferably, the specific back-propagated gradient is calculated according to the following formulas:
g_b2 = g_b - (g_b^T g_n / g_n^T g_n) g_n; g_n2 = g_n - (g_n^T g_b / g_b^T g_b) g_b; g_2 = (g_b2 + g_n2) / 2;
where g_b is the gradient of the base set data, g_n is the gradient of the small dataset data, and T denotes transposition.
A second purpose of the present invention is to provide a gradient optimization system for generalized few-sample target detection, comprising:
a data initialization module: obtaining an interior panel image and preprocessing it; specifically:
first performing manual labeling, then determining a base set and a small dataset according to the number of labels of each category, the base set having more labels than the small dataset; finally, dividing the base set and the small dataset into a training set, a validation set and a test set respectively;
a forward processing module: performing forward propagation on the interior panel image through the Faster RCNN network;
a reverse processing module: performing back propagation on the interior panel image through the Faster RCNN network; specifically:
during each training iteration, a mini-batch of base set data and a mini-batch of small dataset data are taken respectively, and their gradients g_b and g_n are calculated; when the angle between the two is acute, ordinary back propagation is carried out; otherwise, gradient g_b is orthogonally projected with respect to gradient g_n to obtain a new gradient g_b2, gradient g_n is orthogonally projected with respect to gradient g_b to obtain a new gradient g_n2, the new gradients g_b2 and g_n2 are averaged to obtain the final gradient g_2, and this new gradient is then back-propagated;
a loss calculation module: calculating a loss function, L = L_rpn + L_cls + L_box;
where L is the total loss, L_rpn is the region proposal network loss, L_box is the bounding-box regression loss, and L_cls is the classification loss;
a training module: training the Faster RCNN network model; specifically:
training with the training set, repeatedly evaluating model performance with the validation set, and adjusting the hyper-parameters of each training run;
an evaluation module: evaluating the results using accuracy and recall;
accuracy = number of correctly extracted defects / total number of extracted defects;
recall = number of correctly extracted defects / total number of defects in the sample.
Preferably, the manual labeling is: labeling all defects in the dataset with their category and a rectangular box according to predefined defect types.
Preferably, the defects of the base set include impurities, white spots, bump deformation, scratches and missing prints; the defects of the small dataset include craters, misalignment and corrosion points.
Preferably, the specific back-propagated gradient is calculated according to the following formulas:
g_b2 = g_b - (g_b^T g_n / g_n^T g_n) g_n; g_n2 = g_n - (g_n^T g_b / g_b^T g_b) g_b; g_2 = (g_b2 + g_n2) / 2;
where g_b is the gradient of the base set data, g_n is the gradient of the small dataset data, and T denotes transposition.
A third purpose of the present invention is to provide an information data processing terminal that implements the above gradient optimization method for generalized few-sample target detection.
A fourth purpose of the present invention is to provide a computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the above gradient optimization method for generalized few-sample target detection.
The advantages and positive effects of the invention are:
1. The new gradient design provided by this patent optimizes the direction of back propagation, so that catastrophic forgetting of the base set is less likely to occur when training on novel set data, and the model improves its detection rate for rare defects and common defects simultaneously.
2. On the basis of 1, the method of this patent does not affect the size or parameter count of the model, so it can be applied to various network structures and improves detection capability without increasing model size or prediction time; it is therefore highly practical and can meet the takt-time requirements of an industrial production line.
3. Because a more principled gradient optimization direction is adopted, this patent also accelerates the convergence speed of the model.
Drawings
FIG. 1 is a diagram of the Faster RCNN model;
FIG. 2 is a schematic diagram of gradient calculation when the included angle is an obtuse angle.
Detailed Description
For a further understanding of the invention, its nature and utility, reference should be made to the following examples, taken in conjunction with the accompanying drawings, in which:
the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. Based on the technical solutions in the present invention, all other embodiments obtained by a person of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.
Please refer to FIG. 1 and FIG. 2.
A gradient optimization method for generalized few-sample target detection comprises the following steps:
the method comprises the steps of 1, manually labeling according to an interior board diagram taken from actual operation, and then determining a basic set (a base set, the number of labels is more than 100) formed by a large number of labels and a small data set (a novel set, the number of labels is less than 100) formed by a small number of labels according to the number of labels of each category. And then, respectively dividing a training set, a verification set and a test set by the base set and the novel set. The base set defects comprise impurities, white spots, bump deformation, scratches and missing prints, and the novel set defects comprise pit packets, deviation and corrosion spots.
2. The network used in this patent is identical to Faster RCNN in the forward propagation process, but the idea and method of back propagation are redesigned. The Faster RCNN network comprises a backbone network, a Region Proposal Network (RPN), a head network (RoIHead), a classification layer and a regression layer; the structure is shown in FIG. 1.
3. The back propagation method of the invention is as follows: during each training iteration, a mini-batch of base set data and a mini-batch of novel set data are taken respectively, and their gradients g_b and g_n are calculated. When the angle between the two is acute, ordinary back propagation is carried out; otherwise, gradient g_b is orthogonally projected with respect to gradient g_n to obtain a new gradient g_b2, gradient g_n is orthogonally projected with respect to gradient g_b to obtain a new gradient g_n2, the new gradients g_b2 and g_n2 are averaged to obtain the final gradient g_2, and this new gradient is then back-propagated. The method of this patent optimizes the propagation direction of back propagation as far as possible, so that catastrophic forgetting of the base set is less likely to occur when training on novel data, while convergence is accelerated and prediction ability is improved. The specific back-propagated gradient is calculated according to the following formulas:
g_b2 = g_b - (g_b^T g_n / g_n^T g_n) g_n; g_n2 = g_n - (g_n^T g_b / g_b^T g_b) g_b; g_2 = (g_b2 + g_n2) / 2.
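The following PyTorch sketch illustrates the back-propagation step described above. It is not the patent's reference implementation: the helper names, the treatment of the acute-angle case as a plain sum of the two gradients, and the write-back into .grad are assumptions made for illustration.

```python
import torch

def flat_grad(loss, params):
    """Flatten d(loss)/d(params) into a single vector without touching .grad."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def combined_gradient(loss_base, loss_novel, params):
    g_b = flat_grad(loss_base, params)    # gradient on the base-set mini-batch
    g_n = flat_grad(loss_novel, params)   # gradient on the novel-set mini-batch
    if torch.dot(g_b, g_n) >= 0:          # acute angle: ordinary back propagation
        return g_b + g_n
    # obtuse angle: project each gradient onto the subspace orthogonal to the other
    g_b2 = g_b - (torch.dot(g_b, g_n) / torch.dot(g_n, g_n)) * g_n
    g_n2 = g_n - (torch.dot(g_n, g_b) / torch.dot(g_b, g_b)) * g_b
    return 0.5 * (g_b2 + g_n2)            # final gradient g_2

def write_back(params, g):
    """Copy the combined flat gradient into .grad so optimizer.step() can use it."""
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = g[offset:offset + n].view_as(p).clone()
        offset += n
```

In a training loop, one would typically compute loss_base and loss_novel from the two mini-batches, call write_back(params, combined_gradient(loss_base, loss_novel, params)), and then call optimizer.step().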
4. The loss function of the invention is the combined loss of the region proposal network (rpn), the classification output (cls) and the bounding-box regression output (box): L = L_rpn + L_cls + L_box.
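As an illustration of this combined loss, the sketch below uses torchvision's Faster R-CNN, which returns these loss terms as a dictionary in training mode; the dummy image, the single dummy box and the choice of 9 classes (8 defect types plus background) are assumptions for illustration, not values from the patent.

```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=9)
model.train()

images = [torch.rand(3, 600, 800)]                       # dummy interior-panel image
targets = [{"boxes": torch.tensor([[10., 20., 110., 140.]]),
            "labels": torch.tensor([1])}]                # one dummy defect annotation

loss_dict = model(images, targets)   # loss_objectness, loss_rpn_box_reg, loss_classifier, loss_box_reg
loss = sum(loss_dict.values())       # L = L_rpn + L_cls + L_box
```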
5. The performance of the model is repeatedly evaluated with the validation set, and the hyper-parameters of each training run are adjusted.
6. The evaluation system of this patent uses accuracy and recall:
accuracy = number of correctly extracted defects / total number of extracted defects;
recall = number of correctly extracted defects / total number of defects in the sample.
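A minimal sketch of this evaluation is given below; the IoU-based matching and the 0.5 IoU threshold for counting a detection as correct are assumptions for illustration, since the text above only defines the two ratios.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def accuracy_recall(detections, ground_truths, iou_thr=0.5):
    """detections / ground_truths: lists of (label, box); returns (accuracy, recall)."""
    matched, correct = set(), 0
    for label, box in detections:
        for j, (gt_label, gt_box) in enumerate(ground_truths):
            if j not in matched and gt_label == label and iou(box, gt_box) >= iou_thr:
                matched.add(j)
                correct += 1
                break
    accuracy = correct / max(len(detections), 1)      # correct extracted / total extracted
    recall = correct / max(len(ground_truths), 1)     # correct extracted / total defects in sample
    return accuracy, recall
```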
In practical application to automotive interior panels, manufacturers care more about recall, so recall is used as the primary index and accuracy as the secondary index.
7. The results show that the accuracy on the novel set is slightly improved while the accuracy on the base set remains almost unchanged.
Table 1. Results of the conventional technique
Table 2. Results of the proposed method
This patent reduces the number of epochs required for the model to converge from the original 300 to 240, greatly saving training time.
A gradient optimization system for generalized few-sample target detection comprises:
a data initialization module: obtaining an interior panel image and preprocessing it; specifically:
first performing manual labeling, then determining a base set and a small dataset according to the number of labels of each category, the base set having more labels than the small dataset; finally, dividing the base set and the small dataset into a training set, a validation set and a test set respectively. The manual labeling is: labeling all defects in the dataset with their category and a rectangular box according to predefined defect types. Preferably, the defects of the base set include impurities, white spots, bump deformation, scratches and missing prints; the defects of the small dataset include craters, misalignment and corrosion points.
A forward processing module: performing forward propagation on the interior panel image through the Faster RCNN network;
a reverse processing module: performing back propagation on the interior panel image through the Faster RCNN network; specifically:
during each training iteration, a mini-batch of base set data and a mini-batch of small dataset data are taken respectively, and their gradients g_b and g_n are calculated; when the angle between the two is acute, ordinary back propagation is carried out; otherwise, gradient g_b is orthogonally projected with respect to gradient g_n to obtain a new gradient g_b2, gradient g_n is orthogonally projected with respect to gradient g_b to obtain a new gradient g_n2, the new gradients g_b2 and g_n2 are averaged to obtain the final gradient g_2, and this new gradient is then back-propagated; the specific back-propagated gradient is calculated according to the following formulas:
g_b2 = g_b - (g_b^T g_n / g_n^T g_n) g_n; g_n2 = g_n - (g_n^T g_b / g_b^T g_b) g_b; g_2 = (g_b2 + g_n2) / 2;
where g_b is the gradient of the base set data, g_n is the gradient of the small dataset data, and T denotes transposition.
A loss calculation module: calculating a loss function, L = L_rpn + L_cls + L_box;
where L is the total loss, L_rpn is the region proposal network loss, L_box is the bounding-box regression loss, and L_cls is the classification loss;
a training module: training the Faster RCNN network model; specifically:
training with the training set, repeatedly evaluating model performance with the validation set, and adjusting the hyper-parameters of each training run;
an evaluation module: evaluating the results using accuracy and recall;
accuracy = number of correctly extracted defects / total number of extracted defects;
recall = number of correctly extracted defects / total number of defects in the sample.
An information data processing terminal implements the above gradient optimization method for generalized few-sample target detection.
A computer-readable storage medium comprises instructions which, when executed on a computer, cause the computer to perform the above gradient optimization method for generalized few-sample target detection.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented wholly or partially in software, it can take the form of a computer program product that includes one or more computer instructions. When the computer instructions are loaded or executed on a computer, the flows or functions according to embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and any simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention are within the scope of the technical solution of the present invention.
Claims (10)
1. A gradient optimization method for generalized few-sample target detection, comprising:
S1, obtaining an interior panel image and preprocessing it; specifically:
first performing manual labeling, then determining a base set and a small dataset according to the number of labels of each category, the base set having more labels than the small dataset; finally, dividing the base set and the small dataset into a training set, a validation set and a test set respectively;
S2, performing forward propagation on the interior panel image through a Faster RCNN network;
S3, performing back propagation on the interior panel image through the Faster RCNN network; specifically:
during each training iteration, a mini-batch of base set data and a mini-batch of small dataset data are taken respectively, and their gradients g_b and g_n are calculated; when the angle between the two is acute, ordinary back propagation is carried out; otherwise, gradient g_b is orthogonally projected with respect to gradient g_n to obtain a new gradient g_b2, gradient g_n is orthogonally projected with respect to gradient g_b to obtain a new gradient g_n2, the new gradients g_b2 and g_n2 are averaged to obtain the final gradient g_2, and this new gradient is then back-propagated;
S4, calculating a loss function, L = L_rpn + L_cls + L_box;
where L is the total loss, L_rpn is the region proposal network loss, L_box is the bounding-box regression loss, and L_cls is the classification loss;
S5, training the Faster RCNN network model; specifically:
training with the training set, repeatedly evaluating model performance with the validation set, and adjusting the hyper-parameters of each training run;
S6, evaluating the results using accuracy and recall;
accuracy = number of correctly extracted defects / total number of extracted defects;
recall = number of correctly extracted defects / total number of defects in the sample.
2. The method of claim 1, wherein the manual labeling is: labeling all defects in the dataset with their category and a rectangular box according to predefined defect types.
3. The method of claim 1, wherein the defects of the base set include impurities, white spots, bump deformation, scratches and missing prints; and the defects of the small dataset include craters, misalignment and corrosion points.
5. A gradient optimization system for generalized few-sample target detection, comprising:
a data initialization module: obtaining an interior panel image and preprocessing it; specifically:
first performing manual labeling, then determining a base set and a small dataset according to the number of labels of each category, the base set having more labels than the small dataset; finally, dividing the base set and the small dataset into a training set, a validation set and a test set respectively;
a forward processing module: performing forward propagation on the interior panel image through the Faster RCNN network;
a reverse processing module: performing back propagation on the interior panel image through the Faster RCNN network; specifically:
during each training iteration, a mini-batch of base set data and a mini-batch of small dataset data are taken respectively, and their gradients g_b and g_n are calculated; when the angle between the two is acute, ordinary back propagation is carried out; otherwise, gradient g_b is orthogonally projected with respect to gradient g_n to obtain a new gradient g_b2, gradient g_n is orthogonally projected with respect to gradient g_b to obtain a new gradient g_n2, the new gradients g_b2 and g_n2 are averaged to obtain the final gradient g_2, and this new gradient is then back-propagated;
a loss calculation module: calculating a loss function, L = L_rpn + L_cls + L_box;
where L is the total loss, L_rpn is the region proposal network loss, L_box is the bounding-box regression loss, and L_cls is the classification loss;
a training module: training the Faster RCNN network model; specifically:
training with the training set, repeatedly evaluating model performance with the validation set, and adjusting the hyper-parameters of each training run;
an evaluation module: evaluating the results using accuracy and recall;
accuracy = number of correctly extracted defects / total number of extracted defects;
recall = number of correctly extracted defects / total number of defects in the sample.
6. The gradient optimization system for generalized few-sample target detection according to claim 5, wherein the manual labeling is: labeling all defects in the dataset with their category and a rectangular box according to predefined defect types.
7. The gradient optimization system for generalized few-sample target detection according to claim 5, wherein the defects of the base set include impurities, white spots, bump deformation, scratches and missing prints; and the defects of the small dataset include craters, misalignment and corrosion points.
8. The gradient optimization system for generalized few-sample target detection according to claim 5, wherein the specific back-propagated gradient is calculated according to the following formulas:
g_b2 = g_b - (g_b^T g_n / g_n^T g_n) g_n; g_n2 = g_n - (g_n^T g_b / g_b^T g_b) g_b; g_2 = (g_b2 + g_n2) / 2;
where g_b is the gradient of the base set data, g_n is the gradient of the small dataset data, and T denotes transposition.
9. An information data processing terminal implementing the gradient optimization method for generalized few-sample target detection according to any one of claims 1-4.
10. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the gradient optimization method for generalized few-sample target detection according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210954945.1A CN115035288A (en) | 2022-08-10 | 2022-08-10 | Gradient optimizing method and system for generalized few-sample target detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210954945.1A CN115035288A (en) | 2022-08-10 | 2022-08-10 | Gradient optimizing method and system for generalized few-sample target detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115035288A true CN115035288A (en) | 2022-09-09 |
Family
ID=83131021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210954945.1A Withdrawn CN115035288A (en) | 2022-08-10 | 2022-08-10 | Gradient optimizing method and system for generalized few-sample target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115035288A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110770790A (en) * | 2017-06-14 | 2020-02-07 | 祖克斯有限公司 | Voxel-based ground plane estimation and object segmentation |
CN112132042A (en) * | 2020-09-24 | 2020-12-25 | 西安电子科技大学 | SAR image target detection method based on anti-domain adaptation |
CN114387473A (en) * | 2022-01-12 | 2022-04-22 | 南通大学 | Small sample image classification method based on base class sample characteristic synthesis |
CN114743257A (en) * | 2022-01-23 | 2022-07-12 | 中国电子科技集团公司第十研究所 | Method for detecting and identifying image target behaviors |
CN114548256A (en) * | 2022-02-18 | 2022-05-27 | 南通大学 | Small sample rare bird identification method based on comparative learning |
Non-Patent Citations (1)
Title |
---|
Karim Guirguis et al., "CFA: Constraint-based Finetuning Approach for Generalized Few-Shot Object Detection", arXiv:2204.05220 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20220909