CN115512174A - Anchor-frame-free target detection method applying secondary IoU loss function - Google Patents


Info

Publication number
CN115512174A
CN115512174A
Authority
CN
China
Prior art keywords
target detection
iou
loss
loss function
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110697205.XA
Other languages
Chinese (zh)
Inventor
薛向阳 (Xue Xiangyang)
梁龙飞 (Liang Longfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai New Helium Brain Intelligence Technology Co ltd
Fudan University
Original Assignee
Shanghai New Helium Brain Intelligence Technology Co ltd
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai New Helium Brain Intelligence Technology Co ltd, Fudan University filed Critical Shanghai New Helium Brain Intelligence Technology Co ltd
Priority to CN202110697205.XA priority Critical patent/CN115512174A/en
Publication of CN115512174A publication Critical patent/CN115512174A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an anchor-frame-free target detection method applying a secondary (quadratic) IoU loss function. The method preprocesses an image to be detected and inputs the preprocessed image into a pre-trained target detection model for inference to obtain the corresponding image detection result. The target detection model is obtained in advance through the following training procedure: acquire an image data set for training together with its supervision information; construct an initial target detection model and input the training image data set and supervision information into the model; compute the positioning loss with the quadratic IoU loss function and the classification loss with the Focal Loss function; differentiate the positioning loss and the classification loss respectively with respect to the model parameters of the initial target detection model, and then update the model parameters by back propagation; judge whether the updated model parameters reach the termination condition, and if so proceed to the next step, otherwise return to the second step of the training procedure; and finally store the updated model parameters and load them to obtain the target detection model.

Description

Anchor-frame-free target detection method applying secondary IoU loss function
Technical Field
The invention belongs to the field of computer vision, and relates to an anchor-frame-free target detection method applying a secondary IoU loss function.
Background
With the rapid development of the manufacturing and electronic-information industries, intelligent devices have advanced quickly, and all kinds of electronic devices have penetrated into people's daily lives. The widespread use of electronic devices produces a large amount of image data containing much useful information, and manually processing this data to screen out the useful information is extremely time-consuming and laborious. Humans therefore screen out the information of interest by designing algorithms that machines can understand and execute, such as face detection algorithms that detect faces in images. How to design efficient, high-quality algorithms for processing image-related data is a problem that the field of computer vision needs to study.
Target detection algorithms serve as the basis of many higher-level computer vision algorithms and have been successfully applied to many aspects of people's lives, such as face detection and recognition, text recognition and security. As of 2018, mainstream target detection algorithms were almost all based on anchor boxes, and deploying them in commercial applications requires writing a large amount of code for anchor-box operations such as anchor-box generation, matching, encoding and decoding, which increases the difficulty of deployment. Nowadays, anchor-frame-free target detection algorithms are becoming prevalent and are accelerating the practical deployment of target detection, so that more and more real-life scenarios become more convenient with the assistance of these algorithms.
In recent years, anchor-frame-free target detection has made great progress: researchers have invented various types of anchor-frame-free detection algorithms and created many new methods to improve detection performance. Because the target detection task must simultaneously classify and locate the categories of interest in an image, researchers have made many improvements to both the classification loss function and the positioning loss function to improve the detection effect of the model.
After years of research, the positioning loss functions used in target detection mainly include Smooth L1 Loss, IoU Loss, Linear IoU Loss, GIoU Loss, DIoU Loss and CIoU Loss. These loss functions are applied in different target detection algorithms, and the family of IoU losses is mainly applied to anchor-frame-free target detection, where it achieves excellent performance. In general, the positioning quality of target detection is expressed by the IoU, i.e., the intersection of the areas of two rectangular frames divided by the union of their areas. Because Smooth L1 Loss separately optimizes the offsets of the center-point coordinates x and y and the width and height w and h of the rectangular frame, the authors of IoU Loss argued that Smooth L1 Loss splits the relationship among the four variables of the rectangular frame; IoU Loss therefore measures the positioning loss with the negative logarithm of the IoU, taking the positioning quality directly as the optimization target. Linear IoU Loss uses a linear function of the IoU whose derivative is a constant, which solves the problem that the model cannot be optimized because the derivative of IoU Loss is undefined when the IoU is 0. GIoU Loss introduces the concept of the enclosing area, so that the model can be optimized according to how the prediction frame and the target frame intersect, improving the positioning effect of the model. DIoU Loss and CIoU Loss use the width and height of the prediction frame and the distance between the center points of the prediction frame and the target frame as regularization terms to accelerate model convergence and improve the model effect.
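As a concrete illustration of the IoU definition used throughout this description, the following Python sketch (provided for illustration only and not part of the patent disclosure; the helper name box_iou and the corner-format box representation are assumptions) computes the area intersection-over-union of two axis-aligned rectangular frames:

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two partially overlapping 10 x 10 boxes.
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

The value always lies in [0, 1], which is the property the loss functions discussed below rely on.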
Although GIoU Loss, DIoU Loss and CIoU Loss perform better on a few target detection models, they improve mAP by only 0.2%-0.3% on anchor-frame-free target detection algorithms such as FCOS. IoU Loss does not use the overall positioning quality, namely the IoU itself, to adjust the step size of parameter updates during training, so the model cannot fully learn target positioning in the training stage and the improvement in detection is not obvious.
Disclosure of Invention
In order to solve the problems, the invention provides an anchor frame-free target detection method applying a secondary IoU loss function, which adopts the following technical scheme:
the invention provides a method for detecting a target without an anchor frame by applying a secondary IoU loss function, which is characterized by comprising the following steps of: s1-1, preprocessing an image to be detected to obtain a preprocessed image; s1-2, inputting the preprocessed image into a pre-trained target detection model to carry out reasoning to obtain a corresponding image detection result, wherein the target detection model is obtained by training in advance through the following steps of S2-1, obtaining an image data set for training and corresponding supervision information, and the supervision information comprises positioning supervision information and category supervision information; s2-2, constructing an initial target detection model, inputting a training image data set and supervision information into the initial target detection model, wherein the initial target model is provided with a feature extraction module, a positioning module and a classification module, the feature extraction module is used for extracting a feature map with a fixed scale according to the training image data set to serve as an output feature map, the positioning module is used for processing the output feature map to obtain the position information of a prediction frame of each point on the output feature map, the classification module is used for processing the output feature map to obtain the confidence coefficient of each point on the output feature map corresponding to all categories, and S2-3, solving the positioning Loss according to the position information of the prediction frame and the positioning supervision information by using a quadratic IoU Loss function, and solving the classification Loss according to the confidence coefficient and the category supervision information by using a Focal Loss function; s2-4, respectively using the positioning loss and the classification loss to conduct derivation on model parameters of the initial target detection model, and then using back propagation to update the model parameters to obtain updated model parameters; s2-5, judging whether the updated model parameters reach termination conditions, if so, entering S2-6, otherwise, entering S2-2; and S2-6, storing the updated model parameters and loading the updated model parameters to the initial target detection model so as to form a target detection model.
The method for detecting the target without the anchor frame by applying the secondary IoU loss function, provided by the invention, also has the technical characteristics that the secondary IoU loss function is as follows:
Loss_loc = λ - α·IoU - β·IoU²
in the formula, λ, α and β are balance factors which are constants, β ≠ 0, and IoU is the area intersection-over-union ratio of the prediction frame and the supervision target frame, with value range [0, 1].
The method for detecting the target without the anchor frame by applying the secondary IoU loss function can also have the technical characteristics that when the positioning loss and the classification loss are used for derivation of model parameters of an initial target detection model respectively, a derivative function of the positioning loss contains an IoU term, and the form of the derivative function is as follows:
Loss′_loc = -α - 2β·IoU
in the formula, α and β are balance factors which are constants, β ≠ 0, and IoU is the area intersection-over-union ratio of the prediction frame and the supervision target frame, with value range [0, 1].
The anchor-frame-free target detection method applying the secondary IoU loss function provided by the invention can also have the technical characteristic that the feature extraction module comprises at least one of, or a combination of, ResNet, DenseNet and ResNet+FPN.
The anchor-frame-free target detection method applying the secondary IoU loss function provided by the invention can also have the technical characteristic that the termination condition is that the target detection model has been trained for 12 epochs over the entire training data set during the training process.
The anchor-frame-free target detection method applying the secondary IoU loss function provided by the invention can also have the technical characteristic that the Focal Loss function is:
FL(p_t) = -α_t·(1 - p_t)^γ·log(p_t)
in the formula, p_t is the class probability output by the classification module for the labeled category, i.e., the predicted confidence when the category label is the positive sample class and 1 minus the predicted confidence when the category label is the background class; α_t is a balance factor, a constant, that balances the positive and negative sample imbalance, and γ is a parameter factor, a constant, that balances the hard and easy sample imbalance.
Action and Effect of the invention
In the anchor-frame-free target detection method applying a secondary IoU loss function according to the invention, the initial target detection model has a feature extraction module, a positioning module and a classification module; the quadratic IoU loss function is used to solve the positioning loss from the position information of the prediction frame and the supervision information, and differentiating the positioning loss with respect to the model parameters of the initial target detection model yields a first-order (linear) function of the IoU, so that in the training stage the step size of the gradient update is determined by the IoU between the prediction frame and the target frame and small changes in position are captured. Further, because different gradients are obtained according to the IoU of the prediction frame and the target frame during parameter updating, targets of different sizes have different optimization effects on the parameters of the target detection model: small targets are sensitive to position errors, and slight changes in the position and size of a small-target prediction frame produce relatively large changes in its IoU with the target frame and therefore a larger influence on the parameter update, so the detection of small targets in the target detection model improves more markedly than that of medium and large targets. This adaptive adjustment of the gradient plays a crucial role in optimizing the target detection model during training, so that the target detection model learns more fully and its detection effect is enhanced. The method adjusts the step size of parameter updating according to the IoU of the prediction frame and the target frame and adds the IoU as a whole into the parameter-update process, which improves the positioning quality of the target detection model, further improves its detection effect, and thereby improves current target detection algorithms.
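A small numeric illustration (not taken from the patent; it reuses the hypothetical box_iou helper sketched in the Background section) of why small targets are more IoU-sensitive: the same 2-pixel shift of the prediction frame costs far more IoU for a 10-pixel box than for a 100-pixel box, so an IoU-dependent gradient reacts more strongly to small targets.

```python
# Same 2-pixel shift, very different IoU drop for small vs. large boxes.
print(box_iou((0, 0, 10, 10), (2, 2, 12, 12)))      # ≈ 0.47 for a 10 x 10 target
print(box_iou((0, 0, 100, 100), (2, 2, 102, 102)))  # ≈ 0.92 for a 100 x 100 target
```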
Drawings
FIG. 1 is a flowchart of the anchor-frame-free target detection method applying a secondary IoU loss function according to an embodiment of the present invention;
FIG. 2 is a flowchart of applying the secondary IoU loss function in a target detection task in an embodiment of the invention;
FIG. 3 is a graph of the quadratic IoU loss function when λ = 1.5, α = 1 and β = 0.5 in the embodiment of the present invention.
Detailed Description
In order to make the technical means, creation features, achievement purposes and effects of the present invention easy to understand, the following describes the method for detecting the target without the anchor frame by applying the secondary IoU loss function specifically in combination with the embodiments and the accompanying drawings.
< example >
FIG. 1 is a flowchart of the anchor-frame-free target detection method applying a quadratic IoU loss function according to an embodiment of the present invention.
As shown in fig. 1, in this embodiment the quadratic IoU loss function replaces the original positioning loss function, the target detection model is then trained, and its performance is then tested; the detection procedure specifically includes steps S1-1 to S1-2.
And S1-1, preprocessing the image to be detected to obtain a preprocessed image.
And S1-2, inputting the preprocessed image into the pre-trained target detection model for inference to obtain the corresponding image detection result.
In this embodiment, the target detection model needs to be obtained by training in advance through a training process, specifically:
fig. 2 is a flowchart of applying a secondary IoU loss function in a target detection task in an embodiment of the present invention.
As shown in fig. 2, the training of the target detection model in the present embodiment includes steps S2-1 to S2-6.
And S2-1, acquiring a training image data set and the corresponding supervision information, wherein the supervision information comprises positioning supervision information and category supervision information.
The training image data set and the corresponding supervision information need to be obtained by preprocessing the image data set and the labeling information. In this embodiment, the preprocessing specifically includes a marking process and a data enhancement process.
The marking treatment is as follows: the position and size of every target of interest in the image is represented by a rectangular box, and each rectangular box is annotated with the corresponding target category.
The data enhancement processing is as follows: the mean values [102.9801, 115.9465, 122.7717] are subtracted from the three RGB channels of the image, and the pixel values of the image are normalized to the range [-1, 1]; the image is then scaled so that its shortest side is 800 pixels, with the longest side capped at 1333 pixels.
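The following Python sketch illustrates this preprocessing step. It is an illustration only: the use of OpenCV for resizing and the scaling constant 128.0 (the patent only states that pixel values are normalized to [-1, 1]) are assumptions, while the per-channel means and the 800/1333-pixel size rule come from the text above.

```python
import numpy as np
import cv2  # assumption: OpenCV is used for resizing

RGB_MEAN = np.array([102.9801, 115.9465, 122.7717], dtype=np.float32)

def preprocess(image_rgb: np.ndarray, min_size: int = 800, max_size: int = 1333) -> np.ndarray:
    """Mean-subtract, roughly normalize to [-1, 1], and resize an H x W x 3 RGB image."""
    img = image_rgb.astype(np.float32) - RGB_MEAN   # subtract per-channel mean
    img = img / 128.0                               # approximate normalization to [-1, 1] (assumed constant)
    h, w = img.shape[:2]
    scale = min_size / min(h, w)                    # shortest side -> 800 px
    if max(h, w) * scale > max_size:                # longest side capped at 1333 px
        scale = max_size / max(h, w)
    new_size = (int(round(w * scale)), int(round(h * scale)))  # cv2 expects (width, height)
    return cv2.resize(img, new_size)
```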
And S2-2, constructing an initial target detection model, and inputting the training image data set and the supervision information into the initial target detection model, wherein the initial target detection model is provided with a feature extraction module, a positioning module and a classification module.
The feature extraction module is used for extracting a feature map with a fixed scale from the training image data set as the output feature map.
The positioning module is used for processing the output feature map to obtain the position information of the prediction frame of each point on the output feature map.
The classification module is used for processing the output feature map to obtain the confidence of every category at each point on the output feature map.
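A hypothetical PyTorch sketch of these three modules is given below for illustration. The ResNet-50 trunk, the 256-channel reduction and the single-convolution heads are assumptions; only the interface (a fixed-scale output feature map, four box-regression values per point and one confidence per category per point) follows the description above.

```python
import torch
import torch.nn as nn
import torchvision

class AnchorFreeDetector(nn.Module):
    def __init__(self, num_classes: int, channels: int = 256):
        super().__init__()
        # Feature extraction module (assumption: ResNet-50 trunk without FPN).
        backbone = torchvision.models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
        self.reduce = nn.Conv2d(2048, channels, kernel_size=1)
        # Positioning module: 4 regression values per point on the feature map.
        self.loc_head = nn.Conv2d(channels, 4, kernel_size=3, padding=1)
        # Classification module: one confidence per category per point.
        self.cls_head = nn.Conv2d(channels, num_classes, kernel_size=3, padding=1)

    def forward(self, images: torch.Tensor):
        feat = self.reduce(self.features(images))   # fixed-scale output feature map
        boxes = self.loc_head(feat)                 # (N, 4, H, W) prediction-frame regression
        scores = self.cls_head(feat).sigmoid()      # (N, C, H, W) per-category confidences
        return boxes, scores
```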
And S2-3, solving the positioning loss from the position information of the prediction frame and the positioning supervision information with the secondary IoU loss function, and solving the classification loss from the confidences and the category supervision information with the Focal Loss function.
The optimizer used in training is SGD, and the classification loss function is Focal Loss, of the form:
FL(p_t) = -α_t·(1 - p_t)^γ·log(p_t)
in the formula, p_t is the class probability output by the classification module for the labeled category, i.e., the predicted confidence when the category label is the positive sample class and 1 minus the predicted confidence when the category label is the background class; α_t is a balance factor, a constant, that balances the positive and negative sample imbalance, and γ is a parameter factor, a constant, that balances the hard and easy sample imbalance.
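A minimal PyTorch sketch of this classification loss follows, for illustration only. The default values α_t = 0.25 and γ = 2.0 are the ones commonly used for Focal Loss in the literature, not values stated in the patent.

```python
import torch

def focal_loss(p: torch.Tensor, target: torch.Tensor,
               alpha_t: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """p: predicted confidences in (0, 1); target: 1 for the positive class, 0 for background."""
    p_t = torch.where(target == 1, p, 1.0 - p)                 # confidence of the labeled category
    alpha = torch.where(target == 1,
                        torch.full_like(p, alpha_t),
                        torch.full_like(p, 1.0 - alpha_t))     # positive/negative balancing
    loss = -alpha * (1.0 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))
    return loss.mean()
```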
The localization loss function is a quadratic IoU loss function of the form:
Loss_loc = λ - α·IoU - β·IoU²
in the formula, λ, α and β are balance factors which are constants, β ≠ 0, and IoU is the area intersection-over-union ratio of the prediction frame and the supervision target frame, with value range [0, 1].
Fig. 3 is a graph of the quadratic IoU loss function when λ = 1.5, α = 1 and β = 0.5 in the embodiment of the present invention.
As shown in fig. 3, the quadratic IoU loss function here is plotted with λ = 1.5, α = 1 and β = 0.5.
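For illustration, a direct PyTorch sketch of this positioning loss with the balance factors used for the curve in fig. 3 (λ = 1.5, α = 1, β = 0.5) is given below; the reduction by mean over the batch of points is an assumption.

```python
import torch

def quadratic_iou_loss(iou: torch.Tensor,
                       lam: float = 1.5, alpha: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """Loss_loc = λ - α·IoU - β·IoU², where iou holds per-prediction IoU values in [0, 1]."""
    # The derivative with respect to IoU is -α - 2β·IoU, so the gradient scale
    # depends on the current IoU between prediction frame and target frame.
    return (lam - alpha * iou - beta * iou ** 2).mean()
```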
In addition, the hyper-parameters α and β of the quadratic IoU loss function are searched with a grid search: different combinations of α and β are used as parameters of the quadratic IoU loss function and the initial target detection model is trained with each. Target detection models with different effects are obtained after multiple trainings, and the α and β of the best-performing target detection model are taken as the final result of the grid search.
And S2-4, differentiating the positioning loss and the classification loss respectively with respect to the model parameters of the initial target detection model, and then updating the model parameters by back propagation to obtain updated model parameters.
When the positioning loss and the classification loss are respectively differentiated with respect to the model parameters of the initial target detection model, the derivative function of the positioning loss (taken with respect to the IoU) contains an IoU term, of the form:
Loss′_loc = -α - 2β·IoU
in the formula, α and β are balance factors which are constants, β ≠ 0, and IoU is the area intersection-over-union ratio of the prediction frame and the supervision target frame, with value range [0, 1].
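A quick autograd check (an illustration, not patent code) confirms this derivative for the fig. 3 parameters λ = 1.5, α = 1, β = 0.5:

```python
import torch

lam, alpha, beta = 1.5, 1.0, 0.5
iou = torch.tensor(0.6, requires_grad=True)
loss = lam - alpha * iou - beta * iou ** 2   # Loss_loc = λ - α·IoU - β·IoU²
loss.backward()
print(iou.grad.item())           # -1.6, gradient computed by autograd
print(-alpha - 2 * beta * 0.6)   # -1.6, matches Loss'_loc = -α - 2β·IoU
```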
And S2-5, judging whether the updated model parameters reach a termination condition, if so, entering the step S2-6, otherwise, entering the step S2-2.
Wherein, the termination condition is that the target detection model has been trained for 12 epochs over the entire training data set during the training process.
And S2-6, storing the updated model parameters and loading the updated model parameters to the initial target detection model so as to form a target detection model.
The training of the target detection model is completed through the above process, so that target detection based on the target detection model can be realized. The trained target detection model is then tested on a pre-prepared test set and the quality of the output image detection results is evaluated.
Compared with other IoU regression loss functions, the quadratic IoU loss function used in this embodiment captures position sensitivity and improves the positioning effect. The experimental results are shown in Table 1, with mAP as the evaluation index, reported separately for objects of different scales. Table 1 gives the experimental results of a target detection model using ResNet50 as the backbone network: the improvement is most significant for small-scale objects, which are sensitive to position, moderate for medium-scale objects, and smallest for large-scale objects, because large-scale objects are not sensitive to small changes in position.
TABLE 1 Comparison of model effects (the bolded loss function is the one proposed in this embodiment)
(Table 1 appears as an image in the original document.)
Action and Effect of the Embodiments
According to the anchor-frame-free target detection method applying a secondary IoU loss function provided by this embodiment, the initial target detection model has a feature extraction module, a positioning module and a classification module; the quadratic IoU loss function is used to solve the positioning loss from the position information of the prediction frame and the supervision information, and differentiating the positioning loss with respect to the model parameters of the initial target detection model yields a first-order (linear) function of the IoU, so that in the training stage the step size of the gradient update is determined by the IoU between the prediction frame and the target frame and small changes in position are captured. Further, because different gradients are obtained according to the IoU of the prediction frame and the target frame during parameter updating, targets of different sizes have different optimization effects on the target detection model parameters: small targets are sensitive to position errors, and slight changes in the position and size of a small-target prediction frame produce relatively large changes in its IoU with the target frame and therefore a larger influence on the parameter update, so the improvement of the detection effect for small targets is more obvious than for medium and large targets. This adaptive adjustment of the gradient plays a crucial role in optimizing the target detection model during training, so that the target detection model learns more fully and its detection effect is enhanced. The method adjusts the step size of parameter updating according to the IoU of the prediction frame and the target frame and adds the IoU as a whole into the parameter-update process, which improves the positioning quality of the target detection model, further improves its detection effect, and thereby improves current target detection algorithms.
In this embodiment, the quadratic IoU loss function provided by the invention replaces the positioning loss function in the original target detection framework, so that the position information of the prediction frame is incorporated into the model parameter update and the detection effect of the target detection model is improved.
In this embodiment, the quadratic IoU loss function provided by the invention is used only in the training stage. After training is completed, the image to be detected is preprocessed to obtain a preprocessed image, and the preprocessed image is input into the pre-trained target detection model for inference to obtain the corresponding image detection result, so that the detection effect of the target detection model on the preprocessed image is improved.
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the description of the above-described embodiments.

Claims (6)

1. A method for detecting a target without an anchor frame by applying a secondary IoU loss function is characterized by comprising the following steps:
S1-1, preprocessing an image to be detected to obtain a preprocessed image;
S1-2, inputting the preprocessed image into a pre-trained target detection model for inference to obtain a corresponding image detection result,
the target detection model is obtained by training in advance through the following steps:
S2-1, acquiring a training image data set and corresponding supervision information, wherein the supervision information comprises positioning supervision information and category supervision information;
step S2-2, an initial target detection model is constructed, the training image data set and the supervision information are input into the initial target detection model, and the initial target detection model is provided with a feature extraction module, a positioning module and a classification module,
the feature extraction module is used for extracting a feature map with a fixed scale according to the training image data set as an output feature map,
the positioning module is used for processing the output feature map to obtain the position information of the prediction frame of each point on the output feature map,
the classification module is used for processing the output feature map to obtain the confidence of every category at each point on the output feature map,
S2-3, solving a positioning loss according to the position information of the prediction frame and the positioning supervision information by using a quadratic IoU loss function, and solving a classification loss according to the confidence and the category supervision information by using a Focal Loss function;
S2-4, differentiating the positioning loss and the classification loss respectively with respect to the model parameters of the initial target detection model, and then updating the model parameters by back propagation to obtain updated model parameters;
S2-5, judging whether the updated model parameters reach the termination condition, and if so, proceeding to step S2-6, otherwise returning to step S2-2;
and S2-6, storing the updated model parameters and loading the updated model parameters to the initial target detection model so as to form the target detection model.
2. The method for detecting targets without anchor frames by applying a quadratic IoU loss function according to claim 1, wherein:
wherein the secondary IoU loss function is:
Loss_loc = λ - α·IoU - β·IoU²
in the formula, λ, α and β are balance factors which are constants, β ≠ 0, and IoU is the area intersection-over-union ratio of the prediction frame and the supervision target frame, with value range [0, 1].
3. The method for detecting targets without anchor frames by applying a quadratic IoU loss function according to claim 2, wherein:
when the positioning loss and the classification loss are used for deriving the model parameters of the initial target detection model respectively, the derivative function of the positioning loss contains an IoU term, and the form of the derivative function is as follows:
Loss′_loc = -α - 2β·IoU
in the formula, α and β are balance factors which are constants, β ≠ 0, and IoU is the area intersection-over-union ratio of the prediction frame and the supervision target frame, with value range [0, 1].
4. The method for detecting targets without anchor frames by applying a quadratic IoU loss function according to claim 1, wherein:
wherein the feature extraction module comprises at least one of, or a combination of, ResNet, DenseNet and ResNet+FPN.
5. The method for detecting targets without anchor frames by applying a quadratic IoU loss function according to claim 1, wherein:
wherein the termination condition is that the target detection model has been trained for 12 epochs over the entire training data set during the training process.
6. The method for detecting targets without anchor frames by applying a quadratic IoU loss function according to claim 1, wherein:
wherein the Focal Loss function is:
FL(p_t) = -α_t·(1 - p_t)^γ·log(p_t)
in the formula, p_t is the class probability output by the classification module for the labeled category, i.e., the predicted confidence when the category label is the positive sample class and 1 minus the predicted confidence when the category label is the background class; α_t is a balance factor, a constant, that balances the positive and negative sample imbalance, and γ is a parameter factor, a constant, that balances the hard and easy sample imbalance.
CN202110697205.XA 2021-06-23 2021-06-23 Anchor-frame-free target detection method applying secondary IoU loss function Pending CN115512174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110697205.XA CN115512174A (en) 2021-06-23 2021-06-23 Anchor-frame-free target detection method applying secondary IoU loss function

Publications (1)

Publication Number Publication Date
CN115512174A true CN115512174A (en) 2022-12-23

Family

ID=84500549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110697205.XA Pending CN115512174A (en) 2021-06-23 2021-06-23 Anchor-frame-free target detection method applying secondary IoU loss function

Country Status (1)

Country Link
CN (1) CN115512174A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385278A (en) * 2022-12-30 2023-07-04 南京航空航天大学 Low-light image visual characteristic self-supervision representation method and system
CN116385278B (en) * 2022-12-30 2023-10-10 南京航空航天大学 Low-light image visual characteristic self-supervision representation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination