WO2023165024A1 - Training method for a binarized object detection neural network structure and model - Google Patents

Training method for a binarized object detection neural network structure and model

Info

Publication number
WO2023165024A1
WO2023165024A1 (application PCT/CN2022/093066; priority CN 2022093066 W)
Authority
WO
WIPO (PCT)
Prior art keywords
network
target detection
neural network
classification
decoupling
Prior art date
Application number
PCT/CN2022/093066
Other languages
English (en)
French (fr)
Inventor
王东
普菡
李浥东
Original Assignee
北京交通大学
Priority date
Filing date
Publication date
Application filed by 北京交通大学
Publication of WO2023165024A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods; G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06F 18/00 Pattern recognition; G06F 18/20 Analysing; G06F 18/24 Classification techniques; G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Definitions

  • The invention relates to the technical field of neural networks, and in particular to a binarized object detection neural network structure and a training method for its model.
  • Binary quantization of an object detection neural network compresses a 32-bit floating-point network into a 1-bit fixed-point format to reduce storage and computation costs. Binarizing both the weights and the activations reduces storage cost by a factor of 32 and computation cost by a factor of 64. These properties make binarized object detection neural networks particularly suitable for deployment on low-cost, resource-limited edge computing devices.
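The storage and compute arithmetic above can be illustrated with a minimal sketch. These are standard binary-network techniques (sign quantization with a scale factor, and the XNOR-popcount identity for ±1 dot products), not code from the patent; the function names are illustrative:

```python
def binarize(weights):
    # Keep only the sign of each weight, plus a per-tensor scale factor
    # (the mean absolute value), a common choice in binary networks.
    alpha = sum(abs(w) for w in weights) / len(weights)
    return alpha, [1 if w >= 0 else -1 for w in weights]

def binary_dot(a_bits, b_bits):
    # For +/-1 vectors, dot(a, b) = 2 * (number of equal positions) - n,
    # which hardware implements with XNOR and popcount instead of multiplies.
    n = len(a_bits)
    same = sum(1 for x, y in zip(a_bits, b_bits) if x == y)
    return 2 * same - n
```

Storing a 32-bit float weight as a single bit gives the 32x storage reduction cited above; replacing multiply-accumulates with XNOR-popcount operations accounts for the computational saving.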
  • Binarized object detection neural networks in the prior art include BiDet, Auto-BiDet and LWS-Det.
  • BiDet removes redundant information from the binarized detection network via the information bottleneck principle: it limits the amount of information in the high-level feature maps while maximizing the mutual information between those feature maps and the detection heads.
  • Building on BiDet, Auto-BiDet adds a function that controls the bottleneck's compression level according to the characteristics of the input data: low-complexity images receive a lower compression level and high-complexity images a higher one, achieving dynamic compression of the information in the high-level feature maps.
  • LWS-Det (Layer-wise Searching for 1-bit Detectors) introduces angular and amplitude loss functions to increase the capacity of the binarized detection network.
  • In its 1-bit quantized layers, the method minimizes the angular error via differentiable binary search within a student-teacher framework, and learns scale factors by minimizing the amplitude loss in the same framework, thereby increasing the network capacity and improving the performance of the binarized detector.
  • Although binarized object detection neural networks effectively reduce storage and computation costs, the limited information capacity of binary networks leaves existing designs with a severe imbalance between the feature information extracted for target localization and for target classification (i.e., inconsistent performance across the two tasks). Compared with full-precision detectors, binarized detectors face a significant drop in detection accuracy when deployed in real-world scenarios.
  • Taking the benchmark detector SSD300-VGG16 as an example, the typical binarized detector BiDet built on it reaches only 66.0% mAP on the PASCAL VOC dataset, an 8.3-point drop from the 74.3% of the corresponding full-precision network.
  • Another further-optimized binarized detector, AutoBiDet, achieves 14.3% mAP@[.5,.95] on the COCO dataset, an 8.9-point drop from the corresponding full-precision network's 23.2%.
  • Embodiments of the present invention provide a binarized object detection neural network structure and a model training method that achieve better consistency between the network's classification and localization tasks.
  • To this end, the present invention adopts the following technical solution.
  • A training method for a binarized object detection neural network structure and model, comprising:
  • constructing a binarized object detection neural network comprising a backbone network, a shared feature pool network, a classification decoupling network and a localization decoupling network;
  • performing synchronous optimization of the classification and localization tasks on the binarized object detection neural network.
  • Constructing the binarized object detection neural network, comprising the backbone network, shared feature pool network, classification decoupling network and localization decoupling network, includes:
  • connecting the feature decoupling blocks to the shared feature pool network, where each feature decoupling block learns task-specific features by applying a decoupling code to a specific layer of the shared feature pool network.
  • Training the binarized object detection neural network for detection-task consistency based on multi-dimensional joint matching includes:
  • designing an improved anchor sampling strategy that jointly considers the anchor's positional and semantic (multi-modal) information, and corrects the intersection-over-union IOU_Anchor between the ground-truth label and the anchor using the detection box's confidence score Conf_score, yielding the corrected value IOU_Amendment, as shown in formula (5);
  • σ and Th_r take constant values, where σ is a hyperparameter that adjusts the strength of the IoU correction and Th_r is the confidence-score screening threshold;
  • adopting a new relevance-constraint loss function L_relevance, which increases the linear correlation between the detection box's confidence score Conf_score and the corrected IoU IOU_Amendment with its corresponding ground-truth label, reducing the gap between Conf_score and the label and increasing the consistency of the classification and localization evaluation metrics, as shown in formula (6): L_relevance = |Conf_score - IOU_Amendment|.
  • Synchronous optimization of the classification and localization tasks on the binarized object detection neural network includes:
  • designing an objective loss function with dynamically learnable weights: compute the relative change values a_cls(t-1) and a_loc(t-1) of the classification and localization losses, as in formulas (7) and (8), where t denotes the training step; add a distillation temperature T to the softmax layer, as in formulas (9) and (10), to obtain the dynamic weights λ_cls(t) and λ_loc(t) of the two loss terms; and finally obtain the objective L_loss(t), formula (11), which synchronously optimizes the classification and localization tasks through dynamically learned weights.
  • The embodiments of the present invention increase the network information capacity of the binarized detector by adding the classification and localization decoupling networks, avoiding imbalanced extraction of classification and localization feature information.
  • FIG. 1 is a schematic diagram of the novel structure of a binarized object detection neural network provided by an embodiment of the present invention.
  • FIG. 2 is a flowchart of training the binarized object detection neural network structure and model provided by an embodiment of the present invention.
  • The embodiment of the present invention provides a novel binarized object detection neural network deployable in real-world scenarios, trains it for detection-task consistency via multi-dimensional joint matching, and synchronously optimizes its classification and localization tasks, so that the final detection boxes perform well on both tasks. This greatly reduces the time and computation overhead of the binarized detector, which can then be deployed on edge devices with limited hardware resources, such as embedded and mobile platforms.
  • The multi-level structure of the network automatically learns task-shared and task-specific features in an end-to-end manner, eliminating the inconsistency of features between classification and localization and effectively increasing the representational information capacity of the binarized detector.
  • To further resolve the task inconsistency of anchor sampling, the binarized detector undergoes multi-dimensional joint matching task-consistency training.
  • This introduces an improved anchor sampling strategy and a new relevance-constraint loss function to optimize for, and retain, high-quality detection boxes (both correctly classified and correctly localized).
  • Finally, the binarized detector is synchronously optimized through the objective loss function with dynamically learnable weights, yielding detection boxes with better classification and localization performance.
  • Step S10: construct the binarized object detection neural network.
  • A novel network structure of the binarized object detection neural network provided by an embodiment of the present invention is shown in FIG. 1. It comprises a backbone network and a shared feature pool network; branching the shared feature pool network yields several feature decoupling branch networks, each comprising a series of feature decoupling blocks and a corresponding task detection head.
  • The embodiment of FIG. 1 uses two feature decoupling branch networks: one learns classification-task features to form the classification decoupling network, and the other learns localization-task features to form the localization decoupling network. The decoupling blocks in each branch are connected to the shared feature pool network, and each block learns task-specific features by applying a decoupling code to a specific layer of the shared feature pool network.
  • The benchmark binarized detection network first extracts shared multi-scale detection features.
  • Specifically, the VGG16 structure is selected as the backbone (other structures such as ResNet or MobileNet may also be used), and the shared feature pool network follows the backbone as a global feature pool. If the global pool's features were used directly for classification or localization, the learned features would produce information mismatch or conflict across the tasks. The proposed binarized detector therefore uses several independent feature decoupling branch networks for classification-task and localization-task feature learning; in the embodiment of FIG. 1, two branches are used, forming the classification decoupling network and the localization decoupling network.
  • The series of feature decoupling blocks in each branch is connected to the shared feature pool network.
  • Each feature decoupling block learns task-specific features by applying a decoupling code to a specific layer of the shared feature pool network; the decoupling code is a feature selector learned automatically, end to end, throughout the training of the detection network.
  • During training, the learning of the decoupling codes does not directly affect the learning of the backbone and shared feature pool network (there is no direct loss-function constraint between them); the shared-layer features and the feature decoupling networks can therefore be learned jointly, maximizing the generalization of the shared features across classification and localization, while the decoupling codes maximize the network's overall classification or localization performance.
  • The shared feature output by the j-th convolutional layer of the shared feature pool network is denoted here F^(j) (the original formulas are reproduced only as images). For classification or localization, features are filtered by applying a decoupling code to F^(j); every feature channel has its own code. The code of the j-th block of the classification decoupling network is denoted m_cls^(j), and that of the localization decoupling network m_loc^(j). The first decoupling block of a branch takes only the shared-layer features as input; each subsequent block takes the concatenation of the current shared features and the previous layer's task-specific features, which is passed to the current layer through a 3×3 convolution f^(j), then through two 1×1 convolutions and a sigmoid function, producing the decoupling code m_cls^(j) or m_loc^(j).
  • The decoupling code is multiplied pixel-by-pixel with the corresponding shared-layer features of the shared feature pool network to obtain the task-aware features of that layer.
  • When a decoupling code's value is 1, the features of the decoupling branch equal the shared-layer features, as shown in formulas (3) and (4), where · denotes the pixel-wise multiplication operation.
  • The above convolutional layers may also be 1×1 convolutions or convolutions of other sizes without affecting the effect of the method of the present invention.
  • Step S20: perform multi-dimensional joint matching training on the binarized object detection neural network.
  • The present invention proposes a detection-task consistency training method based on multi-dimensional joint matching.
  • The method's main task is to perform multi-dimensional joint matching learning across multiple processing stages of the classification and localization tasks, so that the final detection boxes perform well on both.
  • First, a predefined anchor sampling strategy is designed. Aimed at optimizing anchor sampling, it jointly considers multi-modal information such as the anchor's positional and semantic information: it accounts not only for the IoU IOU_Anchor between the anchor and the GT (Ground Truth, the ground-truth label or true detection box), but also for the richness of the semantic information contained in the anchor itself.
  • In formula (5), σ is a hyperparameter that adjusts the strength of the IoU correction, and Th_r is the confidence-score screening threshold. Their specific values can be set per training dataset; for example, σ = 2 and Th_r = 0.1 give good detection results on PASCAL VOC.
  • The detection box's confidence score Conf_score corrects the IoU IOU_Anchor between GT and anchor, yielding the corrected IOU_Amendment.
  • The purpose is to re-label as positive samples some boxes originally defined as negative but semantically rich, and to re-label as negative samples boxes originally defined as positive but semantically poor; this effectively reduces the misleading effect of interference samples on training and improves the accuracy of the results.
  • Here Conf_score is the detection box's confidence score; σ is a hyperparameter controlling the degree of correction applied through Conf_score; IOU_Anchor is the IoU between anchor and GT; and Th_r is the threshold for judging whether the box's confidence is too low, i.e., the confidence screening threshold.
  • Then, to address detection boxes being wrongly suppressed (missed detections) during NMS post-processing: the NMS algorithm first sorts boxes by confidence score, so high-confidence boxes are easily retained, while some boxes with high IoU but second-highest confidence are easily suppressed by mistake. The algorithm therefore adopts the new relevance-constraint loss function L_relevance, which increases the linear correlation between a box's confidence score Conf_score and its corrected IOU_Amendment with the corresponding GT, reducing the gap between the two as much as possible and increasing the consistency of the classification and localization evaluation metrics.
  • In formula (6), Conf_score denotes the detection box's confidence score, and IOU_detected-box denotes the IoU between a detection box and its corresponding GT.
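For reference, the intersection-over-union used throughout these definitions can be computed as follows. This is a standard standalone sketch for boxes in corner format, not code from the patent:

```python
def iou(box_a, box_b):
    # Boxes in (x1, y1, x2, y2) corner format.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```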
  • Step S30: perform synchronous optimization of the classification and localization tasks on the binarized object detection neural network.
  • The weighting of the loss objective during training no longer uses fixed values but is dynamically adjusted according to the learning progress and difficulty of the classification and localization tasks.
  • A dynamic weight learning strategy is proposed whose purpose is to make the classification and localization tasks learn at similar speeds.
  • The procedure first computes the relative change values a_cls(t-1) and a_loc(t-1) of the classification and localization loss objectives, as in formulas (7) and (8), where t denotes the training step.
  • A distillation temperature T is added to the softmax layer to improve the distillation, as in formulas (9) and (10), giving the dynamic weights λ_cls(t) and λ_loc(t) of the classification and localization loss functions.
  • Finally, the objective loss function L_loss(t) with dynamically learnable weights for detection is obtained.
  • Through dynamically learned weights, this function synchronously optimizes the detection classification and localization tasks, achieves the task-consistency goal for the detection results, and improves the stability of network training.
  • Owing to its high model compression rate and extremely low computational complexity, the binarized object detection neural network of the present invention can be applied to devices with limited computing resources, such as embedded devices and mobile terminals.
  • The classification-localization task consistency realized by the present invention greatly improves detection accuracy and ensures detection boxes with both high classification confidence and high localization accuracy; the network can replace full-precision detection algorithms and meets the need for high-accuracy, low-cost detection in practical application scenarios.
  • The embodiment of the present invention constructs a binarized object detection neural network by effectively combining a benchmark binarized detector with network feature decoupling branches, addressing the network's insufficient representational capacity and the imbalanced feature extraction between the classification and localization tasks; it applies multi-dimensional joint matching task-consistency training to resolve the task inconsistency of anchor sampling; and it synchronously optimizes the classification and localization tasks, finally achieving superior task-consistent detection results.
  • The embodiments in this specification are described progressively; identical or similar parts may be cross-referenced, and each embodiment focuses on its differences from the others.
  • For the device and system embodiments, the description is relatively brief; for related details, refer to the description of the method embodiments.
  • The device and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution, which those skilled in the art can understand and implement without creative effort.

Abstract

A training method for a binarized object detection neural network, the method comprising: constructing a binarized object detection neural network that includes a backbone network, a shared feature pool network, a classification decoupling network and a localization decoupling network; training the network for detection-task consistency based on multi-dimensional joint matching; and synchronously optimizing the network's classification and localization tasks. The method resolves the task inconsistency of anchor sampling in binarized detectors through an improved anchor sampling strategy and a novel relevance-constraint loss function, and synchronously optimizes the classification and localization tasks through an objective loss function with dynamically learnable weights, improving the quality of the detection boxes, the detection accuracy of the binarized detector, and the robustness of the algorithm.

Description

Training method for a binarized object detection neural network structure and model

Technical field

The present invention relates to the technical field of neural networks, and in particular to a binarized object detection neural network structure and a training method for its model.

Background

Binary quantization of an object detection neural network compresses a 32-bit floating-point network into a 1-bit fixed-point format to reduce storage and computation costs. Binarizing the weights and activations reduces storage cost by a factor of 32 and computation cost by a factor of 64, making binarized object detection neural networks particularly suitable for deployment on low-cost, resource-limited edge computing devices.

Existing binarized object detection neural networks include BiDet, Auto-BiDet and LWS-Det. BiDet removes redundant information via the information bottleneck principle: it limits the amount of information in the high-level feature maps while maximizing the mutual information between the feature maps and the detection heads. Building on BiDet, Auto-BiDet controls the bottleneck's compression level according to the input data: low-complexity images receive a lower compression level and high-complexity images a higher one, achieving dynamic compression of the information in high-level feature maps. LWS-Det (Layer-wise Searching for 1-bit Detectors) introduces angular and amplitude loss functions to increase network capacity: in its 1-bit quantized layers it minimizes the angular error via differentiable binary search within a student-teacher framework, and learns scale factors by minimizing the amplitude loss in the same framework, thereby increasing the capacity and performance of the binarized detector.

Although binarized detectors effectively reduce storage and computation costs, the limited information capacity of binary networks leaves existing designs with a severe imbalance between the feature information extracted for target localization and for target classification (i.e., inconsistent performance across the two tasks). Compared with full-precision detectors, binarized detectors face large accuracy drops in real deployments. Taking the benchmark detector SSD300-VGG16 as an example, the typical binarized detector BiDet built on it reaches only 66.0% mAP on the PASCAL VOC dataset, 8.3 points below the 74.3% of the corresponding full-precision network. A further optimized binarized detector, AutoBiDet, achieves 14.3% mAP@[.5,.95] on the COCO dataset, 8.9 points below the corresponding full-precision network's 23.2%.
Summary of the invention

Embodiments of the present invention provide a binarized object detection neural network structure and a model training method, to achieve better consistency between the classification and localization tasks of the binarized detector.

To achieve the above objective, the present invention adopts the following technical solution.

A training method for a binarized object detection neural network structure and model, comprising:

constructing a binarized object detection neural network comprising a backbone network, a shared feature pool network, a classification decoupling network and a localization decoupling network;

training the binarized object detection neural network for detection-task consistency based on multi-dimensional joint matching;

synchronously optimizing the classification and localization tasks of the binarized object detection neural network.
Preferably, constructing the binarized object detection neural network, which comprises a backbone network, a shared feature pool network, a classification decoupling network and a localization decoupling network, includes:

building a binarized object detection neural network comprising the backbone network and the shared feature pool network, and branching the shared feature pool network into two groups of feature decoupling branch networks, each comprising a series of feature decoupling blocks;

using one branch to learn classification-task features, yielding the classification decoupling network, and the other branch to learn localization-task features, yielding the localization decoupling network; the feature decoupling blocks in each branch are connected to the shared feature pool network, and each block learns task-specific features by applying a decoupling code to a specific layer of the shared feature pool network.

Preferably, training the binarized object detection neural network for detection-task consistency based on multi-dimensional joint matching includes:

designing an improved anchor sampling strategy that jointly considers the anchor's positional and semantic (multi-modal) information, and corrects the intersection-over-union IOU_Anchor between the ground-truth label and the anchor using the detection box's confidence score Conf_score, yielding the corrected IOU_Amendment, as shown in formula (5) (given only as an image in the source);

σ and Th_r take constant values, where σ is a hyperparameter that adjusts the strength of the IoU correction and Th_r is the confidence-score screening threshold;

the anchor sampling strategy adopts a new relevance-constraint loss function L_relevance, which increases the linear correlation between the detection box's confidence score Conf_score and the corrected IoU IOU_Amendment with its corresponding ground-truth label, reducing the gap between Conf_score and the label and increasing the consistency of the classification and localization evaluation metrics, as shown in formula (6):

L_relevance = |Conf_score - IOU_Amendment|   (6)
Preferably, synchronously optimizing the classification and localization tasks of the binarized object detection neural network includes:

designing an objective loss function with dynamically learnable weights: compute the relative change values a_cls(t-1) and a_loc(t-1) of the classification and localization losses, as in formulas (7) and (8), where t denotes the training step; add a distillation temperature T to the softmax layer, as in formulas (9) and (10), to obtain the dynamic weights λ_cls(t) and λ_loc(t); and finally obtain the objective loss function L_loss(t) with dynamically learnable weights, formula (11), which synchronously optimizes the classification and localization tasks through dynamically learned weights (formulas (7) to (10) appear only as images in the source; the forms below are reconstructed from the symbol definitions that follow):

a_cls(t-1) = L_cls(t-1) / L_cls(t-2)   (7)

a_loc(t-1) = L_loc(t-1) / L_loc(t-2)   (8)

λ_cls(t) = K·exp(a_cls(t-1)/T) / (exp(a_cls(t-1)/T) + exp(a_loc(t-1)/T))   (9)

λ_loc(t) = K·exp(a_loc(t-1)/T) / (exp(a_cls(t-1)/T) + exp(a_loc(t-1)/T))   (10)

L_loss(t) = λ_cls(t)·L_cls(t) + λ_loc(t)·L_loc(t) + L_relevance   (11)

where t is the number of training iterations; L_cls(t-1) and L_loc(t-1) are the classification and localization loss values at iteration (t-1); a_cls(t-1) and a_loc(t-1) are the relative change values of the two task losses; T is the distillation temperature controlling the softness of the task weights; K is the number of tasks (K = 2 in the detection network, i.e., the classification and localization tasks); and λ_cls(t) and λ_loc(t) are the dynamic weights of the classification and localization loss functions.
As can be seen from the technical solution provided by the above embodiments, the present invention increases the network information capacity of the binarized detector by adding the classification and localization decoupling networks, avoiding imbalanced extraction of classification and localization feature information; it resolves the task inconsistency of anchor sampling through the improved anchor sampling strategy and the novel relevance-constraint loss function; and it synchronously optimizes the classification and localization tasks through the objective loss function with dynamically learnable weights, improving the quality of the detection boxes, the detection accuracy of the binarized detection network, and the robustness of the algorithm.

Additional aspects and advantages of the invention are set forth in part in the following description; they will become apparent from the description or be learned through practice of the invention.

Brief description of the drawings

To explain the technical solutions of the embodiments more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are merely some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic diagram of the novel structure of a binarized object detection neural network provided by an embodiment of the present invention.

FIG. 2 is a flowchart of training the binarized object detection neural network structure and model provided by an embodiment of the present invention.
Detailed description

Embodiments of the present invention are described in detail below; examples are shown in the drawings, in which identical or similar reference numbers denote identical or similar elements or elements with identical or similar functions throughout. The embodiments described with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting it.

Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "the" and "said" used herein may also include plural forms. The word "comprising" used in this specification refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. When an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present; "connected" or "coupled" as used herein may include wireless connection or coupling. The phrase "and/or" includes any unit and all combinations of one or more of the associated listed items.

Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the meanings commonly understood by those of ordinary skill in the art to which the invention belongs. Terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless defined as herein, are not to be interpreted with idealized or overly formal meanings.

To facilitate understanding, the embodiments of the present invention are further explained below with reference to the drawings and several specific embodiments, which do not limit the embodiments of the invention.

The embodiment of the present invention provides a novel binarized object detection neural network deployable in real-world scenarios, trains it for detection-task consistency via multi-dimensional joint matching, and synchronously optimizes its classification and localization tasks, so that the final detection boxes perform well on both tasks. This greatly reduces the time and computation overhead of the binarized detector, which can be deployed on edge devices with limited hardware resources, such as embedded and mobile platforms.

Specifically, in the constructed binarized detection network, the multi-level structure automatically learns task-shared and task-specific features in an end-to-end manner, eliminating the inconsistency of features between classification and localization and effectively increasing the network's representational information capacity. To further resolve the task inconsistency of anchor sampling, the network undergoes multi-dimensional joint matching task-consistency training, introducing an improved anchor sampling strategy and a novel relevance-constraint loss function to optimize for and retain high-quality (correctly classified and correctly localized) detection boxes. Finally, the network is synchronously optimized through the objective loss function with dynamically learnable weights, yielding detection boxes that perform well on both classification and localization.
The processing flow of the training method provided by the embodiment of the present invention is shown in FIG. 2 and includes the following steps.

Step S10: construct the binarized object detection neural network.

A novel network structure of the binarized object detection neural network provided by an embodiment of the present invention is shown in FIG. 1. It comprises a backbone network and a shared feature pool network; branching the shared feature pool network yields several feature decoupling branch networks, each comprising a series of feature decoupling blocks and a corresponding task detection head. The embodiment of FIG. 1 uses two branches: one learns classification-task features to form the classification decoupling network, and the other learns localization-task features to form the localization decoupling network. The decoupling blocks in each branch are connected to the shared feature pool network, and each block learns task-specific features by applying a decoupling code to a specific layer of the shared feature pool network.

The benchmark binarized detection network first extracts shared multi-scale detection features. Specifically, the VGG16 structure is selected as the backbone (other structures such as ResNet or MobileNet may also be used), and the shared feature pool network follows the backbone as a global feature pool. Using the global pool's features directly for classification or localization would cause information mismatch or conflict between the tasks; the proposed network therefore uses independent feature decoupling branch networks for classification-task and localization-task feature learning (two branches in the embodiment of FIG. 1, forming the classification and localization decoupling networks). The series of feature decoupling blocks in each branch is connected to the shared feature pool network. Each block learns task-specific features by applying a decoupling code to a specific layer of the shared feature pool network; the decoupling code is a feature selector learned automatically, end to end, throughout the training of the detection network. During training, the learning of the decoupling codes does not directly affect the learning of the backbone and shared feature pool network (there is no direct loss-function constraint between them); the shared-layer features and the decoupling networks can therefore be learned jointly, maximizing the generalization of the shared features across classification and localization, while the decoupling codes maximize the network's overall classification or localization performance.
The specific design and training method of the proposed binarized object detection neural network is as follows (the original formulas appear only as images in the source; the notation below is adopted to match the textual definitions). The shared feature output by the j-th convolutional layer of the shared feature pool network is defined as F^(j). For classification or localization, features are filtered by applying a decoupling code to F^(j); every feature channel has its own code. The code of the j-th block of the classification decoupling network is denoted m_cls^(j), and that of the j-th block of the localization decoupling network m_loc^(j). The first decoupling block of each branch takes only the shared-layer features as input; each subsequent block takes the concatenation of the current layer's shared features F^(j) and the previous layer's task-specific features F_cls^(j-1) or F_loc^(j-1). The concatenation is passed to the current layer through a shared 3×3 convolution f^(j), then through two 1×1 convolutions (g_cls1^(j), g_cls2^(j) for classification, or g_loc1^(j), g_loc2^(j) for localization) and a sigmoid function, producing the decoupling code, as in formulas (1) and (2). The decoupling code is a mask signal in [0, 1] learned in a self-supervised manner through backpropagation. Finally, the code is multiplied pixel-by-pixel with the corresponding shared-layer features to obtain the task-aware features of that layer, as in formulas (3) and (4); when a code value is 1, the branch feature equals the shared-layer feature. Here · denotes the pixel-wise multiplication operation.

m_cls^(j) = sigmoid( g_cls2^(j)( g_cls1^(j)( f^(j)( [F^(j), F_cls^(j-1)] ) ) ) )   (1)

m_loc^(j) = sigmoid( g_loc2^(j)( g_loc1^(j)( f^(j)( [F^(j), F_loc^(j-1)] ) ) ) )   (2)

F_cls^(j) = m_cls^(j) · F^(j)   (3)

F_loc^(j) = m_loc^(j) · F^(j)   (4)

where F^(j) denotes the output shared feature of the j-th structural block of the shared feature pool network; F_cls^(j-1) and F_loc^(j-1) denote the classification-task and localization-task features of the (j-1)-th block; g_cls1^(j) and g_cls2^(j) are the two convolutional layers of the j-th block of the classification decoupling network; g_loc1^(j) and g_loc2^(j) are the two convolutions of the j-th block of the localization decoupling network; f^(j) is the 3×3 convolutional layer shared by the j-th blocks of the two decoupling networks; m_cls^(j) and m_loc^(j) are the classification and localization feature decoupling codes; and F_cls^(j) and F_loc^(j) are the classification and localization task-aware features. The above convolutional layers may also be 1×1 convolutions or convolutions of other sizes without affecting the effect of the method of the present invention.
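The gating in formulas (3) and (4) can be illustrated with a deliberately simplified numeric sketch. This is a toy under stated assumptions: each feature channel is reduced to a single scalar, and the 3×3 and 1×1 convolutions collapse to one affine map per channel; the function names are hypothetical, not from the patent:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def task_aware_features(shared, gate_weights, gate_bias):
    # One scalar per channel stands in for a feature map; the decoupling
    # code m is a per-channel mask in [0, 1] produced by a sigmoid.
    code = [sigmoid(w * f + gate_bias) for w, f in zip(gate_weights, shared)]
    # Pixel-wise (here element-wise) product gives the task-aware feature.
    return [m * f for m, f in zip(code, shared)]
```

When a code saturates at 1, the branch passes the shared feature through unchanged, matching the observation above about code values of 1.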
步骤S20、对二值化目标检测神经网络进行多维度联合匹配的训练。
The present invention proposes a task-consistency training method for object detection based on multi-dimensional joint matching. Its main task is to perform multi-dimensional joint-matching learning at multiple processing stages of the classification and localization tasks of the object-detection neural network, so that the finally obtained detection boxes perform well on both classification and localization.
First, a predefined anchor-sampling strategy is designed. Aiming to optimize anchor sampling, the strategy jointly considers multi-modal information such as the anchor's position information and semantic information: it accounts not only for the intersection-over-union IOU_Anchor between the anchor and the GT (Ground Truth, the true label or true detection box), but also for the richness of the semantic information carried by the anchor itself. This is shown in formula (5), where σ is a hyper-parameter used to adjust the strength of the IOU correction and Th_r is the confidence-score screening threshold. The concrete values of σ and Th_r can be set per training dataset; on the PASCAL VOC dataset, for example, σ = 2 and Th_r = 0.1 give good detection results. The IOU_Anchor between GT and anchor is corrected by the detection box's confidence score Conf_score to obtain the amended intersection-over-union IOU_Amendment. The aim is to re-label as positive samples some boxes that were originally defined as negative but are semantically rich, and conversely to re-label as negative samples boxes originally defined as positive but semantically poor, which effectively reduces the misleading effect of distracting samples on training and improves the accuracy of the trained result.

(Formula (5) is published as an image; its exact form is not reproduced here.)

Here Conf_score is the confidence score of the detection box, σ is a hyper-parameter controlling the degree of correction applied to the box's Conf_score, IOU_Anchor is the intersection-over-union of the anchor and the GT, and Th_r is the threshold used to judge whether the box's confidence is too low, i.e. the confidence screening threshold.
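Since formula (5) itself is only available as an image in this text, the following is a hypothetical amendment rule written to match the behavior described above (demote low-confidence boxes, promote semantically rich ones, with σ scaling the correction and Th_r gating it); the published formula may differ in its exact form.

```python
def amend_iou(iou_anchor, conf_score, sigma=2.0, thr=0.1):
    # Hypothetical stand-in for formula (5); the published form is an image.
    # sigma adjusts correction strength, thr screens out low-confidence boxes.
    if conf_score < thr:
        # Semantically poor box: shrink its IOU so a nominal positive can
        # drop below the positive-sample threshold.
        return iou_anchor * conf_score ** (1.0 / sigma)
    # Semantically rich box: push its IOU toward 1 so a nominal negative
    # can be promoted to a positive sample.
    return iou_anchor + (1.0 - iou_anchor) * conf_score / sigma
```

Either branch keeps the amended value inside [0, 1]: the demotion multiplies by a factor below 1, and the promotion moves the IOU only part of the way toward 1.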
Next, to address detection boxes being wrongly suppressed (targets being missed) during NMS (Non-Maximum Suppression) post-processing: NMS first sorts the detection boxes by confidence score, so boxes with high confidence scores are readily retained, while boxes with a high IOU score but only the second-highest confidence score are easily suppressed by mistake. The algorithm therefore adopts a new relevance-constraint loss function L_relevance, which increases the linear correlation between a detection box's confidence score Conf_score and the amended intersection-over-union IOU_Amendment with its corresponding GT, narrowing the gap between the two and increasing the consistency of the performance metrics of the classification and localization tasks. Concretely, since Conf_score and IOU_Amendment both lie in [0, 1], their absolute difference is used directly to measure the distance between them, as shown in formula (6); this simply and efficiently strengthens the linear correlation between the Conf_score of the retained boxes and their amended IOU_Amendment, and thereby greatly improves the consistency between the confidence score (the metric of classification quality) and the IOU (the metric of localization quality):

L_relevance = |Conf_score - IOU_Amendment|    (6)

where Conf_score is the confidence score of the detection box and IOU_Amendment is the amended intersection-over-union between the detection box and its corresponding GT.
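The failure mode that L_relevance targets can be reproduced with a toy greedy NMS in pure Python (the box names, scores, and overlaps below are made up for illustration): a box with the second-highest confidence but better localization is suppressed by a higher-confidence neighbor, and formula (6) penalizes exactly the gap that causes this.

```python
def greedy_nms(boxes, pairwise_iou, iou_thresh=0.5):
    # boxes: list of (box_id, conf_score); pairwise_iou: dict mapping a
    # frozenset of two box ids to their IOU. Standard greedy NMS: keep the
    # highest-scoring box, drop any later box overlapping a kept one.
    kept = []
    for box_id, conf in sorted(boxes, key=lambda b: -b[1]):
        if all(pairwise_iou.get(frozenset((box_id, k)), 0.0) < iou_thresh
               for k, _ in kept):
            kept.append((box_id, conf))
    return [box_id for box_id, _ in kept]

def relevance_loss(conf_score, iou_amendment):
    # Formula (6): absolute gap between classification confidence and the
    # amended localization quality of the same box.
    return abs(conf_score - iou_amendment)
```

If box A has confidence 0.9 but poor localization while box B (confidence 0.8, high IOU with the GT) overlaps it, greedy NMS keeps only A and suppresses the better-localized B. Minimizing relevance_loss during training pushes confidence scores to track localization quality, so the better box would rank first instead.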
Step S30: synchronously optimize the classification and localization tasks of the binarized object-detection neural network.
To effectively avoid the task inconsistency (frequent false alarms and missed detections) in the results of the binarized object-detection neural network, and to achieve synchronized optimization of the classification and localization tasks, the weighting of the loss objectives during network training no longer uses fixed values but is adjusted dynamically according to how well, and how easily, the classification and localization tasks are being learned. To this end we propose a dynamic weight-learning strategy whose goal is to make the classification and localization tasks learn at similar speeds. Concretely, first the relative change values a_cls(t-1) and a_loc(t-1) of the classification and localization loss objectives are computed, as shown in formulas (7) and (8), where t denotes the training time. To make the output a_cls(t-1) and a_loc(t-1) more effective, a distillation temperature T is added at the softmax layer to improve distillation performance, as shown in formulas (9) and (10), yielding the dynamic weights λ_cls(t) and λ_loc(t) of the classification and localization loss functions. Finally, the detection loss objective L_loss(t) with dynamically learnable weights is obtained, as shown in formula (11); by learning the weights dynamically, this function optimizes the classification and localization tasks synchronously, achieves the task-consistency goal for the detection results, and strengthens the stability of network training.
a_cls(t-1) = L_cls(t-1) / L_cls(t-2)    (7)

a_loc(t-1) = L_loc(t-1) / L_loc(t-2)    (8)

λ_cls(t) = K · exp(a_cls(t-1)/T) / [exp(a_cls(t-1)/T) + exp(a_loc(t-1)/T)]    (9)

λ_loc(t) = K · exp(a_loc(t-1)/T) / [exp(a_cls(t-1)/T) + exp(a_loc(t-1)/T)]    (10)

L_loss(t) = λ_cls(t)·L_cls(t) + λ_loc(t)·L_loc(t) + L_relevance    (11)
Here t denotes the training-iteration index of the neural network; L_cls(t-1) and L_loc(t-1) denote the classification and localization loss values at iteration (t-1), respectively; a_cls(t-1) and a_loc(t-1) denote the relative change values of the classification and localization task losses, respectively; T denotes the distillation temperature, which controls the softness of the weighting across tasks; K denotes the number of tasks, with K = 2 in the detection network, i.e. the classification task and the localization task; λ_cls(t) and λ_loc(t) denote the dynamic weights of the classification and localization loss functions, respectively.
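Formulas (7)-(10) are also published as images; the sketch below follows the dynamic-weight form implied by the symbol definitions above (loss-ratio relative change, then a temperature-T softmax scaled by K), so the published expressions may differ in detail.

```python
import math

def dynamic_weights(cls_losses, loc_losses, T=2.0, K=2):
    # cls_losses / loc_losses: loss histories [..., L(t-2), L(t-1)].
    a_cls = cls_losses[-1] / cls_losses[-2]   # relative change, formula (7)
    a_loc = loc_losses[-1] / loc_losses[-2]   # relative change, formula (8)
    e_cls = math.exp(a_cls / T)               # softmax with temperature T
    e_loc = math.exp(a_loc / T)
    lam_cls = K * e_cls / (e_cls + e_loc)     # formula (9)
    lam_loc = K * e_loc / (e_cls + e_loc)     # formula (10)
    return lam_cls, lam_loc

def total_loss(lam_cls, L_cls, lam_loc, L_loc, L_relevance):
    # Formula (11): dynamically weighted joint objective.
    return lam_cls * L_cls + lam_loc * L_loc + L_relevance
```

When one task's loss is falling more slowly (a larger relative change), its weight rises above 1 while the other task's weight drops below 1, and the two weights always sum to K = 2, which pushes the two tasks to learn at similar speeds.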
With its extremely high model-compression ratio and extremely low computational complexity, the binarized object-detection neural network of the present invention can be applied on devices with limited computing resources, such as embedded devices and phone-based mobile devices. The classification-localization task consistency achieved by the network greatly improves detection accuracy, guaranteeing detection boxes that have both high classification confidence and high localization precision; it can replace full-precision object-detection neural network algorithms and meet the demand for high-accuracy, low-cost detection algorithms in practical application scenarios.
In summary, the embodiments of the present invention construct a binarized object-detection neural network by effectively combining a baseline binarized detection network with network feature-decoupling branches, solving the unbalanced feature-information extraction between the classification and localization tasks caused by the insufficient representational capacity of binarized detection networks; they perform task-consistency training of the constructed network with multi-dimensional joint matching, solving the task inconsistency of the network's anchor sampling; and finally they synchronously optimize the classification and localization tasks through a loss objective with dynamically learnable weights, ultimately achieving detection results with good task consistency between classification and localization.
Those of ordinary skill in the art will understand that the drawings are merely schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required for practicing the present invention.
From the description of the embodiments above, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing over the prior art, may be embodied in the form of a software product; the computer software product may be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present invention.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the device and system embodiments are described rather briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The device and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only preferred specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. The scope of protection of the present invention shall therefore be determined by the scope of the claims.

Claims (4)

  1. A training method for a binarized object-detection neural network structure and model, characterized by comprising:
    constructing a binarized object-detection neural network, the binarized object-detection neural network comprising a backbone network, a shared feature-pool network, a classification decoupling network and a localization decoupling network;
    training the binarized object-detection neural network for object-detection task consistency based on multi-dimensional joint matching;
    synchronously optimizing the classification and localization tasks of the binarized object-detection neural network.
  2. The method according to claim 1, characterized in that said constructing a binarized object-detection neural network comprising a backbone network, a shared feature-pool network, a classification decoupling network and a localization decoupling network comprises:
    constructing a binarized object-detection neural network comprising a backbone network and a shared feature-pool network, and performing network feature-decoupling branching on the shared feature-pool network to obtain two feature-decoupling branch networks each comprising a series of feature-decoupling blocks;
    using one of the feature-decoupling branch networks to learn classification-task features, yielding the classification decoupling network, and using the other feature-decoupling branch network to learn localization-task features, yielding the localization decoupling network, wherein the feature-decoupling blocks in the feature-decoupling branch networks are connected to the shared feature-pool network, and each feature-decoupling block learns specific features by applying a decoupling code to a specific layer of the shared feature-pool network.
  3. The method according to claim 2, characterized in that said training the binarized object-detection neural network for object-detection task consistency based on multi-dimensional joint matching comprises:
    designing an improved anchor-sampling strategy which jointly considers the multi-modal position information and semantic information of the anchor, and correcting the intersection-over-union IOU_Anchor between the ground-truth label and the anchor by the confidence score Conf_score of the detection box to obtain the amended intersection-over-union IOU_Amendment, as shown in formula (5):
    (Formula (5) is published as an image; its exact form is not reproduced here.)
    σ and Th_r take constant values, where σ is a hyper-parameter used to adjust the strength of the IOU correction and Th_r is the confidence-score screening threshold;
    the anchor-sampling strategy adopts a new relevance-constraint loss function L_relevance, which increases the linear correlation between the confidence score Conf_score of the detection box and the amended intersection-over-union IOU_Amendment with its corresponding ground-truth label, so as to narrow the gap between them and increase the consistency of the classification and localization performance metrics, as shown in formula (6):
    L_relevance = |Conf_score - IOU_Amendment|  (6)
  4. The method according to claim 3, characterized in that said synchronously optimizing the classification and localization tasks of the binarized object-detection neural network comprises:
    designing a loss objective with dynamically learnable weights: the relative change values a_cls(t-1) and a_loc(t-1) of the classification and localization loss objectives are computed, as shown in formulas (7) and (8), where t denotes the training time; a distillation temperature T is added at the softmax layer, as shown in formulas (9) and (10), yielding the dynamic weights λ_cls(t) and λ_loc(t) of the classification and localization loss functions; finally, the detection loss objective L_loss(t) with dynamically learnable weights is obtained, as shown in formula (11), which synchronously optimizes the classification and localization tasks of object detection by dynamically learning the weights:
    a_cls(t-1) = L_cls(t-1) / L_cls(t-2)  (7)
    a_loc(t-1) = L_loc(t-1) / L_loc(t-2)  (8)
    λ_cls(t) = K · exp(a_cls(t-1)/T) / [exp(a_cls(t-1)/T) + exp(a_loc(t-1)/T)]  (9)
    λ_loc(t) = K · exp(a_loc(t-1)/T) / [exp(a_cls(t-1)/T) + exp(a_loc(t-1)/T)]  (10)
    L_loss(t) = λ_cls(t)·L_cls(t) + λ_loc(t)·L_loc(t) + L_relevance  (11)
    where t denotes the training-iteration index of the neural network; L_cls(t-1) and L_loc(t-1) denote the classification and localization loss values at iteration (t-1), respectively; a_cls(t-1) and a_loc(t-1) denote the relative change values of the classification and localization task losses, respectively; T denotes the distillation temperature, which controls the softness of the weighting across tasks; K denotes the number of tasks, with K = 2 in the detection network, i.e. the classification task and the localization task; λ_cls(t) and λ_loc(t) denote the dynamic weights of the classification and localization loss functions, respectively.
PCT/CN2022/093066 2022-03-01 2022-05-16 Training method for a binarized object-detection neural network structure and model WO2023165024A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210197861.8A CN114841307A (zh) 2022-03-01 2022-03-01 Training method for a binarized object-detection neural network structure and model
CN202210197861.8 2022-03-01

Publications (1)

Publication Number Publication Date
WO2023165024A1 true WO2023165024A1 (zh) 2023-09-07

Family

ID=82561721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093066 WO2023165024A1 (zh) 2022-03-01 2022-05-16 Training method for a binarized object-detection neural network structure and model

Country Status (2)

Country Link
CN (1) CN114841307A (zh)
WO (1) WO2023165024A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496118A (zh) * 2023-10-23 2024-02-02 浙江大学 Method and system for analyzing the model-stealing vulnerability of an object detection model
CN117708726A (zh) * 2024-02-05 2024-03-15 成都浩孚科技有限公司 Open-set category training method and apparatus with decoupled network models, and storage medium therefor
CN118521964A (zh) * 2024-07-22 2024-08-20 山东捷瑞数字科技股份有限公司 Robot dense-detection method and system based on anchor-box score optimization
CN118628876A (zh) * 2024-08-14 2024-09-10 珠海亿智电子科技有限公司 Quantization-aware training method, apparatus, device and medium for an object detection model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190073560A1 (en) * 2017-09-01 2019-03-07 Sri International Machine learning system for generating classification data and part localization data for objects depicted in images
CN110287849A (zh) * 2019-06-20 2019-09-27 北京工业大学 Lightweight deep-network image object detection method suitable for Raspberry Pi
CN110298266A (zh) * 2019-06-10 2019-10-01 天津大学 Deep neural network object detection method based on multi-scale receptive-field feature fusion
CN110689081A (zh) * 2019-09-30 2020-01-14 中国科学院大学 Weakly supervised object classification and localization method based on divergence learning
CN110717534A (zh) * 2019-09-30 2020-01-21 中国科学院大学 Object classification and localization method based on web supervision
CN111325116A (zh) * 2020-02-05 2020-06-23 武汉大学 Remote-sensing image object detection method based on offline-training/online-learning deep evolvability


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG, YANHUA ET AL.: "SSD Mask Detection Based on Residual Structure", COMPUTER TECHNOLOGY AND DEVELOPMENT, vol. 31, no. 12, 31 December 2021 (2021-12-31), XP009548888 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496118A (zh) * 2023-10-23 2024-02-02 浙江大学 Method and system for analyzing the model-stealing vulnerability of an object detection model
CN117496118B (zh) * 2023-10-23 2024-06-04 浙江大学 Method and system for analyzing the model-stealing vulnerability of an object detection model
CN117708726A (zh) * 2024-02-05 2024-03-15 成都浩孚科技有限公司 Open-set category training method and apparatus with decoupled network models, and storage medium therefor
CN117708726B (zh) * 2024-02-05 2024-04-16 成都浩孚科技有限公司 Open-set category training method and apparatus with decoupled network models, and storage medium therefor
CN118521964A (zh) * 2024-07-22 2024-08-20 山东捷瑞数字科技股份有限公司 Robot dense-detection method and system based on anchor-box score optimization
CN118628876A (zh) * 2024-08-14 2024-09-10 珠海亿智电子科技有限公司 Quantization-aware training method, apparatus, device and medium for an object detection model

Also Published As

Publication number Publication date
CN114841307A (zh) 2022-08-02

Similar Documents

Publication Publication Date Title
WO2023165024A1 (zh) Training method for a binarized object-detection neural network structure and model
US11816149B2 (en) Electronic device and control method thereof
CN115937655B (zh) Object detection model with multi-order feature interaction, and construction method, apparatus and application thereof
CN110348447B (zh) Multi-model ensemble object detection method with rich spatial information
CN106407311A (zh) Method and apparatus for obtaining search results
CN111507370A (zh) Method and apparatus for obtaining sample images for label inspection among auto-labeled images
JP2018514852A (ja) Sequential image sampling and storage of fine-tuned features
CN111382868A (zh) Neural architecture search method and neural architecture search apparatus
CN112381763A (zh) Surface defect detection method
CN113255714A (zh) Image clustering method and apparatus, electronic device, and computer-readable storage medium
US20220156528A1 (en) Distance-based boundary aware semantic segmentation
CN111046847A (zh) Video processing method and apparatus, electronic device, and medium
CN114565092A (zh) Method and apparatus for determining a neural network structure
US20220159278A1 (en) Skip convolutions for efficient video processing
CN111222534A (zh) Single-shot multibox detector optimization method based on bidirectional feature fusion and a more balanced L1 loss
CN113989655A (zh) Radar or sonar image target detection and classification method based on automated deep learning
Liao et al. Depthwise grouped convolution for object detection
Wu et al. Group guided data association for multiple object tracking
WO2023249821A1 (en) Adapters for quantization
CN116883746A (zh) Graph node classification method based on a partition-pooling hypergraph neural network
CN116646001A (zh) Method for predicting drug-target binding based on a joint cross-domain attention model
US20240303497A1 (en) Robust test-time adaptation without error accumulation
US20230419087A1 (en) Adapters for quantization
US20230169694A1 (en) Flow-agnostic neural video compression
CN118608792B (zh) Mamba-based ultra-lightweight image segmentation method and computer apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22929461

Country of ref document: EP

Kind code of ref document: A1