WO2023245873A1 - Generative data-free quantization method, recognition method, device and storage medium - Google Patents
Generative data-free quantization method, recognition method, device and storage medium
- Publication number
- WO2023245873A1 WO2023245873A1 PCT/CN2022/116835 CN2022116835W WO2023245873A1 WO 2023245873 A1 WO2023245873 A1 WO 2023245873A1 CN 2022116835 W CN2022116835 W CN 2022116835W WO 2023245873 A1 WO2023245873 A1 WO 2023245873A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- data
- full
- precision
- quantization
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 62
- 238000013139 quantization Methods 0.000 title claims abstract description 49
- 238000012549 training Methods 0.000 claims abstract description 46
- 238000009826 distribution Methods 0.000 claims abstract description 29
- 238000013528 artificial neural network Methods 0.000 claims abstract description 19
- 238000011002 quantification Methods 0.000 claims description 38
- 230000006870 function Effects 0.000 claims description 26
- 238000005457 optimization Methods 0.000 claims description 6
- 230000004913 activation Effects 0.000 claims description 5
- 238000013459 approach Methods 0.000 claims description 4
- 230000006835 compression Effects 0.000 claims description 3
- 238000007906 compression Methods 0.000 claims description 3
- 230000001143 conditioned effect Effects 0.000 claims description 3
- 238000012360 testing method Methods 0.000 claims description 3
- 238000012795 verification Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 2
- 230000000694 effects Effects 0.000 abstract description 2
- 238000012545 processing Methods 0.000 abstract description 2
- 238000005516 engineering process Methods 0.000 description 5
- 238000013140 knowledge distillation Methods 0.000 description 5
- 238000001994 activation Methods 0.000 description 4
- 238000004590 computer program Methods 0.000 description 4
- 230000009286 beneficial effect Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000006467 substitution reaction Methods 0.000 description 3
- 238000003491 array Methods 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000007667 floating Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 238000013145 classification model Methods 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 238000002059 diagnostic imaging Methods 0.000 description 1
- 238000004821 distillation Methods 0.000 description 1
- 238000005265 energy consumption Methods 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 238000000265 homogenisation Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 238000009827 uniform distribution Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1365—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the present invention relates to the field of data processing technology, and in particular to a generative data-free quantization method, recognition method, device and storage medium.
- Deep neural networks have achieved great success in many fields.
- deep neural networks have a huge number of parameters and high computational costs, making them difficult to deploy on embedded devices.
- Model quantization reduces model size, increases model running speed, and reduces energy consumption by quantizing floating point values to low precision.
- Existing quantization methods usually require training data for calibration or fine-tuning. However, in many practical scenarios such as medical care and finance, training data may not be available due to commercial confidentiality or personal privacy issues. Due to the lack of training data, existing quantization methods are no longer applicable, rendering existing automatic recognition models unusable.
- the object of the present invention is to provide a generative data-free quantization method, recognition method, device and storage medium.
- a generative data-free quantization method, including the following steps:
- the knowledge-matching data generator is trained according to the full-precision pre-trained model and generates pseudo data as the generated data; wherein the knowledge-matching data generator mines the classification information and distribution information of the original data from the full-precision pre-trained model;
- collecting a target data set and pre-training the full-precision neural network on the data set to obtain a full-precision pre-trained model includes:
- the neural network is trained using the data set to obtain a full-precision pre-trained model.
- training the knowledge-matching data generator based on the full-precision pre-trained model includes:
- the knowledge-matching data generator is defined as $\hat{x} = G(z \mid y)$, where z is a noise vector conditioned on the label y and $G(z \mid y)$ denotes the generator producing pseudo data from the noise;
- Cross-entropy loss is used to train the knowledge-matching data generator; the loss function in training is:
- $\mathcal{L}_{CE}(G) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(M(G(z \mid y)),\, y\big)\big]$
- where CE denotes the cross-entropy loss, G is the knowledge-matching data generator, $\mathbb{E}_{z,y}$ denotes the expectation over z and y, and $M(G(z \mid y))$ denotes feeding the generated data into the full-precision model M;
- BNS loss is also used to train the knowledge-matching data generator; the loss function in training is:
- $\mathcal{L}_{BNS}(G) = \sum_{l}\big\|\mu_l^{g} - \mu_l\big\|_2^2 + \big\|\sigma_l^{g} - \sigma_l\big\|_2^2$, where $\mu_l^{g}$ and $\sigma_l^{g}$ are the mean and variance of the pseudo-data distribution at the l-th BN layer, and $\mu_l$ and $\sigma_l$ are the statistics stored in the l-th BN layer of the pre-trained model;
- using the generated data to drive the quantization of the full-precision model to obtain the quantized model includes: fine-tuning the quantized model with the loss function
- $\mathcal{L}_{CE}(Q) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(Q(G(z \mid y)),\, y\big)\big]$, where Q is the quantized model and CE denotes the cross-entropy loss;
- quantizing and compressing the full-precision model to obtain the quantized model includes: truncating the discrete values $\theta'$ to the symmetric b-bit range $[-2^{b-1},\, 2^{b-1}-1]$ to obtain $\theta_q$, the quantized weights and activations;
- iteratively optimizing the quantized model includes:
- the knowledge-matching data generator G and the quantized model Q are alternately optimized in each iteration; in the alternating optimization strategy, the knowledge-matching data generator G generates different data at each update; by increasing the diversity of the data, the quantized model Q is optimized;
- the optimized quantized model is deployed on mobile devices, including mobile smart terminals and in-vehicle control terminals.
- An identification method including the following steps:
- the obtained pictures are input into the quantized model for classification and recognition, and the classification results are output; wherein the quantized model is obtained by the generative data-free quantization method described above;
- classification recognition includes at least one of face recognition, medical image recognition, and traffic scene recognition (such as traffic light recognition, traffic sign recognition).
- a device including:
- at least one processor;
- at least one memory for storing at least one program;
- when the at least one program is executed by the at least one processor, the at least one processor implements the above method.
- a computer-readable storage medium has a processor-executable program stored therein, and the processor-executable program, when executed by the processor, is used to perform the method as described above.
- the present invention uses the knowledge-matching data generator to mine, from the full-precision pre-trained model, knowledge that can guide the quantized model, such as data category information and distribution information, thereby improving the accuracy of the quantized model and, in turn, the accuracy of object classification.
- Figure 1 is a step flow chart of a generative data-free quantification method in an embodiment of the present invention
- Figure 2 is a schematic diagram of generative data-free quantification based on knowledge matching in an embodiment of the present invention
- Figure 3 is a flow chart of steps of an identification method in an embodiment of the present invention.
- orientation descriptions such as up, down, front, back, left and right are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
- MSE: mean squared error alignment
- this embodiment provides a generative data-free quantification method.
- the first step is to construct the target data set and pre-train a full-precision neural network; a knowledge-matching generator is then learned to generate meaningful data.
- the generator mines the classification boundaries and distribution information of the original data from the pre-trained full-precision model.
- the pre-trained model is then quantized using the generated data, and the quantized model is fine-tuned using fixed batch normalization statistics (BNS) to obtain more stable accuracy.
- BNS: fixed batch normalization statistics
- mean squared error alignment is introduced to learn more knowledge directly from the pre-trained model, which clearly helps to obtain a quantized model with better performance.
- the generator and the quantized model are alternately trained iteratively until the quantized model converges.
- this embodiment provides a generative data-free quantization method based on knowledge matching, including:
- Step S1 specifically includes:
- S1-1 Collect images from target task scenarios and label the images with categories to build a data set
- S1-2 Divide the annotated data set into three parts: training set, verification set, and test set;
- S1-3 Determine the neural network to be used for the target task;
- S1-4 Use the data set to perform regular training on the neural network to obtain a pre-trained full-precision neural network.
- Step S2 specifically includes:
- the present invention proposes a knowledge-matching generator capable of generating pseudo data that can be used for the data-free quantization task. For this task, although the original data cannot be observed, the number of categories of the original data can easily be determined from the last layer of the pre-trained model.
- a noise vector z conditioned on the label y is introduced.
- the generator maps a prior input noise vector and the given label to the pseudo data.
- the knowledge matching generator is defined as follows:
- the BN layers in the pre-trained model contain the distribution information of the training data. If the generated data retain the BNS information, the distribution of the generated data can be made to match the real data distribution. To this end, the BNS loss is used to train the generator G: $\mathcal{L}_{BNS}(G) = \sum_{l}\big\|\mu_l^{g} - \mu_l\big\|_2^2 + \big\|\sigma_l^{g} - \sigma_l\big\|_2^2$, where $\mu_l^{g}$ and $\sigma_l^{g}$ are the mean and variance of the pseudo-data distribution at the l-th BN layer, and $\mu_l$ and $\sigma_l$ are the mean and variance parameters stored in the l-th BN layer of the pre-trained full-precision model. In this way, a good generator that preserves the distribution information of the training data can be learned.
- Step S3 specifically includes:
- the generator can fill in the missing data in the data-free setting; the generated meaningful data are then used to quantize the model, i.e., a generated-data-driven quantization method, and the knowledge from the pre-trained model is used to solve the optimization problem of the quantized model.
- quantization may have some limitations.
- direct quantization from a full-precision model can lead to severe performance degradation.
- the quantization model is fine-tuned so that its performance approaches the full-precision model.
- the fine-tuned quantized model Q should be able to correctly classify the pseudo data.
- Q is updated using the cross-entropy loss function CE():
- the quantized model can learn more from the full-precision model.
- fine-tuning with fixed BNS: the batch normalization statistics (BNS) of the pre-trained full-precision model are used in the quantized model and kept fixed, so that the quantized model always retains the distribution information of the real data.
- Step S4 specifically includes:
- S4-2 During the fine-tuning process, the generator G and the quantization model Q are alternately optimized in each epoch. In the alternating training strategy, the generator can generate different data with each update. By increasing the diversity of data, the quantized model Q can be trained to improve performance.
- the existing technology uses KL alignment to fine-tune the quantized model, whereas the MSE alignment applied by the present invention is superior and better justified than KL alignment for data-free quantization tasks.
- MSE alignment is superior for the data-free situation.
- in normal knowledge distillation, KL divergence makes the student's logits distribution close to the teacher's, which is suitable and effective for distilling dark knowledge from the teacher to the student and for optimizing the student model with real data.
- the logits distribution of the teacher model provides sufficient information for the student model to utilize knowledge.
- however, owing to distribution shift, even when the KL divergence reaches its minimum after training converges, the student's logits distribution may still be far from that of the teacher model.
- MSE alignment is reasonable for quantization tasks.
- the teacher-student structure is crucial to forming knowledge transfer.
- teacher and student models always have different sizes and architectures during the distillation process.
- KL divergence uses the probabilities calculated by the softmax operation as soft targets to bridge the structural gap between models.
- in contrast, the present task uses the full-precision model as the teacher and the quantized model as the student.
- the student model is a quantized version of the teacher network, in which the structure of the network is preserved.
- the knowledge-matching-based generative data-free quantization algorithm proposed by the embodiment of the present invention can effectively restore the accuracy of the quantized model through the generated pseudo data and MSE knowledge distillation.
- Tables 1 and 2 show the comparison results with the best existing methods on the CIFAR data set and ImageNet data set respectively.
- this embodiment also provides an identification method including the following steps:
- classification recognition includes at least one of face recognition, traffic light recognition, and traffic sign recognition.
- the data used to train the pre-trained model may need to be kept confidential and cannot be obtained at quantization time, because many pre-trained models are released without their data sets, making it impossible to know what data was used. There are also many scenarios where the data involves privacy and cannot be used, whether for pre-training or for quantization, such as face data, medical imaging data and autonomous-driving data.
- Embodiments of the present invention can quantize image classification models such as ResNet and MobileNet without original training data, and train the quantized models to improve classification accuracy.
- the quantified model can be deployed on mobile devices such as mobile phones and cars to achieve image classification tasks such as face recognition, traffic light recognition, and traffic sign recognition.
- ResNet is a convolutional neural network that achieves superior performance in image classification and object recognition.
- Residual networks are characterized by being easy to optimize and can improve accuracy by adding considerable depth.
- the internal residual block uses skip connections to alleviate the vanishing gradient problem caused by increasing depth in deep neural networks.
- MobileNet is a convolutional neural network with a smaller model size, fewer trainable parameters and fewer computations, and is suitable for mobile devices. It aims to make full use of limited computing resources and maximize model accuracy to meet various application cases under limited resources, and is one of the models commonly deployed at the edge.
- This embodiment also provides a device, including:
- at least one processor;
- at least one memory for storing at least one program;
- when the at least one program is executed by the at least one processor, the at least one processor implements the method shown in Figure 1 or Figure 3.
- a device in this embodiment can execute the method provided by the method embodiment of the present invention, can execute any combination of implementation steps of the method embodiment, and has the corresponding functions and beneficial effects of the method.
- the embodiment of the present application also discloses a computer program product or computer program.
- the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
- the processor of the computer device can read the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method shown in FIG. 1 or FIG. 3 .
- This embodiment also provides a storage medium storing instructions or programs capable of executing the methods provided by the method embodiments of the present invention; when the instructions or programs are run, any combination of the implementation steps of the method embodiments can be executed, with the corresponding functions and beneficial effects of the methods.
- the functions/operations noted in the block diagrams may occur out of the order noted in the operational illustrations.
- two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending on the functionality/operations involved.
- the embodiments presented and described in the flow diagrams of the present invention are provided by way of example for the purpose of providing a more comprehensive understanding of the technology. The disclosed methods are not limited to the operations and logical flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of a larger operation are performed independently.
- if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
- the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs and other media that can store program code.
- a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- a non-exhaustive list of computer-readable media includes the following: an electrical connection with one or more wires (electronic device), a portable computer disk cartridge (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber-optic devices, and portable compact disc read-only memory (CD-ROM).
- the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it as necessary, and then stored in computer memory.
- various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof.
- various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
- discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biomedical Technology (AREA)
- Radiology & Medical Imaging (AREA)
- Human Computer Interaction (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Life Sciences & Earth Sciences (AREA)
- Quality & Reliability (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a generative data-free quantization method, recognition method, device and storage medium, wherein the method includes: collecting a target data set, and pre-training a full-precision neural network on the data set to obtain a full-precision pre-trained model; training a knowledge-matching data generator according to the full-precision pre-trained model, and generating pseudo data as the generated data, wherein the knowledge-matching data generator mines the classification information and distribution information of the original data from the full-precision pre-trained model; using the generated data to drive the quantization of the full-precision model to obtain a quantized model; and iteratively optimizing the quantized model according to the knowledge-matching data generator. Through the knowledge-matching data generator, the present invention mines knowledge that can guide the quantized model, such as data category information and distribution information, from the full-precision pre-trained model, thereby improving the accuracy of the quantized model and, in turn, the accuracy of object classification. The present invention can be widely applied in the field of data processing technology.
Description
The present invention relates to the field of data processing technology, and in particular to a generative data-free quantization method, recognition method, device and storage medium.
Deep neural networks have achieved great success in many fields. However, they have a huge number of parameters and high computational costs, making them difficult to deploy on embedded devices. Model quantization reduces model size, increases inference speed and reduces energy consumption by quantizing floating-point values to low precision. Existing quantization methods usually require training data for calibration or fine-tuning. However, in many practical scenarios such as medical care and finance, training data may be unavailable due to commercial confidentiality or personal privacy. Without training data, existing quantization methods are no longer applicable, rendering existing automatic recognition models unusable.
To solve the above problems, data-free quantization tries to remove the dependence of quantization algorithms on the original training data and performs quantization using only the pre-trained model. Existing methods use the BN statistics of the full-precision model to generate synthetic data, which facilitates knowledge transfer from the full-precision model to its quantized model. However, these methods have two obvious problems. The first is the homogenization of the generated data distribution, which lacks the diversity of real data. The second is that the generated data lack the category information of real data. These two problems cause the distribution of the generated data to differ greatly from that of the real data, which degrades the accuracy of the quantized model and indirectly reduces the accuracy of object recognition.
Summary of the Invention
To solve at least one of the technical problems existing in the prior art to at least some extent, the object of the present invention is to provide a generative data-free quantization method, recognition method, device and storage medium.
The technical solution adopted by the present invention is:
A generative data-free quantization method, including the following steps:
collecting a target data set, and pre-training a full-precision neural network on the data set to obtain a full-precision pre-trained model;
training a knowledge-matching data generator according to the full-precision pre-trained model, and generating pseudo data as the generated data; wherein the knowledge-matching data generator mines the classification information and distribution information of the original data from the full-precision pre-trained model;
using the generated data to drive the quantization of the full-precision model to obtain a quantized model;
iteratively optimizing the quantized model according to the knowledge-matching data generator.
Further, collecting a target data set and pre-training a full-precision neural network on the data set to obtain a full-precision pre-trained model includes:
collecting pictures from the target task scenario and labeling the pictures with categories to obtain a data set;
dividing the labeled data set into three parts: a training set, a validation set and a test set;
determining the neural network to be used for the target task;
training the neural network with the data set to obtain the full-precision pre-trained model.
Further, training the knowledge-matching data generator according to the full-precision pre-trained model includes:
the knowledge-matching data generator is defined as: $\hat{x} = G(z \mid y)$;
the knowledge-matching data generator is trained with a cross-entropy loss, the loss function in training being:
$\mathcal{L}_{CE}(G) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(M(G(z \mid y)),\, y\big)\big]$
where CE denotes the cross-entropy loss, G is the knowledge-matching data generator, $\mathbb{E}_{z,y}$ denotes the expectation, and $M(G(z \mid y))$ denotes feeding the generated data into the full-precision model M;
the knowledge-matching data generator is also trained with the BNS loss, the loss function in training being:
$\mathcal{L}_{BNS}(G) = \sum_{l}\big\|\mu_l^{g} - \mu_l\big\|_2^2 + \big\|\sigma_l^{g} - \sigma_l\big\|_2^2$
Further, using the generated data to drive the quantization of the full-precision model to obtain the quantized model includes:
quantizing and compressing the full-precision model to obtain the quantized model;
training and fine-tuning the quantized model with a cross-entropy loss function so that the performance of the quantized model approaches that of the full-precision model, the loss function in training being:
$\mathcal{L}_{CE}(Q) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(Q(G(z \mid y)),\, y\big)\big]$
using the normalization statistics of the full-precision pre-trained model in the quantized model and keeping them fixed.
Further, quantizing and compressing the full-precision model to obtain the quantized model includes:
truncating the discrete values $\theta'$ to the symmetric b-bit range $\theta_q \in [-2^{b-1},\, 2^{b-1}-1]$, where $\theta_q$ is the quantized weights and activations.
Further, iteratively optimizing the quantized model includes:
during fine-tuning, alternately optimizing the knowledge-matching data generator G and the quantized model Q in each iteration; in the alternating optimization strategy, the knowledge-matching data generator G generates different data at each update, and the quantized model Q is optimized by increasing the diversity of the data;
continuously updating the knowledge-matching data generator G and the quantized model Q until the quantized model Q converges.
Further, the optimized quantized model is deployed on mobile devices, the mobile devices including mobile smart terminals and in-vehicle control terminals.
Another technical solution adopted by the present invention is:
An identification method, including the following steps:
obtaining pictures to be classified and recognized;
inputting the obtained pictures into a quantized model for classification and recognition, and outputting the classification results; wherein the quantized model is obtained by the generative data-free quantization method described above;
wherein the classification and recognition include at least one of face recognition, medical image recognition and traffic scene recognition (such as traffic light recognition and traffic sign recognition).
Another technical solution adopted by the present invention is:
A device, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor implements the method described above.
Another technical solution adopted by the present invention is:
A computer-readable storage medium in which a processor-executable program is stored, the processor-executable program, when executed by a processor, being used to perform the method described above.
The beneficial effects of the present invention are: through the knowledge-matching data generator, the present invention mines knowledge that can guide the quantized model, such as data category information and distribution information, from the full-precision pre-trained model, thereby improving the accuracy of the quantized model and, in turn, the accuracy of object classification.
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings of the related technical solutions are introduced below. It should be understood that the drawings below are only intended to facilitate a clear description of some embodiments of the technical solutions of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Figure 1 is a flow chart of the steps of a generative data-free quantization method in an embodiment of the present invention;
Figure 2 is a schematic diagram of knowledge-matching-based generative data-free quantization in an embodiment of the present invention;
Figure 3 is a flow chart of the steps of an identification method in an embodiment of the present invention.
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only used to explain the present invention, and cannot be understood as limiting the present invention. The step numbers in the following embodiments are provided only for the convenience of explanation; no limitation is placed on the order between the steps, and the execution order of the steps in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that orientation descriptions, such as the orientations or positional relationships indicated by up, down, front, back, left and right, are based on the orientations or positional relationships shown in the drawings; they are only intended to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the present invention.
In the description of the present invention, "several" means one or more, and "multiple" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the number itself, while "above", "below", "within", etc. are understood as including the number itself. If "first" and "second" are described, they are only used for the purpose of distinguishing technical features, and cannot be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the order of the indicated technical features.
In the description of the present invention, unless otherwise expressly defined, words such as "set", "install" and "connect" should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above words in the present invention in light of the specific content of the technical solution.
Explanation of terms:
BNS: batch normalization statistics.
MSE: mean squared error alignment.
As shown in Figure 1, this embodiment provides a generative data-free quantization method. The first step is to construct the target data set and pre-train a full-precision neural network. A knowledge-matching generator is then learned to produce meaningful data. The generator mines the classification boundaries and distribution information of the original data from the pre-trained full-precision model. The pre-trained model is then quantized using the generated data, and the quantized model is fine-tuned using fixed batch normalization statistics (BNS) to obtain more stable accuracy. In addition, mean squared error alignment is introduced to learn more knowledge directly from the pre-trained model, which clearly helps to obtain a quantized model with better performance. Finally, the generator and the quantized model are trained alternately and iteratively until the quantized model converges.
The above method is explained in detail below with reference to the drawings.
As shown in Figures 1 and 2, this embodiment provides a knowledge-matching-based generative data-free quantization method, including:
S1. Collect a target data set, and pre-train a full-precision neural network on the data set to obtain a full-precision pre-trained model.
Step S1 specifically includes:
S1-1: Collect pictures from the target task scenario and label the pictures with categories to construct a data set;
S1-2: Divide the labeled data set into three parts: a training set, a validation set and a test set;
S1-3: Determine the neural network to be used for the target task;
S1-4: Perform regular training of the neural network with the data set to obtain the pre-trained full-precision neural network.
S2. Train a knowledge-matching data generator according to the full-precision pre-trained model, and generate pseudo data as the generated data; wherein the knowledge-matching data generator mines the classification information and distribution information of the original data from the full-precision pre-trained model.
Step S2 specifically includes:
S2-1: When a deep neural network is trained, it captures enough data information to make decisions. A pre-trained neural network therefore contains some knowledge about the training data, such as classification boundary information and distribution information. However, this information is difficult to use to recover data close to the classification boundaries. Recently, generative adversarial networks (GANs) have achieved considerable success in producing data. The present invention proposes a knowledge-matching generator capable of producing pseudo data that can be used for the data-free quantization task. For this task, although the original data cannot be observed, the number of categories of the original data can easily be determined from the last layer of the pre-trained model.
To generate pseudo data, a noise vector z conditioned on a label y is introduced. The noise is sampled from a normal distribution, and a label is sampled from the uniform distribution {0, 1, ..., n-1}. The generator then maps a prior input noise vector and the given label to the pseudo data $\hat{x}$.
Formally, the knowledge-matching generator is defined as: $\hat{x} = G(z \mid y)$.
S2-2: Train the knowledge-matching data generator. To improve quantization performance, the generator needs to be able to generate data that is effective for fine-tuning the quantized model. To this end, the generated data should be classified into the same category y by the full-precision pre-trained model M. The following cross-entropy loss function CE() is therefore introduced to train the generator G: $\mathcal{L}_{CE}(G) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(M(G(z \mid y)),\, y\big)\big]$.
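As a non-limiting illustration of steps S2-1 and S2-2, a minimal PyTorch-style sketch of a conditional generator and the above cross-entropy objective follows; the architecture, dimensions and all identifiers here are illustrative assumptions rather than part of the disclosed method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeMatchingGenerator(nn.Module):
    """Maps a noise vector z, conditioned on a label y, to pseudo data G(z|y)."""
    def __init__(self, num_classes, noise_dim=100, img_channels=3, feat=64):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)  # label conditioning
        self.net = nn.Sequential(  # 1x1 -> 4x4 -> 8x8 -> 16x16 -> 32x32
            nn.ConvTranspose2d(noise_dim, feat * 4, 4, 1, 0), nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1), nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1), nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, img_channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, y):
        h = (z * self.embed(y)).view(z.size(0), -1, 1, 1)  # condition z on y
        return self.net(h)

def generator_ce_loss(model_fp, generator, z, y):
    """L_CE(G) = E_{z,y}[ CE(M(G(z|y)), y) ]: the pseudo data should be
    classified into its conditioning label y by the full-precision model M."""
    logits = model_fp(generator(z, y))
    return F.cross_entropy(logits, y)
```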
S2-3: The BN layers of the pre-trained model contain the distribution information of the training data. If the generated data retain the BNS information, the distribution of the generated data can be made to match the real data distribution. To this end, the BNS loss is used to train the generator G: $\mathcal{L}_{BNS}(G) = \sum_{l}\big\|\mu_l^{g} - \mu_l\big\|_2^2 + \big\|\sigma_l^{g} - \sigma_l\big\|_2^2$, where $\mu_l^{g}$ and $\sigma_l^{g}$ are the mean and variance of the pseudo-data distribution at the l-th BN layer, and $\mu_l$ and $\sigma_l$ are the mean and variance parameters stored in the l-th BN layer of the pre-trained full-precision model. In this way, a good generator that preserves the distribution information of the training data can be learned.
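Continuing the illustrative sketch above, the BNS loss can plausibly be realized with forward hooks on the BatchNorm2d layers of the full-precision model; penalizing the mean/variance mismatch with an MSE term is one assumed reading of the loss above:

```python
def bns_loss(model_fp, fake_images):
    """L_BNS(G): match the batch statistics of the pseudo data at every BN
    layer to the running statistics stored in the pre-trained model."""
    losses, hooks = [], []

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]                                  # BN input activations
            mu_g = x.mean(dim=[0, 2, 3])                   # pseudo-data mean
            var_g = x.var(dim=[0, 2, 3], unbiased=False)   # pseudo-data variance
            losses.append(F.mse_loss(mu_g, bn.running_mean)
                          + F.mse_loss(var_g, bn.running_var))
        return hook

    for m in model_fp.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))
    model_fp(fake_images)   # forward pass triggers the hooks
    for h in hooks:
        h.remove()
    return torch.stack(losses).sum()
```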
S3. Use the generated data to drive the quantization of the full-precision model to obtain a quantized model.
Step S3 specifically includes:
S3-1: The generator can fill in the missing data in the data-free setting; the generated meaningful data are then used to quantize the model, i.e., a generated-data-driven quantization method, and the knowledge from the pre-trained model is used to solve the optimization problem of the quantized model.
S3-2: Model quantization maps the full-precision (32-bit) weights and activations to low precision. A simple and effective quantization method is used for both the weights and the activations. Specifically, given the full-precision weights θ and the quantization precision b, the discrete values mapped by linear quantization are computed as $\theta' = \mathrm{round}(\theta / \Delta)$, where the step size is $\Delta = (u - l)/(2^{b} - 1)$, and l and u are set to the minimum and maximum of the floating-point weights θ, respectively. θ′ is then truncated to the symmetric b-bit range $\theta_q \in [-2^{b-1},\, 2^{b-1}-1]$; $\theta_q$ is the quantized weights and activations.
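For illustration, one plausible reading of the linear quantizer in S3-2 is sketched below (the original formula is only partially legible in this text, so the exact expression is an assumption):

```python
def linear_quantize(theta, b):
    """Map full-precision values theta to a symmetric b-bit integer grid:
    theta' = round(theta / Delta), Delta = (u - l) / (2^b - 1),
    then truncate to [-2^(b-1), 2^(b-1) - 1]."""
    l, u = theta.min(), theta.max()            # range of the float weights
    delta = (u - l) / (2 ** b - 1)             # quantization step size
    theta_prime = torch.round(theta / delta)   # discrete values
    theta_q = torch.clamp(theta_prime, -2 ** (b - 1), 2 ** (b - 1) - 1)
    return theta_q, delta                      # delta kept for dequantization
```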
S3-3: When there is no real training data, quantization may be subject to some limitations. First, direct quantization from the full-precision model can lead to severe performance degradation. To solve this problem, the quantized model is fine-tuned so that its performance approaches that of the full-precision model. The fine-tuned quantized model Q should be able to correctly classify the pseudo data. To this end, Q is updated with the cross-entropy loss function CE(): $\mathcal{L}_{CE}(Q) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(Q(G(z \mid y)),\, y\big)\big]$.
S3-4: Since the data are pseudo data, the common classification loss alone is not sufficient for the fine-tuning process. However, with the pseudo data, knowledge distillation can be used to further improve quantization performance. Specifically, given the same input, the outputs of the quantized model and the full-precision model should be close enough to ensure that the quantized model achieves almost the same performance as the full-precision model. The mean squared error (MSE) function is used to align the logits $z^{M}$ output by the pre-trained full-precision model with the logits $z^{Q}$ of the quantized model, so as to fine-tune the quantized model: $\mathcal{L}_{MSE} = \mathbb{E}\big[\|z^{M} - z^{Q}\|_2^2\big]$. By optimizing this function, the quantized model can learn more knowledge from the full-precision model.
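A minimal sketch of the MSE logit alignment of S3-4, assuming the teacher M is frozen and only the quantized student Q receives gradients:

```python
def mse_alignment_loss(model_fp, model_q, fake_images):
    """L_MSE = E[ || z^M - z^Q ||^2 ]: align the logits of the quantized
    model Q with those of the full-precision teacher M on the same input."""
    with torch.no_grad():
        teacher_logits = model_fp(fake_images)  # M is frozen
    student_logits = model_q(fake_images)
    return F.mse_loss(student_logits, teacher_logits)
```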
S3-5: Fine-tuning with fixed BNS: to stabilize the fine-tuning process, the normalization statistics (BNS) of the pre-trained full-precision model are used in the quantized model and kept fixed. With the fixed BNS, the quantized model always retains the distribution information of the real data, which improves quantization performance.
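The fixed-BNS trick of S3-5 can be sketched as follows, assuming the quantized model mirrors the module structure of the full-precision model (eval mode stops PyTorch from updating the BN running statistics):

```python
def fix_bns(model_q, model_fp):
    """Copy the BN statistics of the full-precision model into the quantized
    model and keep them fixed throughout fine-tuning."""
    for mq, mf in zip(model_q.modules(), model_fp.modules()):
        if isinstance(mq, nn.BatchNorm2d) and isinstance(mf, nn.BatchNorm2d):
            mq.running_mean.copy_(mf.running_mean)
            mq.running_var.copy_(mf.running_var)
            mq.eval()  # use the stored statistics; do not update them
```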
S4. Iteratively optimize the quantized model according to the knowledge-matching data generator.
Step S4 specifically includes:
S4-1: To make the fine-tuning of Q more stable, G is first trained alone for several iterations as a warm-up.
S4-2: During fine-tuning, the generator G and the quantized model Q are alternately optimized in each epoch. In the alternating training strategy, the generator can generate different data at each update. By increasing the diversity of the data, the quantized model Q can be trained to improve its performance.
S4-3: The G and Q models are continuously updated until Q converges. Continuously training G brings the pseudo data closer to the real training data, and the upper bound for optimizing Q is also raised.
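Putting S4-1 to S4-3 together, a possible alternating fine-tuning loop is sketched below; the optimizers, learning rates and loop bounds are placeholder assumptions, and the helper functions are those defined in the sketches above:

```python
def alternate_finetune(model_fp, generator, model_q, *, epochs=400, steps=200,
                       batch_size=64, num_classes=10, noise_dim=100,
                       warmup_epochs=4, device="cuda"):
    """Warm up G, then alternately update G and Q each epoch until Q converges."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_q = torch.optim.SGD(model_q.parameters(), lr=1e-4, momentum=0.9)

    for epoch in range(epochs):
        for _ in range(steps):
            z = torch.randn(batch_size, noise_dim, device=device)
            y = torch.randint(num_classes, (batch_size,), device=device)

            # 1) update the generator G: classification + BN-statistics matching
            loss_g = generator_ce_loss(model_fp, generator, z, y) \
                     + bns_loss(model_fp, generator(z, y))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()

            if epoch < warmup_epochs:
                continue  # S4-1: train G alone first as a warm-up

            # 2) update the quantized model Q on freshly generated, diverse data
            fake = generator(z, y).detach()
            loss_q = F.cross_entropy(model_q(fake), y) \
                     + mse_alignment_loss(model_fp, model_q, fake)
            opt_q.zero_grad()
            loss_q.backward()
            opt_q.step()
```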
The prior art uses KL alignment to fine-tune the quantized model, whereas the MSE alignment applied in the present invention is superior and better justified than KL alignment for the data-free quantization task. First, MSE alignment is superior for the data-free situation. In normal knowledge distillation, the KL divergence makes the student's logits distribution close to the teacher's, which is suitable and effective for distilling dark knowledge from the teacher to the student and for optimizing the student model with real data. The logits distribution of the teacher model provides sufficient information for the student model to exploit this knowledge. However, owing to distribution shift, even when the KL divergence reaches its minimum after training converges, the student's logits distribution may still be far from that of the teacher model. In the data-free case in particular, the teacher's knowledge is critical, because the information and knowledge of real data are unavailable. If the logits information in the teacher model cannot be fully exploited, only a poorly performing model will be obtained. MSE alignment is therefore introduced to address the distribution-shift problem in the data-free case. When the MSE distance reaches its minimum, the student's logits distribution is closer to that of the teacher model.
Second, MSE alignment is reasonable for the quantization task. In knowledge distillation, the teacher-student architecture is crucial to forming knowledge transfer. Usually, the teacher and student models have different sizes and architectures during distillation. For models with different sizes and architectures, it is unreasonable to directly force the logits of the two models to be identical. Therefore, the KL divergence uses the probabilities calculated by the softmax operation as soft targets to bridge the structural gap between models. In contrast, unlike ordinary knowledge distillation, in the present quantization task the full-precision model is used as the teacher and the quantized model as the student. The student model is a quantized version of the teacher network, in which the structure of the network is preserved. A stricter metric, namely mean squared error alignment, is therefore used to push the quantized model closer to the full-precision model, which is reasonable under the same architecture. Using MSE as the optimization function encourages the quantized model to come closer to the full-precision model for better performance, while compensating for the missing data.
In summary, the knowledge-matching-based generative data-free quantization algorithm proposed in the embodiment of the present invention can effectively restore the accuracy of the quantized model through the generated pseudo data and MSE knowledge distillation. Tables 1 and 2 show the comparison results with the best existing methods on the CIFAR data set and the ImageNet data set, respectively. With the method of this embodiment, high-accuracy data-free quantization is achieved on both commonly used image recognition data sets, a great improvement over existing methods, approaching the accuracy of quantization with data.
Table 1
Table 2
As shown in Figure 3, this embodiment also provides an identification method, including the following steps:
A1. Obtain pictures to be classified and recognized;
A2. Input the obtained pictures into a quantized model for classification and recognition, and output the classification results; wherein the quantized model is obtained by the generative data-free quantization method shown in Figure 1;
wherein the classification and recognition include at least one of face recognition, traffic light recognition and traffic sign recognition.
In some practical embodiments, the data used to train the pre-trained model may need to be kept confidential and cannot be obtained at quantization time, because many pre-trained models are released without their data sets, making it impossible to know what data was used. There are also many scenarios where the data involves privacy and cannot be used, whether for pre-training or quantization, such as face data, medical imaging data and autonomous-driving data.
The embodiment of the present invention can quantize image classification models such as ResNet and MobileNet without the original training data, and train the quantized model to improve classification accuracy. The quantized model can be deployed on mobile devices such as mobile phones and cars to implement image classification tasks such as face recognition, traffic light recognition and traffic sign recognition.
ResNet is a convolutional neural network that achieves superior performance in image classification and object recognition. Residual networks are easy to optimize and can improve accuracy by adding considerable depth. Their internal residual blocks use skip connections, which alleviate the vanishing gradient problem caused by increasing depth in deep neural networks. MobileNet is a convolutional neural network with a smaller model size, fewer trainable parameters and fewer computations, suitable for mobile devices. It aims to make full use of limited computing resources and maximize model accuracy to meet various application cases under limited resources, and is one of the models commonly deployed at the edge.
This embodiment also provides a device, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor implements the method shown in Figure 1 or Figure 3.
The device of this embodiment can execute the method provided by the method embodiments of the present invention, can execute any combination of the implementation steps of the method embodiments, and has the corresponding functions and beneficial effects of the method.
The embodiment of the present application also discloses a computer program product or computer program, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device can read the computer instructions from the computer-readable storage medium and execute them, so that the computer device performs the method shown in Figure 1 or Figure 3.
This embodiment also provides a storage medium storing instructions or programs capable of executing the methods provided by the method embodiments of the present invention; when the instructions or programs are run, any combination of the implementation steps of the method embodiments can be executed, with the corresponding functions and beneficial effects of the methods.
In some alternative embodiments, the functions/operations noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functions/operations involved. In addition, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more comprehensive understanding of the technology. The disclosed methods are not limited to the operations and logical flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless stated otherwise, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in separate physical devices or software modules. It can also be understood that a detailed discussion of the actual implementation of each module is not necessary for understanding the present invention. Rather, given the properties, functions and internal relationships of the various functional modules in the devices disclosed herein, the actual implementation of the modules will be within the routine skill of an engineer. Those skilled in the art can therefore implement the present invention as set forth in the claims without undue experimentation. It can also be understood that the specific concepts disclosed are merely illustrative and are not intended to limit the scope of the present invention, which is determined by the appended claims and their full scope of equivalents.
If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs and other media that can store program code.
The logic and/or steps represented in the flow charts or otherwise described herein can, for example, be considered an ordered list of executable instructions for implementing logic functions, and can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transport a program for use by, or in connection with, an instruction execution system, apparatus or device.
More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection with one or more wires (electronic device), a portable computer disk cartridge (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber-optic devices, and portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it as necessary, and then stored in computer memory.
It should be understood that the various parts of the present invention can be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following technologies known in the art, or a combination thereof, can be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), etc.
In the above description of this specification, descriptions with reference to the terms "one embodiment/example", "another embodiment/example" or "certain embodiments/examples" mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described can be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principles and spirit of the present invention, and that the scope of the present invention is defined by the claims and their equivalents.
The above is a specific description of the preferred implementations of the present invention, but the present invention is not limited to the above embodiments. Those familiar with the art can also make various equivalent variations or substitutions without departing from the spirit of the present invention, and these equivalent variations or substitutions are all included within the scope defined by the claims of the present application.
Claims (10)
- A generative data-free quantization method, characterized by including the following steps: collecting a target data set, and pre-training a full-precision neural network on the data set to obtain a full-precision pre-trained model; training a knowledge-matching data generator according to the full-precision pre-trained model, and generating pseudo data as the generated data, wherein the knowledge-matching data generator mines the classification information and distribution information of the original data from the full-precision pre-trained model; using the generated data to drive the quantization of the full-precision model to obtain a quantized model; and iteratively optimizing the quantized model according to the knowledge-matching data generator.
- The generative data-free quantization method according to claim 1, characterized in that collecting a target data set and pre-training a full-precision neural network on the data set to obtain a full-precision pre-trained model includes: collecting pictures from the target task scenario and labeling the pictures with categories to obtain a data set; dividing the labeled data set into three parts: a training set, a validation set and a test set; determining the neural network to be used for the target task; and training the neural network with the data set to obtain the full-precision pre-trained model.
- The generative data-free quantization method according to claim 1, characterized in that training the knowledge-matching data generator according to the full-precision pre-trained model includes: the knowledge-matching data generator is defined as: $\hat{x} = G(z \mid y)$; the knowledge-matching data generator is trained with a cross-entropy loss, the loss function in training being: $\mathcal{L}_{CE}(G) = \mathbb{E}_{z,y}\big[\mathrm{CE}\big(M(G(z \mid y)),\, y\big)\big]$, where CE denotes the cross-entropy loss, G is the knowledge-matching data generator, $\mathbb{E}_{z,y}$ denotes the expectation, and $M(G(z \mid y))$ denotes feeding the generated data into the full-precision model M; and the knowledge-matching data generator is trained with the BNS loss, the loss function in training being: $\mathcal{L}_{BNS}(G) = \sum_{l}\big\|\mu_l^{g} - \mu_l\big\|_2^2 + \big\|\sigma_l^{g} - \sigma_l\big\|_2^2$.
- The generative data-free quantization method according to claim 1, characterized in that iteratively optimizing the quantized model includes: during fine-tuning, alternately optimizing the knowledge-matching data generator G and the quantized model Q in each iteration, wherein in the alternating optimization strategy the knowledge-matching data generator G generates different data at each update, and the quantized model Q is optimized by increasing the diversity of the data; and continuously updating the knowledge-matching data generator G and the quantized model Q until the quantized model Q converges.
- The generative data-free quantization method according to claim 1, characterized in that the optimized quantized model is deployed on mobile devices, the mobile devices including mobile smart terminals and in-vehicle control terminals.
- An identification method, characterized by including the following steps: obtaining pictures to be classified and recognized; inputting the obtained pictures into a quantized model for classification and recognition, and outputting the classification results, wherein the quantized model is obtained by the generative data-free quantization method according to any one of claims 1-7; wherein the classification and recognition include at least one of face recognition, medical image recognition and traffic scene recognition.
- A device, characterized by including: at least one processor; and at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the at least one processor implements the method according to any one of claims 1-8.
- A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program, when executed by a processor, is used to perform the method according to any one of claims 1-8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210703685.0A CN115223209A (zh) | 2022-06-21 | 2022-06-21 | Generative data-free quantization method, recognition method, device and storage medium
CN202210703685.0 | 2022-06-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023245873A1 true WO2023245873A1 (zh) | 2023-12-28 |
Family
ID=83607709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/116835 WO2023245873A1 (zh) | 2022-06-21 | 2022-09-02 | 一种生成式无数据量化方法、识别方法、装置及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115223209A (zh) |
WO (1) | WO2023245873A1 (zh) |
- 2022
- 2022-06-21: CN application CN202210703685.0A, patent CN115223209A (zh), status: active, pending
- 2022-09-02: PCT application PCT/CN2022/116835, patent WO2023245873A1 (zh)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN111985523A (zh) * | 2020-06-28 | 2020-11-24 | Hefei University of Technology | Power-of-two deep neural network quantization method based on knowledge distillation training
- CN114239861A (zh) * | 2021-12-16 | 2022-03-25 | Huaqiao University | Model compression method and system based on multi-teacher jointly-guided quantization
- CN114429209A (zh) * | 2022-01-27 | 2022-05-03 | Xiamen University | Post-training quantization method for neural networks based on fine-grained data distribution alignment
Also Published As
Publication number | Publication date |
---|---|
CN115223209A (zh) | 2022-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- CN112949786B (zh) | Data classification and recognition method, apparatus, device and readable storage medium | |
- WO2020228655A1 (zh) | Method and apparatus for optimizing a quantization model, electronic device and computer storage medium | |
- CN111275107A (zh) | Multi-label scene image classification method and apparatus based on transfer learning | |
US11776236B2 (en) | Unsupervised representation learning with contrastive prototypes | |
US11977974B2 (en) | Compression of fully connected / recurrent layers of deep network(s) through enforcing spatial locality to weight matrices and effecting frequency compression | |
- CN113159073B (zh) | Knowledge distillation method and apparatus, storage medium and terminal | |
- CN116702843A (zh) | Projection neural networks | |
- CN112132149B (zh) | Remote sensing image semantic segmentation method and apparatus | |
- CN110569359B (zh) | Recognition model training and application method, apparatus, computing device and storage medium | |
- KR20170106338A (ko) | Model compression and fine-tuning | |
- CN113837370B (zh) | Method and apparatus for training a model based on contrastive learning | |
- CN113221983B (zh) | Training method and apparatus for a transfer learning model, and image processing method and apparatus | |
- CN113420775B (zh) | Image classification method under very few training samples based on nonlinearity-adaptive subdomain adaptation | |
- CN114863092A (zh) | Federated object detection method and system based on knowledge distillation | |
- CN112164077B (zh) | Cell instance segmentation method based on bottom-up path augmentation | |
- WO2020118553A1 (zh) | Quantization method and apparatus for a convolutional neural network, and electronic device | |
Cheng et al. | MIFNet: A lightweight multiscale information fusion network | |
- CN115546840A (zh) | Pedestrian re-identification model training method and apparatus based on semi-supervised knowledge distillation | |
- CN116090504A (zh) | Graph neural network model training method and apparatus, classification method, and computing device | |
- CN113850012B (zh) | Data processing model generation method, apparatus, medium and electronic device | |
- WO2023245873A1 (zh) | Generative data-free quantization method, recognition method, device and storage medium | |
Zhang et al. | A small target detection algorithm based on improved YOLOv5 in aerial image | |
- CN117095460A (zh) | Self-supervised group activity recognition method and system based on long- and short-term relational predictive coding | |
- CN115797642A (zh) | Semi-supervised domain-adaptive image semantic segmentation algorithm based on consistency regularization | |
- CN111091198A (zh) | Data processing method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22947613 Country of ref document: EP Kind code of ref document: A1 |