CN117496191B - A data-weighted learning method based on model collaboration - Google Patents

A data-weighted learning method based on model collaboration Download PDF

Info

Publication number
CN117496191B
CN117496191B CN202410004710.5A CN202410004710A CN117496191B CN 117496191 B CN117496191 B CN 117496191B CN 202410004710 A CN202410004710 A CN 202410004710A CN 117496191 B CN117496191 B CN 117496191B
Authority
CN
China
Prior art keywords
target
image
sample
auxiliary
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410004710.5A
Other languages
Chinese (zh)
Other versions
CN117496191A (en
Inventor
梁栋
杜云
孙悦
黄圣君
陈松灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202410004710.5A priority Critical patent/CN117496191B/en
Publication of CN117496191A publication Critical patent/CN117496191A/en
Application granted granted Critical
Publication of CN117496191B publication Critical patent/CN117496191B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/72Data preparation, e.g. statistical preprocessing of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993Evaluation of the quality of the acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

本发明公开了一种基于模型协作的数据加权学习方法,定义了一种面向图像数据的样本重加权方法。对输入的一张图像样本,利用预训练好的辅助模型集合和待训练的目标模型,利用各自得到的预测结果与真实标签计算辅助权重及目标权重,然后通过一定比例融合得到最终的样本权重。使用上述策略计算样本的加权损失,目标模型通过最小化加权损失来更新参数,让目标模型侧重于学习有价值的样本。通过本发明可以在大幅减少筛选低质量数据的人工成本,提高效率。同时本发明适用于多种计算机视觉任务,如图像分割、图像分类、目标检测等,能够为数据有效性评估方案建立一种新的标准通用框架。

The invention discloses a data weighted learning method based on model collaboration, and defines a sample reweighting method oriented to image data. For an input image sample, the pre-trained auxiliary model set and the target model to be trained are used to calculate the auxiliary weight and target weight using the respective prediction results and real labels, and then the final sample weight is obtained through a certain proportion of fusion. Using the above strategy to calculate the weighted loss of the sample, the target model updates the parameters by minimizing the weighted loss, allowing the target model to focus on learning valuable samples. Through the present invention, the labor cost of screening low-quality data can be greatly reduced and the efficiency can be improved. At the same time, the present invention is suitable for a variety of computer vision tasks, such as image segmentation, image classification, target detection, etc., and can establish a new standard universal framework for data validity evaluation schemes.

Description

Data weighted learning method based on model collaboration
Technical Field
The invention relates to the technical field of image instance segmentation, in particular to a data weighted learning method based on model collaboration.
Background
In recent years, artificial intelligence technology has been rapidly developed, and new sparks are continuously collided with by the combination of computer vision, and penetrate into various aspects of learning, work and life, so that the artificial intelligence technology is closely related to human science and technology development. Image classification, object detection, and image segmentation are three major core problems of computer vision. Image instance segmentation is an important branch of research in the field of image segmentation. The example segmentation task refers to locating the position of a potential target in an image, wherein the position is represented by a detection target frame, and pixel-by-pixel marking is performed in different target areas by using a semantic segmentation mode.
With the proposal of the full convolution network, image instance segmentation algorithm research based on deep learning starts to appear in the field of view of people. The method has the outstanding advantages of automatic and multi-layer feature extraction, training of the network by inputting a large number of image samples with labels or notes into the learning network, adjusting the weight of connection between neurons according to error detection, repeatedly optimizing the network to realize end-to-end classification learning, and predicting the image samples without labels after completion. The application effect of the method in the aspect of image instance segmentation is greatly improved in accuracy compared with the traditional algorithm.
However, deep learning requires a large number of training image samples to be accurately labeled when fitting a model, and such a finely labeled image dataset is mainly completed through manual labeling, so that the cost is high and the efficiency is low, and therefore, the training image dataset is difficult to acquire in a real scene. In the process of collecting and labeling an image data set, the problems of low quality of an image sample and inconsistent labeling quality generally occur, and the problem of inconsistent difficulty of the image sample also exists in model training, so that the difficulty of identifying and processing the low-quality image sample is great.
Learning data-driven curriculum for very deep neural networks on corrupted labls, in ICML, 2018 pre-trains an additional auxiliary network and then uses it to select a clean instance to guide training of the target network. The two networks are trained simultaneously by the Decoupling "white to update" from "how to update". In NeurIPS, 2017, and for each batch of samples, the prediction results are given to the two networks respectively, and when the prediction results are inconsistent, the backward propagation gradient update is carried out, but as the training times increase, the two networks gradually tend to be consistent, and functionally gradually degenerate into a self-training single target model. In order to solve the problems, co-training is Robust training of deep neural networks with extremely noisy labes, in NeurIPS, 2018 trains two deep neural networks at the same time, in each small batch of data, the two networks select label data which is as clean as possible according to the acquired loss result when feeding all data forward, the two networks determine the trained data by selecting a small loss method, and finally the other network performs back propagation on the selected data to update the super parameters of the network weight. "How does disagreement help generalization against label corruption". Then, among these inconsistent data, each network selects its own small-loss data to teach the other network, propagates the small-loss data from the other network, and updates its own parameters.
The method is focused on detecting the low-quality image sample, and re-labeling the image data set after discarding the low-quality image sample, however, the method does not dig the potential value of the low-quality image sample, so that the waste of invalid acquisition is caused.
Disclosure of Invention
The invention aims to: aiming at the problems in the background art, the invention provides a data weighted learning method based on model cooperation, which considers that the existing data effectiveness evaluation is mainly focused on detecting low-quality image samples, and the data is re-marked after being discarded, and the value of the image samples is not dug out, so that a new standard general framework is designed for the data effectiveness evaluation scheme, namely, the quality of the image samples is evaluated, the image sample re-weighting scheme is adopted to enable a target model to tend to learn the image samples with high quality, and the discarding of too many low-quality image samples is avoided, so that the model training is assisted by using the model.
The technical scheme is as follows: in order to achieve the above purpose, the invention adopts the following technical scheme:
a data weighted learning method based on model collaboration comprises the following steps:
step S1, giving a general image datasetTarget image dataset->Verification set->Giving an auxiliary model set consisting of a plurality of auxiliary models differing in both decision boundary and learning ability>Wherein->Indicate->Auxiliary model->For natural numbers greater than 0, give the target model +.>The method comprises the steps of carrying out a first treatment on the surface of the Wherein the set of auxiliary models->In general image dataset->Pre-training is carried out on the device;
step S2, randomly selecting the target image datasetImage sample->Auxiliary model set->Is->The individual auxiliary models are all for image samples +.>Generating a prediction result to obtain a prediction result setWherein->Indicate->The individual auxiliary model is +.>The prediction result generated is then passed through the image samples +.>The set of prediction results generated->And image sample->Corresponding real tag calculate image sample +.>Is->Auxiliary weights are obtained to obtain auxiliary weight set +.>Wherein->Indicate->Auxiliary weights generated by the auxiliary model are then calculated +.>Averaging the auxiliary weights to obtain an image sampleFinal auxiliary weights of (2);
finally, the target image dataset is calculatedFinal auxiliary weight of other image samples in the target image dataset, and according to the size of the final auxiliary weight +.>Ordering all the image samples in the image, if the final auxiliary weight of a certain image sample is smaller than the set threshold +.>Then the image sample is +.>Discarding to obtain cleaned target image dataset +.>
Step S3, randomly selecting a target image datasetThe image sample is defined as +.>Through the object model->Generating an image sample->Is not yet controlled by the pre-processing of (2)Measurement of->Through image sample->Generated prediction result->Image sample->Corresponding real tag calculate image sample +.>Target weight and sample loss of (2), then using the target model +.>Calculating the target image dataset +.>Target weights and sample losses for all other image samples;
step S4, collecting the target image data setThe final auxiliary weight and the target weight obtained by each image sample in the image are fused to obtain sample weight;
step S5, collecting the target image data setRe-weighting the sample loss obtained for each image sample in the model with the sample weight to obtain a weighted sample loss, and finally updating the target model by using gradient calculation>Is->Is used for training;
step S6, when the target image data setThe training is completed, and when all training rounds are completed, the target model +.>After training, each training round comprises a training process of steps S3-S5;
step S7, verifying the setInput to the trained object model +.>And outputting a result after the image instance to be segmented is segmented.
Preferably, the implementation process of obtaining the final auxiliary weight in step S2 is as follows:
step S2.1, randomly selecting a target image datasetImage sample->Based on image sample->Is->Obtaining an image sample->Predicted target bounding box setsPredicted target mask setAnd predicted target edge setsWherein->Representing image samples +.>First->Target bounding box predicted by the auxiliary model +.>Representing image samples +.>First->Target mask for individual auxiliary model prediction, +.>Representing image samples +.>First->Target edges predicted by the auxiliary models; calculate->、/>Andconfidence score of (2) and calculate +.>、/>And->An average confidence score of the confidence scores of +.>The method comprises the steps of carrying out a first treatment on the surface of the Then calculate the image sample +.>Average confidence scores obtained from other auxiliary models respectively, all of the average confidence scores forming a setThe method comprises the steps of carrying out a first treatment on the surface of the Aggregating average confidence scoresIn combination with the set of evaluation index scores as image sample +.>Is a difficulty score set of (1)The evaluation index score set is expressed asWhereinRepresenting image samples +.>Is>Score of each evaluation index->Representing image samples +.>Is>The individual difficulty scores:
wherein the method comprises the steps ofRepresenting a confidence score; />Representing image samples +.>Is set to be the target bounding box true value of (1),representing image samples +.>Target mask true value,/->Representing image samples +.>Target edge truth values of (2); />For measuring the image sample->First->Degree of overlap between target bounding box and target bounding box truth values predicted by the respective auxiliary model, ++>Is used for measuring the image sample +.>First->An average indicator of the degree of overlap between the target mask and the target mask truth values predicted by the respective auxiliary models,is used for measuring the image sample +.>First->Matching degree between the target edge predicted by the auxiliary model and the target edge true value;
step S2.2, based on the prediction result setAnd image sample->At the coordinates ofCorresponding real label at pixel->Calculate image sample +.>Is a set of tag quality scores for (a)Wherein->Represents the>The mass fraction of each label is as follows:
wherein the method comprises the steps ofRepresenting image samples +.>Width of->Representing image samples +.>Height of->、/>For parameters->For counting symbols +.>Expressed in coordinates +.>Corresponding real label at pixels of (2)>Confidence of category->For the set threshold value, calculate the image sample +.>True mark at other pixelsConfidence of sign category, statistics of +.>The individual auxiliary models are +.>Confidence in the prediction results at all pixels in (a) is greater than the set threshold +.>The number of pixels of (2) is recorded as +.>
S2.3, obtaining an image sample based on the UNIQUE without reference quality evaluation indexIs>Wherein->Representing image samples +.>Is>The image quality scores are:
step S2.4, sample the imageThe difficulty score set, the label quality score set and the image quality score set are fused to obtain an image sample +.>Auxiliary weight set ∈ ->
Wherein the method comprises the steps of,/>,/>Is a superparameter for balancing the difficulty score set +.>Tag quality score set->Image quality score set +.>For->Contribution to->The auxiliary weights are averaged to obtain an image sample +.>Then calculates the final auxiliary weights of the target image dataset +.>Setting a threshold t for final auxiliary weights of other image samples, and if the final auxiliary weight is smaller than the threshold t, indicating that the quality of the image sample corresponding to the final auxiliary weight is low, and taking the image sample from the target image datasetDiscarding to obtain cleaned target image dataset +.>
Preferably, the implementation process of step S3 is as follows:
step S3.1, calculating target weight: for the target image datasetIs a randomly selected image sample +.>Based on the object model->Predicted outcome of->Obtaining a target model->Confidence score of predicted target bounding box, confidence score of predicted target mask and confidence score of predicted target edge, and calculate average confidence score of predicted target bounding box, confidence score of predicted target mask and confidence score of predicted target edge, combine average confidence score with evaluation index score as image sample>Is then based on the objective model +.>Predicted outcome of->And image sample->Corresponding real label calculation image sample +.>Then obtaining an image sample ++based on the no-reference quality evaluation index UNIQUE>Is to sample the image +.>The difficulty score, the label quality score and the image quality score are fused to obtain an image sample +.>The operation of fusing is as follows: />
Wherein the method comprises the steps ofFor image sample->Target weight of->For image sample->Difficulty score of->For image sample->Label mass fraction,/>For the image quality score +.>,/>,/>Is a super parameter for balancing the difficulty fraction +.>Tag quality score set->And image quality score->Weight of target->Contribution of (2);
step S3.2, calculating an image sampleSample loss of->
Wherein,representing class loss of target frame, using class cross entropy loss to measure differences between predicted and real target frame classes,/>True value representing target frame class, ++>A probability value representing a predicted target class for the target frame; />Representing regression loss of target frame, using average absolute error loss, measuring difference between predicted target frame and real target frame, +.>True value representing the position coordinates of the target frame, +.>Predicted value representing target frame position coordinates, +.>Representing the number of target boxes +.>Is a parameter; />Representing the segmentation penalty of the target mask, using the cross entropy penalty at pixel level,/for the target mask>True value representing target mask, +_>A predicted value representing a target mask, wherein +.>Representing image samples +.>Width of->Representing image samples +.>Height of->,/>For parameters->The number of categories is indicated and,cis a parameter.
Preferably, the sample weight obtained in S4 is:
wherein,for sample weight, ++>For the target weight, ++>For final auxiliary weight, ++>For adjusting target weight->And final auxiliary weight +.>Is a parameter of (a).
Preferably, in the step S5, the target model is updatedIs a weighted sample loss->The method comprises the following steps:
the beneficial effects of the invention include the following aspects:
(1) According to the invention, the model is used for evaluating the data validity instead of manual work, and the process of evaluating the data validity is automated, so that the labor cost of data screening can be greatly reduced, and the efficiency is improved. Data in scenes such as crowdsourcing are cleaned, so that higher-quality data can be obtained while the data collection cost is reduced;
(2) The invention calculates the loss weight of each image sample by evaluating the quality of the label sample together with the auxiliary model and the target model, and under the weighted loss, the target model tends to learn the image sample with high quality, thereby improving the performance of the image data set containing a large number of low-quality image samples;
(3) The method is not only suitable for the field of image instance segmentation, but also hopefully achieves positive effects in the visual fields of image semantic segmentation, image classification, target detection and the like.
Drawings
FIG. 1 is a flow chart of a data weighted learning method based on model collaboration provided by the invention;
FIG. 2 is a simplified flow chart of the method for deriving a cleaned dataset based on auxiliary weights;
fig. 3 is a schematic diagram of a data weighting framework based on model collaboration provided by the invention.
Description of the embodiments
The invention will be further described with reference to the accompanying drawings. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a data weighted learning method based on model collaboration, the specific principle is shown in figure 1, and the method comprises the following steps:
step S1, giving a general image datasetTarget image dataset->Verification set->Giving an auxiliary model set consisting of a plurality of auxiliary models differing in both decision boundary and learning ability>Wherein->Indicate->Auxiliary model->For natural numbers greater than 0, give the target model +.>The method comprises the steps of carrying out a first treatment on the surface of the Wherein the set of auxiliary models->In general image dataset->Pre-training is carried out on the device;
step S2, randomly selecting the target image datasetImage sample->Auxiliary model set->Is->The individual auxiliary models are all for image samples +.>Generating a prediction result to obtain a prediction result setWherein->Indicate->The individual auxiliary model is +.>The prediction result generated is then passed through the image samples +.>The set of prediction results generated->And image sample->The corresponding real tag calculates the image sample +.>Sets of individual auxiliary weights->Wherein->Indicate->Auxiliary weights generated by the auxiliary model are then applied to this +.>The auxiliary weights are averaged to obtain an image sample +.>Final auxiliary weights of (2);
finally, the target image dataset is calculatedFinal auxiliary weight of other image samples in the target image dataset, and according to the size of the final auxiliary weight +.>Ordering all the image samples in the image, if the final auxiliary weight of a certain image sample is smaller than the set threshold +.>Then the image sample is +.>Discarding to obtain cleaned target image dataset +.>
Specifically, referring to fig. 2, the method includes the following steps:
step S2.1, randomly selecting a target image datasetImage sample->Based on image sample->Is->Obtaining an image sample->Predicted target bounding box setsPredicted target mask setAnd predicted target edge setsWherein->Representing image samples +.>First->Target bounding box predicted by the auxiliary model +.>Representing image samples +.>First->Target mask for individual auxiliary model prediction, +.>Representing image samples +.>First->Target edges predicted by the auxiliary models; calculate->、/>Andand calculates the average confidence score +.>The method comprises the steps of carrying out a first treatment on the surface of the Then calculate the image sample +.>Average confidence scores obtained from other auxiliary models respectively, all the average confidence scores constituting the set +.>The method comprises the steps of carrying out a first treatment on the surface of the The average confidence score set +.>In combination with the set of evaluation index scores as image sample +.>Is ++A of difficulty score sets>The evaluation index score set is expressed as +.>Wherein->Representing image samples +.>Is>Score of each evaluation index->Representing image samples +.>Is the first of (2)The individual difficulty scores:
wherein the method comprises the steps ofRepresenting a confidence score; />Representing image samples +.>Is set to be the target bounding box true value of (1),representing image samples +.>Target mask true value,/->Representing image samples +.>Target edge truth values of (2); />For measuring the image sample->First->Target bounding box and target boundary for individual auxiliary model predictionDegree of overlap between frame truth values, +.>Is used for measuring the image sample +.>First->An average indicator of the degree of overlap between the target mask and the target mask truth values predicted by the respective auxiliary models,is used for measuring the image sample +.>First->Matching degree between the target edge predicted by the auxiliary model and the target edge true value;
step S2.2, based on the prediction result setAnd image sample->At the coordinates ofCorresponding real label at pixel->Calculate image sample +.>Is a set of tag quality scores for (a)Wherein->Represents the>The mass fraction of each label is as follows:
wherein the method comprises the steps ofRepresenting image samples +.>Width of->Representing image samples +.>Height of->、/>As a function of the parameters,for counting symbols +.>Expressed in coordinates +.>Corresponding real label at pixels of (2)>Confidence of category->For the set threshold value, calculate +.>Confidence of true tag class at other pixels, statistics of +.>Confidence in the prediction results of the auxiliary models for all pixels in the image sample is greater than a set threshold +.>The number of pixels of (2) is recorded as +.>
S2.3, obtaining an image sample based on the UNIQUE without reference quality evaluation indexIs>Wherein->Representing image samples +.>Is>The image quality scores are:
step S2.4, sample the imageThe difficulty score set, the label quality score set and the image quality score set are fused to obtain an image sample +.>Auxiliary weight set ∈ ->
Wherein the method comprises the steps of,/>,/>Is a superparameter for balancing the difficulty score set +.>Tag quality score set->Image quality score set +.>For->Contribution to->The auxiliary weights are averaged to obtain an image sample +.>Then calculates the final auxiliary weights of the target image dataset +.>Final auxiliary weights of other image samples in the image data set, setting a threshold t, and adding the final auxiliary weights to the target image data set>Screening and discarding low-quality image samples, wherein the method comprises the following steps: if the final auxiliary weight is less than the thresholdA value t indicating that the quality of the image sample corresponding to the final auxiliary weight is low, and this image sample is taken from the target image dataset +.>Discarding to obtain cleaned target image dataset +.>
Step S3, randomly selecting a target image datasetThe image sample is defined as +.>Through the object model->Generating an image sample->Predicted outcome of->Through image sample->Generated prediction result->Image sample->Corresponding real tag calculate image sample +.>Target weight and sample loss of (2), then using the target model +.>Calculating the target image dataset +.>Target weights and sample losses for all other image samples; specifically, please see fig. 3:
step S3.1, calculating target weight: for the target image datasetIs a randomly selected image sample +.>Based on the object model->Predicted outcome of->Obtaining a target model->Confidence scores of the predicted target bounding box, the predicted target mask and the predicted target edge, and calculating an average confidence score of the three confidence scores, combining the average confidence score with the evaluation index score as an image sample>Is then based on the objective model +.>Predicted outcome of->And image sample->Corresponding real label calculation image sample +.>Then obtaining an image sample ++based on the no-reference quality evaluation index UNIQUE>Is to sample the image +.>The difficulty score, the label quality score and the image quality score are fused to obtain an image sample +.>The operation of fusing is as follows: />
Wherein the method comprises the steps ofFor image sample->Target weight of->For image sample->Difficulty score of->For image sample->Label mass fraction,/>For the image quality score +.>,/>,/>Is a super parameter for balancing the difficulty and the easinessCount->Tag quality score set->And image quality score->Weight of target->Contribution of (2);
step S3.2, calculating an image sampleSample loss of->
Wherein,representing class loss of target frame, using class cross entropy loss to measure difference between class of predicted target frame and class of real target frame, +.>True value representing target frame class, ++>A probability value representing a predicted target class for the target frame; />Representing regression loss of target frame, using average absolute error loss, measuring difference between predicted target frame and real target frame, +.>True value representing the position coordinates of the target frame, +.>Predicted value representing target frame position coordinates, +.>Representing the number of target boxes +.>Is a parameter; />Representing the segmentation penalty of the target mask, using the cross entropy penalty at pixel level,/for the target mask>True value representing target mask, +_>A predicted value representing a target mask, wherein +.>Representing image samples +.>Width of->Representing image samples +.>Height of->,/>For parameters->The number of categories is indicated and,cis a parameter.
Step S4, collecting the target image data setThe final auxiliary weight and the target weight obtained by each image sample are fused to obtain sample weight:
wherein,for sample weight, ++>For the target weight, ++>For final auxiliary weight, ++>For adjusting target weights->And final auxiliary weight +.>Is a parameter of (a).
Step S5, object image data setRe-weighting the sample loss obtained by each image sample and the sample weight to obtain weighted sample loss, and finally utilizing gradientIs used for updating the target model>Is->Is to (1) training:
s6, when the training set training is completed and all training rounds are completed, the target modelAfter training, each training round comprises a training process of steps S2-S5;
step S7, verifying the setInput to the trained object model +.>And outputting a result after the picture instance to be segmented is segmented.
A data weighted learning method based on model collaboration is implemented in a specific example segmentation task. Two models, mask-RCNN, pointRend, were chosen as the auxiliary model, the yolact model as the target model, and experiments were performed on CoCo datasets, with the results shown in table 1.
Table 1: experimental results of training the yolact example segmentation model under the CoCo dataset:
in table 1, the first row shows the results obtained by training the yolact model directly on the CoCo dataset. The second line shows that re-weighting the final sample weights to the loss of the target model proceeds using only the difficulty score of the image sample as the sample score for the auxiliary model and the target modelAnd updating the row model to obtain a result. The third line shows the results obtained after model updating by re-weighting the final sample weights to the loss of the target model using only the label quality scores of the image samples as the sample scores for the auxiliary model and the target model. The last line shows the difficulty score for the simultaneous use of the image samplesTag mass fraction of image sample->Image quality fraction of image sample->And weighting the final sample weight to the loss of the target model according to a certain proportion as the sample fraction of the auxiliary model and the target model, and carrying out model updating to obtain a result.
By observing the table, the used sample fraction indexes can be found to improve the segmentation performance of the model to a certain extent, and the effectiveness of the method is further verified.
The foregoing is only a preferred embodiment of the invention, it being noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.

Claims (3)

1.一种基于模型协作的数据加权学习方法,其特征在于,包括以下步骤:1. A data weighted learning method based on model collaboration, characterized in that it includes the following steps: 步骤S1、给定通用图像数据集,目标图像数据集/>,验证集/>,给定由决策边界和学习能力均不同的多个辅助模型组成的辅助模型集合/>,其中表示第/>个辅助模型,/>为大于0的自然数,给定目标模型/>;其中所述辅助模型集合/>在通用图像数据集/>上进行预训练;Step S1. Given a general image data set , target image data set/> , validation set/> , given an auxiliary model set consisting of multiple auxiliary models with different decision boundaries and learning capabilities /> ,in Indicates the first/> auxiliary model,/> is a natural number greater than 0, given the target model/> ;The auxiliary model collection/> In general image dataset/> pre-training; 步骤S2、随机选择所述目标图像数据集中的图像样本/>,辅助模型集合/>中的个辅助模型都对图像样本/>生成预测结果,得到预测结果集合,其中/>表示第/>个辅助模型对图像样本/>生成的预测结果,接着通过图像样本/>生成的预测结果集合/>与图像样本/>对应的真实标签计算出图像样本/>的/>个辅助权重,得到辅助权重集合/>,其中/>表示第/>个辅助模型生成的辅助权重,然后对计算出的/>个辅助权重进行平均得到图像样本的最终辅助权重;Step S2: Randomly select the target image data set Image samples in/> , auxiliary model collection/> middle Each auxiliary model is suitable for image samples/> Generate prediction results and obtain a set of prediction results , of which/> Indicates the first/> Auxiliary models on image samples/> The generated prediction results are then passed through the image samples/> Generated set of prediction results/> with image samples/> The corresponding real label is calculated as an image sample/> of/> auxiliary weights to obtain the auxiliary weight set/> , of which/> Indicates the first/> auxiliary weights generated by an auxiliary model, and then apply the calculated /> Averaging auxiliary weights to obtain image samples The final auxiliary weight; 最后计算所述目标图像数据集中其他图像样本的最终辅助权重,并根据最终辅助权重的大小对所述目标图像数据集/>中的所有图像样本进行排序,若某个图像样本的最终辅助权重小于设置的阈值/>,则将此图像样本从目标图像数据集/>丢弃,得到清洗后的目标图像数据集/>Finally calculate the target image data set the final auxiliary weights of other image samples in the target image data set/> All image samples in are sorted, if the final auxiliary weight of an image sample is less than the set threshold/> , then this image sample is extracted from the target image dataset/> Discard and obtain the cleaned target image data set/> ; 步骤S3、随机选择目标图像数据集中的图像样本,将该图像样本定义为/>,通过目标模型/>生成图像样本/>的预测结果/>,通过图像样本/>生成的预测结果/>、图像样本/>对应的真实标签计算出图像样本/>的目标权重以及样本损失,然后利用所述目标模型/>计算目标图像数据集/>中其他所有图像样本的目标权重以及样本损失;Step S3: Randomly select the target image data set image sample in , define the image sample as/> , through the target model/> Generate image samples/> Prediction results/> , via image sample/> Generated prediction results/> , image sample/> The corresponding real label is calculated as an image sample/> The target weight and sample loss, and then use the target model/> Compute target image data set/> The target weights and sample losses of all other image samples in 步骤S4、将目标图像数据集中的每个图像样本得到的最终辅助权重和目标权重进行融合得到样本权重;Step S4: Convert the target image data set to The final auxiliary weight and target weight obtained for each image sample in are fused to obtain the sample weight; 步骤S5、将目标图像数据集中的每个图像样本得到的样本损失与样本权重进行重新加权得到加权后的样本损失,最后利用梯度的计算来更新目标模型/>的参数引导目标模型/>的训练;Step S5: Convert the target image data set to The sample loss obtained for each image sample in the image is reweighted with the sample weight to obtain the weighted sample loss, and finally the gradient calculation is used to update the target model/> The parameters guide the target model/> training; 步骤S6、当目标图像数据集训练完成,且完成所有训练轮次时,目标模型/>训练完毕,其中每一个训练轮次均包括步骤S3-S5的训练过程;Step S6: When the target image data set When training is completed and all training rounds are completed, the target model/> After training is completed, each training round includes the training process of steps S3-S5; 步骤S7、将验证集输入到训练好的目标模型/>中,输出对待分割图像实例分割后的结果;Step S7: Convert the verification set to Input to the trained target model/> , output the result of segmentation of the image instance to be segmented; 步骤S2中获取最终辅助权重的实现过程为:The implementation process of obtaining the final auxiliary weight in step S2 is: 步骤S2.1、随机选取目标图像数据集中的图像样本/>,基于图像样本/>的预测结果集合/>得到图像样本/>预测的目标边界框集合、预测的目标掩码集合和预测的目标边缘集合,其中/>表示图像样本/>第/>个辅助模型预测的目标边界框,/>表示图像样本/>第/>个辅助模型预测的目标掩码,/>表示图像样本/>第/>个辅助模型预测的目标边缘;计算/>、/>的置信度分数,并计算/>、/>和/>的置信度分数的平均置信度分数/>;接着计算图像样本/>由其他辅助模型分别得到的平均置信度分数,全部的平均置信度分数构成集合/>;将平均置信度分数集合/>与评价指标分数集合相结合作为图像样本/>的难易度分数集合/>,评价指标分数集合表示为,其中/>表示图像样本/>的第/>个评价指标分数,/>表示图像样本/>的第/>个难易度分数:Step S2.1, randomly select the target image data set Image samples in/> , based on image samples/> The set of prediction results/> Get image sample/> A set of predicted object bounding boxes , predicted target mask set and the predicted target edge set , of which/> Represent image sample/> No./> The target bounding box predicted by the auxiliary model, /> Represent image sample/> No./> target mask predicted by the auxiliary model,/> Represent image sample/> No./> Target edge predicted by an auxiliary model; calculation/> ,/> and confidence score and calculate/> ,/> and/> The average confidence score of the confidence score/> ;Then calculate the image samples/> The average confidence scores obtained by other auxiliary models respectively, and all the average confidence scores form a set /> ;Gather the average confidence scores/> Combined with the evaluation index score set as an image sample/> A collection of difficulty scores/> , the evaluation index score set is expressed as , of which/> Represent image sample/> of/> evaluation index scores,/> Represent image sample/> of/> difficulty scores: ; ; ; 其中表示置信度分数;/>表示图像样本/>的目标边界框真值,/>表示图像样本/>的目标掩码真值,/>表示图像样本/>的目标边缘真值;用来衡量图像样本/>第/>个辅助模型预测的目标边界框和目标边界框真值之间的重叠程度,/>是用来衡量图像样本/>第/>个辅助模型预测的目标掩码和目标掩码真值之间的重叠程度的平均指标,是用来衡量图像样本/>第/>个辅助模型预测的目标边缘和目标边缘真值之间的匹配程度;in Indicates the confidence score;/> Represent image sample/> The ground truth value of the target bounding box, /> Represent image sample/> The true value of the target mask, /> Represent image sample/> The target edge truth value; Used to measure image samples/> No./> The degree of overlap between the target bounding box predicted by the auxiliary model and the true value of the target bounding box, /> is used to measure image samples/> No./> The average indicator of the degree of overlap between the target mask predicted by the auxiliary models and the true value of the target mask, is used to measure image samples/> No./> The degree of matching between the target edge predicted by the auxiliary model and the true value of the target edge; 步骤S2.2、基于预测结果集合与图像样本/>在坐标为像素处对应的真实标签/>计算图像样本/>的标签质量分数集合,其中/>表示图像样本的第/>个标签质量分数:Step S2.2, based on the prediction result set with image samples/> At the coordinates of The corresponding real label at the pixel/> Calculate image samples/> A collection of label quality scores , of which/> Represents the image sample number/> tag quality score: ; ; 其中表示图像样本/>的宽度,/>表示图像样本/>的高度,/>、/>为参数,/>为计数符号,/>表示在坐标为/>的像素处对应的真实标签/>类别的置信度,/>为设定的阈值,计算图像样本/>其他像素处真实标签类别的置信度,统计第/>个辅助模型对于图像样本/>中所有像素处的预测结果中置信度大于设定的阈值的像素数量,数量记为/>in Represent image sample/> The width of /> Represent image sample/> height,/> ,/> is a parameter,/> is the counting symbol,/> Expressed at the coordinates/> The real label corresponding to the pixel/> Confidence of the category,/> Calculate image samples for the set threshold/> Confidence of the true label category at other pixels, statistics/> Auxiliary models for image samples/> The confidence level of the prediction results at all pixels is greater than the set threshold The number of pixels, the number is recorded as/> ; 步骤S2.3、基于无参考质量评价指标UNIQUE得到图像样本的图像质量分数集合,其中/>表示图像样本/>的第/>个图像质量分数,则有:Step S2.3. Obtain image samples based on the no-reference quality evaluation index UNIQUE. A collection of image quality scores for , of which/> Represent image sample/> of/> image quality scores, then there are: ; 步骤S2.4、将图像样本的难易度分数集合、标签质量分数集合以及图像质量分数集合进行融合得到图像样本/>的辅助权重集合 />Step S2.4: Image samples The difficulty score set, label quality score set and image quality score set are fused to obtain the image sample/> The auxiliary weight set of : ; 其中,/>,/>是超参数,用来平衡难易度分数集合/>、标签质量分数集合/>以及图像质量分数集合/>的贡献,对/>个辅助权重进行平均得到图像样本/>的最终辅助权重,然后计算目标图像数据集/>中其他图像样本的最终辅助权重,设置阈值t,若最终辅助权重小于该阈值t,表明该最终辅助权重所对应的图像样本的质量低,将此图像样本从目标图像数据集/>丢弃,得到清洗后的目标图像数据集/>in ,/> ,/> is a hyperparameter used to balance the difficulty score set/> , label quality score set/> and a collection of image quality scores/> right contribution to/> Averaging auxiliary weights to obtain image samples/> The final auxiliary weight is then calculated for the target image dataset/> Set the threshold t for the final auxiliary weight of other image samples in Discard and obtain the cleaned target image data set/> ; 步骤S3的实现过程为:The implementation process of step S3 is: 步骤S3.1、计算目标权重:对于所述目标图像数据集中随机选取的一个图像样本,基于目标模型/>的预测结果/>得到目标模型/>预测的目标边界框的置信度分数、预测的目标掩码的置信度分数和预测的目标边缘的置信度分数,并计算预测的目标边界框的置信度分数、预测的目标掩码的置信度分数和预测的目标边缘的置信度分数的平均置信度分数,将平均置信度分数与评价指标分数相结合作为图像样本/>的难易度分数,然后基于目标模型/>的预测结果/>与图像样本/>对应的真实标签计算图像样本/>的标签质量分数,接着基于无参考质量评价指标UNIQUE得到图像样本/>的图像质量分数,将图像样本/>的难易度分数、标签质量分数以及图像质量分数进行融合得到图像样本/>的目标权重,进行融合的操作为:/>Step S3.1. Calculate target weight: For the target image data set An image sample randomly selected from , based on the target model/> Prediction results/> Get target model/> The confidence score of the predicted object bounding box, the confidence score of the predicted object mask, and the confidence score of the predicted object edge, and calculate the confidence score of the predicted object bounding box, the confidence score of the predicted object mask and the average confidence score of the confidence score of the predicted target edge, and combine the average confidence score with the evaluation index score as an image sample/> The difficulty score is then based on the target model/> Prediction results/> with image samples/> Corresponding real label calculation image sample/> label quality score, and then obtain image samples based on the no-reference quality evaluation index UNIQUE/> The image quality score of the image sample/> The difficulty score, label quality score and image quality score are fused to obtain the image sample/> The target weight of , the fusion operation is:/> ; 其中为图像样本/>的目标权重,/>为图像样本/>的难易度分数,/>为图像样本/>的标签质量分数,/>为图像质量分数,/>,/>,/>是超参数,用来平衡难易度分数、标签质量分数集合/>和图像质量分数/>对目标权重/>的贡献;in for image samples/> The target weight of /> for image samples/> 's difficulty score,/> for image samples/> label quality score,/> is the image quality score,/> ,/> ,/> is a hyperparameter used to balance the difficulty score , label quality score set/> and image quality score/> To target weight/> contribution; 步骤S3.2、计算图像样本的样本损失/>Step S3.2. Calculate image samples sample loss/> : ; ; ; ; 其中,表示目标框的分类损失,使用二分类交叉熵损失,用来衡量预测目标框的类别与真实目标框类别之间的差异,/>表示目标框类别的真值,/>表示目标框预测为目标类别的概率值;/>表示目标框的回归损失,使用平均绝对误差损失,用衡量预测目标框和真实目标框之间的差异,/>表示目标框位置坐标的真值,/>表示目标框位置坐标的预测值,/>表示目标框的数量,/>为参数;/>表示目标掩码的分割损失,使用像素级的交叉熵损失,/>表示目标掩码的真值,/>表示目标掩码的预测值,其中表示图像样本/>的宽度,/>表示图像样本/>的高度,/>,/>为参数,/>表示类别数,c为参数。in, Represents the classification loss of the target frame, using a binary cross-entropy loss to measure the difference between the category of the predicted target frame and the category of the real target frame, /> Represents the true value of the target box category, /> Represents the probability value that the target frame is predicted to be the target category;/> Represents the regression loss of the target box, using the average absolute error loss to measure the difference between the predicted target box and the real target box, /> Represents the true value of the target frame position coordinates,/> Represents the predicted value of the target frame position coordinates,/> Indicates the number of target boxes,/> is a parameter;/> Represents the segmentation loss of the target mask, using pixel-level cross-entropy loss, /> Represents the true value of the target mask, /> represents the predicted value of the target mask, where Represent image sample/> The width of /> Represent image sample/> height,/> ,/> is a parameter,/> Represents the number of categories, c is a parameter. 2.根据权利要求1所述的一种基于模型协作的数据加权学习方法,其特征在于,所述S4中得到的样本权重为:2. A data weighted learning method based on model collaboration according to claim 1, characterized in that the sample weight obtained in S4 is: ; 其中,为样本权重,/>为目标权重,/>为最终辅助权重,/>为用于调整目标权重/>和最终辅助权重/>的参数。in, is the sample weight,/> is the target weight,/> is the final auxiliary weight,/> For adjusting target weight/> and final auxiliary weight/> parameters. 3.根据权利要求2所述的一种基于模型协作的数据加权学习方法,其特征在于,所述步骤S5中用于更新目标模型的加权后的样本损失/>为:3. A data weighted learning method based on model collaboration according to claim 2, characterized in that step S5 is used to update the target model. The weighted sample loss/> for: .
CN202410004710.5A 2024-01-03 2024-01-03 A data-weighted learning method based on model collaboration Active CN117496191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410004710.5A CN117496191B (en) 2024-01-03 2024-01-03 A data-weighted learning method based on model collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410004710.5A CN117496191B (en) 2024-01-03 2024-01-03 A data-weighted learning method based on model collaboration

Publications (2)

Publication Number Publication Date
CN117496191A CN117496191A (en) 2024-02-02
CN117496191B true CN117496191B (en) 2024-03-29

Family

ID=89678670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410004710.5A Active CN117496191B (en) 2024-01-03 2024-01-03 A data-weighted learning method based on model collaboration

Country Status (1)

Country Link
CN (1) CN117496191B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3731144A1 (en) * 2019-04-25 2020-10-28 Koninklijke Philips N.V. Deep adversarial artifact removal
CN112233095A (en) * 2020-10-16 2021-01-15 哈尔滨市科佳通用机电股份有限公司 A method for detecting various failure forms of a locking plate device of a railway freight car
WO2022045877A1 (en) * 2020-08-28 2022-03-03 Mimos Berhad A system and method for identifying occupancy of parking lots
CN114252423A (en) * 2021-12-24 2022-03-29 汉姆德(宁波)智能医疗科技有限公司 Method and device for generating fully sampled image of super-resolution microscope
CN115310130A (en) * 2022-08-15 2022-11-08 南京航空航天大学 A method and system for multi-site medical data analysis based on federated learning
WO2023077821A1 (en) * 2021-11-07 2023-05-11 西北工业大学 Multi-resolution ensemble self-training-based target detection method for small-sample low-quality image
WO2023093346A1 (en) * 2021-11-25 2023-06-01 支付宝(杭州)信息技术有限公司 Exogenous feature-based model ownership verification method and apparatus
CN116824216A (en) * 2023-05-22 2023-09-29 南京信息工程大学 A passive unsupervised domain adaptive image classification method
WO2023185785A1 (en) * 2022-03-28 2023-10-05 华为技术有限公司 Image processing method, model training method, and related apparatuses
CN117036897A (en) * 2023-05-29 2023-11-10 中北大学 Method for detecting few sample targets based on Meta RCNN
CN117173497A (en) * 2023-11-02 2023-12-05 腾讯科技(深圳)有限公司 Image generation method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3133729A1 (en) * 2020-10-08 2022-04-08 Royal Bank Of Canada System and method for machine learning fairness test
US20230360246A1 (en) * 2022-05-05 2023-11-09 Elm Company Method and System of Real-Timely Estimating Dimension of Signboards of Road-side Shops

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3731144A1 (en) * 2019-04-25 2020-10-28 Koninklijke Philips N.V. Deep adversarial artifact removal
WO2022045877A1 (en) * 2020-08-28 2022-03-03 Mimos Berhad A system and method for identifying occupancy of parking lots
CN112233095A (en) * 2020-10-16 2021-01-15 哈尔滨市科佳通用机电股份有限公司 A method for detecting various failure forms of a locking plate device of a railway freight car
WO2023077821A1 (en) * 2021-11-07 2023-05-11 西北工业大学 Multi-resolution ensemble self-training-based target detection method for small-sample low-quality image
WO2023093346A1 (en) * 2021-11-25 2023-06-01 支付宝(杭州)信息技术有限公司 Exogenous feature-based model ownership verification method and apparatus
CN114252423A (en) * 2021-12-24 2022-03-29 汉姆德(宁波)智能医疗科技有限公司 Method and device for generating fully sampled image of super-resolution microscope
WO2023185785A1 (en) * 2022-03-28 2023-10-05 华为技术有限公司 Image processing method, model training method, and related apparatuses
CN115310130A (en) * 2022-08-15 2022-11-08 南京航空航天大学 A method and system for multi-site medical data analysis based on federated learning
CN116824216A (en) * 2023-05-22 2023-09-29 南京信息工程大学 A passive unsupervised domain adaptive image classification method
CN117036897A (en) * 2023-05-29 2023-11-10 中北大学 Method for detecting few sample targets based on Meta RCNN
CN117173497A (en) * 2023-11-02 2023-12-05 腾讯科技(深圳)有限公司 Image generation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
一种基于掩膜组合的多类弹载图像目标分割算法;袁汉钦;陈栋;杨传栋;王昱翔;刘桢;;舰船电子工程;20200620(第06期);112-117 *

Also Published As

Publication number Publication date
CN117496191A (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN114283287B (en) Robust field adaptive image learning method based on self-training noise label correction
CN112308158A (en) Multi-source field self-adaptive model and method based on partial feature alignment
CN110490239B (en) Training method, quality classification method, device and equipment of image quality control network
Zhang et al. PortraitNet: Real-time portrait segmentation network for mobile device
Li et al. Weather GAN: Multi-domain weather translation using generative adversarial networks
CN114723957A (en) Multi-class pipeline defect detection, tracking and counting method based on self-attention mechanism
CN114581486B (en) Template updating target tracking algorithm based on multi-layer characteristics of full convolution twin network
CN110414380A (en) A student behavior detection method based on target detection
CN114998603B (en) An underwater target detection method based on deep multi-scale feature factor fusion
CN112613552A (en) Convolutional neural network emotion image classification method combining emotion category attention loss
CN114565826A (en) Agricultural pest and disease identification and diagnosis method, system and device
CN118379288B (en) Embryo prokaryotic target counting method based on fuzzy rejection and multi-focus image fusion
CN116452904B (en) Image aesthetic quality determination method
CN112016618A (en) Measurement method for generalization capability of image semantic segmentation model
Khalil et al. A comprehensive study of vision transformers in image classification tasks
CN117496191B (en) A data-weighted learning method based on model collaboration
CN113808079B (en) Industrial product surface defect self-adaptive detection method based on deep learning model AGLNet
CN112258420B (en) DQN-based image enhancement processing method and device
CN117668372B (en) Virtual exhibition system on digital wisdom exhibition line
CN118279320A (en) Target instance segmentation model building method based on automatic prompt learning and application thereof
CN111369124A (en) An image aesthetic prediction method based on self-generated global features and attention
CN114821219B (en) Unsupervised multi-source domain adaptive classification method based on deep joint semantics
CN110334204A (en) A recommendation method for calculating the similarity of exercise problems based on user records
CN116563735A (en) A focus judgment method for power transmission tower inspection images based on deep artificial intelligence
CN115439791A (en) Cross-domain video action recognition method, device, equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant