CN111860631A - Method for optimizing loss function by adopting error-cause strengthening mode - Google Patents

Method for optimizing loss function by adopting error-cause reinforcement mode

Info

Publication number
CN111860631A
CN111860631A (application CN202010669159.8A; granted as CN111860631B)
Authority
CN
China
Prior art keywords
loss function
training
correlation
model
cross entropy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010669159.8A
Other languages
Chinese (zh)
Other versions
CN111860631B (en)
Inventor
于效宇
陈颖璐
刘艳
谈海平
李富超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China Zhongshan Institute
Original Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China Zhongshan Institute filed Critical University of Electronic Science and Technology of China Zhongshan Institute
Priority to CN202010669159.8A priority Critical patent/CN111860631B/en
Priority to PCT/CN2020/116176 priority patent/WO2022011827A1/en
Publication of CN111860631A publication Critical patent/CN111860631A/en
Application granted granted Critical
Publication of CN111860631B publication Critical patent/CN111860631B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for optimizing a loss function by error-cause reinforcement. The optimized loss function is named Corloss and is obtained by adding a penalty term to the original cross-entropy loss function. The penalty term comprises: a penalty-degree adjusting factor T, which adjusts how strongly the correlation influences the cross-entropy loss function and whose value can be set according to the actual conditions during model training; the correlation X_ijd between the classes of the data set, obtained by testing the outputs of all item categories with a preliminary model and then computing with an information-entropy formula; and the probability p_j of the related classes, i.e. the probability that the target item is identified as an item class related to it during training, which is not a fixed value and is dynamically adjusted according to the state of the model at each round of training. By adding the penalty term, the accuracy with which the model identifies items is improved, and the recognition accuracy of a deep-learning network model can be improved.

Description

Method for optimizing loss function by adopting error-cause strengthening mode
Technical Field
The invention relates to a method for optimizing a loss function, in particular to a method for optimizing a loss function by error-cause reinforcement.
Background
When classifying item categories, an item is easily misidentified as another item with a similar appearance and similar features, making the recognition result inaccurate. Existing loss functions do not consider the influence of inter-item similarity on model precision, so the model becomes difficult to improve once learning reaches a certain level.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method with high recognition accuracy for optimizing a loss function by error-cause reinforcement.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a method for optimizing a loss function by adopting a miscause strengthening mode is characterized in that the optimized loss function is named as Corloss, and a punishment item is added on the basis of the original cross entropy loss function to realize the optimization, wherein the punishment item comprises the following three modules:
the punishment degree adjusting factor T is used for adjusting the strength of the influence of the correlation on the cross entropy loss function, when the T is 0, the punishment degree adjusting factor T has no influence on the cross entropy loss function, at the moment, the corross is the cross entropy loss function, and the T value can be set according to the actual condition during model training;
correlation X between classes of data setsijdTesting the output of all article categories through a primary model, and obtaining the correlation X after calculation by using an information entropy formula ijd
Probability of related classes
Figure BDA0002581660430000011
I.e. the target item is identified as being related to it during the training processThe probability of the item type(s), the non-constant value, is dynamically adjusted according to each training condition of the model.
The method comprises the following steps:
step S1: perform preliminary training and obtain the correlation of each category;
step S2: dynamically add the penalty term according to the recognition result;
step S3: construct the new loss function;
step S4: set an overflow mechanism;
step S5: train with Corloss.
The specific steps of step S1 are as follows: the model is preliminarily trained with the cross-entropy loss function, and the preliminarily trained model is used to test the related classes of each category and the correlations among all the categories.
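The correlation formula itself appears in the source only as an equation image, so the sketch below is an assumption rather than the patent's exact formula: it treats the correlation between class i and a confusable class j as the information-entropy term of the preliminary model's averaged output (the position-wise averaging B_ij described later in the text). The function name `correlation_table` is illustrative.

```python
import numpy as np

def correlation_table(avg_outputs):
    """Build an inter-class correlation table from a preliminary model.

    avg_outputs[i] is the mean softmax output vector over all test
    pictures of class i (the B_ij averaging in the patent).  The
    correlation X[i, j] is expressed here with the entropy term
    -p*log(p) of the averaged confusion probability, so classes the
    preliminary model mixes up more receive a larger value.
    """
    avg_outputs = np.asarray(avg_outputs, dtype=float)
    n = avg_outputs.shape[0]
    X = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # a class is not "related" to itself
            p = np.clip(avg_outputs[i, j], 1e-12, 1 - 1e-12)
            X[i, j] = -p * np.log(p)  # information-entropy term
    return X
```

The table is measured once from the preliminary model and then looked up during formal training.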
The specific steps of step S2 are as follows: the penalty term is added dynamically according to the recognition result. The recognition result of each picture is monitored, and the model's output for the related classes is added, in the form of a probability score, to the loss-function calculation as part of the penalty term. Meanwhile, an overflow mechanism protects the loss-function calculation during training: once the loss function overflows, the original cross-entropy loss function is used instead.
The specific steps of step S3 are as follows: on the basis of the original cross-entropy loss function, after a preliminary model has been trained with it, the related classes of each category are tested; these classification-error factors are introduced into the loss-function calculation during formal training to construct the new loss function, giving the specific Corloss formula:
[equation image: Corloss formula]
wherein i denotes the correctly classified item category, j denotes an item category correlated with i, and d denotes the number of related item categories; T is the adjusting factor used to adjust the penalty degree of the loss calculation; X_ijd is the error-cause value, i.e. the correlation between similar item categories, expressed with information entropy: the larger the information entropy, the higher the correlation and the higher the probability that the model misidentifies; p_j is the output probability of the related category during training.
The specific formula for X_ijd is:
[equation image: formula for X_ijd]
wherein B_ij is the averaged output of the class-i pictures: the output values of the pictures of each class are added position-wise and then averaged, to ensure that the output reflects a normal level.
The value of d can also be set according to the actual conditions during model training; when d is 0, other item categories do not influence the recognition of the target item's category, and Corloss reduces to the cross-entropy loss function.
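Since the Corloss formula itself survives only as an equation image, the following sketch is one plausible reading of the surrounding text: the original cross entropy plus a T-weighted sum, over the d classes most correlated with the true class, of correlation times output probability. The additive form and the top-d selection are assumptions beyond what the text states.

```python
import numpy as np

def corloss(probs, label, corr_table, T=1.0, d=3):
    """Hypothetical single-sample Corloss sketch.

    probs      : softmax output of the model for one picture
    label      : index i of the correct class
    corr_table : X[i, j], inter-class correlations from the preliminary model
    T          : penalty-degree adjusting factor (T = 0 -> plain cross entropy)
    d          : number of related classes used   (d = 0 -> plain cross entropy)
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    ce = -np.log(probs[label])            # original cross-entropy term
    if T == 0 or d == 0:
        return float(ce)                  # Corloss reduces to cross entropy
    order = np.argsort(corr_table[label])[::-1]   # classes by correlation with i
    related = order[order != label][:d]           # the d classes most related to i
    # penalty: correlation-weighted probability mass on the related classes
    penalty = T * float(np.sum(corr_table[label, related] * probs[related]))
    return float(ce + penalty)
```

The penalty grows when the model places probability on classes it historically confuses with the true class, which is the "error-cause reinforcement" the patent describes.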
The specific steps of step S4 are as follows: when the condition [equation image: overflow condition] holds, the optimized loss function Corloss is used; once the loss function overflows, the original cross-entropy loss function is used for the calculation.
Step S5 includes the following training modes:
A. First training mode: two training runs. The first run is the preliminary training: a preliminary model is trained with the original cross-entropy loss function, and the correlations among all the categories of the data set are then measured with this preliminary model and arranged into a correlation table. The second run is the formal training with Corloss: the related classes of each item category are added into Corloss in an indexed manner for calculation, and, according to the preliminary model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into Corloss for calculation.
B. Second training mode: a single training run of N epochs, comprising two stages. The first stage takes the model at epoch int[kN], where 0 < k < 1, as the preliminary model, measures the correlations among the categories of the data set with it, and arranges them into a correlation table. The second stage uses Corloss to resume training from the breakpoint at epoch int[kN] + 1; during this resumed training, for the model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into the loss function for calculation.
The invention has the beneficial effects that: by adding the penalty term, the accuracy with which the model identifies items is improved, and the recognition accuracy of a deep-learning network model can be improved.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a flow chart of the steps of a first training mode;
fig. 3 is a flowchart of the steps of the second training mode.
Detailed Description
Refer to fig. 1 to 3.
In a method for optimizing a loss function by error-cause reinforcement, the optimized loss function is named Corloss, and the optimization is realized by adding a penalty term to the original cross-entropy loss function. The penalty term comprises the following three modules:
the penalty-degree adjusting factor T, which adjusts how strongly the correlation influences the cross-entropy loss function; when T is 0 it has no influence, Corloss reduces to the cross-entropy loss function, and the value of T can be set according to the actual conditions during model training;
the correlation X_ijd between the classes of the data set, obtained by testing the outputs of all item categories with a preliminary model and then computing with an information-entropy formula;
the probability p_j of the related classes, i.e. the probability that the target item is identified as an item class related to it during training; it is not a fixed value and is dynamically adjusted according to the state of the model at each round of training.
By adding the penalty term, the accuracy with which the model identifies items is improved, and the recognition accuracy of a deep-learning network model can be improved.
The method comprises the following steps:
step S1: perform preliminary training and obtain the correlation of each category;
step S2: dynamically add the penalty term according to the recognition result;
step S3: construct the new loss function;
step S4: set an overflow mechanism;
step S5: train with Corloss.
The specific steps of step S1 are as follows: the model is preliminarily trained with the cross-entropy loss function, and the preliminarily trained model is used to test the related classes of each category and the correlations among all the categories.
The specific steps of step S2 are as follows: the penalty term is added dynamically according to the recognition result. The recognition result of each picture is monitored, and the model's output for the related classes is added, in the form of a probability score, to the loss-function calculation as part of the penalty term. Meanwhile, an overflow mechanism protects the loss-function calculation during training: once the loss function overflows, the original cross-entropy loss function is used instead.
The specific steps of step S3 are as follows: on the basis of the original cross-entropy loss function, after a preliminary model has been trained with it, the related classes of each category are tested; these classification-error factors are introduced into the loss-function calculation during formal training to construct the new loss function, giving the specific Corloss formula:
[equation image: Corloss formula]
wherein i denotes the correctly classified item category, j denotes an item category correlated with i, and d denotes the number of related item categories; T is the adjusting factor used to adjust the penalty degree of the loss calculation; X_ijd is the correlation between similar item categories, expressed with information entropy: the larger the information entropy, the higher the correlation and the higher the probability that the model misidentifies; p_j is the output probability of the related category during training.
The specific formula for X_ijd is:
[equation image: formula for X_ijd]
wherein B_ij is the averaged output of the class-i pictures: the output values of the pictures of each class are added position-wise and then averaged, to ensure that the output reflects a normal level.
The value of d can also be set according to the actual conditions during model training; when d is 0, other item categories do not influence the recognition of the target item's category, and Corloss reduces to the cross-entropy loss function.
The specific steps of step S4 are as follows: when the condition [equation image: overflow condition] holds, the optimized loss function Corloss is used; otherwise (i.e. once the loss function overflows), the original cross-entropy loss function is used for the calculation.
Step S5 includes the following training modes:
A. First training mode: two training runs. The first run is the preliminary training: a preliminary model is trained with the original cross-entropy loss function, and the correlations among all the categories of the data set are then measured with this preliminary model and arranged into a correlation table. The second run is the formal training with Corloss: the related classes of each item category are added into Corloss in an indexed manner for calculation, and, according to the preliminary model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into Corloss for calculation.
B. Second training mode: a single training run of N epochs, comprising two stages. The first stage takes the model at epoch int[kN], where 0 < k < 1, as the preliminary model, measures the correlations among the categories of the data set with it, and arranges them into a correlation table. The second stage uses Corloss to resume training from the breakpoint at epoch int[kN] + 1; during this resumed training, for the model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into the loss function for calculation.
As shown in fig. 2, the first training mode requires two training runs: the first is the preliminary training and the second is the formal training. The preliminary training uses the original cross-entropy loss function in order to obtain the related classes and the correlations between the categories: a preliminary model is trained, the correlations among the categories of the data set are measured with it, and they are arranged into a correlation table. The formal training uses Corloss: the related classes of each item category are added into Corloss in an indexed manner for calculation, and, for the preliminary model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into the loss-function calculation. The correlation table is thus called to reconstruct the cross-entropy loss function; the correlations are added to form the new loss function Corloss, and the model is then formally trained with Corloss.
As shown in fig. 3, the second training mode comprises two stages. The first stage obtains the related classes and the correlations between the categories of the data set. With N epochs in total (one epoch is one complete pass of the data set forward through the neural network and back), the model is preliminarily trained with the original cross-entropy loss function; the model at epoch int[kN], where 0 < k < 1, is taken as the preliminary model, and the correlations among the categories of the data set are measured with it and arranged into a correlation table. The second stage uses softmax + Corloss to resume training from the breakpoint at epoch int[kN] + 1; during this resumed training, for the model's recognition of each picture, the corresponding related classes are looked up in the correlation table and their correlations are added, in an indexed manner, into the calculation of the Corloss loss function.
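The single-run schedule of the second training mode can be sketched as follows; the three callables (`train_step_ce`, `build_corr_table`, `train_step_corloss`) are hypothetical stand-ins for the real training code, not names from the patent.

```python
def train_mode_b(n_epochs, k, train_step_ce, build_corr_table, train_step_corloss):
    """Skeleton of training mode B: one run of n_epochs in two stages.

    Epochs 1 .. int(k*N) train with plain cross entropy; the model at
    epoch int(k*N) serves as the preliminary model, its correlation
    table is built there, and training resumes from epoch int(k*N)+1
    with Corloss using that table.
    """
    assert 0 < k < 1
    switch = int(k * n_epochs)            # epoch int[kN]: last cross-entropy epoch
    corr_table = None
    for epoch in range(1, n_epochs + 1):
        if epoch <= switch:
            train_step_ce(epoch)                       # stage 1: cross entropy
            if epoch == switch:
                corr_table = build_corr_table()        # measure correlations here
        else:
            train_step_corloss(epoch, corr_table)      # stage 2: resume with Corloss
    return corr_table
```

For example, with N = 10 and k = 0.5, epochs 1–5 use cross entropy and epochs 6–10 resume with Corloss.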
The above embodiments do not limit the scope of the present invention, and those skilled in the art can make equivalent modifications and variations without departing from the overall concept of the present invention.

Claims (7)

1. A method for optimizing a loss function by error-cause reinforcement, characterized in that the optimization is realized by adding a penalty term to the original cross-entropy loss function, the penalty term comprising the following three modules:
the penalty-degree adjusting factor T, which adjusts how strongly the correlation influences the cross-entropy loss function; when T = 0 it has no influence, Corloss reduces to the cross-entropy loss function, and the value of T can be set according to the actual conditions during model training;
the correlation X_ijd between the classes of the data set, obtained by testing the outputs of all item categories with a preliminary model and then computing with an information-entropy formula;
the probability p_j of the related classes, i.e. the probability that the target item is identified as an item class related to it during training; it is not a fixed value and is dynamically adjusted according to the state of the model at each round of training.
2. The method for optimizing a loss function by error-cause reinforcement according to claim 1, characterized by comprising the following steps:
step S1: perform preliminary training and obtain the correlation of each category;
step S2: dynamically add the penalty term according to the recognition result;
step S3: construct the new loss function;
step S4: set an overflow mechanism;
step S5: train with Corloss.
3. The method for optimizing a loss function by error-cause reinforcement according to claim 2, wherein the specific steps of step S1 are as follows: the model is preliminarily trained with the cross-entropy loss function, and the preliminarily trained model is used to test the related classes of each category and the correlations among all the categories.
4. The method for optimizing a loss function by error-cause reinforcement according to claim 3, wherein the specific steps of step S2 are as follows: the penalty term is added dynamically according to the recognition result. The recognition result of each picture is monitored, and the model's output for the related classes is added, in the form of a probability score, to the loss-function calculation as part of the penalty term. Meanwhile, an overflow mechanism protects the loss-function calculation during training: once the loss function overflows, the original cross-entropy loss function is used instead.
5. The method for optimizing a loss function by error-cause reinforcement according to claim 4, wherein the specific steps of step S3 are as follows: on the basis of the original cross-entropy loss function, after a preliminary model has been trained with it, the related classes of each category are tested; these classification-error factors are introduced into the loss-function calculation during formal training to construct the new loss function, giving the specific Corloss formula:
[equation image: Corloss formula]
wherein i denotes the correctly classified item category, j denotes an item category correlated with i, and d denotes the number of related item categories; T is the adjusting factor used to adjust the penalty degree of the loss calculation; X_ijd is the correlation between similar item categories, expressed with information entropy: the larger the information entropy, the higher the correlation and the higher the probability that the model misidentifies; p_j is the output probability of the related category during training;
the specific formula for X_ijd is:
[equation image: formula for X_ijd]
wherein B_ij is the averaged output of the class-i pictures: the output values of the pictures of each class are added position-wise and then averaged, to ensure that the output reflects a normal level;
and the value of d can be set according to the actual conditions during model training; when d = 0, other item categories do not influence the recognition of the target item's category, and Corloss reduces to the cross-entropy loss function.
6. The method for optimizing a loss function by error-cause reinforcement according to claim 5, wherein the specific steps of step S4 are as follows: when the condition [equation image: overflow condition] holds, the optimized loss function Corloss is used; once the loss function overflows, the original cross-entropy loss function is used for the calculation.
7. The method for optimizing a loss function by error-cause reinforcement according to claim 6, wherein step S5 includes the following training modes:
A. First training mode: two training runs. The first run is the preliminary training: a preliminary model is trained with the original cross-entropy loss function, and the correlations among all the categories of the data set are then measured with this preliminary model and arranged into a correlation table. The second run is the formal training with Corloss: the related classes of each item category are added into Corloss in an indexed manner for calculation, and, according to the preliminary model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into Corloss for calculation.
B. Second training mode: a single training run of N epochs, comprising two stages. The first stage takes the model at epoch int[kN], where 0 < k < 1, as the preliminary model, measures the correlations among the categories of the data set with it, and arranges them into a correlation table. The second stage uses Corloss to resume training from the breakpoint at epoch int[kN] + 1; during this resumed training, for the model's recognition of each picture, the corresponding penalty terms are looked up in the correlation table and added into the loss function for calculation.
CN202010669159.8A 2020-07-13 2020-07-13 Article identification method adopting error factor reinforcement mode to optimize loss function Active CN111860631B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010669159.8A CN111860631B (en) 2020-07-13 2020-07-13 Article identification method adopting error factor reinforcement mode to optimize loss function
PCT/CN2020/116176 WO2022011827A1 (en) 2020-07-13 2020-09-18 Method for optimizing loss function by means of error cause reinforcement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010669159.8A CN111860631B (en) 2020-07-13 2020-07-13 Article identification method adopting error factor reinforcement mode to optimize loss function

Publications (2)

Publication Number Publication Date
CN111860631A true CN111860631A (en) 2020-10-30
CN111860631B CN111860631B (en) 2023-08-22

Family

ID=72984762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010669159.8A Active CN111860631B (en) 2020-07-13 2020-07-13 Article identification method adopting error factor reinforcement mode to optimize loss function

Country Status (2)

Country Link
CN (1) CN111860631B (en)
WO (1) WO2022011827A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580714A (en) * 2020-12-15 2021-03-30 电子科技大学中山学院 Method for dynamically optimizing loss function in error-cause strengthening mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9691395B1 (en) * 2011-12-31 2017-06-27 Reality Analytics, Inc. System and method for taxonomically distinguishing unconstrained signal data segments
CN108984539A (en) * 2018-07-17 2018-12-11 苏州大学 The neural machine translation method of translation information based on simulation future time instance
CN110569338A (en) * 2019-07-22 2019-12-13 中国科学院信息工程研究所 Method for training decoder of generative dialogue system and decoding method
CN111242245A (en) * 2020-04-26 2020-06-05 杭州雄迈集成电路技术股份有限公司 Design method of classification network model of multi-class center

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108344574B (en) * 2018-04-28 2019-09-10 湖南科技大学 A kind of Wind turbines Method for Bearing Fault Diagnosis based on depth joint adaptation network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9691395B1 (en) * 2011-12-31 2017-06-27 Reality Analytics, Inc. System and method for taxonomically distinguishing unconstrained signal data segments
CN108984539A (en) * 2018-07-17 2018-12-11 苏州大学 The neural machine translation method of translation information based on simulation future time instance
CN110569338A (en) * 2019-07-22 2019-12-13 中国科学院信息工程研究所 Method for training decoder of generative dialogue system and decoding method
CN111242245A (en) * 2020-04-26 2020-06-05 杭州雄迈集成电路技术股份有限公司 Design method of classification network model of multi-class center

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580714A (en) * 2020-12-15 2021-03-30 电子科技大学中山学院 Method for dynamically optimizing loss function in error-cause strengthening mode
WO2022126809A1 (en) * 2020-12-15 2022-06-23 电子科技大学中山学院 Method for dynamically optimizing loss function by means of error cause reinforcement
CN112580714B (en) * 2020-12-15 2023-05-30 电子科技大学中山学院 Article identification method for dynamically optimizing loss function in error-cause reinforcement mode

Also Published As

Publication number Publication date
CN111860631B (en) 2023-08-22
WO2022011827A1 (en) 2022-01-20

Similar Documents

Publication Publication Date Title
CN109086799A (en) A kind of crop leaf disease recognition method based on improvement convolutional neural networks model AlexNet
CN110135459B (en) Zero sample classification method based on double-triple depth measurement learning network
US11257140B2 (en) Item recommendation method based on user intention in a conversation session
CN109902756A (en) A kind of crowdsourcing mechanism auxiliary sort method and system based on Active Learning
CN111343147A (en) Network attack detection device and method based on deep learning
CN111144462B (en) Unknown individual identification method and device for radar signals
CN111860631A (en) Method for optimizing loss function by adopting error-cause strengthening mode
CN113869463B (en) Long tail noise learning method based on cross enhancement matching
Subali et al. A new model for measuring the complexity of SQL commands
CN111711816B (en) Video objective quality evaluation method based on observable coding effect intensity
CN105701501A (en) Trademark image identification method
CN111382265B (en) Searching method, device, equipment and medium
CN113139464B (en) Power grid fault detection method
CN116152194A (en) Object defect detection method, system, equipment and medium
CN113889274B (en) Method and device for constructing risk prediction model of autism spectrum disorder
CN115456693A (en) Automatic evaluation method for automobile exterior design driven by big data
CN112580714B (en) Article identification method for dynamically optimizing loss function in error-cause reinforcement mode
CN114627496A (en) Robust pedestrian re-identification method based on depolarization batch normalization of Gaussian process
CN110297978A (en) Personalized recommendation algorithm based on integrated recurrence
CN111325097B (en) Enhanced single-stage decoupled time sequence action positioning method
CN118036555B (en) Low-sample font generation method based on skeleton transfer and structure contrast learning
CN113821642B (en) Method and system for cleaning text based on GAN clustering
CN112905487A (en) Self-adaptive measuring method and system for enterprise business situation
CN112561811A (en) Big data processing method and system
CN113033281A (en) Object re-identification method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant