CN111178443B - Model parameter selection, image classification and information identification methods, devices and equipment - Google Patents


Info

Publication number
CN111178443B
CN111178443B CN201911415591.8A
Authority
CN
China
Prior art keywords
model
training data
current
iteration
trained
Prior art date
Legal status
Active
Application number
CN201911415591.8A
Other languages
Chinese (zh)
Other versions
CN111178443A (en)
Inventor
侯广健
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201911415591.8A priority Critical patent/CN111178443B/en
Publication of CN111178443A publication Critical patent/CN111178443A/en
Application granted granted Critical
Publication of CN111178443B publication Critical patent/CN111178443B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose model parameter selection, image classification, and information identification methods, devices, and equipment. The model parameter selection method iteratively executes the following steps: obtaining a target model, where the target model is obtained by fusing the fusion model after the last iteration with the current residual model to be trained; solving a group of target parameters of the current residual model to be trained, with minimizing the difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration as the training target; generating the current optimal residual model from the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain the fusion model after the current iteration; and calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, to generate the residual label value of the training data after the current iteration. The fusion model after the iteration in which a first preset stopping condition is reached is taken as the final output model.

Description

Model parameter selection, image classification and information identification methods, devices and equipment
Technical Field
The application relates to the field of automated machine learning, and in particular to model parameter selection, image classification, and information identification methods, devices, and equipment.
Background
In automated machine learning (AutoML), model construction is the core process. Currently, the model construction process comprises: screening based on a preset optimization objective with a preset algorithm (such as Bayesian optimization, reinforcement learning, or a heuristic algorithm) to obtain target model parameters, and constructing a model based on the target model parameters. However, a model determined by the above construction process performs poorly, so the expected effect cannot be achieved when a preset task (e.g., image classification or information recognition) is performed using the model.
Disclosure of Invention
In view of this, the embodiments of the present application provide model parameter selection, image classification, and information identification methods, devices, and equipment, which can construct a model with better performance, so that the expected effect can be achieved when a preset task (for example, image classification or information identification) is performed using the model.
In order to solve the above problems, the technical solution provided by the embodiment of the present application is as follows:
a method of model parameter selection, the method comprising:
obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data;
generating a current optimal residual error model of the current residual error model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration;
calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating a residual label value of the training data after the current iteration;
and repeatedly executing the target model obtaining and the subsequent steps until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model.
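Stripped of patent phrasing, the loop above is a form of forward stagewise additive fitting: each iteration trains a residual model against the current residual labels, adds it to the fusion model, and recomputes the residuals. The sketch below is a hedged reading of those steps, with a deliberately trivial constant base learner standing in for the unspecified residual model; all function names are illustrative, not from the patent:

```python
def fit_residual_model(inputs, residual_labels):
    """Hypothetical stand-in for solving a group of target parameters:
    the base learner here is just a constant equal to the mean residual
    (the patent leaves the residual model family open)."""
    mean = sum(residual_labels) / len(residual_labels)
    return lambda x: mean

def build_fusion_model(inputs, result_labels, max_iterations=10, tol=1e-6):
    models = []                                    # the growing fusion model
    residual = list(result_labels)                 # 1st iteration: residual label = result label
    for _ in range(max_iterations):
        model = fit_residual_model(inputs, residual)          # train on residual labels
        models.append(model)                                  # fuse with previous fusion model
        fused = [sum(m(x) for m in models) for x in inputs]   # fusion output, current iteration
        residual = [y - f for y, f in zip(result_labels, fused)]  # new residual labels
        if max(abs(r) for r in residual) < tol:               # first preset stopping condition
            break
    return lambda x: sum(m(x) for m in models)     # final output model
```

With this constant learner the ensemble simply converges to the mean of the result labels; a real instantiation would plug the parameter search of the following claims into `fit_residual_model`.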
In one possible implementation manner, solving a group of target parameters of the current residual model to be trained, with the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as the training target, includes:
selecting a group of model parameters of the current residual model to be trained;
inputting training data into a current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference value between an output result corresponding to the group of model parameters and a residual label value of the training data after the last iteration;
and repeatedly executing the steps of selecting a group of model parameters of the current residual model to be trained and the follow-up steps until a second preset stopping condition is reached, and determining a group of model parameters corresponding to the smallest difference value in the difference values as a group of target parameters of the current residual model to be trained.
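The four steps above describe a black-box search over candidate parameter groups: repeatedly sample a group, evaluate its model against the residual labels, and keep the group with the smallest difference. A hedged sketch, using a trial budget as the second preset stopping condition; `model_factory` and `sample_params` are hypothetical callables introduced for illustration:

```python
def solve_target_parameters(inputs, residual_labels, model_factory,
                            sample_params, max_trials=50):
    best_params, best_diff = None, float("inf")
    for _ in range(max_trials):                  # second preset stopping condition
        params = sample_params()                 # select a group of model parameters
        model = model_factory(params)
        outputs = [model(x) for x in inputs]     # output for this parameter group
        diff = sum((o - r) ** 2                  # difference vs. residual labels
                   for o, r in zip(outputs, residual_labels))
        if diff < best_diff:                     # track the smallest difference
            best_params, best_diff = params, diff
    return best_params                           # the group of target parameters
```

For example, with a toy linear model `model_factory = lambda w: (lambda x: w * x)` and residual labels equal to the inputs, the search returns the candidate `w = 1`, whose difference is zero.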
In one possible implementation manner, the inputting training data into the current residual model to be trained corresponding to the set of model parameters, and obtaining an output result corresponding to the set of model parameters output by the current residual model to be trained corresponding to the set of model parameters includes:
inputting the ith training data into a current residual model to be trained corresponding to the group of model parameters; i is an integer from 1 to N, and N is the number of training data items;
obtaining an ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference between an output result corresponding to the set of model parameters and a residual label value of the training data after the last iteration, including:
calculating a first difference value between an ith output result corresponding to the group of model parameters and a residual error label value of the ith training data after the last iteration;
and calculating the sum of squares of the N first difference values, and determining the sum of squares as the difference between the output result corresponding to the group of model parameters and the residual label value of the training data after the last iteration.
In one possible implementation, the model parameters include model hyper-parameters and model intra-parameters.
In one possible implementation manner, the calculating the difference between the result tag value of the training data and the output result of the fusion model after the current iteration on the training data, to generate the residual tag value of the training data after the current iteration includes:
inputting the ith training data into the fusion model after the current iteration to obtain an ith output result output by the fusion model after the current iteration; i is an integer from 1 to N, and N is the number of training data items;
and calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual error label value of the training data after the current iteration.
A method of image classification, the method comprising:
acquiring an image to be classified;
inputting the images to be classified into a target fusion model to obtain an image classification result output by the target fusion model;
the construction process of the target fusion model comprises the following steps:
obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data; the training data comprises a positive sample medical image and a negative sample medical image; the output result of the current residual model to be trained on training data is the probability value, produced when the training data is input into that model, that the training data is a positive sample medical image;
generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating a residual label value of the training data after the current iteration; the output result of the fusion model after the current iteration on the training data is the probability value, produced when the training data is input into that fusion model, that the training data is a positive sample medical image;
and repeatedly executing the target model obtaining and subsequent steps until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a target fusion model.
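At inference time (the two steps preceding the construction process), the target fusion model's output is read as a probability that the input is a positive-sample image, which is then thresholded into a class. A minimal hypothetical sketch of that step — the additive fusion of residual-model scores and the sigmoid squashing are assumptions, not fixed by the patent:

```python
import math

def classify_image(image_features, residual_models, threshold=0.5):
    """Fuse the residual models' scores and squash to a probability
    that the image is a positive sample (illustrative only)."""
    score = sum(m(image_features) for m in residual_models)   # fused output
    probability = 1.0 / (1.0 + math.exp(-score))              # positive-sample probability
    label = "positive" if probability >= threshold else "negative"
    return probability, label
```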
An information identification method, the method comprising:
acquiring information to be identified;
inputting the information to be identified into a target fusion model to obtain information attribute categories output by the target fusion model;
the construction process of the target fusion model comprises the following steps:
obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data; the training data comprises positive sample text information and negative sample text information; the output result of the current residual model to be trained on training data is the probability value, produced when the training data is input into that model, that the training data is positive sample text information;
generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating a residual label value of the training data after the current iteration; the output result of the fusion model after the current iteration on the training data is the probability value, produced when the training data is input into that fusion model, that the training data is positive sample text information;
and repeatedly executing the target model obtaining and subsequent steps until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a target fusion model.
A model parameter selection apparatus, the apparatus comprising:
the target model generation unit is used for obtaining a target model, and the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
the target parameter solving unit is used for solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data;
the fusion model generation unit is used for generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
the residual label generating unit is used for calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data and generating the residual label value of the training data after the current iteration;
and the output model determining unit is used for returning to the target model generation unit to repeat obtaining the target model until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model.
An image classification apparatus, the apparatus comprising:
the image acquisition unit is used for acquiring images to be classified;
the image classification unit is used for inputting the images to be classified into a target fusion model to obtain an image classification result output by the target fusion model;
the target fusion model construction unit is used for constructing the target fusion model;
The target fusion model construction unit comprises:
the target model generation unit is used for obtaining a target model, and the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
the target parameter solving unit is used for solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data; the training data comprises a positive sample medical image and a negative sample medical image; the output result of the current residual model to be trained on training data is the probability value, produced when the training data is input into that model, that the training data is a positive sample medical image;
the fusion model generation unit is used for generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
the residual label generating unit is used for calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data and generating the residual label value of the training data after the current iteration; the output result of the fusion model after the current iteration on the training data is the probability value, produced when the training data is input into that fusion model, that the training data is a positive sample medical image;
and the output model determining unit is used for returning to the target model generation unit to repeat obtaining the target model until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a target fusion model.
An information identifying apparatus, the apparatus comprising:
the information acquisition unit is used for acquiring information to be identified;
the information identification unit is used for inputting the information to be identified into the target fusion model to obtain the information attribute category output by the target fusion model;
the target fusion model construction unit is used for constructing the target fusion model;
The target fusion model construction unit comprises:
the target model generation unit is used for obtaining a target model, and the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
the target parameter solving unit is used for solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data; the training data comprises positive sample text information and negative sample text information; the output result of the current residual model to be trained on training data is the probability value, produced when the training data is input into that model, that the training data is positive sample text information;
the fusion model generation unit is used for generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
the residual label generating unit is used for calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data and generating the residual label value of the training data after the current iteration; the output result of the fusion model after the current iteration on the training data is the probability value, produced when the training data is input into that fusion model, that the training data is positive sample text information;
and the output model determining unit is used for returning to the target model generation unit to repeat obtaining the target model until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a target fusion model.
A model parameter selection device, comprising: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the above model parameter selection method when executing the computer program.
An image classification device, comprising: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the above image classification method when executing the computer program.
An information identification device, comprising: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the above information identification method when executing the computer program.
A computer readable storage medium having instructions stored therein which, when run on a terminal device, cause the terminal device to perform the above model parameter selection method, the above image classification method, or the above information identification method.
From this, the embodiment of the application has the following beneficial effects:
in the model parameter selection method provided by the embodiment of the application, the following steps can be iteratively executed: obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained; taking the minimum difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; generating a current optimal residual error model of a current residual error model to be trained by utilizing target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration; and calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating the residual label value of the training data after the current iteration. And stopping the iterative process when the first preset stopping condition is determined to be reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model.
The training target of the current residual model to be trained is to minimize the difference between its output result on the training data and the residual label value of the training data after the last iteration. The current optimal residual model determined from this training target therefore approximates, to the greatest extent, the residual label value of the training data after the last iteration; that is, it compensates as far as possible for the gap between the output result of the fusion model after the last iteration on the training data and the result label value of the training data. As a result, the output result of the fusion model after the current iteration, obtained by fusing the fusion model after the last iteration with the current optimal residual model, is closer to the result label value of the training data. The final output model determined by fusing over multiple iterations thus has better performance, so the expected effect can be achieved when the final output model is used to perform a preset task (such as image classification or information identification).
Drawings
FIG. 1 is a flowchart of a model parameter selection method according to an embodiment of the present application;
FIG. 2 is a flowchart of a specific implementation of solving a set of target parameters of a current residual model to be trained according to an embodiment of the present application;
FIG. 3 is a flowchart of an image classification method according to an embodiment of the present application;
FIG. 4 is a flowchart of an information identification method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a model parameter selection device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image classification device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an information identifying apparatus according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features, and advantages of the present application may become more readily apparent, embodiments of the application are described in detail below with reference to the accompanying drawings.
In studying conventional model parameter selection methods, the inventor found that a fusion model formed by fusing multiple models performs better than a single model. The conventional construction process of such a fusion model is as follows: after multiple well-performing single models are obtained, a preset fusion method (such as stacking or voting) is used to fuse them into a fusion model. In this construction process, the model optimization iteration and the model fusion are two independent processes, and the optimization objective of the model optimization iteration is only to improve the performance of each single model. The constructed fusion model can therefore achieve only a local optimum rather than a global optimum, resulting in poor performance of the fusion model.
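For contrast with the iterative scheme of the application, conventional post-hoc fusion trains the single models independently and only afterwards combines them, e.g. by majority voting. A minimal illustrative sketch of such voting:

```python
def majority_vote(models, x):
    """Each independently trained model votes on the input; the fused
    prediction is the most common label. Note that training is fully
    decoupled from fusion, which is why the result can only be locally
    optimal in the sense discussed above."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)
```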
Based on the above, the embodiment of the application provides a model parameter selection method, which comprises the following steps: obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained; taking the minimum difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; generating a current optimal residual error model of a current residual error model to be trained by utilizing target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration; calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating the residual label value of the training data after the current iteration; and repeatedly executing the steps of obtaining the target model and the follow-up steps until the first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model. 
The current optimal residual model determined from the training target in the current iteration compensates, to the greatest extent, for the gap between the output result of the fusion model after the previous iteration on the training data and the result label value of the training data. The output result of the fusion model after the current iteration, obtained by fusing the fusion model after the previous iteration with the current optimal residual model, is therefore closer to the result label value of the training data. The fusion model obtained through multiple iterations of fusion can thus approach a global optimum, so the final output model performs better and the expected effect can be achieved when it is used to perform a preset task (such as image classification or information identification).
It should be noted that the model parameter selection method provided by the embodiment of the application can be applied in any field and scenario where fusion models are needed, such as finance, aviation, government, medical treatment, and traffic, and in various scenarios such as image classification, data processing, data classification, and text processing. Specifically, based on the model parameter selection method provided by the embodiment of the application, the embodiment of the application also provides an image classification method and an information identification method.
For easy understanding, the model parameter selection method provided by the embodiment of the application will be described below with reference to the accompanying drawings.
Referring to fig. 1, which is a flowchart of a model parameter selection method according to an embodiment of the present application, as shown in fig. 1, the method may include S101-S106:
S101: and obtaining a target model, wherein the target model is obtained by fusing the fusion model after the last iteration with the current residual model to be trained.
In the embodiment of the present application, after the fusion model after the previous iteration and the current residual model to be trained in the current iteration process are obtained, the fusion model after the previous iteration and the current residual model to be trained in the current iteration process may be fused to obtain the target model in the current iteration process, and the obtaining process may specifically be: for the 1 st iteration process, the current residual model to be trained in the 1 st iteration process can be directly determined as the target model in the 1 st iteration process. In addition, for the m+1th iteration process, where m is a positive integer, and m is greater than or equal to 1, the fusion model after the m iteration process needs to be fused with the current residual model to be trained in the m+1th iteration process, so as to obtain the target model in the m+1th iteration process.
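Under the additive fusion described above, the target model's output in the (m+1)-th iteration is the fusion model's output after the m-th iteration plus the current residual model's output; for the 1st iteration it is the current residual model alone. A minimal sketch, with hypothetical function names:

```python
def target_model_output(prev_residual_models, current_residual_model, x):
    """Output of the target model on input x.

    prev_residual_models: the optimal residual models fixed in iterations 1..m
    (their summed outputs form the fusion model after the m-th iteration);
    an empty list corresponds to the 1st iteration, where the target model
    is just the current residual model to be trained.
    """
    out = current_residual_model(x)
    for model in prev_residual_models:
        out += model(x)
    return out

# 1st iteration: target model == current residual model to be trained
assert target_model_output([], lambda x: 2 * x, 3) == 6
# (m+1)-th iteration: the previous fusion model's output is added in
assert target_model_output([lambda x: x, lambda x: 1], lambda x: 2 * x, 3) == 10
```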
S102: and solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration as a training target.
The output result of the current residual model to be trained on the training data refers to the result output by the current residual model to be trained after it processes the training data. It should be noted that the embodiment of the present application provides an implementation manner for obtaining the output result of the current residual model to be trained on the training data; please refer to the following detailed description.
The residual label value of the training data after the last iteration is generated according to the difference between the output result of the fusion model after the last iteration to the training data and the result label value of the training data. Wherein the resulting tag value of the training data is used to characterize the actual tag of the training data. Note that, in the manner of obtaining the residual tag value of the training data after the previous iteration, please refer to S104 below.
In addition, since the output result of the fusion model after the last iteration on the training data does not exist in the first iteration process, the result label value of the training data can be directly determined as the optimization target of the first iteration process. Based on this, in the first iteration, the residual label value of the training data generated after the last iteration is the result label value of the training data.
The training target is a target which needs to be reached in the training process of the current residual model to be trained; moreover, the training target is obtained by theoretical deduction according to the optimization target of the fusion model. To facilitate understanding of the training object, theoretical derivation of the training object and actual meaning are described below in conjunction with formulas (1) to (5).
In the embodiment of the application, the fusion model optimization target in each iteration process is that the difference between the output result of the target model on the training data and the result label value of the training data is minimal (as shown in formula (1)), and the embodiment of the application can use the squared-error loss function shown in formula (2) to measure the difference between the output result of the target model on the i-th training data and the result label value of the i-th training data. The output result of the target model on the i-th training data is calculated using formula (3); substituting formula (3) into formula (2) and following the derivation shown in formula (4) yields formula (5), which is the training target of each iteration process in the embodiment of the application. Based on this, in the embodiment of the present application, if the current residual model to be trained in the (m+1)-th iteration process can reach the training target of the (m+1)-th iteration process, it indicates that the fusion model after the (m+1)-th iteration can reach the fusion model optimization target of the (m+1)-th iteration process.
L(y_i, f_{m+1}(x_i)) = (y_i − f_{m+1}(x_i))²    (2)

f_{m+1}(x_i) = f_m(x_i) + M(x_i, θ_{m+1})    (3)

Where θ_{m+1} represents the set of target parameters of the current residual model to be trained in the (m+1)-th iteration process (i.e., the set of model parameters that causes the current residual model to be trained in the (m+1)-th iteration process to reach the training target), which is also the set of model parameters corresponding to the fusion model optimization target in the (m+1)-th iteration process (namely, the set of model parameters that minimizes the difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration); L(y_i, f_{m+1}(x_i)) represents the difference, in the (m+1)-th iteration process, between the output result of the target model on the i-th training data and the result label value of the i-th training data; L(·) represents the squared-error loss function; y_i represents the result label value of the i-th training data; f_{m+1}(x_i) represents the output result of the target model on the i-th training data in the (m+1)-th iteration process; f_m(x_i) represents the output result of the fusion model after the m-th iteration on the i-th training data; M(x_i, θ_{m+1}) represents the current residual model to be trained with model parameters θ_{m+1} in the (m+1)-th iteration process; x_i represents the i-th training data; i is a positive integer, i ≤ N; m is a positive integer. In addition, f_0(x_i) = 0.
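The key step in the derivation is that, with f_{m+1}(x_i) = f_m(x_i) + M(x_i, θ_{m+1}), the loss (y_i − f_{m+1}(x_i))² equals (r_i − M(x_i, θ_{m+1}))², where r_i = y_i − f_m(x_i) is the residual label value after the m-th iteration. A small numerical check of this identity (the values are arbitrary illustrations):

```python
def fused_loss(y_i, f_m_xi, m_xi):
    # (y_i - f_{m+1}(x_i))^2 with f_{m+1}(x_i) = f_m(x_i) + M(x_i, theta_{m+1})
    return (y_i - (f_m_xi + m_xi)) ** 2

def residual_loss(y_i, f_m_xi, m_xi):
    # (r_i - M(x_i, theta_{m+1}))^2 with residual label r_i = y_i - f_m(x_i)
    return ((y_i - f_m_xi) - m_xi) ** 2

# the two losses coincide, so minimizing the residual model's error against
# the residual labels also minimizes the fused model's squared error
for y_i, f_m_xi, m_xi in [(1.0, 0.5, 0.25), (0.0, 1.0, -0.5), (3.0, 1.0, 2.0)]:
    assert fused_loss(y_i, f_m_xi, m_xi) == residual_loss(y_i, f_m_xi, m_xi)
```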
The target parameters are a group of model parameters capable of enabling the current residual model to be trained to reach a training target, and specifically are: the target parameters in the m+1th iteration process refer to a group of model parameters capable of enabling the current residual model to be trained in the m+1th iteration process to reach the training target in the m+1th iteration process, namely a group of model parameters capable of enabling the target model in the m+1th iteration process to reach the fusion model optimization target in the m+1th iteration process. The model parameters may include model super parameters and model internal parameters, among others.
Based on the foregoing, in the embodiment of the present application, after the current residual model to be trained in the m+1th iteration process is obtained, a set of target parameters of the current residual model to be trained in the m+1th iteration process may be solved according to the minimum difference between the output result of the current residual model to be trained in the m+1th iteration process and the residual label value of the training data after the m iteration process as a training target, so that the current optimal residual model of the current residual model to be trained in the m+1th iteration process may be generated by using the set of target parameters.
In addition, the embodiment of the application further provides an implementation manner for solving a set of target parameters of the current residual model to be trained, and please refer to the following detailed description.
S103: and generating a current optimal residual error model of the current residual error model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration.
In the embodiment of the application, after solving a group of target parameters of the current residual model to be trained in the m+1th iteration process, the group of target parameters can be utilized to generate the current optimal residual model in the m+1th iteration process, and the fusion model after the m iteration process and the current optimal residual model in the m+1th iteration process are fused to obtain the fusion model after the m+1th iteration process. Wherein m is an integer, and m is not less than 0. It should be noted that, because the fusion model after the 0 th iteration does not exist, for the 1 st iteration process, after solving a set of target parameters of the current residual model to be trained in the 1 st iteration process, the current optimal residual model in the 1 st iteration process may be generated by using the set of target parameters, and the current optimal residual model in the 1 st iteration process may be determined as the fusion model after the 1 st iteration.
S104: and calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating the residual label value of the training data after the current iteration.
The resulting tag value of the training data is used to characterize the actual tag of the training data.
The output result of the training data by the fusion model after the iteration is the result obtained by processing the training data by the fusion model after the iteration. It should be noted that, the embodiment of the present application is not limited to the method for obtaining the output result of the training data by the fusion model after the current iteration, and is described below with reference to two examples.
As a first example, as shown in formula (6), the process of obtaining the output result of the fusion model after the (m+1)-th iteration on the training data is: when m = 0, the output result of the current optimal residual model in the 1st iteration process on the training data is determined as the output result of the fusion model after the 1st iteration on the training data; when m ≥ 1, the output result of the fusion model after the m-th iteration on the training data is added to the output result of the current optimal residual model in the (m+1)-th iteration process on the training data, and the sum is determined as the output result of the fusion model after the (m+1)-th iteration on the training data.
Where f_{m+1}(x) represents the output result of the fusion model after the (m+1)-th iteration on the training data; f_m(x) represents the output result of the fusion model after the m-th iteration on the training data; M(x, θ_{m+1}) represents the current optimal residual model in the (m+1)-th iteration process; θ_{m+1} represents the target parameters in the (m+1)-th iteration process; x represents the training data; m is an integer, and m ≥ 0.
As a second example, the output result of the training data by the fusion model after the (m+1) th iteration is obtained by: and inputting the training data into the fusion model after the (m+1) th iteration for processing, and obtaining an output result of the fusion model after the (m+1) th iteration as an output result of the fusion model after the (m+1) th iteration on the training data.
It should be noted that, the process of obtaining the output result of the training data by the fusion model after the (m+1) th iteration provided by the second example is similar to the process of obtaining the output result of the training data by the current residual model to be trained.
The residual label value of the training data after the iteration is used for representing the gap between the processing performance of the fusion model and the processing performance of the ideal model after the iteration, and can also be used for representing the residual model optimization target of the next iteration process.
Based on the foregoing, in the embodiment of the present application, after the fusion model after the (m+1) -th iteration is obtained, the residual label value of the training data after the (m+1) -th iteration may be determined according to the difference between the result label value of the training data and the output result of the fusion model after the (m+1) -th iteration on the training data, so that the residual label value of the training data after the (m+1) -th iteration may be utilized in the (m+2) -th iteration process to determine the training target of the current residual model to be trained in the (m+2) -th iteration process.
S105: judging whether a first preset stopping condition is reached, if so, executing S106; if not, the process returns to S101.
The embodiment of the application does not limit the first preset stopping condition. For example, the first preset stopping condition may be that the difference between the result label value of the training data and the output result of the fusion model after the (m+1)-th iteration on the training data is lower than a preset difference threshold; or that the rate of change of that difference is lower than a preset change threshold; or that the number of updates of the fusion model has reached a preset number of updates.
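As a sketch, the three alternative criteria above could be checked as follows (the threshold names and default values are illustrative, not part of the original):

```python
def reached_first_stop(diff, prev_diff, n_updates,
                       diff_threshold=1e-3,
                       change_threshold=1e-4,
                       max_updates=100):
    """True if any of the three first-preset-stopping conditions holds:
    the current difference is low enough, the difference is changing too
    slowly between iterations, or the fusion model has been updated
    the preset number of times."""
    if diff < diff_threshold:
        return True
    if prev_diff is not None and abs(prev_diff - diff) < change_threshold:
        return True
    return n_updates >= max_updates
```

Here `diff` is the difference between the result label values and the fused model's outputs after the current iteration, and `prev_diff` is the same quantity from the previous iteration (None on the first iteration).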
S106: and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model.
As an example, when the first preset stop condition is reached at the 100 th iteration, S106 is specifically: and when the 100 th iteration reaches the first preset stopping condition, taking the fusion model after the 100 th iteration as a final output model.
Based on the above-mentioned content of S101 to S106, in the model parameter selection method provided in the embodiment of the present application, the following steps may be iteratively performed: obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained; taking the minimum difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; generating a current optimal residual error model of a current residual error model to be trained by utilizing target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration; and calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating the residual label value of the training data after the current iteration. And stopping the iterative process when the first preset stopping condition is determined to be reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model.
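Putting S101 to S106 together, the overall iterative fusion procedure can be sketched as follows; `fit_residual_model` is a hypothetical stand-in for the parameter search of S102/S103, and the stopping condition here is simplified to a fixed iteration count:

```python
def build_fusion_model(train_x, train_y, fit_residual_model, n_iters=10):
    """Iteratively fit residual models against residual labels and fuse
    them additively; the fusion model is kept as a list of residual models.

    fit_residual_model(train_x, residual_labels) -> callable model whose
    output on each sample approximates that sample's residual label.
    """
    fused = []
    residual_labels = list(train_y)  # 1st iteration: residual label = result label
    for _ in range(n_iters):  # first preset stop condition: update count
        model = fit_residual_model(train_x, residual_labels)
        fused.append(model)  # fuse with the current optimal residual model
        outputs = [sum(m(x) for m in fused) for x in train_x]
        # residual label after this iteration: result label minus fused output
        residual_labels = [y - o for y, o in zip(train_y, outputs)]
    return fused

# toy residual fitter: a constant model predicting the mean residual
fit_const = lambda xs, rs: (lambda x, c=sum(rs) / len(rs): c)
fused = build_fusion_model([0, 1, 2], [1.0, 1.0, 1.0], fit_const, n_iters=3)
assert all(sum(m(x) for m in fused) == 1.0 for x in [0, 1, 2])
```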
The training target of the current residual model to be trained is that the difference between its output result on the training data and the residual label value of the training data after the last iteration is minimal. The current optimal residual model determined based on this training target can therefore approach the residual label value of the training data after the last iteration as closely as possible, which means it compensates, to the greatest extent, for the gap between the output result of the fusion model after the last iteration on the training data and the result label value of the training data. As a result, the output result on the training data of the fusion model after the current iteration, obtained by fusing the fusion model after the last iteration with the current optimal residual model, is closer to the result label value of the training data, so the final output model determined through multiple rounds of iterative fusion has better performance, and the expected effect can be achieved when the final output model is used to execute preset tasks (such as image classification or information identification).
In one possible implementation manner of the embodiment of the present application, a specific implementation manner of solving a set of target parameters (that is, S102) of a current residual model to be trained is provided, where, as shown in fig. 2, S102 may specifically include S1021-S1025:
S1021: a set of model parameters of a current residual model to be trained is selected.
The embodiment of the application does not limit the model parameters, for example, the model parameters can include model super parameters and model internal parameters.
In addition, the embodiment of the application is not limited to the selection method of the model parameters, and for example, the selection method can be a random selection method or a preset screening method.
In addition, the embodiment of the present application further provides an implementation manner of S1021, which may specifically be: and selecting a group of model parameters of the current residual model to be trained in a preset parameter space range.
The method comprises the steps that a preset parameter space range is used for representing a selection space range of model parameters of a current residual model to be trained; moreover, the embodiment of the application does not limit the acquisition mode of the preset parameter space range.
It should be noted that the model parameters selected each time are different; that is, the set of model parameters selected at the j-th time differs from each of the sets selected in the previous j−1 times, where j is a positive integer and j ≥ 2.
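One simple way to guarantee that each draw differs from all previous ones, as required here, is to enumerate a finite preset parameter space and consume its combinations in random order without replacement. A sketch under that assumption (the parameter names and values are hypothetical):

```python
import itertools
import random

def parameter_candidates(param_space):
    """All parameter combinations in a preset parameter space range."""
    keys = sorted(param_space)
    return [dict(zip(keys, values))
            for values in itertools.product(*(param_space[k] for k in keys))]

space = {"depth": [2, 3, 4], "learning_rate": [0.05, 0.1]}
candidates = parameter_candidates(space)
random.shuffle(candidates)  # select in random order; no set is drawn twice
assert len(candidates) == 6
assert all(a != b for i, a in enumerate(candidates) for b in candidates[i + 1:])
```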
S1022: and inputting training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters.
In the embodiment of the application, after the model parameters are selected, the current residual model to be trained corresponding to the group of model parameters can be constructed by utilizing the selected model parameters, and then training data is input into the current residual model to be trained corresponding to the group of model parameters so as to obtain the output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters.
In addition, the embodiment of the present application further provides an implementation manner of S1022, which may specifically include the following steps: inputting the ith training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining the ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters. And i is an integer from 1 to N, wherein N is the number of the training data.
Based on the above, S1022 may specifically be: inputting the 1 st training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining the 1 st output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters; inputting the 2 nd training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining the 2 nd output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters; … … (and so on); and inputting the N training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining an N output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters. It should be noted that, the embodiment of the present application is not limited to the order of obtaining the 1 st output result to the nth output result.
S1023: and calculating the difference value between the output result corresponding to the group of model parameters and the residual label value of the training data after the last iteration.
In the embodiment of the application, after the output results corresponding to the group of model parameters are obtained, the difference value between the output results corresponding to the group of model parameters and the residual label value of the training data after the last iteration can be calculated. As an example, when S1022 obtains the ith output result to the nth output result corresponding to the set of model parameters, as shown in formula (7), S1023 may specifically be: firstly, calculating a first difference value between an ith output result corresponding to the group of model parameters and a residual error label value of the ith training data after the last iteration, wherein i is an integer from 1 to N, and N is the number of the training data; and calculating the square sum of the N first differences, and determining the square sum as the difference between the output result corresponding to the group of model parameters and the residual label value of the training data after the last iteration.
E_j = Σ_{i=1}^{N} (r_i^j − T_i)²    (7)

Where E_j represents the difference between the output results corresponding to the j-th set of model parameters and the residual label values of the training data after the last iteration, j is a positive integer, and j ≥ 1; r_i^j represents the i-th output result corresponding to the j-th set of model parameters; T_i represents the residual label value of the i-th training data after the last iteration, i is a positive integer, i ≤ N, and N is the number of training data.
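Formula (7) is just a per-sample sum of squared differences; in sketch form:

```python
def parameter_set_difference(outputs, residual_labels):
    """E_j: sum over the N training samples of (r_i^j - T_i)^2, where
    outputs are the residual model's outputs under the j-th parameter set
    and residual_labels are the residual label values after the last iteration."""
    return sum((r - t) ** 2 for r, t in zip(outputs, residual_labels))

assert parameter_set_difference([1.0, 2.0], [0.0, 0.0]) == 5.0
assert parameter_set_difference([0.5, 0.5], [0.5, 0.5]) == 0.0
```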
S1024: judging whether a second preset stopping condition is reached, if so, executing S1025; if not, the above-mentioned S1021 is executed.
The embodiment of the application is not limited to the second preset stop condition, for example, the second preset stop condition is that the preset times are reached.
S1025: and determining a group of model parameters corresponding to the minimum difference value in the difference values as a group of target parameters of the current residual model to be trained.
In the embodiment of the application, after the second preset stopping condition is determined to be reached, the minimum difference value can be determined from the differences between the output results corresponding to each set of model parameters and the residual label values of the training data after the last iteration, and the set of model parameters corresponding to the minimum difference is determined as the set of target parameters of the current residual model to be trained. As an example, as shown in formula (8), S1025 may specifically be: determining the minimum difference among the differences between the output results corresponding to the G sets of model parameters and the residual label values of the training data after the last iteration, and determining the set of model parameters corresponding to the minimum difference as the set of target parameters of the current residual model to be trained.
θ_{m+1} = argmin_{θ_j, 1 ≤ j ≤ G} E_j    (8)

Where θ_{m+1} represents the set of target parameters of the current residual model to be trained in the (m+1)-th iteration process, m is an integer, and m ≥ 0; E_j represents the difference between the output results corresponding to the j-th set of model parameters and the residual label values of the training data after the last iteration, j is a positive integer, and 1 ≤ j ≤ G; G is the number of sets of model parameters tried by the current residual model to be trained in the (m+1)-th iteration process.
Based on the above contents of S1021 to S1025, the embodiment of the present application may determine a set of target parameters of the current residual model to be trained in the m+1th iteration process from the multiple sets of model parameters, so that the set of target parameters of the current residual model to be trained in the m+1th iteration process may satisfy the training target of the current residual model to be trained in the m+1th iteration process. m is an integer, and m is more than or equal to 0.
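S1021 to S1025 amount to a pick-evaluate-keep-best loop over candidate parameter sets. A minimal sketch, where `build_model` (which turns a parameter set into a runnable residual model) and the candidates themselves are hypothetical:

```python
def search_target_parameters(candidates, build_model, train_x, residual_labels):
    """Return the parameter set whose model outputs differ least, in the
    sum-of-squares sense of formula (7), from the residual labels."""
    best_params, best_diff = None, float("inf")
    for params in candidates:                  # S1021: select a set of parameters
        model = build_model(params)
        outputs = [model(x) for x in train_x]  # S1022: outputs on training data
        diff = sum((o - t) ** 2                # S1023: difference E_j
                   for o, t in zip(outputs, residual_labels))
        if diff < best_diff:
            best_params, best_diff = params, diff
    return best_params                         # S1025: set with the minimum difference

# toy search: scale factors standing in for the "parameter sets"
best = search_target_parameters([0.0, 0.5, 1.0],
                                lambda p: (lambda x, p=p: p * x),
                                [1.0, 2.0], [0.5, 1.0])
assert best == 0.5
```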
In a possible implementation manner of the embodiment of the present application, a specific implementation manner of generating a residual label value (that is, S104) of training data after the present iteration is provided, which may specifically include: inputting the ith training data into the fusion model after the current iteration to obtain an ith output result output by the fusion model after the current iteration; and calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual error label value of the training data after the current iteration. And i is an integer from 1 to N, wherein N is the number of the training data.
Based on the foregoing, in the embodiment of the present application, S104 may specifically include the following steps:
The first step: inputting the 1st training data into the fusion model after the current iteration to obtain the 1st output result output by the fusion model after the current iteration; and calculating the difference between the 1st result label value of the training data and the 1st output result output by the fusion model after the current iteration, and generating the 1st residual label value of the training data after the current iteration.
The second step: inputting the 2nd training data into the fusion model after the current iteration to obtain the 2nd output result output by the fusion model after the current iteration; and calculating the difference between the 2nd result label value of the training data and the 2nd output result output by the fusion model after the current iteration, and generating the 2nd residual label value of the training data after the current iteration.
… (and so on)
The N-th step: inputting the N-th training data into the fusion model after the current iteration to obtain the N-th output result output by the fusion model after the current iteration; and calculating the difference between the N-th result label value of the training data and the N-th output result output by the fusion model after the current iteration, and generating the N-th residual label value of the training data after the current iteration.
It should be noted that, the embodiment of the present application does not limit the generation sequence of each residual tag value.
Based on the above, it can be known that, according to the output result corresponding to each training data and the result label of each training data output by the fusion model after the current iteration, the embodiment of the application can determine the residual label value of each training data after the current iteration, so as to determine the training target in the next iteration process according to the residual label value of each training data after the current iteration.
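The per-sample residual-label computation described above can be sketched in one line per sample:

```python
def residual_labels_after_iteration(fused_output, train_x, result_labels):
    """Residual label value of each training sample after the current
    iteration: result label minus the fused model's output on that sample.
    fused_output is a callable giving the fusion model's output on a sample."""
    return [y - fused_output(x) for x, y in zip(train_x, result_labels)]

assert residual_labels_after_iteration(lambda x: x, [1.0, 2.0], [3.0, 3.0]) == [2.0, 1.0]
```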
In addition, an embodiment of the present application further provides an image classification method, based on the model parameter selection method provided in the foregoing embodiment, a target fusion model obtained according to the model parameter selection method may be applied to image classification, referring to fig. 3, which is a flowchart of the image classification method provided in the embodiment of the present application, where the method may include:
s301: and acquiring an image to be classified.
S302: and inputting the images to be classified into the target fusion model to obtain an image classification result output by the target fusion model.
In this embodiment, an image to be classified is first obtained, and the image to be classified is input into a target fusion model constructed in advance, so as to obtain an image classification result corresponding to the image to be classified. When the method is specifically implemented, the target fusion model can output not only the classification result corresponding to the image to be classified, but also the probability value corresponding to each classification result, so that a user can directly know the classification condition of the image to be classified.
In practical application, the image to be classified may be a medical image, and the target fusion model used is a model capable of classifying the medical image, and a specific classification result of the medical image may be obtained by inputting the medical image (or a feature map corresponding thereto) into the target fusion model. For example, it may be identified whether the input medical image is a medical image carrying a certain feature or having a certain classification result or a medical image not carrying a certain feature or not having a certain classification result.
The construction process of the target fusion model comprises the following steps:
obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; in the first iteration, the residual label value of the training data generated after the last iteration is the result label value of the training data;
Generating a current optimal residual error model of the current residual error model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration;
calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating a residual label value of the training data after the current iteration;
and repeatedly executing the steps of obtaining the target model and the follow-up steps until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model (namely, a target fusion model).
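The iterative procedure above is, in effect, a gradient-boosting-style construction in which each stage's residual model is fitted to the residual labels left by the fusion model so far. The following is a minimal, hedged sketch in Python; the one-split stump used as the residual model, the trial count, and the fixed iteration counts standing in for the two preset stopping conditions are illustrative assumptions, not part of the original disclosure:

```python
import random

class Stump:
    """A one-split regression stump, used here as the residual model to be trained."""
    def __init__(self, threshold, left, right):
        self.threshold, self.left, self.right = threshold, left, right

    def predict(self, x):
        return self.left if x < self.threshold else self.right

def fit_stump(xs, residuals, n_trials=50, rng=random):
    """Solve a set of target parameters: minimize the squared difference between
    the stump's outputs and the residual label values after the previous iteration."""
    best, best_err = None, float("inf")
    for _ in range(n_trials):  # second preset stop condition: a preset trial count
        t = rng.uniform(min(xs), max(xs))
        lo = [r for x, r in zip(xs, residuals) if x < t]
        hi = [r for x, r in zip(xs, residuals) if x >= t]
        cand = Stump(t,
                     sum(lo) / len(lo) if lo else 0.0,
                     sum(hi) / len(hi) if hi else 0.0)
        err = sum((cand.predict(x) - r) ** 2 for x, r in zip(xs, residuals))
        if err < best_err:
            best, best_err = cand, err
    return best

def build_fusion_model(xs, labels, n_iters=20, seed=0):
    """Repeat: fit the current residual model to the residual labels, fuse it with
    the fusion model after the previous iteration, and recompute the residuals."""
    rng = random.Random(seed)
    models, preds = [], [0.0] * len(xs)
    for _ in range(n_iters):  # first preset stop condition: a preset iteration count
        # In the first iteration the residual labels equal the result labels.
        residuals = [y - p for y, p in zip(labels, preds)]
        stump = fit_stump(xs, residuals, rng=rng)
        models.append(stump)
        # Fusing here means summing the stages' outputs.
        preds = [p + stump.predict(x) for x, p in zip(xs, preds)]
    return models, preds
```

On a toy dataset the fused predictions converge toward the result label values, which is exactly the behavior the five steps above describe.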
It should be noted that the training data in this embodiment may include a positive sample medical image and a negative sample medical image. The positive sample medical image is a medical image to be trained carrying a certain feature or having a certain classification result, and the result label value of the positive sample medical image may be 1. The negative sample medical image refers to a medical image to be trained which does not carry a certain feature or does not have a certain classification result, and the result label value of the negative sample medical image may be 0. The output result of the current residual model to be trained on the training data may be the probability value, output by the current residual model to be trained after the training data is input into it, that the training data is a positive sample medical image; the output result of the fusion model after the current iteration on the training data may be the probability value, output by the fusion model after the current iteration after the training data is input into it, that the training data is a positive sample medical image.
In one possible implementation manner, the solving a set of target parameters of the current residual model to be trained with a difference between an output result of the current residual model to be trained on training data and a residual label value of the training data after a last iteration as a training target includes:
selecting a group of model parameters of the current residual model to be trained;
inputting training data into a current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference value between an output result corresponding to the group of model parameters and a residual label value of the training data after the last iteration;
and repeatedly executing the steps of selecting a group of model parameters of the current residual model to be trained and the follow-up steps until a second preset stopping condition is reached, and determining a group of model parameters corresponding to the smallest difference value in the difference values as a group of target parameters of the current residual model to be trained.
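The select-evaluate-repeat loop in the four steps above amounts to a search over a preset parameter space range. A minimal sketch, assuming random uniform sampling and a generic model factory (the names `make_model` and `param_space` and the sampler are hypothetical illustrations, not part of the original disclosure):

```python
import random

def select_target_parameters(make_model, param_space, xs, residual_labels,
                             n_trials=100, seed=0):
    """Repeat: select a group of model parameters within the preset parameter
    space range, evaluate the resulting model's outputs against the residual
    label values after the previous iteration, and keep the group with the
    smallest difference (second preset stop condition: n_trials repetitions)."""
    rng = random.Random(seed)
    best_params, best_diff = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_space.items()}
        model = make_model(params)
        diff = sum((model(x) - r) ** 2 for x, r in zip(xs, residual_labels))
        if diff < best_diff:
            best_params, best_diff = params, diff
    return best_params, best_diff
```

The sampler could equally be a grid or a smarter search; the loop structure — pick a group of parameters, score it against the residual labels, keep the best — is what the steps above describe.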
In one possible implementation manner, the inputting training data into the current residual model to be trained corresponding to the set of model parameters, and obtaining an output result corresponding to the set of model parameters output by the current residual model to be trained corresponding to the set of model parameters includes:
inputting the ith training data into a current residual model to be trained corresponding to the group of model parameters; i is an integer from 1 to N, N is the number of the training data;
obtaining an ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference between an output result corresponding to the set of model parameters and a residual label value of the training data after the last iteration, including:
calculating a first difference value between an ith output result corresponding to the group of model parameters and a residual error label value of the ith training data after the last iteration;
and calculating the square sum of the N first differences, and determining the square sum as the difference between the output result corresponding to the set of model parameters and the residual label value of the training data after the last iteration.
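The two steps above describe a sum-of-squared-errors measure: a first difference is taken for each ith sample, and the squares of the N first differences are summed. A minimal sketch:

```python
def residual_difference(outputs, residual_labels):
    """Difference between the model's outputs and the residual label values after
    the previous iteration: the sum over i of the squared first difference
    (output_i - residual_label_i)."""
    assert len(outputs) == len(residual_labels)
    first_diffs = [o - r for o, r in zip(outputs, residual_labels)]
    return sum(d * d for d in first_diffs)
```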
In one possible implementation, the selecting a set of model parameters of the current residual model to be trained includes:
and selecting a group of model parameters of the current residual model to be trained in a preset parameter space range.
In one possible implementation, the model parameters include model hyper-parameters and model internal parameters.
In one possible implementation, the second preset stop condition is that a preset number of repetitions has been reached.
In one possible implementation manner, the calculating the difference between the result tag value of the training data and the output result of the fusion model after the current iteration on the training data, to generate the residual tag value of the training data after the current iteration includes:
inputting the ith training data into the fusion model after the current iteration to obtain an ith output result output by the fusion model after the current iteration; i is an integer from 1 to N, N is the number of the training data;
and calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual error label value of the training data after the current iteration.
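The residual update described in the two steps above can be sketched as follows, with the fusion model after the current iteration assumed to be any callable returning a probability value:

```python
def update_residual_labels(fusion_model, training_data, result_labels):
    """For each ith training sample, the new residual label value after the
    current iteration is the ith result label value minus the ith output of the
    fusion model after the current iteration."""
    return [y - fusion_model(x) for x, y in zip(training_data, result_labels)]
```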
It should be further noted that, in this embodiment, the description of the specific generation process of the target fusion model may refer to the flow of the method described in fig. 1, and this embodiment is not repeated here.
In this embodiment of the application, the image to be classified is classified by using the target fusion model; because the target fusion model can achieve a global optimum, the classification result output by the target fusion model is more accurate, thereby further improving the accuracy of the classification result.
In addition, the embodiment of the present application further provides an information identification method, based on the model parameter selection method provided in the foregoing embodiment, a target fusion model obtained according to the model parameter selection method may be applied to information identification, referring to fig. 4, which is a flowchart of the information identification method provided in the embodiment of the present application, where the method may include:
S401: acquiring information to be identified.
S402: inputting the information to be identified into the target fusion model to obtain the information attribute category output by the target fusion model.
In this embodiment, information to be identified is first obtained, and the information to be identified is input into a target fusion model constructed in advance, so as to obtain an information attribute category corresponding to the information to be identified. In a specific implementation, the target fusion model can output not only the information attribute category corresponding to the information to be identified, but also a probability value corresponding to each information attribute category, so that a user can directly learn how the information to be identified is classified.
In practical application, the information to be identified may be text information to be identified, and the target fusion model used is a model capable of classifying the information attributes of the text information; a specific information attribute classification result can be obtained by inputting the text information to be identified (or its corresponding feature vector) into the target fusion model. For example, it is possible to recognize whether the input text information to be identified is trusted information or fraud information.
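As a hedged illustration of how such a model might be consumed once trained — the 0.5 decision threshold and the category names are assumptions for this sketch, not part of the original disclosure:

```python
def classify_text(fusion_model, feature_vector, threshold=0.5):
    """Return the information attribute category and the probability value that
    the input is positive sample (trusted) text information."""
    p = fusion_model(feature_vector)
    return ("trusted" if p >= threshold else "fraud"), p
```

Because the fusion model outputs a probability rather than a bare label, the caller also sees how confident the classification is, which matches the behavior described above.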
The construction process of the target fusion model comprises the following steps:
obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; in the first iteration, the residual label value of the training data after the last iteration is the result label value of the training data;
generating a current optimal residual error model of the current residual error model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration;
calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating a residual label value of the training data after the current iteration;
and repeatedly executing the steps of obtaining the target model and the follow-up steps until a first preset stopping condition is reached, and taking the fusion model after the iteration when the first preset stopping condition is reached as a final output model (namely, a target fusion model).
It should be noted that the training data in this embodiment may include positive sample text information and negative sample text information. The positive sample text information may refer to trusted information, and its result label value is 1. The negative sample text information may refer to fraud information, and its result label value is 0. The output result of the current residual model to be trained on the training data may be the probability value, output by the current residual model to be trained after the training data is input into it, that the training data is positive sample text information; the output result of the fusion model after the current iteration on the training data may be the probability value, output by the fusion model after the current iteration after the training data is input into it, that the training data is positive sample text information.
In one possible implementation manner, the solving a set of target parameters of the current residual model to be trained with a difference between an output result of the current residual model to be trained on training data and a residual label value of the training data after a last iteration as a training target includes:
selecting a group of model parameters of the current residual model to be trained;
inputting training data into a current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference value between an output result corresponding to the group of model parameters and a residual label value of the training data after the last iteration;
and repeatedly executing the steps of selecting a group of model parameters of the current residual model to be trained and the follow-up steps until a second preset stopping condition is reached, and determining a group of model parameters corresponding to the smallest difference value in the difference values as a group of target parameters of the current residual model to be trained.
In one possible implementation manner, the inputting training data into the current residual model to be trained corresponding to the set of model parameters, and obtaining an output result corresponding to the set of model parameters output by the current residual model to be trained corresponding to the set of model parameters includes:
inputting the ith training data into a current residual model to be trained corresponding to the group of model parameters; i is an integer from 1 to N, N is the number of the training data;
obtaining an ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference between an output result corresponding to the set of model parameters and a residual label value of the training data after the last iteration, including:
calculating a first difference value between an ith output result corresponding to the group of model parameters and a residual error label value of the ith training data after the last iteration;
and calculating the square sum of the N first differences, and determining the square sum as the difference between the output result corresponding to the set of model parameters and the residual label value of the training data after the last iteration.
In one possible implementation, the selecting a set of model parameters of the current residual model to be trained includes:
and selecting a group of model parameters of the current residual model to be trained in a preset parameter space range.
In one possible implementation, the model parameters include model hyper-parameters and model internal parameters.
In one possible implementation, the second preset stop condition is that a preset number of repetitions has been reached.
In one possible implementation manner, the calculating the difference between the result tag value of the training data and the output result of the fusion model after the current iteration on the training data, to generate the residual tag value of the training data after the current iteration includes:
inputting the ith training data into the fusion model after the current iteration to obtain an ith output result output by the fusion model after the current iteration; i is an integer from 1 to N, N is the number of the training data;
and calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual error label value of the training data after the current iteration.
It should be further noted that, in this embodiment, the description of the specific generation process of the target fusion model may refer to the flow of the method described in fig. 1, and this embodiment is not repeated here.
In this embodiment of the application, the attribute category of the information to be identified is identified by using the target fusion model; because the target fusion model can achieve a global optimum, the classification result output by the target fusion model is more accurate, thereby further improving the accuracy of the classification result.
Based on the model parameter selection method provided by the method embodiment, the embodiment of the application also provides a model parameter selection device, and the model parameter selection device is described below with reference to the accompanying drawings.
Referring to fig. 5, the structure of a model parameter selection device according to an embodiment of the present application is shown. As shown in fig. 5, the model parameter selecting means includes:
A target model generating unit 501, configured to obtain a target model, where the target model is obtained by fusing a fusion model after a previous iteration with a current residual model to be trained;
the target parameter solving unit 502 is configured to solve a group of target parameters of the current residual model to be trained, with the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; in the first iteration, the residual label value of the training data after the last iteration is the result label value of the training data;
a fusion model generating unit 503, configured to generate a current optimal residual model of the current residual model to be trained by using the target parameter, and fuse the fusion model after the previous iteration with the current optimal residual model to obtain a fusion model after the current iteration;
a residual label generating unit 504, configured to calculate a difference between a result label value of the training data and an output result of the training data by the fusion model after the current iteration, and generate a residual label value of the training data after the current iteration;
An output model determining unit 505, configured to repeatedly return to the step of obtaining the target model in the target model generating unit 501 until a first preset stop condition is reached, and to take, as a final output model, the fusion model after the iteration in which the first preset stop condition is reached.
In one possible implementation manner, the target parameter solving unit 502 includes:
a model parameter selection subunit, configured to select a set of model parameters of the current residual model to be trained;
the output result determining subunit is used for inputting training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
a result difference value calculating subunit, configured to calculate a difference value between an output result corresponding to the set of model parameters and a residual label value of the training data after the previous iteration;
and the target parameter determining subunit is used for returning to execute the group of model parameters for selecting the current residual model to be trained in the model parameter selecting subunit until a second preset stopping condition is reached, and determining the group of model parameters corresponding to the smallest difference value in the difference values as the group of target parameters of the current residual model to be trained.
In one possible implementation, the output result determining subunit includes:
the residual model determining subunit is used for inputting the ith training data into the current residual model to be trained corresponding to the group of model parameters; i is an integer from 1 to N, N is the number of the training data;
the output result obtaining subunit is used for obtaining an ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
the result difference calculation subunit includes:
a first difference calculating subunit, configured to calculate a first difference between an ith output result corresponding to the set of model parameters and a residual label value of the ith training data after a previous iteration;
and the result difference value determining subunit is used for calculating the square sum of the N first difference values, and determining the square sum as the difference value between the output result corresponding to the group of model parameters and the residual label value of the training data after the last iteration.
In a possible implementation, the model parameter selection subunit is specifically configured to: and selecting a group of model parameters of the current residual model to be trained in a preset parameter space range.
In one possible implementation, the model parameters include model hyper-parameters and model internal parameters.
In one possible implementation, the second preset stop condition is that a preset number of repetitions has been reached.
In one possible implementation manner, the residual label generating unit 504 includes:
the fusion result output subunit is used for inputting the ith training data into the fusion model after the current iteration to obtain the ith output result output by the fusion model after the current iteration; i is an integer from 1 to N, N is the number of the training data;
and the residual label determining subunit is used for calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual label value of the training data after the current iteration.
It should be noted that, for the technical details of the model parameter selection device provided above, please refer to the above method embodiment.
Referring to fig. 6, the structure of an image classification device according to an embodiment of the present application is shown. As shown in fig. 6, the image classification apparatus includes:
an image acquisition unit 601, configured to acquire an image to be classified;
the image classification unit 602, configured to input the image to be classified into a target fusion model and obtain an image classification result output by the target fusion model;
a target fusion model construction unit 603, configured to construct the target fusion model;
the target fusion model construction unit comprises:
the target model generation unit is used for obtaining a target model, and the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
the target parameter solving unit is used for solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; in the first iteration, the residual label value of the training data after the last iteration is the result label value of the training data; the training data comprises a positive sample medical image and a negative sample medical image; the output result of the current residual model to be trained on training data is the probability value, output by the current residual model to be trained after the training data is input into it, that the training data is a positive sample medical image;
the fusion model generation unit is used for generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
the residual label generating unit is used for calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating the residual label value of the training data after the current iteration; the output result of the fusion model after the current iteration on the training data is the probability value, output by the fusion model after the current iteration after the training data is input into it, that the training data is a positive sample medical image;
and the output model determining unit is used for repeatedly returning to the step of obtaining the target model in the target model generating unit until a first preset stopping condition is reached, and taking, as a target fusion model, the fusion model after the iteration in which the first preset stopping condition is reached.
In one possible implementation manner, the target parameter solving unit includes:
a model parameter selection subunit, configured to select a set of model parameters of the current residual model to be trained;
the output result determining subunit is used for inputting training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
a result difference value calculating subunit, configured to calculate a difference value between an output result corresponding to the set of model parameters and a residual label value of the training data after the previous iteration;
and the target parameter determining subunit is used for returning to execute the group of model parameters for selecting the current residual model to be trained in the model parameter selecting subunit until a second preset stopping condition is reached, and determining the group of model parameters corresponding to the smallest difference value in the difference values as the group of target parameters of the current residual model to be trained.
In one possible implementation, the output result determining subunit includes:
the residual model determining subunit is used for inputting the ith training data into the current residual model to be trained corresponding to the group of model parameters; i is an integer from 1 to N, N is the number of the training data;
the output result obtaining subunit is used for obtaining an ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
the result difference calculation subunit includes:
a first difference calculating subunit, configured to calculate a first difference between an ith output result corresponding to the set of model parameters and a residual label value of the ith training data after a previous iteration;
and the result difference value determining subunit is used for calculating the square sum of the N first difference values, and determining the square sum as the difference value between the output result corresponding to the group of model parameters and the residual label value of the training data after the last iteration.
In a possible implementation, the model parameter selection subunit is specifically configured to: and selecting a group of model parameters of the current residual model to be trained in a preset parameter space range.
In one possible implementation, the model parameters include model hyper-parameters and model internal parameters.
In one possible implementation, the second preset stop condition is that a preset number of repetitions has been reached.
In one possible implementation manner, the residual label generating unit includes:
the fusion result output subunit is used for inputting the ith training data into the fusion model after the current iteration to obtain the ith output result output by the fusion model after the current iteration; i is an integer from 1 to N, N is the number of the training data;
and the residual label determining subunit is used for calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual label value of the training data after the current iteration.
It should be noted that, for the technical details of the image classification device provided above, please refer to the above method embodiment.
Referring to fig. 7, a schematic structural diagram of an information identifying apparatus according to an embodiment of the present application is shown. As shown in fig. 7, the information identifying apparatus includes:
an information acquisition unit 701 for acquiring information to be identified;
the information identifying unit 702 is configured to input the information to be identified into a target fusion model, and obtain an information attribute category output by the target fusion model;
a target fusion model construction unit 703, configured to construct the target fusion model;
the target fusion model construction unit comprises:
the target model generation unit is used for obtaining a target model, and the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
the target parameter solving unit is used for solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; in the first iteration, the residual label value of the training data after the last iteration is the result label value of the training data; the training data comprises positive sample text information and negative sample text information; the output result of the current residual model to be trained on training data is the probability value, output by the current residual model to be trained after the training data is input into it, that the training data is positive sample text information;
the fusion model generation unit is used for generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
the residual label generating unit is used for calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating the residual label value of the training data after the current iteration; the output result of the fusion model after the current iteration on the training data is the probability value, output by the fusion model after the current iteration after the training data is input into it, that the training data is positive sample text information;
and the output model determining unit is used for repeatedly returning to the step of obtaining the target model in the target model generating unit until a first preset stopping condition is reached, and taking, as a target fusion model, the fusion model after the iteration in which the first preset stopping condition is reached.
In one possible implementation manner, the target parameter solving unit includes:
a model parameter selection subunit, configured to select a set of model parameters of the current residual model to be trained;
the output result determining subunit is used for inputting training data into the current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
a result difference value calculating subunit, configured to calculate a difference value between an output result corresponding to the set of model parameters and a residual label value of the training data after the previous iteration;
and the target parameter determining subunit is used for returning to execute the group of model parameters for selecting the current residual model to be trained in the model parameter selecting subunit until a second preset stopping condition is reached, and determining the group of model parameters corresponding to the smallest difference value in the difference values as the group of target parameters of the current residual model to be trained.
In one possible implementation, the output result determining subunit includes:
the residual model determining subunit is configured to input the ith training data into the current residual model to be trained corresponding to the set of model parameters, where i is an integer from 1 to N and N is the number of training data items;
the output result obtaining subunit is configured to obtain the ith output result, corresponding to the set of model parameters, output by the current residual model to be trained corresponding to that set of model parameters;
the result difference calculation subunit includes:
a first difference calculating subunit, configured to calculate a first difference between an ith output result corresponding to the set of model parameters and a residual label value of the ith training data after a previous iteration;
and the result difference value determining subunit is configured to calculate the sum of the squares of the N first difference values, and to determine that sum of squares as the difference value between the output result corresponding to the set of model parameters and the residual label value of the training data after the last iteration.
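The two subunits above compute, in effect, a summed-squared-error score: a first difference per sample, then the sum of their squares. A one-function sketch:

```python
def difference_value(outputs, prev_residual_labels):
    """First difference per sample (output minus last-iteration residual
    label), then the sum of the squares of the N first differences."""
    firsts = [o - r for o, r in zip(outputs, prev_residual_labels)]
    return sum(d * d for d in firsts)
```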
In a possible implementation, the model parameter selection subunit is specifically configured to select a set of model parameters of the current residual model to be trained within a preset parameter space.
In one possible implementation, the model parameters include model hyperparameters and model internal parameters.
In one possible implementation, the second preset stopping condition is that a preset number of iterations is reached.
In one possible implementation manner, the residual label generating unit includes:
the fusion result output subunit is configured to input the ith training data into the fusion model after the current iteration and obtain the ith output result output by that fusion model, where i is an integer from 1 to N and N is the number of training data items;
and the residual label determining subunit is configured to calculate the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, thereby generating the ith residual label value of the training data after the current iteration.
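The residual label update these subunits describe is the familiar boosting residual: for each of the N samples, subtract the current fusion model's output from the result label. A minimal sketch, in which `fusion_predict` is an assumed placeholder for the fusion model after the current iteration:

```python
def update_residual_labels(fusion_predict, X, result_labels):
    """For each of the N training samples, the new residual label is the
    result label minus the fused model's output on that sample."""
    residual_labels = []
    for i in range(len(X)):                      # i runs over the N samples
        output_i = fusion_predict(X[i])          # ith output of the fused model
        residual_labels.append(result_labels[i] - output_i)
    return residual_labels
```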
It should be noted that, for technical details of the information identification apparatus described above, reference may be made to the foregoing method embodiments.
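Taken together, the units above implement an additive, boosting-style training loop: fit a residual model to the current residual labels, fuse it into the model from the last iteration, recompute the residual labels, and repeat until the first preset stopping condition is reached. A compact sketch under assumed interfaces — `fit_residual_model` stands in for the target-parameter search and optimal-residual-model generation; everything else follows the unit descriptions:

```python
def train_target_fusion_model(fit_residual_model, X, result_labels, n_rounds=10):
    """Iteratively fuse residual models; the fused output is the sum of the
    outputs of all residual models trained so far."""
    models = []

    def fused_predict(x):
        return sum(m(x) for m in models)

    # First iteration: the residual labels are the result labels themselves.
    residual_labels = list(result_labels)
    for _ in range(n_rounds):  # first preset stop condition: a fixed round count
        # Fit the current residual model to the current residual labels.
        model = fit_residual_model(X, residual_labels)
        models.append(model)   # fuse it with the model from the last iteration
        # Recompute residual labels: result label minus fused output.
        residual_labels = [y - fused_predict(x) for x, y in zip(X, result_labels)]
    return fused_predict
```

The returned `fused_predict` plays the role of the target fusion model: for image classification it would map a medical image to a positive-sample probability, and for information identification it would do the same for text information.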
In addition, an embodiment of the present application further provides a model parameter selection device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements any implementation of the model parameter selection method according to the foregoing embodiments.
An embodiment of the present application further provides an image classification device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements any implementation of the image classification method according to the foregoing embodiments.
An embodiment of the present application further provides an information identification device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements any implementation of the information identification method according to the foregoing embodiments.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing instructions which, when run on a terminal device, cause the terminal device to perform any implementation of the model parameter selection method, the image classification method, or the information identification method according to the foregoing embodiments.
It should be noted that the embodiments in this description are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may refer to one another. Since the systems and devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively brief; for relevant details, refer to the description of the method sections.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" or similar expressions refers to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
It is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a(n) …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of model parameter selection, the method comprising:
obtaining a target model, wherein the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target, and solving a group of target parameters of the current residual model to be trained; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data; the difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration is the difference between the result label value of the training data and the output result of the target model on the training data; the training data comprises a positive sample medical image and a negative sample medical image; the output result of the current residual model to be trained on training data is that the training data is input into the current residual model to be trained, and the training data output by the current residual model to be trained is a probability value of a positive sample medical image; alternatively, the training data includes positive sample text information and negative sample text information; the output result of the current residual model to be trained on training data is that the training data is input into the current residual model to be trained, and the training data output by the current residual model to be trained is a probability value of positive sample text information;
Generating a current optimal residual error model of the current residual error model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual error model to obtain a fusion model after the current iteration;
calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data, and generating a residual label value of the training data after the current iteration; when the training data comprises a positive sample medical image and a negative sample medical image, the output result of the fusion model after the current iteration to the training data is that the training data is input into the fusion model after the current iteration, and the training data output by the fusion model after the current iteration is a probability value of the positive sample medical image; when the training data comprises positive sample text information and negative sample text information, the output result of the fusion model after the current iteration to the training data is that the training data is input into the fusion model after the current iteration, and the training data output by the fusion model after the current iteration is a probability value of the positive sample text information;
Repeatedly executing the step of obtaining the target model and the subsequent steps until a first preset stopping condition is reached, and taking the fusion model after the iteration in which the first preset stopping condition is reached as a final output model; when the training data comprises a positive sample medical image and a negative sample medical image, the final output model is a target fusion model; the target fusion model is used for inputting the acquired images to be classified and outputting image classification results; the image to be classified is a medical image, and the image classification result is a specific classification result of the medical image; when the training data comprises positive sample text information and negative sample text information, the final output model is a target fusion model; the target fusion model is used for inputting the acquired information to be identified and outputting information attribute types; the information to be identified is text information to be identified, and the information attribute category is a specific information attribute classification result of the text information to be identified.
2. The method according to claim 1, wherein solving the set of target parameters of the current residual model to be trained with a minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target comprises:
Selecting a group of model parameters of the current residual model to be trained;
inputting training data into a current residual model to be trained corresponding to the group of model parameters, and obtaining an output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference value between an output result corresponding to the group of model parameters and a residual label value of the training data after the last iteration;
and repeatedly executing the steps of selecting a group of model parameters of the current residual model to be trained and the follow-up steps until a second preset stopping condition is reached, and determining a group of model parameters corresponding to the smallest difference value in the difference values as a group of target parameters of the current residual model to be trained.
3. The method according to claim 2, wherein inputting training data into the current residual model to be trained corresponding to the set of model parameters, and obtaining an output result corresponding to the set of model parameters output by the current residual model to be trained corresponding to the set of model parameters, includes:
inputting the ith training data into a current residual model to be trained corresponding to the group of model parameters; i is an integer from 1 to N, N is the number of the training data;
Obtaining an ith output result corresponding to the group of model parameters output by the current residual model to be trained corresponding to the group of model parameters;
calculating a difference between an output result corresponding to the set of model parameters and a residual label value of the training data after the last iteration, including:
calculating a first difference value between an ith output result corresponding to the group of model parameters and a residual error label value of the ith training data after the last iteration;
and calculating the square sum of the N first differences, and determining the square sum as the difference between the output result corresponding to the set of model parameters and the residual label value of the training data after the last iteration.
4. A method according to claim 2 or 3, wherein the model parameters comprise model hyperparameters and model internal parameters.
5. The method according to claim 1, wherein the calculating the difference between the result tag value of the training data and the output result of the fusion model after the current iteration on the training data, generating the residual tag value of the training data after the current iteration, includes:
inputting the ith training data into the fusion model after the current iteration to obtain an ith output result output by the fusion model after the current iteration; i is an integer from 1 to N, N is the number of the training data;
And calculating the difference between the ith result label value of the training data and the ith output result output by the fusion model after the current iteration, and generating the ith residual error label value of the training data after the current iteration.
6. A model parameter selection apparatus, the apparatus comprising:
the target model generation unit is used for obtaining a target model, and the target model is obtained by fusing a fusion model after the last iteration with a current residual model to be trained;
the target parameter solving unit is used for solving a group of target parameters of the current residual model to be trained by taking the minimum difference between the output result of the current residual model to be trained on training data and the residual label value of the training data after the last iteration as a training target; when the first iteration is performed, the residual label value of the training data after the last iteration is the result label value of the training data; the difference between the output result of the current residual model to be trained on the training data and the residual label value of the training data after the last iteration is the difference between the result label value of the training data and the output result of the target model on the training data; the training data comprises a positive sample medical image and a negative sample medical image; the output result of the current residual model to be trained on training data is that the training data is input into the current residual model to be trained, and the training data output by the current residual model to be trained is a probability value of a positive sample medical image; alternatively, the training data includes positive sample text information and negative sample text information; the output result of the current residual model to be trained on training data is that the training data is input into the current residual model to be trained, and the training data output by the current residual model to be trained is a probability value of positive sample text information;
The fusion model generation unit is used for generating a current optimal residual model of the current residual model to be trained by utilizing the target parameters, and fusing the fusion model after the last iteration with the current optimal residual model to obtain a fusion model after the current iteration;
the residual label generating unit is used for calculating the difference between the result label value of the training data and the output result of the fusion model after the current iteration on the training data and generating the residual label value of the training data after the current iteration; when the training data comprises a positive sample medical image and a negative sample medical image, the output result of the fusion model after the current iteration to the training data is that the training data is input into the fusion model after the current iteration, and the training data output by the fusion model after the current iteration is a probability value of the positive sample medical image; when the training data comprises positive sample text information and negative sample text information, the output result of the fusion model after the current iteration to the training data is that the training data is input into the fusion model after the current iteration, and the training data output by the fusion model after the current iteration is a probability value of the positive sample text information;
An output model determining unit, configured to return to the step of obtaining the target model performed by the target model generation unit until a first preset stop condition is reached, and to take the fusion model after the iteration in which the first preset stop condition is reached as a final output model; when the training data comprises a positive sample medical image and a negative sample medical image, the final output model is a target fusion model; the target fusion model is used for inputting the acquired images to be classified and outputting image classification results; the image to be classified is a medical image, and the image classification result is a specific classification result of the medical image; when the training data comprises positive sample text information and negative sample text information, the final output model is a target fusion model; the target fusion model is used for inputting the acquired information to be identified and outputting information attribute types; the information to be identified is text information to be identified, and the information attribute category is a specific information attribute classification result of the text information to be identified.
7. A model parameter selection apparatus, characterized by comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the model parameter selection method according to any one of claims 1-5 when executing the computer program.
8. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein instructions, which when run on a terminal device, cause the terminal device to perform the model parameter selection method according to any of claims 1-5.
CN201911415591.8A 2019-12-31 2019-12-31 Model parameter selection, image classification and information identification methods, devices and equipment Active CN111178443B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911415591.8A CN111178443B (en) 2019-12-31 2019-12-31 Model parameter selection, image classification and information identification methods, devices and equipment

Publications (2)

Publication Number Publication Date
CN111178443A CN111178443A (en) 2020-05-19
CN111178443B true CN111178443B (en) 2023-10-31

Family

ID=70652391


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897737A (en) * 2017-01-24 2017-06-27 北京理工大学 A kind of high-spectrum remote sensing terrain classification method based on the learning machine that transfinites
CN107209746A (en) * 2015-11-16 2017-09-26 华为技术有限公司 model parameter fusion method and device
CN109376615A (en) * 2018-09-29 2019-02-22 苏州科达科技股份有限公司 For promoting the method, apparatus and storage medium of deep learning neural network forecast performance
CN109886349A (en) * 2019-02-28 2019-06-14 成都新希望金融信息有限公司 A kind of user classification method based on multi-model fusion
CN110135386A (en) * 2019-05-24 2019-08-16 长沙学院 A kind of human motion recognition method and system based on deep learning
CN110263697A (en) * 2019-06-17 2019-09-20 哈尔滨工业大学(深圳) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN110322423A (en) * 2019-04-29 2019-10-11 天津大学 A kind of multi-modality images object detection method based on image co-registration

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9147129B2 (en) * 2011-11-18 2015-09-29 Honeywell International Inc. Score fusion and training data recycling for video classification
EP3745284A1 (en) * 2015-11-16 2020-12-02 Huawei Technologies Co., Ltd. Model parameter fusion method and apparatus
WO2018009887A1 (en) * 2016-07-08 2018-01-11 University Of Hawaii Joint analysis of multiple high-dimensional data using sparse matrix approximations of rank-1
CN106548210B (en) * 2016-10-31 2021-02-05 腾讯科技(深圳)有限公司 Credit user classification method and device based on machine learning model training


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Movie Recommendation System Based on an Improved Fusion Model; Yu Chenyan; China Master's Theses Full-text Database, Information Science and Technology (Monthly), No. 7; I138-2058 *
Voltage Sag Source Identification Method Based on Deep Learning Model Fusion; Zheng Zhicong et al.; Proceedings of the CSEE, No. 1; pp. 97-107 *
Multi-kernel Information Fusion Model and Its Applications; Yang Bo et al.; Chinese Journal of Scientific Instrument, Vol. 31, No. 2; pp. 248-252 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant