CN111325284A - Self-adaptive learning method and device based on multi-target dynamic distribution - Google Patents

Self-adaptive learning method and device based on multi-target dynamic distribution

Info

Publication number
CN111325284A
Authority
CN
China
Prior art keywords
population
image data
mapping space
optimal
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010158303.1A
Other languages
Chinese (zh)
Inventor
何发智 (He Fazhi)
李浩然 (Li Haoran)
罗锦坤 (Luo Jinkun)
梁亚倩 (Liang Yaqian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN202010158303.1A
Publication of CN111325284A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/217: Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155: Bayesian classification
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/12: Computing arrangements based on biological models using genetic models
    • G06N3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Abstract

The invention discloses an image classification method based on multi-target dynamic distribution self-adaptation. First image data and second image data are acquired, and a target mapping space is then obtained by searching with a multi-objective optimization algorithm, which comprises: initializing a population and encoding mapping space conditions into the population; searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution. The first image data is mapped into the target mapping space to obtain third image data; an image classifier is trained on the third image data; and the trained image classifier predicts labels for the second image data and labels the second image data. The invention solves the technical problem of unbalanced distribution in the existing data transfer learning process.

Description

Self-adaptive learning method and device based on multi-target dynamic distribution
Technical Field
The invention relates to the field of transfer learning, in particular to a self-adaptive learning method and device based on multi-target dynamic distribution.
Background
In the field of image recognition applications, labeled data is always scarce because manual labeling is expensive. Meanwhile, a huge amount of unlabeled data of the same categories exists on the network, and how to use this unlabeled data to improve the accuracy of an image classifier is a challenge.
In the prior art, data transfer methods such as neighborhood-adaptive data transfer are adopted, but the transfer learning process in the prior art suffers from the problem of unbalanced data distribution.
Disclosure of Invention
The invention provides an image classification method and device based on multi-target dynamic distribution self-adaptation, which are used to solve, or at least partially solve, the technical problem of unbalanced data distribution in existing methods.
In order to solve the above technical problem, a first aspect of the present invention provides a method for adaptive learning based on multi-target dynamic distribution, including:
s1: acquiring first image data and second image data, wherein the first image data is provided with a label, and the second image data is not provided with the label;
s2: obtaining a target mapping space by adopting a multi-objective optimization algorithm, comprising the following steps: initializing a population, encoding mapping space conditions into the population, and searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution;
s3: mapping the first image data to a target mapping space to obtain third image data;
s4: training an image classifier based on the third image data;
s5: and predicting the label of the second image data by using the trained image classifier, and labeling the second image data.
In one embodiment, initializing the population in S2 and encoding the mapping space into the population includes:
s2.1.1: randomly setting corresponding mapping space conditions for a plurality of populations;
s2.1.2: taking the parameters of the mapping space as decision variables, calculating the corresponding conditional distribution probability and marginal distribution probability according to a fitness calculation formula, and taking the conditional distribution probability and the marginal distribution probability as objective functions, wherein the mapping space conditions are determined by the parameters of the mapping space;
s2.1.3: and simultaneously packaging the decision variables and the objective function into population individuals, wherein a plurality of individuals form an initial population.
In one embodiment, searching for an optimal population in S2 using a multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, comprises:
s2.2.1: in the current generation, applying the crossover operator and mutation operator of a genetic algorithm to the current population to obtain an offspring population;
s2.2.2: in the current generation, applying a reverse (opposition-based) learning algorithm to the offspring population, merging the populations before and after the opposition, and performing non-dominated sorting to obtain a better reverse sub-population;
s2.2.3: in the current generation, calculating the explosion radius and the number of sparks of the reverse sub-population with a fireworks algorithm, taking the reverse sub-population as the fireworks population and generating a spark population in its surrounding range based on the explosion radius and the number of sparks, and performing non-dominated sorting on the fireworks population and the spark population to generate the next generation population;
s2.2.4: taking the next generation population as the current generation population and repeatedly executing S2.2.1-S2.2.3 to perform an iterative evolution process;
s2.2.5: when the iterative evolution process meets the termination condition, terminating the iterative evolution and outputting a Pareto solution set.
In one embodiment, screening an optimal solution out of the Pareto solution set in S2 and obtaining the target mapping space from the optimal solution comprises:
s2.3.1: in the solution space, connecting the theoretical optimal point and the global optimal point to form a reference vector line, and taking the perpendicular distance from each point of the Pareto solution set to the reference vector line as the distance value of that point in the solution set;
s2.3.2: selecting a point with the minimum distance value as an optimal solution according to the calculated distance value;
s2.3.3: and setting parameters of the mapping space according to the optimal solution to obtain a target mapping space.
In one embodiment, after S5, the method further comprises:
calculating the accuracy of the prediction;
and updating the population individuals in the multi-objective optimization algorithm according to the calculated accuracy.
Based on the same inventive concept, the second aspect of the present invention provides an adaptive learning apparatus based on multi-target dynamic distribution, comprising:
the data acquisition module is used for acquiring first image data and second image data, wherein the first image data is provided with a label, and the second image data is not provided with the label;
the target mapping space searching module is used for obtaining a target mapping space by searching with a multi-objective optimization algorithm, which comprises: initializing a population, encoding mapping space conditions into the population, and searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution;
the mapping module is used for mapping the first image data to a target mapping space to obtain third image data;
the classifier training module is used for training an image classifier based on the third image data;
and the label prediction module is used for predicting the label of the second image data by utilizing the trained image classifier and labeling the second image data.
Based on the same inventive concept, a third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the method of the first aspect.
Based on the same inventive concept, a fourth aspect of the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the program.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
the invention provides a multi-target dynamic distribution-based adaptive learning method, which comprises the steps of firstly, acquiring first image data with a label and second image data without the label; then, a target mapping space is obtained by adopting a multi-objective optimization algorithm, and then the first image data is mapped to the target mapping space to obtain third image data; training an image classifier based on the third image data; and finally, predicting the label of the second image data by using the trained image classifier, and labeling the second image data.
In the prior art, labeled data and unlabeled data in the existing mapping space are independently and differently distributed, so the problem of unbalanced distribution exists in the transfer learning process. In the present invention, the first image data is mapped into the target mapping space found by the multi-objective optimization algorithm to obtain the third image data, whose labels are the same as those of the first image data but whose mapping space differs from that of the first image data; an image classifier is then trained on the third image data; finally, the trained image classifier predicts labels for the second image data and labels it. In this way the labeled first image data and the unlabeled second image data are mapped into a new mapping space (the target mapping space) in which their distributions are the same or similar, which solves the technical problem of unbalanced data distribution in the transfer learning process.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation process of a multi-objective dynamic distribution-based adaptive learning method according to the present invention;
FIG. 2 is a schematic diagram of a flow chart of an implementation of a multi-objective optimization algorithm employed in an embodiment of the present invention;
FIG. 3 is a block diagram of an adaptive learning apparatus based on multi-objective dynamic distribution according to an embodiment of the present invention;
FIG. 4 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention;
fig. 5 is a block diagram of a computer device in an embodiment of the present invention.
Detailed Description
Through extensive research and practice, the inventor of the present application has found that the existing methods have the following problem in the data transfer process: both the marginal distribution objective and the conditional distribution objective must be considered.
Therefore, the invention provides a multi-target dynamic distribution adaptive learning method and device for solving the problems of scarce labeled data in machine learning and unbalanced distribution in the data transfer process. By mapping the unlabeled data into a hidden space, the features of the labeled data and the unlabeled data are combined in that hidden space, and accurate labels are predicted for the unlabeled data to realize data transfer.
In the classifier construction method based on the cluster-classification joint mechanism, a Bayesian classifier is constructed whose classification core is a relation matrix found by an optimal search, and a multi-objective algorithm is adopted for that search. For transfer learning, by contrast, the core is the change of the mapping space, and the multi-objective search algorithm instead searches for the optimal mapping space. Whereas that Bayesian classifier is constructed by searching for an optimal relation matrix with a multi-objective algorithm, here the optimal mapping space is searched for with the multi-objective algorithm, so that the labeled data and the unlabeled data follow the same distribution, which solves the problem of unbalanced distribution in the transfer learning process.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The embodiment provides a method for adaptive learning based on multi-target dynamic distribution, please refer to fig. 1, the method includes:
s1: first image data and second image data are acquired, wherein the first image data is labeled and the second image data is unlabeled.
Specifically, the first image data and the second image data are data in the existing mapping space; both are UCI image data, with the first image data used as the source sample and the second image data used as the target sample.
S2: obtaining a target mapping space by searching with a multi-objective optimization algorithm, comprising the following steps: initializing a population, encoding mapping space conditions into the population, and searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution.
Specifically, the mapping space is equivalent to a data space or hidden space; after samples are mapped into it, their distribution changes.
Feature extraction is performed on the UCI image data, and the extracted features are expressed as a two-dimensional (or multi-dimensional) array, which expresses the distribution of the samples. The mapping space can be regarded as a coordinate system, and changing it changes the values of the arrays to generate new values.
The Pareto solution set refers to the solutions that are non-dominated with respect to the two objectives, each being equivalent to a non-dominated mapping space.
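As an illustration of this idea, the following minimal sketch maps an extracted feature array into a new coordinate system. It assumes the mapping can be represented by a projection matrix W; the patent does not fix a concrete form for the mapping, so W and the array shapes here are hypothetical.

```python
import numpy as np

def map_to_space(features, W):
    """Map an (n_samples, n_features) feature array into a new coordinate system.
    A hypothetical linear mapping; each candidate W yields a different sample distribution."""
    return features @ W

rng = np.random.default_rng(0)
source = rng.normal(size=(100, 8))   # features extracted from labeled images (illustrative)
W = rng.normal(size=(8, 8))          # one candidate parameterization of a mapping space
mapped = map_to_space(source, W)     # the samples now follow a new distribution
```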
S3: and mapping the first image data to a target mapping space to obtain third image data.
Specifically, the first image data is mapped into the target mapping space without changing its labels. The first image data and the third image data contain the same original data but are formed through different mappings, and one mapping converts between them.
S4: an image classifier is trained based on the third image data.
Specifically, an image classifier is trained using third image data obtained from the target mapping space as training data.
S5: and predicting the label of the second image data by using the trained image classifier, and labeling the second image data.
Specifically, in this step the trained image classifier is used to predict labels for the unlabeled image data, so that the second image data is labeled. Because the first image data and the second image data are mapped into the target mapping space, where their distributions are the same or similar, transfer learning is realized and the problem of unbalanced distribution is solved.
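The following minimal sketch summarizes steps S3-S5 under two illustrative assumptions that are not stated in the patent: the target mapping space is a linear projection `W_opt` already found by the multi-objective search, and scikit-learn's `GaussianNB` stands in for the Bayesian image classifier.

```python
from sklearn.naive_bayes import GaussianNB

def classify_with_mapping(X_src, y_src, X_tgt, W_opt):
    """Sketch of S3-S5: map source features, train a classifier, label the target data.
    X_src, X_tgt are NumPy feature arrays; W_opt is an assumed projection matrix."""
    X_src_mapped = X_src @ W_opt              # S3: first image data -> third image data
    X_tgt_mapped = X_tgt @ W_opt              # second image data mapped into the same space
    clf = GaussianNB().fit(X_src_mapped, y_src)   # S4: train the image classifier
    return clf.predict(X_tgt_mapped)          # S5: predicted labels for the second image data
```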
In one embodiment, initializing the population in S2 and encoding the mapping space into the population includes:
s2.1.1: randomly setting corresponding mapping space conditions for a plurality of populations;
s2.1.2: taking the parameters of the mapping space as decision variables, calculating the corresponding conditional distribution probability and marginal distribution probability according to a fitness calculation formula, and taking the conditional distribution probability and the marginal distribution probability as objective functions, wherein the mapping space conditions are determined by the parameters of the mapping space;
s2.1.3: and simultaneously packaging the decision variables and the objective function into population individuals, wherein a plurality of individuals form an initial population.
Specifically, the mapping space (hidden space) is determined by some of its parameters; a new distribution is obtained by mapping the samples into the hidden space, and the marginal and conditional distribution probabilities are calculated from that new distribution. The mapping of different mapping spaces can be regarded as obtaining data features through a series of complex operations, which is more complex than a common feature extraction method.
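A hedged sketch of how an individual could be encoded follows. It assumes the mapping space is parameterized by a matrix `W` (the decision variables) and uses an MMD-style discrepancy as a surrogate for the marginal and conditional distribution probabilities; the patent's actual fitness calculation formula is not reproduced here, so these choices are assumptions.

```python
import numpy as np

def mmd(A, B):
    """Squared maximum mean discrepancy with a linear kernel (assumed surrogate objective)."""
    return float(np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2))

def make_individual(W, X_src, y_src, X_tgt, y_tgt_pseudo):
    """Package decision variables (W) and the two objectives into one population individual.
    All inputs are NumPy arrays; y_tgt_pseudo are pseudo-labels of the target samples."""
    Zs, Zt = X_src @ W, X_tgt @ W
    marginal = mmd(Zs, Zt)                                   # marginal-distribution objective
    per_class = [mmd(Zs[y_src == c], Zt[y_tgt_pseudo == c])
                 for c in np.unique(y_src) if np.any(y_tgt_pseudo == c)]
    conditional = float(np.mean(per_class)) if per_class else float("inf")
    return {"W": W, "objectives": (marginal, conditional)}   # decision variables + objectives
```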
In one embodiment, searching for an optimal population in S2 using a multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, comprises:
s2.2.1: in the current generation, applying the crossover operator and mutation operator of a genetic algorithm to the current population to obtain an offspring population;
s2.2.2: in the current generation, applying a reverse (opposition-based) learning algorithm to the offspring population, merging the populations before and after the opposition, and performing non-dominated sorting to obtain a better reverse sub-population;
s2.2.3: in the current generation, calculating the explosion radius and the number of sparks of the reverse sub-population with a fireworks algorithm, taking the reverse sub-population as the fireworks population and generating a spark population in its surrounding range based on the explosion radius and the number of sparks, and performing non-dominated sorting on the fireworks population and the spark population to generate the next generation population;
s2.2.4: taking the next generation population as the current generation population and repeatedly executing S2.2.1-S2.2.3 to perform an iterative evolution process;
s2.2.5: when the iterative evolution process meets the termination condition, terminating the iterative evolution and outputting a Pareto solution set.
Specifically, in the evolutionary algorithm, when a new generation population is generated, a solution that is not preferred by fitness is selected with a certain small probability by the reverse (opposition-based) learning algorithm in order to ensure the diversity of the new generation population. This yields the better reverse sub-population obtained in step S2.2.2.
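A minimal sketch of the opposition-based ("reverse") step, assuming each decision variable lies in a known interval [lower, upper] (an assumption; the patent does not state the variable bounds), is:

```python
import numpy as np

def opposite(individual, lower, upper):
    """Opposite individual: element-wise reflection of the decision vector within [lower, upper]."""
    return lower + upper - individual

x = np.array([0.2, 0.7, -1.0])
print(opposite(x, lower=np.array([-1.0, 0.0, -2.0]), upper=np.array([1.0, 1.0, 0.0])))
# -> [-0.2  0.3 -1. ]
```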
Non-dominated sorting is a sorting algorithm that sorts the chromosomes into layers, referred to as the first-level non-dominated layer, the second-level non-dominated layer, and so on, where the first-level non-dominated layer lies on the Pareto front.
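A simple sketch of non-dominated sorting for two minimization objectives follows; it is one common way to implement the layering described above, not necessarily the exact procedure used in the patent.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Split solutions (given as objective tuples) into layers; layer 0 is the Pareto front."""
    remaining = list(range(len(objectives)))
    layers = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        layers.append(front)
        remaining = [i for i in remaining if i not in front]
    return layers

print(non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))
# -> [[0, 1, 2], [3]]
```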
The Fireworks Algorithm (FWA) is a swarm intelligence algorithm inspired by fireworks explosions in the night sky. The algorithm iterates, sequentially applying an explosion operator, a mutation operator, a mapping rule and a selection strategy until a termination condition is reached, i.e., the precision requirement of the problem is satisfied or the maximum number of function evaluations is reached.
The fireworks algorithm is implemented as follows (a sketch of the spark-number and explosion-amplitude rules is given after this list):
1) Fireworks are randomly generated in a particular solution space, each firework representing one solution of the solution space.
2) The fitness value of each firework is calculated with the fitness function, and sparks are generated according to the fitness value. The number of sparks is calculated based on the idea of immune concentration in immunology, i.e., fireworks with better fitness values generate more sparks.
3) According to the actual firework property and the actual situation of the search problem, sparks are generated within the radiation space of each firework (the explosion amplitude of a firework is determined by its fitness value on the function: the larger the fitness value, the larger the explosion amplitude, and vice versa). Each spark represents one solution in the solution space. To ensure population diversity, appropriate variation, such as Gaussian mutation, is also applied to the fireworks.
4) The optimal solution of the population is calculated and checked against the requirements; the search stops if the requirements are met, otherwise iteration continues. The initial population of the next iteration consists of the best solution obtained in this loop together with the other selected solutions.
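The following sketch shows the classic single-objective fireworks-algorithm rules for the number of sparks and the explosion amplitude. The patent applies an FWA-style step to the reverse sub-population; the spark budget `m`, maximum amplitude `A_hat`, and the minimization convention below are assumptions for illustration.

```python
import numpy as np

def spark_counts_and_amplitudes(fitness, m=50, A_hat=5.0, eps=1e-12):
    """Classic FWA rules: better (smaller) fitness -> more sparks and smaller explosion radius."""
    fitness = np.asarray(fitness, dtype=float)   # assumed convention: smaller = better
    worst, best = fitness.max(), fitness.min()
    sparks = m * (worst - fitness + eps) / (np.sum(worst - fitness) + eps)
    amplitude = A_hat * (fitness - best + eps) / (np.sum(fitness - best) + eps)
    return np.rint(sparks).astype(int), amplitude

print(spark_counts_and_amplitudes([0.1, 0.4, 0.9]))
```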
The termination condition in step S2.2.5 includes: the number of iterations reaches the maximum number of iterations, and the change of the whole population is smaller than the set minimum population change.
Referring to fig. 2, which shows the implementation process of the multi-objective optimization algorithm: after population initialization, an offspring population is obtained from the current generation (parent population) through the crossover and mutation operators, a candidate population, i.e., the better reverse sub-population, is then obtained through the evaluation operation, and the next generation population is obtained through the selection operation.
In one embodiment, screening an optimal solution out of the Pareto solution set in S2 and obtaining the target mapping space from the optimal solution comprises:
s2.3.1: in the solution space, connecting the theoretical optimal point and the global optimal point to form a reference vector line, and taking the perpendicular distance from each point of the Pareto solution set to the reference vector line as the distance value of that point in the solution set;
s2.3.2: selecting a point with the minimum distance value as an optimal solution according to the calculated distance value;
s2.3.3: and setting parameters of the mapping space according to the optimal solution to obtain a target mapping space.
Specifically, the point with the minimum distance value is selected as the optimal solution, i.e., the solution with the minimum distance value is chosen, and the parameters of the mapping space are set accordingly to obtain the target mapping space.
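A sketch of this selection rule follows, assuming the theoretical optimal point and the global optimal point are given as coordinates in objective space (the example values are purely illustrative).

```python
import numpy as np

def pick_optimal(pareto_points, theoretical_opt, global_opt):
    """Return the index of the Pareto solution with the smallest perpendicular distance
    to the reference vector line through the theoretical and global optimal points."""
    P = np.asarray(pareto_points, dtype=float)
    a, b = np.asarray(theoretical_opt, float), np.asarray(global_opt, float)
    d = (b - a) / np.linalg.norm(b - a)        # unit reference vector
    rel = P - a
    proj = np.outer(rel @ d, d)                # projection of each point onto the reference line
    dist = np.linalg.norm(rel - proj, axis=1)  # perpendicular distances
    return int(np.argmin(dist)), dist

idx, _ = pick_optimal([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)], (0.0, 0.0), (2.5, 2.5))
print(idx)   # index of the Pareto solution used to set the mapping-space parameters
```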
In one embodiment, after S5, the method further comprises:
calculating the accuracy of the prediction;
and updating the population individuals in the multi-objective optimization algorithm according to the calculated accuracy.
In particular, the accuracy of the prediction may be calculated by existing methods, for example as the proportion of correctly classified samples to the total number of samples.
The lower right part of fig. 2 shows the update process of population individuals: 1. an initial population is generated, containing different data spaces in which the corresponding target samples take different forms; 2. with the first image data as the source sample and the second image data as the target sample, a new mapping space (i.e., the target mapping space) is obtained by the multi-objective optimization algorithm, and a new source sample is generated from the first image data in the new mapping space; 3. a classifier is trained with the new source sample; 4.-5. the target sample is predicted to obtain the accuracy, and the target sample is updated according to the accuracy, the target samples corresponding to different data spaces being differently distributed; 6. the updated target sample enters the next iteration.
The multi-objective optimization algorithm is adjusted according to the predicted accuracy. For example, a population has 10 solutions, each solution corresponding to one individual and containing one mapping space; the 10 parent individuals generate 10 offspring individuals, and the update process sorts the 20 solutions (10 parents plus 10 offspring) by accuracy and keeps 10 of them for the next cycle. In each iteration the multi-objective optimization algorithm changes the values of the target data, and the new target data represent a new mapping space; the values of the target data are the values (feature values) of the multi-dimensional array, from which the conditional and marginal distribution probabilities can be calculated.
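A minimal sketch of this accuracy-driven update, assuming individuals are plain dictionaries carrying an "accuracy" field (a representation chosen here for illustration), is:

```python
def accuracy(y_true, y_pred):
    """Proportion of correctly classified samples to the total number of samples."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def update_population(parents, offspring, size=10):
    """Merge the parent and offspring individuals and keep the `size` most accurate
    ones for the next cycle, as in the 10-parent / 10-offspring example above."""
    merged = parents + offspring
    merged.sort(key=lambda ind: ind["accuracy"], reverse=True)
    return merged[:size]
```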
To verify the effectiveness of the proposed method, the multi-target dynamic distribution adaptive algorithm provided by the invention was compared with 7 other state-of-the-art transfer algorithms on 4 transfer tasks in terms of prediction accuracy; the experimental results show that the proposed method achieves higher accuracy than the existing transfer algorithms.
Example two
Based on the same inventive concept, the present embodiment provides a device for adaptive learning based on multi-target dynamic distribution, please refer to fig. 3, the device includes:
a data acquiring module 201, configured to acquire first image data and second image data, where the first image data has a label and the second image data has no label;
the target mapping space searching module 202 is configured to obtain a target mapping space by searching with a multi-objective optimization algorithm, which comprises: initializing a population, encoding mapping space conditions into the population, and searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution;
a mapping module 203, configured to map the first image data to a target mapping space, so as to obtain third image data;
a classifier training module 204, configured to train an image classifier based on the third image data;
and the label prediction module 205 is configured to predict a label of the second image data by using the trained image classifier, and label the second image data.
Since the apparatus described in the second embodiment of the present invention is an apparatus used for implementing the multi-objective dynamic distribution-based adaptive learning method in the first embodiment of the present invention, a person skilled in the art can understand the specific structure and deformation of the apparatus based on the method described in the first embodiment of the present invention, and thus details thereof are not described herein. All the devices adopted in the method of the first embodiment of the present invention belong to the protection scope of the present invention.
EXAMPLE III
Referring to fig. 4, based on the same inventive concept, the present application further provides a computer-readable storage medium 300, on which a computer program 311 is stored, which when executed implements the method according to the first embodiment.
Since the computer-readable storage medium introduced in the third embodiment of the present invention is a computer-readable storage medium used for implementing the multi-objective dynamic distribution-based adaptive learning method in the first embodiment of the present invention, based on the method introduced in the first embodiment of the present invention, those skilled in the art can understand the specific structure and deformation of the computer-readable storage medium, and therefore, no further description is given here. Any computer readable storage medium used in the method of the first embodiment of the present invention is within the scope of the present invention.
Example four
Based on the same inventive concept, the present application further provides a computer device; referring to fig. 5, it includes a memory 401, a processor 402, and a computer program 403 stored in the memory and executable on the processor, and when the processor 402 executes the program, the method in the first embodiment is implemented.
Since the computer device introduced in the fourth embodiment of the present invention is a computer device used for implementing the multi-target dynamic distribution-based adaptive learning method in the first embodiment of the present invention, a person skilled in the art can understand the specific structure and deformation of the computer device based on the method introduced in the first embodiment of the present invention, and thus, no further description is given here. All the computer devices used in the method in the first embodiment of the present invention are within the scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (8)

1. A multi-target dynamic distribution-based adaptive learning method is characterized by comprising the following steps:
s1: acquiring first image data and second image data, wherein the first image data is provided with a label, and the second image data is not provided with the label;
s2: obtaining a target mapping space by searching with a multi-objective optimization algorithm, comprising the following steps: initializing a population, encoding mapping space conditions into the population, and searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution;
s3: mapping the first image data to a target mapping space to obtain third image data;
s4: training an image classifier based on the third image data;
s5: and predicting the label of the second image data by using the trained image classifier, and labeling the second image data.
2. The method of claim 1, wherein initializing the population in S2 and encoding the mapping space into the population comprises:
s2.1.1: randomly setting corresponding mapping space conditions for a plurality of populations;
s2.1.2: taking the parameters of the mapping space as decision variables, calculating the corresponding conditional distribution probability and marginal distribution probability according to a fitness calculation formula, and taking the conditional distribution probability and the marginal distribution probability as objective functions, wherein the mapping space conditions are determined by the parameters of the mapping space;
s2.1.3: and simultaneously packaging the decision variables and the objective function into population individuals, wherein a plurality of individuals form an initial population.
3. The method of claim 1, wherein searching for the optimal population using the multi-objective optimization algorithm in S2, the optimal population being a Pareto solution set that satisfies two objectives, comprises:
s2.2.1: in the current generation, applying the crossover operator and mutation operator of a genetic algorithm to the current population to obtain an offspring population;
s2.2.2: in the current generation, applying a reverse (opposition-based) learning algorithm to the offspring population, merging the populations before and after the opposition, and performing non-dominated sorting to obtain a better reverse sub-population;
s2.2.3: in the current generation, calculating the explosion radius and the number of sparks of the reverse sub-population with a fireworks algorithm, taking the reverse sub-population as the fireworks population and generating a spark population in its surrounding range based on the explosion radius and the number of sparks, and performing non-dominated sorting on the fireworks population and the spark population to generate the next generation population;
s2.2.4: taking the next generation population as the current generation population and repeatedly executing S2.2.1-S2.2.3 to perform an iterative evolution process;
s2.2.5: when the iterative evolution process meets the termination condition, terminating the iterative evolution and outputting a Pareto solution set.
4. The method of claim 3, wherein screening out the optimal solution from the Pareto solution set in S2 and obtaining the target mapping space according to the optimal solution comprises:
s2.3.1: in the solution space, connecting the theoretical optimal point and the global optimal point to form a reference vector line, and taking the perpendicular distance from each point of the Pareto solution set to the reference vector line as the distance value of that point in the solution set;
s2.3.2: selecting a point with the minimum distance value as an optimal solution according to the calculated distance value;
s2.3.3: and setting parameters of the mapping space according to the optimal solution to obtain a target mapping space.
5. The method of claim 1, wherein after S5, the method further comprises:
calculating the accuracy of the prediction;
and updating the population individuals in the multi-objective optimization algorithm according to the calculated accuracy.
6. A self-adaptive learning device based on multi-target dynamic distribution is characterized by comprising:
the data acquisition module is used for acquiring first image data and second image data, wherein the first image data is provided with a label, and the second image data is not provided with the label;
the target mapping space searching module is used for obtaining a target mapping space by searching with a multi-objective optimization algorithm, which comprises: initializing a population, encoding mapping space conditions into the population, and searching for an optimal population with the multi-objective optimization algorithm, wherein the optimal population is a Pareto solution set that satisfies two objectives, the two objectives being a conditional distribution probability and a marginal distribution probability; and screening an optimal solution out of the Pareto solution set and obtaining the target mapping space from the optimal solution;
the mapping module is used for mapping the first image data to a target mapping space to obtain third image data;
the classifier training module is used for training an image classifier based on the third image data;
and the label prediction module is used for predicting the label of the second image data by utilizing the trained image classifier and labeling the second image data.
7. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed, implements the method of any one of claims 1 to 5.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the program.
CN202010158303.1A 2020-03-09 2020-03-09 Self-adaptive learning method and device based on multi-target dynamic distribution Pending CN111325284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158303.1A CN111325284A (en) 2020-03-09 2020-03-09 Self-adaptive learning method and device based on multi-target dynamic distribution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158303.1A CN111325284A (en) 2020-03-09 2020-03-09 Self-adaptive learning method and device based on multi-target dynamic distribution

Publications (1)

Publication Number Publication Date
CN111325284A true CN111325284A (en) 2020-06-23

Family

ID=71173183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158303.1A Pending CN111325284A (en) 2020-03-09 2020-03-09 Self-adaptive learning method and device based on multi-target dynamic distribution

Country Status (1)

Country Link
CN (1) CN111325284A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836794A (en) * 2021-01-26 2021-05-25 深圳大学 Method, device and equipment for determining image neural architecture and storage medium
CN112836796A (en) * 2021-01-27 2021-05-25 北京理工大学 Method for super-parameter collaborative optimization of system resources and model in deep learning training
CN113252586A (en) * 2021-04-28 2021-08-13 深圳大学 Hyperspectral image reconstruction method, terminal device and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180260714A1 (en) * 2017-03-10 2018-09-13 Yun Li Global optimization, search and machine learning method based on the lamarckian principle of inheritance of acquired characteristics
CN110097088A (en) * 2019-04-08 2019-08-06 燕山大学 A kind of dynamic multi-objective evolvement method based on transfer learning Yu particular point strategy
CN110210545A (en) * 2019-05-27 2019-09-06 河海大学 Infrared remote sensing water body classifier construction method based on transfer learning
CN110490234A (en) * 2019-07-19 2019-11-22 武汉大学 The construction method and classification method of classifier based on Cluster Classification associative mechanism

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180260714A1 (en) * 2017-03-10 2018-09-13 Yun Li Global optimization, search and machine learning method based on the lamarckian principle of inheritance of acquired characteristics
CN110097088A (en) * 2019-04-08 2019-08-06 燕山大学 A kind of dynamic multi-objective evolvement method based on transfer learning Yu particular point strategy
CN110210545A (en) * 2019-05-27 2019-09-06 河海大学 Infrared remote sensing water body classifier construction method based on transfer learning
CN110490234A (en) * 2019-07-19 2019-11-22 武汉大学 The construction method and classification method of classifier based on Cluster Classification associative mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
茹蓓 et al.: "Robust clustering algorithm for unlabeled data based on improved particle swarm optimization", 《计算机应用研究》 (Application Research of Computers) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836794A (en) * 2021-01-26 2021-05-25 深圳大学 Method, device and equipment for determining image neural architecture and storage medium
CN112836794B (en) * 2021-01-26 2023-09-29 深圳大学 Method, device, equipment and storage medium for determining image neural architecture
CN112836796A (en) * 2021-01-27 2021-05-25 北京理工大学 Method for super-parameter collaborative optimization of system resources and model in deep learning training
CN112836796B (en) * 2021-01-27 2022-07-01 北京理工大学 Method for super-parameter collaborative optimization of system resources and model in deep learning training
CN113252586A (en) * 2021-04-28 2021-08-13 深圳大学 Hyperspectral image reconstruction method, terminal device and computer-readable storage medium
CN113252586B (en) * 2021-04-28 2023-04-28 深圳大学 Hyperspectral image reconstruction method, terminal equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN111325284A (en) Self-adaptive learning method and device based on multi-target dynamic distribution
CN110390347B (en) Condition-guided countermeasure generation test method and system for deep neural network
Tavakoli Modeling genome data using bidirectional LSTM
US20230134531A1 (en) Method and system for rapid retrieval of target images based on artificial intelligence
CN111325264A (en) Multi-label data classification method based on entropy
CN114283888A (en) Differential expression gene prediction system based on hierarchical self-attention mechanism
Chouaib et al. Feature selection combining genetic algorithm and adaboost classifiers
CN112613391B (en) Hyperspectral image waveband selection method based on reverse learning binary rice breeding algorithm
CN116805157B (en) Unmanned cluster autonomous dynamic evaluation method and device
CN112508177A (en) Network structure searching method and device, electronic equipment and storage medium
CN115296898B (en) Multi-target evolution characteristic selection method for constructing network intrusion detection system
US20220336057A1 (en) Efficient voxelization for deep learning
US11515010B2 (en) Deep convolutional neural networks to predict variant pathogenicity using three-dimensional (3D) protein structures
Babatunde et al. Comparative analysis of genetic algorithm and particle swam optimization: An application in precision agriculture
WO2022221587A1 (en) Artificial intelligence-based analysis of protein three-dimensional (3d) structures
AU2022259667A1 (en) Efficient voxelization for deep learning
Ramesh Deep Learning for Taxonomy Prediction
US20230047347A1 (en) Deep neural network-based variant pathogenicity prediction
CN113469244B (en) Volkswagen app classification system
KR102111396B1 (en) Apparatus for training neural networks and operating method thereof
Goel et al. Implementing RNN with Non-Randomized GA for the Storage of Static Image Patterns
Yulita et al. Combining inception-V3 and support vector machine for garbage classification
CN114329006A (en) Image retrieval method, device, equipment and computer readable storage medium
CN115937602A (en) Warriors and horses fragment classification method based on intuitive fuzzy niche technology whale optimization
Czejdo : Classifying and Generating Repetitive Elements in the Genome Using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200623