CN116434273A - Multi-label prediction method and system based on single positive label - Google Patents

Multi-label prediction method and system based on single positive label

Info

Publication number
CN116434273A
Authority
CN
China
Prior art keywords
model
sample
label
mark
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310270107.7A
Other languages
Chinese (zh)
Inventor
徐宁
吴永迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202310270107.7A priority Critical patent/CN116434273A/en
Publication of CN116434273A publication Critical patent/CN116434273A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The invention discloses a multi-label prediction method and system based on a single positive label. First, the sample data are parsed and preprocessed to obtain sample feature data and a sample label data set. The preprocessed data are divided into a training set and a validation set; for the training data, a prediction model and a label-residual model are constructed based on the incompleteness of the labels, a sample feature-space correlation matrix and a label-space correlation matrix are built from the correlations of the feature space and the label space, and a training model is constructed using the Cookmar-Johnson divergence and constraint conditions. The model parameters are then optimized by alternating iterations to obtain the optimal multi-label prediction model. Finally, the description degree of each test set sample with respect to each category is calculated, and multi-label prediction is performed by threshold division.

Description

Multi-label prediction method and system based on single positive label
Technical Field
The invention belongs to the technical field of classification and detection in machine learning, and mainly relates to a multi-label prediction method and system based on a single positive label.
Background
With the rapid development of machine learning in recent years, multi-label prediction has become an important task: it learns a mapping from an instance to multiple labels and produces a prediction model for inference. Compared with single-label learning, it is clear that an object usually carries more than one label; however, annotating a sample with a complete set of labels costs considerably more than annotating it with only a single label. A method that learns multi-label predictions from samples annotated with a single label therefore avoids this expensive and tedious annotation work, so multi-label learning based on a single label clearly has a large application market.
Furthermore, conventional multi-label learning is an important learning paradigm. It mainly uses auxiliary information, i.e., the correlations between samples, to build a sample correlation model and then propagates labels through these correlations; the main approaches are manifold-based learning and deep learning. However, conventional multi-label learning methods usually rely on a large number of training samples and a large amount of label information. In the single-positive-label setting, training often performs poorly because sufficient sample labels are missing. We therefore build a distribution model from the correlations between samples, project these correlations into the label space, perform label diffusion in the label space, and use the resulting label distribution for multi-label learning. In summary, multi-label learning usually requires abundant label information, while annotating samples is expensive and tedious work. In view of this, a method that addresses the lack of label information in multi-label prediction is urgently needed, so as to save time and economic cost.
Disclosure of Invention
Aiming at the problem of insufficient label information in the prior art, the invention provides a multi-label prediction method and system based on a single positive label. First, the sample data are parsed and preprocessed to obtain sample feature data and a sample label data set. The preprocessed data are divided into a training set and a validation set; for the training data, a prediction model and a label-residual model are constructed based on the incompleteness of the labels, a sample feature-space correlation matrix and a label-space correlation matrix are built from the correlations of the feature space and the label space, and a training model is constructed using the Cookmar-Johnson divergence and constraint conditions. The model parameters are then optimized by alternating iterations to obtain the optimal multi-label prediction model. Finally, the description degree of each test set sample with respect to each category is calculated, and multi-label prediction is performed by threshold division.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a multi-label learning method based on a single positive label comprises the following steps:
S1, data preprocessing: the sample data are parsed to obtain sample feature data and a sample label data set, sample features are extracted, and the extracted features are normalized, arranged, and converted into numerical vector data;
S2, training of the prediction model: the data preprocessed in step S1 are divided into a training set and a validation set; for the training data, a prediction model and a label-residual model are constructed based on the incompleteness of the labels, a sample feature-space correlation matrix and a label-space correlation matrix are built from the correlations of the feature space and the label space, and a training model is constructed using the Cookmar-Johnson divergence and constraint conditions; wherein,
the prediction model W obtains reasonable label values through the matrix product WX with the sample features, and is used to classify and predict targets;
the label-residual model S supplements the original sample labels Y through the matrix product SX with the sample features, so that SX + Y equals the true labels of the sample;
S3, model optimization: the training model constructed in step S2 is optimized and the model parameters are optimized by alternating iterations to obtain the optimal multi-label prediction model;
S4, multi-label prediction: using the optimal multi-label prediction model trained in step S3, the description degree of each test set sample with respect to each category is calculated, and multi-label prediction is performed by threshold division.
As an improvement of the present invention, in the step S1, the specific method for converting the extracted features into numerical vector data is as follows: an original value x of the feature attribute A is mapped by min-max normalization to a value x' in the interval [0,1], according to the formula:
$$x' = \frac{x - \min_A}{\max_A - \min_A}$$
as another improvement of the present invention, the step S2 specifically includes the following steps:
S21, a training set D_i = {X_i, Y_i} is input to the prediction model, where X_i is the feature space of the samples and Y_i is the label space of the samples; for the feature space, a feature similarity distribution matrix P is constructed from the similarity between sample features, where each p_ij represents the similarity between samples x_i and x_j:
$$p_{ij} = \frac{\exp\left(-\|x_i - x_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\|x_k - x_l\|^2 / 2\sigma^2\right)}$$
where x_i, x_j, x_k and x_l each denote the features of a selected pair of neighbouring samples, and σ is the variance of all selected samples;
S22: a similarity distribution model of the sample label space Y is constructed; a similarity model Q is built from the relations between the labels of the samples, where each q_ij represents the similarity between labels y_i and y_j:
$$q_{ij} = \frac{\exp\left(-\|y_i - y_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\|y_k - y_l\|^2 / 2\sigma^2\right)}$$
where y_i, y_j, y_k and y_l each denote the label values of a selected pair of neighbouring samples;
S23, obtaining reasonable mark distribution L by minimizing C prediction based on the Cookmar-Johnson divergence between the mark distribution model and the sample similarity distribution model, wherein the value of the Cookmar-Johnson divergence C is as follows:
Figure BDA0004134318150000034
S24, after the reasonable label distribution L has been obtained in step S23, a training model is constructed to train the prediction model W; based on the difference between the initial label matrix Y and the prediction matrix, a label-residual model matrix S is constructed to supplement the fit and train the prediction model W, and S is constrained with the L1 norm based on its sparsity.
As another improvement of the present invention, the objective function of the prediction model W in the step S24 is:
[Equation: objective function Ω of the training model]
where W is the prediction model, S is the label-residual model, Y is the initial single-positive-label matrix, L is the label distribution obtained in step S23, and α, β and γ are hyperparameters of model training.
As a further improvement of the present invention, the step S3 further includes:
S31: the label distribution L and the residual model S are kept unchanged, and the prediction model W is iteratively optimized by gradient descent:
[Equation: gradient-descent update of the prediction model W]
S32: after the prediction model W has been optimized, W and the label distribution L are kept unchanged, and the label-residual model S is optimized using the ADMM method:
[Equation: ADMM update of the label-residual model S]
S33: after the label-residual model S has been optimized, W and S are kept unchanged, and the label distribution L is optimized by simulated annealing:
[Equation: simulated-annealing update of the label distribution L]
S34: steps S31-S33 are repeated until the objective function Ω is minimized, yielding the optimal multi-label prediction model W.
As a further improvement of the present invention, in the step S4, the test set samples X_te are taken as the input of the optimal multi-label prediction model W, the label distribution WX_te is output, and the label prediction is obtained by binarization: when a value in the distribution exceeds the threshold, the prediction is considered to belong to that category; otherwise it does not.
In order to achieve the above purpose, the invention also adopts the following technical scheme: a multi-label learning system based on a single positive label, comprising a computer program which, when executed by a processor, implements the steps of any one of the methods described above.
Compared with the prior art, the invention has the beneficial effects that:
(1) The problem that annotating samples is tedious and expensive is alleviated, saving manpower and financial resources;
(2) Given a sample with only a single label, the method can extend it into a multi-label sample, which helps to enrich the sample information.
Drawings
FIG. 1 is a workflow diagram of the method of the present invention;
fig. 2 is a schematic flow chart of the model iterative training phase in step S2 of the method of the present invention.
Detailed Description
The present invention is further illustrated in the following drawings and detailed description, which are to be understood as being merely illustrative of the invention and not limiting the scope of the invention.
Example 1
For a given multi-label classification task in which each training sample carries only a single positive label, an effective prediction model is trained on the training set and then used to predict and classify the test samples. As shown in fig. 1, the multi-label learning method based on a single positive label comprises the following stages: a data preprocessing stage, an iterative training stage, an optimization stage and a prediction-classification stage.
S1, data preprocessing stage
Analyzing the sample to obtain sample characteristic data and a sample marking data set, extracting sample characteristics, carrying out normalization arrangement on the extracted characteristics, and converting the extracted characteristics into numerical vector data.
The data are processed to obtain a sample feature description string_id and a single-positive-label description string_label, and both are represented numerically. For string_id, each sample feature attribute takes a value between 0 and 1; the larger the value, the stronger the correlation between that feature and the sample, and these values form a feature vector matrix. string_label represents the class label values of the sample: a value of 0 means that the sample either does not belong to the class or belongs to it but is unlabeled, and a value of 1 means that the sample belongs to the class; these values form the label matrix. Specifically:
the sample features were assigned a maximum value of 1 and a minimum value of 0, with other values distributed among them. For each attribute, minA and maxA are respectively the minimum value and the maximum value of the attribute A, and one original value x of A is mapped into a value x' in an interval [0,1] through min-max standardization, wherein the formula is as follows:
$$x' = \frac{x - \min_A}{\max_A - \min_A}$$
Secondly, in the label matrix of each sample, the single annotated class is set to 1 and the remaining entries are set to 0, and the data are divided into a training set and a test set.
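The preprocessing above amounts to a per-attribute rescaling plus a one-hot style label matrix. The following is a minimal sketch under the assumption that the features already arrive as a dense numeric matrix; the function names and the use of NumPy are illustrative choices, not part of the patent.

```python
import numpy as np

def min_max_normalize(X):
    """Map every feature attribute of X (n_samples x n_features) into [0, 1]."""
    min_a = X.min(axis=0)
    max_a = X.max(axis=0)
    # Guard against constant attributes (assumption: they are mapped to 0).
    rng = np.where(max_a > min_a, max_a - min_a, 1.0)
    return (X - min_a) / rng

def single_positive_label_matrix(class_indices, n_classes):
    """Build the label matrix Y with a single 1 per sample (its observed class) and 0 elsewhere."""
    Y = np.zeros((len(class_indices), n_classes))
    Y[np.arange(len(class_indices)), class_indices] = 1.0
    return Y
```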
S2, iterative training stage: the multi-label training model is optimized.
The data preprocessed in step S1 are divided into a training set and a validation set. For the training data, a prediction model and a label-residual model are constructed based on the incompleteness of the labels; a sample feature-space correlation matrix and a label-space correlation matrix are built from the correlations of the feature space and the label space, and a training model is constructed using the Cookmar-Johnson divergence and constraint conditions. The prediction model W is the prediction matrix: it obtains reasonable label values through the matrix product WX with the sample features and is used to classify and predict targets. The residual model S is also a prediction matrix: its main function is to supplement the original sample labels Y through the matrix product SX with the sample features, so that SX + Y equals the true labels of the sample.
S21, the training set is taken as the input of the model. For a training set D_i = {X_i, Y_i}, where X_i is the feature space of the samples and Y_i is the label space of the samples, a feature similarity distribution matrix P is first constructed for the feature space from the similarity between sample features, where each p_ij represents the similarity between samples x_i and x_j:
$$p_{ij} = \frac{\exp\left(-\|x_i - x_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\|x_k - x_l\|^2 / 2\sigma^2\right)}$$
where x_i and x_j in the numerator and x_k and x_l in the denominator each denote the features of a selected pair of neighbouring samples; the numerator divided by the denominator expresses the feature similarity between the two sample points i and j divided by the sum of the feature similarities over all sample pairs, so that all p_ij sum to 1, and σ is the variance of all selected samples;
S22, a similarity distribution model of the sample labels Y is constructed; a similarity model Q is built from the relations between the labels of the samples, where each q_ij represents the similarity between labels y_i and y_j:
$$q_{ij} = \frac{\exp\left(-\|y_i - y_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\|y_k - y_l\|^2 / 2\sigma^2\right)}$$
where y_i and y_j in the numerator and y_k and y_l in the denominator each denote the label values of a selected pair of neighbouring samples; the numerator divided by the denominator expresses the label similarity between the two sample points i and j divided by the sum of the label similarities over all sample pairs, so that all q_ij sum to 1;
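As a concrete reading of steps S21 and S22, the sketch below builds both similarity distribution matrices with one helper. The Gaussian kernel exp(-||z_i - z_j||^2 / 2σ^2) is an assumption consistent with the variance parameter σ described above, since the patent reproduces the exact formulas only as images.

```python
import numpy as np

def similarity_distribution(Z, sigma):
    """Pairwise similarity matrix, normalised so that all off-diagonal entries sum to 1.

    Z is an n x d matrix: sample features for P, sample labels for Q.
    The Gaussian kernel is an assumption, not the patent's stated formula.
    """
    sq_dists = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)   # exclude i == j, matching the k != l sum
    return K / K.sum()

# P is built from the feature space and Q from the label space:
# P = similarity_distribution(X_i, sigma)
# Q = similarity_distribution(Y_i, sigma)
```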
S23, a reasonable label distribution L is obtained by minimizing the Cookmar-Johnson divergence C between the label distribution model and the sample similarity distribution model, where the value of the Cookmar-Johnson divergence C is:
[Equation: Cookmar-Johnson divergence C between the distributions P and Q]
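The divergence C is reproduced only as an image in the publication. If the divergence between the two normalised distributions P and Q is read as the standard Kullback-Leibler divergence, which is an assumption rather than a statement of the patent, C would take the form:

$$C = \sum_{i}\sum_{j} p_{ij} \ln \frac{p_{ij}}{q_{ij}}$$

Minimizing C over the label distribution L then pulls the label-space similarities toward the feature-space similarities.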
S24, after the reasonable label distribution L has been obtained in step S23, a training model is constructed to train the prediction model W; based on the difference between the initial label matrix Y and the prediction matrix, a residual matrix S is constructed to supplement the fit and help train the prediction model W, and S is constrained with the L1 norm based on its sparsity. The finally constructed objective function is:
[Equation: objective function Ω of the training model]
where W is the prediction model, S is the label-residual model, X is the feature matrix of the samples, Y is the initial single-positive-label matrix, L is the label distribution obtained in step S23, and α, β and γ are hyperparameters of model training, set and tuned manually during training to control the behaviour of the model.
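The objective Ω is likewise available only as an image. Purely as an illustrative assumption assembled from the stated ingredients (a fit of WX to L, a residual term that drives Y + SX toward the true labels, an L1 sparsity penalty on S, and the weights α, β, γ, whose placement is assumed here), one plausible form is:

$$\Omega(W, S, L) = \|WX - L\|_F^2 + \alpha\,\|Y + SX - L\|_F^2 + \beta\,\|W\|_F^2 + \gamma\,\|S\|_1$$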
S3, optimization: for the objective function Ω, a set of suitable hyperparameters is selected for iterative alternating optimization, as shown in fig. 2;
S31, first, L and S in the model are kept unchanged, and W is iteratively optimized by gradient descent:
[Equation: gradient-descent update of the prediction model W]
S32, after the model W has been optimized, W and the label distribution L are kept unchanged, and the model S is optimized using the ADMM method:
[Equation: ADMM update of the label-residual model S]
S33, after the model S has been optimized, W and S are kept unchanged, and the label distribution L is optimized by simulated annealing:
[Equation: simulated-annealing update of the label distribution L]
S34, steps S31-S33 are repeated until the objective function Ω is minimized, yielding the optimal label prediction model W.
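The three update rules are also reproduced only as images, so the sketch below fixes only the alternating structure of steps S31-S34. The helper functions update_W_gradient_descent, update_S_admm and update_L_simulated_annealing are hypothetical placeholders for the patent's concrete update formulas, and objective stands for Ω.

```python
def alternating_optimization(X, Y, objective,
                             update_W_gradient_descent, update_S_admm,
                             update_L_simulated_annealing,
                             W0, S0, L0, max_iters=100, tol=1e-6):
    """Alternate over W, S and L, keeping the other two fixed at each step (S31-S33)."""
    W, S, L = W0, S0, L0
    prev = objective(W, S, L)
    for _ in range(max_iters):
        W = update_W_gradient_descent(W, S, L, X, Y)        # S31: L and S fixed
        S = update_S_admm(W, S, L, X, Y)                    # S32: W and L fixed
        L = update_L_simulated_annealing(W, S, L, X, Y)     # S33: W and S fixed
        cur = objective(W, S, L)
        if abs(prev - cur) < tol:                           # S34: stop when Ω no longer decreases
            break
        prev = cur
    return W, S, L
```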
S4, prediction classification stage
The preprocessed test set data X_te are taken as the model input; the label distribution WX_te is calculated with the prediction model W, a reasonable threshold t is selected, and when a value in the distribution exceeds t the corresponding prediction is considered to belong to that category; otherwise it does not.
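The prediction stage thus reduces to one matrix product and a threshold. A minimal sketch, assuming samples are stored as rows of X_te and that the threshold t has been chosen on the validation set (both of which are assumptions about details the patent leaves open):

```python
import numpy as np

def predict_multilabel(W, X_te, t=0.5):
    """Return a binary matrix whose entry (i, c) is 1 if class c is predicted for sample i."""
    scores = X_te @ W.T      # label distribution WX_te for every test sample
    return (scores > t).astype(int)
```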
Example 2
For example, suppose we want to construct a prediction model that performs multi-label classification of animals from some basic characteristics collected about them.
S1, data preprocessing stage
Characteristics of different animals are collected, such as body length, age, colour, tail, whiskers, pupils, diet, natural enemies, and so on; this information is digitized to obtain a feature matrix X, and a label is collected for each sample, for example, for a cat, a single positive label Y such as Persian cat, mammal or feline.
S2, model training phase:
the objective function through final construction as in example 1 above is:
Figure BDA0004134318150000081
finally, optimizing iterative training to obtain a prediction model W.
S3, model prediction phase:
Finally, for the feature information X_te of a sample to be predicted, the label distribution WX_te is calculated with the prediction model W, a reasonable threshold t is selected, and when a value in the distribution exceeds t the prediction is considered to belong to that category; otherwise it does not. For example, a sample is input whose features include long hair, a plump round body, small ears with rounded tips, a tail, a gentle and clever temperament, an omnivorous diet and mice as natural enemies, and labels such as Persian cat, feline and mammal are finally predicted.
It should be noted that the foregoing merely illustrates the technical idea of the present invention and is not intended to limit the scope of the present invention, and that a person skilled in the art may make several improvements and modifications without departing from the principles of the present invention, which fall within the scope of the claims of the present invention.

Claims (7)

1. A multi-label prediction method based on a single positive label is characterized by comprising the following steps:
S1, data preprocessing: the sample data are parsed to obtain sample feature data and a sample label data set, sample features are extracted, and the extracted features are normalized, arranged, and converted into numerical vector data;
S2, training of the prediction model: the data preprocessed in step S1 are divided into a training set and a validation set; for the training data, a prediction model and a label-residual model are constructed based on the incompleteness of the labels, a sample feature-space correlation matrix and a label-space correlation matrix are built from the correlations of the feature space and the label space, and a training model is constructed using the Cookmar-Johnson divergence and constraint conditions, wherein
the prediction model W obtains reasonable label values through the matrix product WX with the sample features, and is used to classify and predict targets;
the label-residual model S supplements the original sample labels Y through the matrix product SX with the sample features, so that SX + Y equals the true labels of the sample;
S3, model optimization: the training model constructed in step S2 is optimized and the model parameters are optimized by alternating iterations to obtain the optimal multi-label prediction model;
S4, multi-label prediction: using the optimal multi-label prediction model trained in step S3, the description degree of each test set sample with respect to each category is calculated, and multi-label prediction is performed by threshold division.
2. The multi-label prediction method based on a single positive label according to claim 1, wherein: in the step S1, the specific method for converting the extracted features into numerical vector data is as follows: an original value x of the feature attribute A is mapped by min-max normalization to a value x' in the interval [0,1], according to the formula:
$$x' = \frac{x - \min_A}{\max_A - \min_A}$$
3. the multi-label prediction method according to claim 1, wherein the step S2 specifically includes the steps of:
S21, a training set D_i = {X_i, Y_i} is input to the prediction model, where X_i is the feature space of the samples and Y_i is the label space of the samples; for the feature space, a feature similarity distribution matrix P is constructed from the similarity between sample features, where each p_ij represents the similarity between samples x_i and x_j:
$$p_{ij} = \frac{\exp\left(-\|x_i - x_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\|x_k - x_l\|^2 / 2\sigma^2\right)}$$
where x_i, x_j, x_k and x_l each denote the features of a selected pair of neighbouring samples, and σ is the variance of all selected samples;
S22: a similarity distribution model of the sample label space Y is constructed; a similarity model Q is built from the relations between the labels of the samples, where each q_ij represents the similarity between labels y_i and y_j:
$$q_{ij} = \frac{\exp\left(-\|y_i - y_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\|y_k - y_l\|^2 / 2\sigma^2\right)}$$
where y_i, y_j, y_k and y_l each denote the label values of a selected pair of neighbouring samples;
S23, a reasonable label distribution L is obtained by minimizing the Cookmar-Johnson divergence C between the label distribution model and the sample similarity distribution model, where the value of the Cookmar-Johnson divergence C is:
[Equation: Cookmar-Johnson divergence C between the distributions P and Q]
S24, after the reasonable label distribution L has been obtained in step S23, a training model is constructed to train the prediction model W; based on the difference between the initial label matrix Y and the prediction matrix, a label-residual model matrix S is constructed to supplement the fit and train the prediction model W, and S is constrained with the L1 norm based on its sparsity.
4. A single positive label based multi-label prediction method as claimed in claim 3, wherein: the objective function of the prediction model W in step S24 is as follows:
[Equation: objective function Ω of the training model]
where W is the prediction model, S is the label-residual model, X is the feature matrix of the samples, Y is the initial single-positive-label matrix, L is the label distribution obtained in step S23, and α, β and γ are hyperparameters of model training.
5. A single positive label based multi-label prediction method according to claim 3 or 4, wherein: the step S3 further includes:
S31: the label distribution L and the label-residual model S in the model are kept unchanged, and the prediction model W is iteratively optimized by gradient descent:
[Equation: gradient-descent update of the prediction model W]
S32: after the prediction model W has been optimized, the prediction model W and the label distribution L are kept unchanged, and the label-residual model S is optimized using the ADMM method:
[Equation: ADMM update of the label-residual model S]
S33: after the label-residual model S has been optimized, the prediction model W and the label-residual model S are kept unchanged, and the label distribution L is optimized by simulated annealing:
[Equation: simulated-annealing update of the label distribution L]
S34: steps S31-S33 are repeated until the objective function Ω is minimized, yielding the optimal multi-label prediction model W.
6. The multi-label prediction method based on a single positive label according to claim 5, wherein: in the step S4, the test set samples X_te are taken as the input of the optimal multi-label prediction model W, the label distribution WX_te is output, and the label prediction is obtained by binarization: when a value in the distribution exceeds the threshold, the prediction is considered to belong to that category; otherwise it does not.
7. A multi-label prediction system based on a single positive label, comprising a computer program, characterized in that: the computer program, when executed by a processor, implements the steps of the method according to any one of the preceding claims.
CN202310270107.7A 2023-03-20 2023-03-20 Multi-label prediction method and system based on single positive label Pending CN116434273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310270107.7A CN116434273A (en) 2023-03-20 2023-03-20 Multi-label prediction method and system based on single positive label

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310270107.7A CN116434273A (en) 2023-03-20 2023-03-20 Multi-label prediction method and system based on single positive label

Publications (1)

Publication Number Publication Date
CN116434273A true CN116434273A (en) 2023-07-14

Family

ID=87083816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310270107.7A Pending CN116434273A (en) 2023-03-20 2023-03-20 Multi-label prediction method and system based on single positive label

Country Status (1)

Country Link
CN (1) CN116434273A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274726A (en) * 2023-11-23 2023-12-22 南京信息工程大学 Picture classification method and system based on multi-view supplementary tag
CN117274726B (en) * 2023-11-23 2024-02-23 南京信息工程大学 Picture classification method and system based on multi-view supplementary tag

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination