CN117274692A - Image classification method based on new genetic programming structure and genetic modification - Google Patents
- Publication number: CN117274692A
- Application number: CN202311222642.1A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM]
- G06N3/126 — Computing arrangements based on genetic models; evolutionary algorithms, e.g. genetic algorithms or genetic programming
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G16B5/00 — ICT specially adapted for modelling or simulations in systems biology, e.g. gene-regulatory networks
Abstract
The invention discloses an image classification method based on a new genetic programming (GP) structure and genetic modification, comprising the following steps: preprocess the training data set; evolve the GP model with an evolutionary learning method, combined with a genetic modification method, to find the optimal GP model; and feed the test data set into the resulting optimal GP model to obtain the test accuracy. An image color feature extraction layer is added to the new GP model to obtain the color moments of each image, so that the extracted image features contain color information and the final model can effectively handle both gray-scale and color image classification. A genetic modification step is also added to the algorithm: during genetic modification, a genetically modified individual may yield a GP model with better fitness, and if the set condition is met, the running time of the program is shortened. At the same time, genetic modification contributes to population diversity, which plays an important role in improving the effectiveness and performance of the algorithm.
Description
Technical Field
The invention belongs to the technical field of computer software, and particularly relates to an image classification method based on a new genetic programming structure and genetic modification.
Background
Genetic programming (GP) is an evolutionary learning algorithm that automatically evolves a computer program to solve a problem, using the principles of biological genetics and natural selection. Deep neural networks, by contrast, have fixed network structures, which limits them across different image classification tasks and makes them difficult to interpret. GP, with its flexible representation, can find good solutions without domain knowledge and can evolve different GP model expressions for different types of tasks (data sets).
Most GP-based image classification methods are designed for gray-scale data sets; only a few can process color images, and those typically do so by extracting as many features as possible rather than color features specifically. Notably, a method that classifies gray images well may perform poorly on color images, and most existing GP-based methods indeed do not perform well on color image classification. In addition, evolving a GP model for image classification often takes a long time, because genetic programming searches and optimizes a large solution space, seeking the best solution among many possibilities. The complexity of image data, especially in color image classification tasks, requires even more time to accommodate the variation across different color channels and features.
Disclosure of Invention
The present invention aims to provide an image classification method based on a new genetic programming structure and genetic modification, so as to solve two problems of the prior art: existing genetic-programming-based methods perform poorly on color image classification, and the process of evolving a GP model takes a long time.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a new genetic programming-based architecture comprising the following functional layers:
1. Pooling layer: compresses the size of the input image;
2. Image filtering layer: performs image enhancement, feature extraction, image texture reconstruction and other processing on the input image;
3. Edge detection layer: identifies the salient portions of the input image;
4. Feature extraction layer: extracts different types of information from the input image;
5. Color feature extraction layer: extracts color features from the input image;
6. Tie layer: concatenates the image features obtained from the different processing branches;
7. Classification layer: applies a classifier to the image features to classify the image;
8. Decision layer: votes the final result from the prediction vectors output by the different classifiers.
Preferably, variable layers allow the GP model to contain multiple pooling or image filtering layers for feature extraction; thanks to the flexibility of GP model expressions, the invention can evolve more suitable GP model expressions for different types of tasks.
Preferably, the color feature extraction layer uses color moments to extract the color distribution of the image; the output is a 1×9 vector.
Preferably, each classifier in the classification layer is trained with three-fold cross-validation, and the average of the three fold accuracies is taken as the final accuracy.
An image classification method based on the new genetic programming structure and genetic modification, comprising the following steps:
S1, preprocessing the training data set;
S2, evolving the GP model with an evolutionary learning method, combined with a genetic modification method, to find the optimal GP model;
S3, feeding the test data set into the generated optimal GP model and obtaining the test accuracy.
Preferably, the preprocessing stage reduces each image to 1/4 of its original size, converts it to a NumPy array, and finally saves the data set as a file with the .npy suffix.
Preferably, in the stage of searching for the optimal GP model, a population is first initialized and the fitness of its individuals is evaluated. If an individual meets the set condition, the optimal GP model is output directly. Otherwise, a genetic modification strategy is applied: all individuals in the population are divided into two classes by their number of sub-classifiers. Genetic modification is first performed on individuals with 2 sub-classifiers: two parents are selected from the population, the sub-classifiers of each parent are scored to build a scoring table, and the highest-scoring sub-classifiers are combined into a genetically modified individual. Its fitness is then evaluated; if it meets the set condition, the optimal GP model is output, and otherwise the same check is performed for the genetically modified individual with 3 sub-classifiers. If the set condition is met, that individual is output; otherwise the two genetically modified individuals generated in the process are placed into the population. All individuals in the population then undergo reproduction, crossover and mutation to generate new individuals, and the new individuals together with the original population form the next generation. The fitness of the new generation is evaluated, and these steps are repeated until an optimal GP model meeting the condition is found.
Preferably, the algorithm maintains a list storing the fitness values of all individuals in the previous generation; if an individual in the next generation is identical to one in the previous generation, its stored fitness value is reused directly.
The technical effects and advantages of the invention: compared with the prior art, the image classification method based on the new genetic programming structure and genetic modification has the following advantages:
1. An image color feature extraction layer is added to the new GP model to obtain the color moments of each image, so the extracted image features contain color information and the final model can effectively handle both gray-scale and color image classification;
2. A genetic modification step is added to the algorithm. During genetic modification, a genetically modified individual may yield a GP model with better fitness, and if the set condition is met, the running time of the program is shortened; at the same time, genetic modification contributes to population diversity, which plays an important role in improving the effectiveness and performance of the algorithm.
Drawings
FIG. 1 is a flow chart of an algorithm of the present invention;
FIG. 2 is a flow chart of the genetic modification process of the present invention;
FIG. 3 is a GP tree model diagram of the present invention;
FIG. 4 is a schematic view of a tie layer of the present invention;
FIG. 5 is a schematic diagram of a decision layer according to the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the invention; the specific embodiments described here merely illustrate the invention and are not intended to limit it. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
The invention provides a novel genetic programming structure as shown in FIGS. 1-5, comprising the following functional layers:
1. Pooling layer: compresses the size of the input image;
2. Image filtering layer: performs image enhancement, feature extraction, image texture reconstruction and other processing on the input image;
3. Edge detection layer: identifies the salient portions of the input image;
4. Feature extraction layer: extracts different types of information from the input image;
5. Color feature extraction layer: extracts color features from the input image;
6. Tie layer: concatenates the image features obtained from the different processing branches;
7. Classification layer: applies a classifier to the image features to classify the image;
8. Decision layer: votes the final result from the prediction vectors output by the different classifiers.
Preferably, variable layers allow the GP model to contain multiple pooling or image filtering layers for feature extraction; thanks to the flexibility of GP model expressions, the invention can evolve more suitable GP model expressions for different types of tasks.
Preferably, the color feature extraction layer uses color moments to extract the color distribution of the image; the output is a 1×9 vector.
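The 1×9 color-moment vector can be sketched as follows. This is an illustrative sketch, not the patent's code: the exact moment definitions are not given in the text, so the common three moments per RGB channel (mean, standard deviation and skewness, as in Stricker and Orengo's formulation) are assumed here.

```python
import numpy as np

def color_moments(image):
    """Compute a 1x9 color-moment vector: (mean, std, skewness) per RGB channel.

    `image` is an H x W x 3 array. The skewness moment is taken as the cube
    root of the mean cubed deviation, an assumed (common) definition.
    """
    image = np.asarray(image, dtype=np.float64)
    moments = []
    for c in range(3):                      # one triple of moments per channel
        channel = image[:, :, c].ravel()
        mean = channel.mean()
        std = channel.std()
        skew = np.cbrt(np.mean((channel - mean) ** 3))
        moments.extend([mean, std, skew])
    return np.array(moments).reshape(1, 9)  # matches the patent's 1x9 output
```

Concatenating these nine numbers gives the color branch a compact, resolution-independent summary of the image's color distribution.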
Preferably, each classifier in the classification layer is trained with three-fold cross-validation, and the average of the three fold accuracies is taken as the final accuracy.
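The three-fold scoring of a sub-classifier can be sketched as below. This is a generic sketch under assumptions: the patent does not specify how folds are drawn, so a shuffled equal split is used, and the classifier is passed in as a pair of `fit`/`predict` callables rather than any particular model.

```python
import numpy as np

def three_fold_accuracy(fit, predict, X, y, seed=0):
    """Average validation accuracy over a 3-fold split.

    `fit(X_train, y_train)` returns a trained model; `predict(model, X_val)`
    returns predicted labels. The mean of the three fold accuracies is the
    sub-classifier's final score, as described above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 3)          # three roughly equal folds
    accs = []
    for k in range(3):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        model = fit(X[train], y[train])
        accs.append(np.mean(predict(model, X[val]) == y[val]))
    return float(np.mean(accs))
```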
An image classification method based on the new genetic programming structure and genetic modification, comprising the following steps:
S1, preprocessing the training data set;
S2, evolving the GP model with an evolutionary learning method, combined with a genetic modification method, to find the optimal GP model;
S3, feeding the test data set into the generated optimal GP model and obtaining the test accuracy.
Preferably, the preprocessing stage reduces each image to 1/4 of its original size, converts it to a NumPy array, and finally saves the data set as a file with the .npy suffix.
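A minimal sketch of this preprocessing step follows. The patent does not state the resampling method or file name, so both are assumptions here: the area is reduced to 1/4 by keeping every second pixel along each axis, and `out_path` is a placeholder.

```python
import numpy as np

def preprocess_dataset(images, out_path="train_data.npy"):
    """Shrink each image to 1/4 of its original size and save the set as .npy.

    Assumed details: decimation by 2 along each axis (so the pixel count drops
    to 1/4) and an illustrative output filename.
    """
    shrunk = np.stack([np.asarray(img)[::2, ::2] for img in images])
    np.save(out_path, shrunk)   # stored once, then memory-mapped cheaply later
    return shrunk
```

Saving to `.npy` lets every fitness evaluation reload the reduced data set without repeating the conversion.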
Preferably, in the stage of searching for the optimal GP model, a population is first initialized and the fitness of its individuals is evaluated. If an individual meets the set condition, the optimal GP model is output directly. Otherwise, a genetic modification strategy is applied: all individuals in the population are divided into two classes by their number of sub-classifiers. Genetic modification is first performed on individuals with 2 sub-classifiers: two parents are selected from the population, the sub-classifiers of each parent are scored to build a scoring table, and the highest-scoring sub-classifiers are combined into a genetically modified individual. Its fitness is then evaluated; if it meets the set condition, the optimal GP model is output, and otherwise the same check is performed for the genetically modified individual with 3 sub-classifiers. If the set condition is met, that individual is output; otherwise the two genetically modified individuals generated in the process are placed into the population. All individuals in the population then undergo reproduction, crossover and mutation to generate new individuals, and the new individuals together with the original population form the next generation. The fitness of the new generation is evaluated, and these steps are repeated until an optimal GP model meeting the condition is found.
Preferably, the algorithm maintains a list storing the fitness values of all individuals in the previous generation; if an individual in the next generation is identical to one in the previous generation, its stored fitness value is reused directly.
Referring to FIG. 1, the overall flow of the algorithm: the training data set is fed into the GP model individuals in the population, and an output is obtained after a series of operations. In the fitness evaluation phase, the fitness values of the previous generation are stored in a list. Each individual in the current population is looked up to see whether an identical individual existed in the previous generation. If so, that individual's stored fitness value is assigned to the current individual; otherwise, the classification accuracy of the output is computed and used as the fitness value. The GP-based system consists of two parts: the GP model program structure (evolutionary learning/training) and classification (testing) with the GP model; note that the average accuracy after five-fold cross-validation is used in fitness evaluation.
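The fitness-reuse lookup described above can be sketched with a simple cache keyed on the individual's expression. This is an illustrative sketch: the real system keys on GP trees, and `str(individual)` is assumed to be a canonical representation.

```python
fitness_cache = {}

def evaluate_with_cache(individual, compute_fitness, cache=fitness_cache):
    """Return a stored fitness value when an individual survives unchanged
    into the next generation; otherwise compute and store it.

    `compute_fitness` stands in for the (expensive) classification-accuracy
    evaluation; `str(individual)` is an assumed canonical key.
    """
    key = str(individual)
    if key not in cache:
        cache[key] = compute_fitness(individual)  # evaluate only new individuals
    return cache[key]
```

Because elite individuals often pass unchanged between generations, this lookup avoids re-running the costly classification step for them.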
Referring to FIG. 2, the genetic modification flow: when evolution starts, a population is first initialized and the fitness of its individuals is evaluated. If an individual meets the set condition, the optimal GP model is output directly. Otherwise, a genetic modification strategy is applied: all individuals in the population are divided into two classes by their number of sub-classifiers. Genetic modification is first performed on individuals with 2 sub-classifiers: two parents are selected from the population, the sub-classifiers of each parent are scored to build a scoring table, and the highest-scoring sub-classifiers are combined into a genetically modified individual. Its fitness is then evaluated; if it meets the set condition, the optimal GP model is output, and otherwise the same check is performed for the genetically modified individual with 3 sub-classifiers. If the set condition is met, that individual is output; otherwise the two genetically modified individuals generated in the process are placed into the population. All individuals in the population then undergo reproduction, crossover and mutation to generate new individuals, and the new individuals together with the original population form the next generation. The fitness of the new generation is evaluated, and these steps are repeated until an optimal GP model meeting the condition is found.
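The scoring-table combination at the heart of this flow can be sketched as follows. This is a deliberate simplification under stated assumptions: each parent is represented as a list of `(sub_classifier, score)` pairs, and the actual GP-tree surgery is reduced to list selection.

```python
def genetically_modify(parent_a, parent_b, n_sub=2):
    """Build one genetically modified individual from two parents.

    All sub-classifiers of both parents go into a single scoring table,
    and the `n_sub` highest-scoring ones are combined into the offspring,
    mirroring the scoring-table step described for FIG. 2.
    """
    table = sorted(parent_a + parent_b, key=lambda pair: pair[1], reverse=True)
    return [sub for sub, _ in table[:n_sub]]
```

Because the offspring keeps only the best-scoring sub-classifiers of both parents, it has a good chance of beating either parent's fitness, which is what may let the algorithm terminate early.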
Referring to FIG. 3, the new genetic programming tree structure: solid boxes are fixed layers, of which there is exactly one in the GP model; dashed boxes are variable layers, which may be absent from the GP model or appear multiple times. A GP model has 2 or 3 sub-classifiers, and each sub-classifier has three branches corresponding to the three image-feature processing branches: the feature extraction layer, the edge detection layer and the color feature extraction layer. The image filtering layer must precede feature extraction because features extracted from an image that has not been filtered and denoised may contain noise.
The features extracted by the feature extraction and color feature extraction branches are vectors, while the feature extracted by the edge detection branch is a 2-D image-parameter feature. The tie layer therefore performs the Fcon operation shown in FIG. 4: the rows of the image-parameter feature are concatenated in turn and then joined with the feature vector. The vector obtained by Fcon is then concatenated with the feature vector extracted by the color feature extraction branch to form a mixed feature.
GP tree model: the new GP tree structure has many different layers, such as an input layer, pooling layer, image filtering layer, feature extraction layer, edge detection layer, color feature extraction layer, tie layer, classification layer, decision layer and output layer. The pooling layer effectively reduces the image size; the image filtering layer applies filtering or other operations to the image; the feature extraction layer extracts features from the image using several feature extraction methods; the edge detection layer obtains information about important regions of the image; the color feature extraction layer obtains the image color distribution; the tie layer concatenates the image features obtained by different operations; the classification layer applies classifiers to the image features to classify the image; and the decision layer decides the final class of the input image. (The maximum depth of the GP tree is 10.)
Referring to FIG. 4, the tie layer: the features produced by the 3 branches are concatenated. There are two cases: all inputs are vectors, or the inputs contain both image-parameter (2-D) features and vectors. In the first case, the vectors are concatenated directly into a single vector; in the second case, the rows of each image-parameter feature are concatenated first and the result is then joined with the feature vectors.
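Both cases can be handled by one small routine, sketched below. It is an illustrative sketch: 2-D inputs are taken to be the image-parameter features whose rows must be concatenated (which is exactly a row-major flatten), and 1-D inputs are passed through.

```python
import numpy as np

def tie_layer(features):
    """Concatenate branch outputs into a single 1-D feature vector.

    2-D image-parameter features have their rows concatenated (row-major
    ravel) before being joined with the 1-D feature vectors, matching the
    two cases described for FIG. 4.
    """
    parts = []
    for f in features:
        f = np.asarray(f)
        parts.append(f.ravel() if f.ndim == 2 else f)  # flatten rows of 2-D inputs
    return np.concatenate(parts)
```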
Referring to FIG. 5, the classification layer applies three-fold cross-validation and takes the average of the three classification accuracies as the final score of each sub-classifier. The decision layer integrates the outputs of the two or three sub-classifiers and obtains the final classification result by voting on their prediction vectors. If the vote is tied, one of the tied categories is output at random. Decision layer schematic: the results of the 2 or 3 prediction vectors are compared, and the majority result is taken as the final classification category of the image.
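The decision layer's vote with random tie-breaking can be sketched as follows; the sub-classifiers' outputs are assumed to have already been reduced to predicted labels.

```python
import random
from collections import Counter

def decision_layer(predictions, rng=None):
    """Majority vote over the sub-classifiers' predicted labels.

    On a tie, one of the tied labels is returned at random, as the
    description of FIG. 5 specifies. `rng` may be passed for reproducibility.
    """
    rng = rng or random.Random()
    counts = Counter(predictions)
    top = max(counts.values())
    tied = [label for label, c in counts.items() if c == top]
    return rng.choice(tied)  # single winner, or random choice among the tied
```

With three sub-classifiers a strict majority usually exists; the random tie-break mainly matters for the two-sub-classifier case.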
Finally, it should be noted that the foregoing describes only preferred embodiments of the invention. Although the invention has been described in detail with reference to these embodiments, those skilled in the art may modify the described embodiments or substitute equivalents for some of their elements; any modification, equivalent substitution or improvement made without departing from the spirit and principles of the invention falls within its scope of protection.
Claims (8)
1. A new genetic programming-based structure, comprising the following functional layers:
1. Pooling layer: compresses the size of the input image;
2. Image filtering layer: performs image enhancement, feature extraction, image texture reconstruction and other processing on the input image;
3. Edge detection layer: identifies the salient portions of the input image;
4. Feature extraction layer: extracts different types of information from the input image;
5. Color feature extraction layer: extracts color features from the input image;
6. Tie layer: concatenates the image features obtained from the different processing branches;
7. Classification layer: applies a classifier to the image features to classify the image;
8. Decision layer: votes the final result from the prediction vectors output by the different classifiers.
2. The new genetic programming-based structure according to claim 1, wherein: variable layers allow the GP model to contain multiple pooling or image filtering layers for feature extraction, and thanks to the flexibility of GP model expressions, the structure can evolve more suitable GP model expressions for different types of tasks.
3. The new genetic programming-based structure according to claim 1, wherein: the color feature extraction layer uses color moments to extract the color distribution of the image, output as a 1×9 vector.
4. The new genetic programming-based structure according to claim 1, wherein: each classifier in the classification layer is trained with three-fold cross-validation, and the average of the three fold accuracies is taken as the final accuracy.
5. An image classification method based on the new genetic programming structure according to claim 1 and genetic modification, comprising the steps of:
S1, preprocessing the training data set;
S2, evolving the GP model with an evolutionary learning method, combined with a genetic modification method, to find the optimal GP model;
S3, feeding the test data set into the generated optimal GP model and obtaining the test accuracy.
6. The image classification method according to claim 5, wherein: the preprocessing stage reduces each image to 1/4 of its original size, converts it to a NumPy array, and finally saves the data set as a file with the .npy suffix.
7. The image classification method according to claim 5, wherein: when evolution starts, a population is first initialized and the fitness of its individuals is evaluated. If an individual meets the set condition, the optimal GP model is output directly. Otherwise, a genetic modification strategy is applied: all individuals in the population are divided into two classes by their number of sub-classifiers. Genetic modification is first performed on individuals with 2 sub-classifiers: two parents are selected from the population, the sub-classifiers of each parent are scored to build a scoring table, and the highest-scoring sub-classifiers are combined into a genetically modified individual. Its fitness is then evaluated; if it meets the set condition, the optimal GP model is output, and otherwise the same check is performed for the genetically modified individual with 3 sub-classifiers. If the set condition is met, that individual is output; otherwise the two genetically modified individuals generated in the process are placed into the population. All individuals in the population then undergo reproduction, crossover and mutation to generate new individuals, and the new individuals together with the original population form the next generation. The fitness of the new generation is evaluated, and these steps are repeated until an optimal GP model meeting the condition is found.
8. The image classification method according to claim 7, wherein: the algorithm maintains a list storing the fitness values of all individuals in the previous generation, and if an individual in the next generation is identical to one in the previous generation, its stored fitness value is reused directly.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311222642.1A | 2023-09-20 | 2023-09-20 | Image classification method based on new genetic programming structure and genetic modification
Publications (1)

Publication Number | Publication Date
---|---
CN117274692A | 2023-12-22
Family
- ID: 89207549
- 2023-09-20: Application CN202311222642.1A filed (CN); status Pending
Legal Events

Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination