CN113553917B - Office equipment identification method based on pulse transfer learning - Google Patents
Office equipment identification method based on pulse transfer learning Download PDFInfo
- Publication number
- CN113553917B (application CN202110741347.1A)
- Authority
- CN
- China
- Prior art keywords
- domain
- pulse
- layer
- migration
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses an office equipment identification method based on pulse transfer learning. According to the transfer learning task, labeled office equipment images with a clean background, high resolution and no noise serve as the source domain, while images with varying noise and resolution acquired in real scenes serve as the target domain. The spiking neural network parameters are initialized to obtain a transfer learning network model in its initial state. The source-domain and target-domain images are converted into pulse form, input into the initial model and propagated forward; the whole network is trained with the Adamax optimizer on the domain migration loss and the classification loss until the model converges or the maximum number of iterations is reached, yielding a trained transfer learning network model. The trained model is saved, all target-domain samples are input into it, and the final classification result is predicted.
Description
Technical Field
The invention belongs to the fields of computer science and neural network technology, and relates to an office equipment identification method based on pulse transfer learning.
Background
With rising living standards, valuable office equipment (such as personal computers and printers) has become widespread, and the demand for security supervision of such equipment is growing accordingly. A common approach is to collect images with cameras and use them to identify and supervise office equipment. However, images collected in different offices differ greatly in resolution and carry distinct noise distributions, so a single classification neural network cannot identify equipment accurately across all scenes; images collected in each new scene must be manually labeled and the network retrained, which consumes substantial resources.
The prior art includes a pre-training scheme based on deep neural networks: a neural network model is first pre-trained on existing labeled office equipment images with clean backgrounds, high resolution and no noise; a small number of actual noisy images are then annotated and used to fine-tune the pre-trained model until convergence. Another scheme is deep neural network transfer learning: labeled office equipment images with clean backgrounds, high resolution and no noise serve as the source domain, and collected images with varying noise and resolution serve as the target domain. The network receives labeled source-domain data and unlabeled target-domain data simultaneously, and the training objectives are to improve the recognition accuracy of the classification model under noise and multiple resolutions and to reduce the difference between the source-domain and target-domain feature distributions. Once training converges, the network can directly classify images in the specific application scene.
The prior art has the following shortcomings:
For the deep neural network pre-training scheme: part of the actually acquired noisy images must still be annotated, and noise robustness remains poor. For the deep neural network transfer learning scheme: methods based on conventional deep neural networks consume much power and are difficult to train to convergence.
Conventional artificial neural networks use continuous values as neuron inputs and outputs. The spiking (pulse) neural network, as a third-generation neural network, more faithfully models biological neurons: each neuron has a membrane potential, fires once that potential reaches a threshold, and exchanges binary pulse timing signals as input and output. The sparse, discrete pulses of a spiking neural network can greatly reduce power consumption and alleviate overfitting in deep learning. Applications and methods of spiking neural networks are increasingly studied: patents CN201910087183.8 and CN201810430121.8 respectively adapt convolutional neural networks to spiking form and apply spiking networks to handwritten digit recognition, and patent CN201910088572.2 proposes a spiking-network-based video recognition method for human falls, applying spiking networks to a real task.
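The membrane-potential behaviour described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron; the threshold, decay factor and reset value below are illustrative assumptions, not parameters taken from the invention:

```python
# Minimal LIF neuron sketch: the membrane potential leaks and integrates the
# input each time step, and a binary pulse is emitted (and the potential
# reset) whenever the threshold is crossed. Constants are illustrative.

def lif_neuron(input_current, threshold=1.0, decay=0.5, v_reset=0.0):
    """Simulate one LIF neuron over a time window.

    input_current: sequence of input values, one per time step.
    Returns the binary spike train as a list of 0/1.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = decay * v + i          # leaky integration of the membrane potential
        if v >= threshold:         # fire once the threshold is reached...
            spikes.append(1)
            v = v_reset            # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes
```

Note how the input and output are binary pulse timing signals, matching the description above, rather than continuous activations.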
However, for office equipment images with different noise and resolutions across scenes, no effective transfer learning method yet exists for deep spiking neural networks.
Disclosure of Invention
The invention aims to provide an office equipment identification method based on pulse transfer learning that remedies the shortcomings of the prior art.
The technical scheme adopted by the invention is as follows:
An office equipment identification method based on pulse transfer learning comprises the following steps:
step 1, according to the transfer learning task, taking labeled office equipment images with a clean background, high resolution and no noise as the source domain, and images with different resolutions and noise acquired in the actual application scene as the target domain;
step 2, initializing the spiking neural network parameters, including the synaptic connection weights, pulse firing threshold, delay constant, learning rate and time-window length, to obtain a migration learning network model in its initial state;
step 3, converting the source domain and the target domain into pulse-form pictures, inputting them into the initial migration learning network model of step 2 and propagating them forward; training the whole network with the Adamax optimizer on the domain migration loss and the classification loss until the model converges or the maximum number of iterations is reached, yielding a trained migration learning network model;
step 4, saving the trained migration learning network model, inputting all target-domain samples into it, and predicting the final classification result.
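Step 3 above trains the whole network with the Adamax optimizer. As an illustration of that update rule (Kingma & Ba's Adamax; the learning rate and decay factors below are the usual defaults, not values specified by the invention), here is a minimal scalar Adamax minimizing a toy quadratic:

```python
# Scalar Adamax sketch: first-moment estimate m, infinity-norm second-moment
# estimate u, and a bias-corrected step. Hyperparameters are the common
# defaults, assumed here for illustration.

def adamax_minimize(grad_fn, x0, lr=0.002, b1=0.9, b2=0.999, steps=2000):
    x, m, u = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = b1 * m + (1 - b1) * g                   # first-moment estimate
        u = max(b2 * u, abs(g))                     # infinity-norm estimate
        x -= (lr / (1 - b1 ** t)) * m / (u + 1e-8)  # bias-corrected update
    return x

# Toy objective f(x) = (x - 3)^2, gradient 2(x - 3)
x_opt = adamax_minimize(lambda x: 2 * (x - 3), x0=0.0)
```

In the invention the same update is applied to all network parameters jointly, driven by the sum of the domain migration loss and the classification loss.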
Further, the migration learning network model in step 2 or step 3 comprises a coding layer, a feature layer and a classification layer. The feature layer comprises three convolution layers, three average-pooling layers and two fully connected layers; the final classification layer is a fully connected layer whose output shape is (batch size, number of classes, time-window length).
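The layer counts above fix the structure but not the kernel sizes. Assuming common 3x3 convolutions (stride 1, padding 1) and 2x2 average pooling on a 32x32 input — illustrative assumptions, since the patent does not state them — the spatial shape per time step evolves as sketched here:

```python
# Shape bookkeeping for the feature layer: three (conv, avg-pool) stages.
# Kernel size, padding, pooling factor and input size are assumed values.

def conv_out(n, k, stride=1, pad=0):
    """Output length of a 1-D convolution along one spatial axis."""
    return (n + 2 * pad - k) // stride + 1

def feature_layer_shapes(h=32, w=32, k=3, pool=2):
    shapes = []
    for _ in range(3):                     # conv (3x3, padded) then 2x2 avg-pool
        h, w = conv_out(h, k, pad=1), conv_out(w, k, pad=1)
        h, w = h // pool, w // pool
        shapes.append((h, w))
    return shapes
```

After the two fully connected layers and the classifier, the output for each sample is (number of classes, time-window length), giving the (batch, classes, time) shape stated above.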
Further, the migration learning network model in step 2 or step 3 receives the source-domain and target-domain data X_S and X_T simultaneously. The coding layer first converts the pictures into pulse form, giving X_S_e and X_T_e, and the feature layer then extracts features.
The source domain and target domain share the feature layer weights, yielding the respective features X_S_f and X_T_f. The network is trained by back-propagating the domain migration loss, so that it learns knowledge common to the source and target domains and the feature distributions extracted from the two domains grow closer.
Meanwhile, the features from the feature layer pass through the classification layer to obtain the final predicted classification results X_S_c and X_T_c. The network is trained by back-propagating the classification loss, so that the classification layer accurately classifies the cross-domain features extracted by the feature layer.
Further, the classification loss uses the multi-class cross-entropy loss function:

Loss = -(1/N) Σ_{x∈X} Σ_{c=1}^{M} y_c log(p_c)

wherein M is the number of classes; y_c is an indicator variable that is 1 if class c matches the sample's class and 0 otherwise; p_c is the predicted probability that the observed sample belongs to class c; X denotes all samples and N is the total number of samples.
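A direct rendering of this multi-class cross-entropy in plain Python, with y_c as a one-hot list and p_c as the predicted class probabilities, averaged over the N samples:

```python
import math

def cross_entropy(labels, probs):
    """Multi-class cross-entropy averaged over samples.

    labels: list of one-hot lists (the indicator y_c per class).
    probs:  list of predicted probability lists (p_c per class).
    """
    total = 0.0
    for y, p in zip(labels, probs):
        # only the true class contributes, since y_c is 0 elsewhere
        total -= sum(yc * math.log(pc) for yc, pc in zip(y, p) if yc)
    return total / len(labels)
```

In the invention this loss is computed on the classifier outputs and back-propagated together with the domain migration loss.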
In step 3, the source domain and the target domain are converted as follows: edge features are first extracted from the original image with the Laplacian operator and converted into pulse form; this is then combined with the pulse-form image converted directly from the original image to obtain the final encoded image.
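A sketch of the edge-extraction half of this encoding: convolve with the standard 3x3 Laplacian kernel and binarize into a pulse map. The kernel and the threshold are common choices assumed here rather than taken from the patent, and the merge with the pulse-form raw image is omitted:

```python
# 4-neighbour Laplacian kernel (an assumed, standard choice).
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def laplacian_edges(img):
    """Convolve a 2-D image (list of lists) with the Laplacian kernel.

    Border pixels are left at 0 for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(LAPLACIAN[a][b] * img[i + a - 1][j + b - 1]
                            for a in range(3) for b in range(3))
    return out

def to_pulses(edge_map, threshold=0.5):
    """Binarize edge responses into a pulse (spike) map."""
    return [[1 if abs(v) > threshold else 0 for v in row] for row in edge_map]
```

Running this on an image with a vertical step edge fires pulses exactly along the edge columns.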
Further, the domain migration loss adopts centered kernel alignment (CKA).
In summary, by adopting the above technical scheme, the invention has the following beneficial effects:
1. It provides a pulse transfer learning strategy suited to real office equipment images under differing noise and resolution conditions.
2. By combining the advantages of the spiking neural network, it removes the need for office equipment identification systems to re-label and re-train on image data collected in each new scene, saving the labor required for data annotation and improving the efficiency of image classification in practice.
Drawings
For a clearer description of the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and should not be considered limiting in scope; a person skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is an example of a noiseless picture according to the present invention;
FIG. 2 is an example of a high resolution low noise picture of the present invention;
FIG. 3 is an example of a low resolution white noise picture of the present invention;
FIG. 4 is a schematic diagram of the network architecture of the present invention;
FIG. 5 is a schematic diagram of the coding layer operation of the present invention;
fig. 6 is a flow chart of the pulse timing feature processing of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with examples.
Example 1
An office equipment identification method based on pulse transfer learning comprises the following steps:
step 1, according to the transfer learning task, taking labeled office equipment images with a clean background, high resolution and no noise as the source domain, and images with different resolutions and noise acquired in the actual application scene as the target domain;
step 2, initializing the spiking neural network parameters, including the synaptic connection weights, pulse firing threshold, delay constant, learning rate and time-window length, to obtain a migration learning network model in its initial state;
step 3, converting the source domain and the target domain into pulse-form pictures, inputting them into the initial migration learning network model of step 2 and propagating them forward; training the whole network with the Adamax optimizer on the domain migration loss and the classification loss until the model converges or the maximum number of iterations is reached, yielding a trained migration learning network model;
step 4, saving the trained migration learning network model, inputting all target-domain samples into it, and predicting the final classification result.
Example two
This embodiment is based on the first embodiment.
As shown in fig. 4, the transfer learning network model in step 2 or step 3 comprises a coding layer, a feature layer and a classification layer. The feature layer comprises three convolution layers, three average-pooling layers and two fully connected layers; the final classification layer is a fully connected layer whose output shape is (batch size, number of classes, time-window length).
Further, the migration learning network model in step 2 or step 3 receives the source-domain and target-domain data X_S and X_T simultaneously. The coding layer first converts the pictures into pulse form, giving X_S_e and X_T_e, and the feature layer then extracts features.
The source domain and target domain share the feature layer weights, yielding the respective features X_S_f and X_T_f. The network is trained by back-propagating the domain migration loss, so that it learns knowledge common to the source and target domains and the feature distributions extracted from the two domains grow closer.
Meanwhile, the features from the feature layer pass through the classification layer to obtain the final predicted classification results X_S_c and X_T_c. The network is trained by back-propagating the classification loss, so that the classification layer accurately classifies the cross-domain features extracted by the feature layer.
Further, the classification loss uses the multi-class cross-entropy loss function:

Loss = -(1/N) Σ_{x∈X} Σ_{c=1}^{M} y_c log(p_c)

wherein M is the number of classes; y_c is an indicator variable that is 1 if class c matches the sample's class and 0 otherwise; p_c is the predicted probability that the observed sample belongs to class c; X denotes all samples and N is the total number of samples.
In step 3, as shown in fig. 5 and fig. 6, the source domain and the target domain are converted as follows: edge features are first extracted from the original image with the Laplacian operator and converted into pulse form; this is then combined with the pulse-form image converted directly from the original image to obtain the final encoded image.
In step 3, as shown in fig. 6, when the domain migration loss is calculated, on one hand the features extracted by the feature layer are summed along the time dimension to obtain their frequency representation, which is then passed to the centered kernel alignment function to compute the domain migration loss; on the other hand, the extracted features continue to propagate forward into the classification layer.
Further, the Laplace operator is:

∇²f = ∂²f/∂x² + ∂²f/∂y²

and its discrete convolution kernel, the Laplace matrix, is:

[0  1  0]
[1 -4  1]
[0  1  0]
The image edges are extracted by convolving with the Laplace matrix. After the feature layer, the pulse timing features are converted into frequency-domain features by summation, and the frequency-domain features of the source and target domains are then fed into the domain migration loss for calculation.
Further, the centered kernel alignment function of the domain migration loss is:

CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) · HSIC(L, L))

wherein

HSIC(K, L) = (1/(n−1)²) tr(KHLH)

K and L are the corresponding kernel matrices, with K_ij = k(x_i, x_j) and L_ij = l(y_i, y_j); under the linear kernel, k(x, y) = l(x, y) = xᵀy. H is the centering matrix, H = I_n − (1/n)·1_n·1_nᵀ, where I_n is the identity matrix and 1_n is the all-ones vector.
The above description covers only preferred embodiments of the present invention and does not limit its scope; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (4)
1. An office equipment identification method based on pulse transfer learning, characterized by comprising the following steps:
step 1, according to the transfer learning task, taking labeled office equipment images with a clean background, high resolution and no noise as the source domain, and images with different resolutions and noise acquired in the actual application scene as the target domain;
step 2, initializing pulse neural network parameters including synaptic connection weight, pulse ignition threshold, delay constant, learning rate and time window length to obtain a migration learning network model in an initial state;
step 3, converting the source domain and the target domain into pictures in a pulse form, inputting the pictures into the migration learning network model in the initial state in the step 2, and transmitting the pictures forward; training the whole network by using an Adamax optimizer according to the domain migration loss and the classification loss until the model converges or the maximum iteration number is reached, so as to obtain a trained migration learning network model;
step 4, saving the trained transfer learning network model, inputting all samples of the target domain into the trained transfer learning network model, and predicting to obtain a final classification result;
the transfer learning network model in the step 2 or the step 3 comprises a coding layer, a characteristic layer and a classification layer; the characteristic layer comprises three convolution layers, three average pooling layers and two full-connection layers, the final classification layer is the full-connection layer, and the output shape is as follows: batch sample number, category number, time window length;
the migration learning network model in the step 2 or the step 3 receives the data X of the source domain and the target domain simultaneously s And X T Firstly, converting a picture into a pulse form through a coding layer to obtain X S_e And X T_e Then extracting the characteristics through the characteristic layer;
the source domain and the target domain of the feature layer share weights to obtain respective features X S_f And X T_f The training network is back propagated according to the domain migration loss, the purpose is that the training network learns public knowledge from the source domain and the target domain at the same time,the extracted features of the two domains are more similar in distribution;
meanwhile, the features obtained through the feature layer need to be subjected to a classification layer to obtain a final prediction classification result X S_c And X T_c The training network is back propagated through the classification loss, so that the training classification layer can accurately classify the features extracted by the feature layer and having the common knowledge of two domains.
2. The office equipment identification method based on pulse transfer learning according to claim 1, wherein the classification loss uses the multi-class cross-entropy loss function:

Loss = -(1/N) Σ_{x∈X} Σ_{c=1}^{M} y_c log(p_c)

wherein M is the number of classes; y_c is an indicator variable that is 1 if class c matches the sample's class and 0 otherwise; p_c is the predicted probability that the observed sample belongs to class c; X denotes all samples and N is the total number of samples.
3. The office equipment identification method based on pulse transfer learning according to claim 1, wherein in step 3 the source domain and the target domain are converted as follows: edge features are extracted from the original image with the Laplacian operator and converted into pulse form, and then combined with the pulse-form image converted directly from the original image to obtain the final encoded image; after the feature layer, the pulse timing features are converted into frequency-domain features by summation, and the frequency-domain features of the source and target domains are input into the domain migration loss for calculation.
4. The office equipment identification method based on pulse transfer learning according to claim 3, wherein the domain migration loss adopts centered kernel alignment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110741347.1A CN113553917B (en) | 2021-06-30 | 2021-06-30 | Office equipment identification method based on pulse transfer learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110741347.1A CN113553917B (en) | 2021-06-30 | 2021-06-30 | Office equipment identification method based on pulse transfer learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113553917A CN113553917A (en) | 2021-10-26 |
CN113553917B true CN113553917B (en) | 2023-04-28 |
Family
ID=78102638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110741347.1A Active CN113553917B (en) | 2021-06-30 | 2021-06-30 | Office equipment identification method based on pulse transfer learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113553917B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116910594B (en) * | 2023-09-14 | 2023-12-01 | 青岛明思为科技有限公司 | Rolling bearing fault diagnosis method based on impulse neural network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110220709A (en) * | 2019-06-06 | 2019-09-10 | 北京科技大学 | Fault Diagnosis of Roller Bearings based on CNN model and transfer learning |
CN110717526A (en) * | 2019-09-23 | 2020-01-21 | 华南理工大学 | Unsupervised transfer learning method based on graph convolution network |
CN112396119A (en) * | 2020-11-25 | 2021-02-23 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112926547A (en) * | 2021-04-13 | 2021-06-08 | 北京航空航天大学 | Small sample transfer learning method for classifying and identifying aircraft electric signals |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190354850A1 (en) * | 2018-05-17 | 2019-11-21 | International Business Machines Corporation | Identifying transfer models for machine learning tasks |
US20190362226A1 (en) * | 2018-05-23 | 2019-11-28 | International Business Machines Corporation | Facilitate Transfer Learning Through Image Transformation |
- 2021-06-30: CN application CN202110741347.1A, granted as patent CN113553917B (Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110220709A (en) * | 2019-06-06 | 2019-09-10 | 北京科技大学 | Fault Diagnosis of Roller Bearings based on CNN model and transfer learning |
CN110717526A (en) * | 2019-09-23 | 2020-01-21 | 华南理工大学 | Unsupervised transfer learning method based on graph convolution network |
CN112396119A (en) * | 2020-11-25 | 2021-02-23 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112926547A (en) * | 2021-04-13 | 2021-06-08 | 北京航空航天大学 | Small sample transfer learning method for classifying and identifying aircraft electric signals |
Non-Patent Citations (2)
Title |
---|
Yihan Xiao et al. Radar signal recognition based on transfer learning and feature fusion. Mobile Networks and Applications. 2019, 1563-1571.
Ji Kunhua; Liao Tianming; Chen Xin. Research on a transfer-learning-based classification method for power quality disturbances. Electric Age. 2018, (09), 57-58.
Also Published As
Publication number | Publication date |
---|---|
CN113553917A (en) | 2021-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tian et al. | Designing and training of a dual CNN for image denoising | |
CN108133188B (en) | Behavior identification method based on motion history image and convolutional neural network | |
Zhang et al. | Image denoising method based on a deep convolution neural network | |
CN109345508B (en) | Bone age evaluation method based on two-stage neural network | |
CN111126386B (en) | Sequence domain adaptation method based on countermeasure learning in scene text recognition | |
CN110232341B (en) | Semi-supervised learning image identification method based on convolution-stacking noise reduction coding network | |
CN112307958A (en) | Micro-expression identification method based on spatiotemporal appearance movement attention network | |
CN108121975B (en) | Face recognition method combining original data and generated data | |
CN114283287B (en) | Robust field adaptive image learning method based on self-training noise label correction | |
CN111858989A (en) | Image classification method of pulse convolution neural network based on attention mechanism | |
CN109743642B (en) | Video abstract generation method based on hierarchical recurrent neural network | |
CN112733965B (en) | Label-free image classification method based on small sample learning | |
CN109242097B (en) | Visual representation learning system and method for unsupervised learning | |
CN107480723B (en) | Texture Recognition based on partial binary threshold learning network | |
Bawane et al. | Object and character recognition using spiking neural network | |
CN111079514A (en) | Face recognition method based on CLBP and convolutional neural network | |
CN111738169A (en) | Handwriting formula recognition method based on end-to-end network model | |
CN114186672A (en) | Efficient high-precision training algorithm for impulse neural network | |
CN115563327A (en) | Zero sample cross-modal retrieval method based on Transformer network selective distillation | |
CN112883931A (en) | Real-time true and false motion judgment method based on long and short term memory network | |
CN115062727A (en) | Graph node classification method and system based on multi-order hypergraph convolutional network | |
CN113553917B (en) | Office equipment identification method based on pulse transfer learning | |
CN114882278A (en) | Tire pattern classification method and device based on attention mechanism and transfer learning | |
Wang et al. | Facial expression recognition based on CNN | |
CN114428234A (en) | Radar high-resolution range profile noise reduction identification method based on GAN and self-attention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||