CN113537307A - Self-supervision domain adaptation method based on meta-learning - Google Patents
- Publication number
- CN113537307A CN113537307A CN202110727430.3A CN202110727430A CN113537307A CN 113537307 A CN113537307 A CN 113537307A CN 202110727430 A CN202110727430 A CN 202110727430A CN 113537307 A CN113537307 A CN 113537307A
- Authority
- CN
- China
- Prior art keywords
- network
- domain
- meta
- target domain
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 41
- 230000006978 adaptation Effects 0.000 title claims abstract description 26
- 230000008569 process Effects 0.000 claims abstract description 13
- 238000000605 extraction Methods 0.000 claims description 36
- 230000006870 function Effects 0.000 claims description 12
- 230000003044 adaptive effect Effects 0.000 claims description 9
- 238000012360 testing method Methods 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 2
- 238000004364 calculation method Methods 0.000 claims description 2
- 230000009191 jumping Effects 0.000 claims description 2
- 238000013508 migration Methods 0.000 abstract description 4
- 230000005012 migration Effects 0.000 abstract description 4
- 238000012546 transfer Methods 0.000 abstract description 2
- 238000002372 labelling Methods 0.000 abstract 2
- 238000009826 distribution Methods 0.000 description 6
- 238000012549 training Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 238000003058 natural language processing Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a self-supervised domain adaptation method based on meta-learning. Target domain image reconstruction serves as the self-supervised task in the domain adaptation process: the supervision signal is the target domain image itself, so no additional target domain annotation is required, which saves a large amount of manual labeling cost. In addition, reconstructing the target domain images lets the network learn richer high-level semantic information from them, so the intrinsic characteristics of the target domain data help the network transfer the knowledge learned on the source domain data to the target domain, improving the performance of the domain adaptation method. By introducing a meta-learning strategy into self-supervised domain adaptation, the directions in which the target domain self-supervised task and specific tasks such as source domain classification update the network parameters are made consistent, so the network can better extract domain-invariant features; this resolves the negative transfer caused by inconsistent parameter update directions between the domain adaptation task and the specific task, and improves domain adaptation performance.
Description
Technical Field
The invention belongs to the field of computer vision and image processing, and particularly relates to an unsupervised domain adaptation method based on meta-learning and image reconstruction. The method makes the source domain image classification task and the target domain image reconstruction task consistent in the direction in which they update the parameters of the feature extraction network: the update direction driven by source domain image category information is pushed toward the update direction driven by information intrinsic to the target domain images, such as the spatial relationships among objects and the illumination. The features produced by the feature extraction network are therefore domain-invariant, which realizes domain adaptation between the source domain and the target domain.
Background
In recent years, supervised learning based on deep neural networks has been applied in many areas, including image classification, object detection, semantic segmentation and natural language processing, greatly advancing the integration of artificial intelligence technology into everyday life. However, supervised learning methods usually assume that training set samples and test set samples are drawn from the same probability distribution; moreover, to achieve good generalization and avoid overfitting, they usually require a large number of labeled training samples. With the arrival of the big data era and ever-growing data scale, problems such as statistical differences between data sets and the high labor cost of data annotation have become apparent. Unsupervised domain adaptation methods are widely studied to address these problems: they resolve the distribution difference between a source domain and a target domain by learning generalizable knowledge from labeled source domain data and applying it to tasks on unlabeled target domain data, thereby improving the performance of the network on the target domain.
At present, most unsupervised domain adaptation methods focus on aligning the feature probability distributions of the source domain and target domain data, using distance-measurement-based or adversarial-learning-based approaches, so that the inter-domain difference is reduced and the network can learn a domain-invariant feature space. However, these methods align the distributions only at the level of the source and target data as a whole and ignore the influence of the intrinsic characteristics of the target domain data on knowledge transfer during domain adaptation. Self-supervised domain adaptation methods based on image reconstruction therefore train the network jointly, pairing the labeled source domain data with self-supervised tasks defined on the unlabeled target domain data, such as image reconstruction, image rotation angle prediction and image restoration; that is, they mine the intrinsic characteristics of the target domain data to help the network transfer the knowledge learned from the source domain data to the target domain. In addition, because reconstruction-based methods preserve the integrity of the data while extracting transferable features, the information in the target domain data that benefits the specific task is not damaged, and the original distribution of the target domain data is preserved as much as possible during knowledge transfer, so the knowledge learned from the source domain data can be better applied to target domain tasks.
However, many existing unsupervised domain adaptation methods simply update the parameters of the feature extraction network separately through the self-supervised task and specific tasks such as image classification, without considering whether the two types of tasks update those parameters in consistent directions; the self-supervised task may then harm the feature learning of the specific task. Through meta-learning, the self-supervised task and the specific task such as image classification can serve as trainer and tester respectively, and the parameters of the tester network can be updated through the loss function of the trainer, so that the directions in which the two types of tasks update the network parameters tend to be consistent.
Disclosure of Invention
To address the above defects in the prior art, the invention provides a self-supervised domain adaptation method based on meta-learning.
A self-supervision domain adaptation method based on meta-learning comprises the following steps:
step 1, setting a trainer and a tester:
The reconstruction process of the target domain samples is taken as the trainer in meta-learning, and the classification process of the source domain samples is taken as the tester in meta-learning.
Step 2, performing an image reconstruction task by using the target domain sample and calculating reconstruction loss:
The unlabeled target domain samples are input into a feature extraction network to obtain target domain sample features; these features are then input into an image reconstruction network to reconstruct the images, and the reconstruction loss is computed.
And 3, updating parameters of the feature extraction network:
Owing to weight sharing, the parameters of the feature extraction network in the tester are updated together with the parameters of the feature extraction network in the trainer; that is, the direction in which the network parameters are updated in the tester tends toward the update direction in the trainer.
Step 4, performing a classification task by using the source domain samples and calculating a classification loss:
and inputting the source domain data with the labels into the feature extraction network after the parameters are updated to obtain the source domain data features, and then inputting the source domain data features into a classification network to perform an image classification task and calculate the classification loss.
Step 5, calculating a total loss function and updating parameters of all networks:
The total loss function is computed, and the parameters of the feature extraction network, the reconstruction network and the classification network in the trainer and the tester are updated.
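The five steps above can be sketched as a single meta-learning iteration. The following Python/NumPy toy uses linear maps as stand-ins for the feature extraction network G, the reconstruction network D, and the classification network C; all shapes, learning rates, and the first-order treatment of the inner step are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the three networks (sizes are illustrative).
d_in, d_feat, n_cls = 8, 4, 3
G = rng.normal(scale=0.1, size=(d_in, d_feat))   # feature extraction network G
D = rng.normal(scale=0.1, size=(d_feat, d_in))   # image reconstruction network D
C = rng.normal(scale=0.1, size=(d_feat, n_cls))  # classification network C

x_t = rng.normal(size=(16, d_in))                # unlabeled target-domain batch
x_s = rng.normal(size=(16, d_in))                # labeled source-domain batch
y_s = rng.integers(0, n_cls, size=16)

alpha, beta, lam = 0.05, 0.05, 1.0

def recon_loss_grad(G, D, x):
    """Mean-squared reconstruction loss and its analytic gradients."""
    f = x @ G                        # f_t = G(x_t)
    err = f @ D - x                  # D(f_t) - x_t
    n = err.size
    loss = (err ** 2).mean()
    gG = 2.0 * x.T @ (err @ D.T) / n
    gD = 2.0 * f.T @ err / n
    return loss, gG, gD

def class_loss_grad(G, C, x, y):
    """Softmax cross-entropy loss and its analytic gradients."""
    f = x @ G
    z = f @ C
    z = z - z.max(axis=1, keepdims=True)         # numerical stability
    p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
    n = len(y)
    loss = -np.log(p[np.arange(n), y]).mean()
    dz = p.copy(); dz[np.arange(n), y] -= 1.0; dz /= n
    return loss, x.T @ (dz @ C.T), f.T @ dz

# Steps 2-3 (meta-train): reconstruct target images, inner step on G.
Lr, gG_r, gD = recon_loss_grad(G, D, x_t)
G_prime = G - alpha * gG_r

# Step 4 (meta-test): classify source samples using the updated G'.
Lc, gG_c, gC = class_loss_grad(G_prime, C, x_s, y_s)

# Step 5: total loss L = L_c + lam * L_r; update all three networks.
# (A faithful implementation would backpropagate L_c through the inner
# step; this first-order shortcut ignores that second-order term.)
total = Lc + lam * Lr
G = G - beta * (gG_c + lam * gG_r)
D = D - beta * lam * gD
C = C - beta * gC

print(float(total))
```

Running this once performs one trainer/tester round; in practice the iteration would repeat over mini-batches until convergence.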
The invention has the following beneficial effects:
(1) Target domain image reconstruction serves as the self-supervised task in the domain adaptation process: the supervision signal is the target domain image itself, so no additional target domain annotation is required, which saves a large amount of manual labeling cost. In addition, reconstructing the target domain images lets the network learn richer high-level semantic information from them, so the intrinsic characteristics of the target domain data help the network transfer the knowledge learned on the source domain data to the target domain, improving the performance of the domain adaptation method.
(2) By introducing a meta-learning strategy into self-supervised domain adaptation, the directions in which the target domain self-supervised task and specific tasks such as source domain classification update the network parameters are made consistent, so the network can better extract domain-invariant features; this resolves the negative transfer caused by inconsistent parameter update directions between the domain adaptation task and the specific task, and improves domain adaptation performance.
Drawings
Fig. 1 is a flowchart of the meta-learning-based self-supervised domain adaptation method of the present invention.
Fig. 2 is a network structure diagram of the meta-learning-based self-supervised domain adaptation method of the present invention.
Fig. 3 is a schematic diagram of basic units of a ResNet network.
Fig. 4 is a schematic view of a fully connected layer structure.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in Fig. 1, the meta-learning-based self-supervised domain adaptation method includes the following steps:
Step 1, setting a trainer and a tester: as shown in Fig. 2, the reconstruction process of the target domain samples x_t is taken as the meta-trainer (Meta-train) in meta-learning, and the classification process of the source domain samples x_s is taken as the meta-tester (Meta-test). The labeled source domain is denoted S = {X_s, Y_s}, where x_s ∈ X_s and y_s ∈ Y_s denote the source domain samples and their corresponding labels, and the unlabeled target domain is denoted T = {X_t}, where x_t ∈ X_t denotes a target domain sample.
Step 2, performing an image reconstruction task by using the target domain sample and calculating reconstruction loss:
The unlabeled target domain sample x_t is input into the feature extraction network G to obtain the target domain sample feature f_t; f_t is then input into the image reconstruction network D, which reconstructs the image to obtain the target domain reconstruction x̂_t, and the reconstruction loss L_r is computed. The feature extraction network G adopts a ResNet-50 structure. The basic unit of ResNet is shown in Fig. 3: the output of the previous layer is added, through a skip connection, to the output computed by the current layer, and the sum is fed into the activation function as the output of the current layer. The feature extraction process of ResNet-50 yields f_t = G(x_t). The image reconstruction network D adopts a decoder structure and restores the target domain sample feature f_t to the original image size through a series of upsampling operations, i.e. x̂_t = D(f_t). The reconstruction loss is:

L_r = (1/N_t) · Σ_{j=1}^{N_t} ||x̂_t^j − x_t^j||²

where N_t is the number of target domain samples and j indexes the j-th target domain sample.
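A minimal sketch of this reconstruction loss, assuming the mean squared error form L_r = (1/N_t) · Σ_j ||x̂_t^j − x_t^j||² over the N_t target-domain samples (flattened images as rows):

```python
import numpy as np

# Mean squared reconstruction error over a batch of flattened images;
# the MSE form is an assumption reconstructed from context.
def reconstruction_loss(x_t, x_hat_t):
    return np.mean(np.sum((x_hat_t - x_t) ** 2, axis=1))

x_t = np.array([[1.0, 2.0], [3.0, 4.0]])       # two "images" of 2 pixels
x_hat_t = np.array([[1.0, 2.0], [3.0, 2.0]])   # second one reconstructed badly
print(reconstruction_loss(x_t, x_hat_t))       # squared errors 0 and 4 -> 2.0
```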
And 3, updating parameters of the feature extraction network:
The reconstruction loss L_r in the trainer is used to update the parameters of the feature extraction network G, i.e.:

θ_G' = θ_G − α · ∇_{θ_G} L_r(θ_G, θ_D)

where θ_G denotes the current parameters of the feature extraction network, θ_G' the updated parameters, α the learning rate, θ_D the parameters of the decoder D, and ∇_{θ_G} the gradient with respect to θ_G; stochastic gradient descent is used for the gradient step. Owing to weight sharing, the parameters θ_G of the feature extraction network G in the tester are updated together with those of the feature extraction network G in the trainer; that is, the direction in which the specific task in the tester, such as image classification, updates the network parameters is forced toward the direction in which the self-supervised image reconstruction task in the trainer updates them.
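A quick numerical sanity check of the inner update θ_G' = θ_G − α · ∇_{θ_G} L_r: for a linear feature extractor and decoder (shapes and step size are illustrative, not the patent's ResNet-50), one gradient step on the reconstruction loss lowers L_r:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 2))      # toy feature extractor parameters
D = rng.normal(size=(2, 4))      # toy decoder parameters
x_t = rng.normal(size=(8, 4))    # toy target-domain batch

def L_r(G):
    # Mean squared reconstruction error for the linear model x -> x G D.
    err = (x_t @ G) @ D - x_t
    return (err ** 2).mean()

# Analytic gradient of L_r with respect to G for the linear model.
grad_G = 2.0 * x_t.T @ (((x_t @ G) @ D - x_t) @ D.T) / x_t.size
alpha = 1e-3
G_prime = G - alpha * grad_G
assert L_r(G_prime) < L_r(G)     # the inner step reduced the reconstruction loss
print(L_r(G) - L_r(G_prime) > 0)
```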
Step 4, performing a classification task with the source domain samples and computing the classification loss: the labeled source domain data x_s are input into the parameter-updated feature extraction network G to obtain the source domain data features f_s, which are then input into the classification network C to perform the image classification task and compute the classification loss. The classification network C consists of several fully connected layers followed by a softmax layer; the fully connected layer structure is shown in Fig. 4, where each node is connected to all nodes of the previous layer in order to integrate the extracted features, and the softmax layer outputs the predicted image label ŷ_s, i.e. ŷ_s = C(f_s). The classification loss is:

L_c = −(1/N_s) · Σ_{k=1}^{N_s} y_s^k · log ŷ_s^k

where N_s is the number of source domain samples and k indexes the k-th source domain sample.
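The classifier head described above, a fully connected layer followed by softmax, can be sketched with a cross-entropy loss over the N_s labeled source samples; the layer sizes and the cross-entropy form are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classification_loss(f_s, W, y_s):
    # f_s: (N_s, d) source features; W: (d, n_classes) fully connected
    # layer weights; y_s: integer class labels.
    p = softmax(f_s @ W)
    return -np.mean(np.log(p[np.arange(len(y_s)), y_s]))

f_s = np.array([[1.0, 0.0], [0.0, 1.0]])
W = 2.0 * np.eye(2)                        # logit is 2 for the true class
y_s = np.array([0, 1])
print(round(classification_loss(f_s, W, y_s), 4))   # -log(sigmoid(2)) ≈ 0.1269
```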
Step 5, calculating a total loss function and updating parameters of all networks:
The total loss function L is computed, and the parameters of the feature extraction network G, the reconstruction network D and the classification network C in the trainer and the tester are updated, i.e.:

{θ_G, θ_D, θ_C}^{t+1} = {θ_G, θ_D, θ_C}^t − β · ∇L

where β is the learning rate and {θ_G, θ_D, θ_C}^t are the network parameters at the current step. The total loss function is:

L = L_c(θ_G', θ_C) + λ · L_r(θ_G, θ_D)

where θ_C denotes the parameters of the classification network; λ is a hyperparameter controlling the influence of the image reconstruction task and the image classification task on the network parameter update; L_r(θ_G, θ_D) is the image reconstruction loss computed with network parameters θ_G and θ_D; and L_c(θ_G', θ_C) is the image classification loss computed with network parameters θ_G' and θ_C.
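The outer update can be illustrated with a small helper that combines the classification and reconstruction losses as L = L_c + λ·L_r and steps every parameter group as θ ← θ − β·∇L; the dict-of-arrays parameter representation is purely for illustration:

```python
import numpy as np

def total_loss(L_c, L_r, lam=0.5):
    # Weighted sum of classification and reconstruction losses.
    return L_c + lam * L_r

def outer_update(params, grads, beta=0.1):
    # params, grads: dicts keyed by network name ("G", "D", "C").
    return {k: params[k] - beta * grads[k] for k in params}

params = {"G": np.ones(3), "D": np.ones(3), "C": np.ones(3)}
grads = {k: np.full(3, 2.0) for k in params}
new = outer_update(params, grads, beta=0.1)
L = total_loss(0.3, 0.4, lam=0.5)   # 0.3 + 0.5 * 0.4 = 0.5
print(L)
print(new["G"])                     # each entry 1.0 - 0.1 * 2.0 = 0.8
```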
Claims (6)
1. A self-supervision domain adaptation method based on meta-learning is characterized by comprising the following steps:
step 1, setting a trainer and a tester:
taking the reconstruction process of the target domain sample as a trainer in meta-learning, and taking the classification process of the source domain sample as a tester in the meta-learning;
step 2, performing an image reconstruction task by using the target domain sample and calculating reconstruction loss:
inputting the label-free target domain sample into a feature extraction network to obtain target domain sample features, inputting the target domain sample features into an image reconstruction network to reconstruct images and calculating reconstruction loss;
and 3, updating parameters of the feature extraction network:
the parameters of the characteristic extraction network in the tester and the parameters of the characteristic extraction network in the trainer are updated together due to weight sharing, namely the parameter updating direction of the network in the tester tends to the parameter updating direction of the network in the trainer;
step 4, performing a classification task by using the source domain samples and calculating a classification loss:
inputting the labeled source domain data into a feature extraction network after the parameters are updated to obtain source domain data features, and then inputting the source domain data features into a classification network to perform an image classification task and calculate classification loss;
step 5, calculating a total loss function and updating parameters of all networks:
and calculating a total loss function, and updating parameters of the feature extraction network, the reconstruction network and the classification network in the trainer and the tester.
2. The meta-learning-based self-supervised domain adaptation method according to claim 1, wherein the specific method of step 1 is as follows:
The reconstruction process of the target domain samples x_t is taken as the meta-trainer (Meta-train) in meta-learning, and the classification process of the source domain samples x_s is taken as the meta-tester (Meta-test); the labeled source domain is denoted S = {X_s, Y_s}, where x_s ∈ X_s and y_s ∈ Y_s denote the source domain samples and their corresponding labels, and the unlabeled target domain is denoted T = {X_t}, where x_t ∈ X_t denotes a target domain sample.
3. The meta-learning-based self-supervised domain adaptation method according to claim 2, wherein the specific method of step 2 is as follows:
The unlabeled target domain sample x_t is input into the feature extraction network G to obtain the target domain sample feature f_t; f_t is then input into the image reconstruction network D, which reconstructs the image to obtain the target domain reconstruction x̂_t, and the reconstruction loss L_r is computed; the feature extraction network G adopts a ResNet-50 structure, whose basic unit adds the output of the previous layer, through a skip connection, to the output computed by the current layer and feeds the sum into the activation function as the output of the current layer; the feature extraction process of ResNet-50 yields f_t = G(x_t); the image reconstruction network D adopts a decoder structure and restores the target domain sample feature f_t to the original image size through a series of upsampling operations, i.e. x̂_t = D(f_t); the reconstruction loss is:

L_r = (1/N_t) · Σ_{j=1}^{N_t} ||x̂_t^j − x_t^j||²

where N_t is the number of target domain samples and j indexes the j-th target domain sample.
4. The meta-learning-based self-supervised domain adaptation method according to claim 3, wherein the specific method of step 3 is as follows:
The reconstruction loss L_r in the trainer is used to update the parameters of the feature extraction network G, i.e.:

θ_G' = θ_G − α · ∇_{θ_G} L_r(θ_G, θ_D)

where θ_G denotes the current parameters of the feature extraction network, θ_G' the updated parameters, α the learning rate, θ_D the parameters of the decoder D, and ∇_{θ_G} the gradient with respect to θ_G; stochastic gradient descent is used for the gradient step; owing to weight sharing, the parameters θ_G of the feature extraction network G in the tester are updated together with those of the feature extraction network G in the trainer, that is, the direction in which the specific task in the tester, such as image classification, updates the network parameters is forced toward the direction in which the self-supervised image reconstruction task in the trainer updates them.
5. The meta-learning-based self-supervised domain adaptation method according to claim 4, wherein the specific method of step 4 is as follows:
The labeled source domain data x_s are input into the parameter-updated feature extraction network G to obtain the source domain data features f_s, which are then input into the classification network C to perform the image classification task and compute the classification loss; the classification network C consists of several fully connected layers followed by a softmax layer, where each node of a fully connected layer is connected to all nodes of the previous layer in order to integrate the extracted features, and the softmax layer outputs the predicted image label ŷ_s, i.e. ŷ_s = C(f_s); the classification loss is:

L_c = −(1/N_s) · Σ_{k=1}^{N_s} y_s^k · log ŷ_s^k

where N_s is the number of source domain samples and k indexes the k-th source domain sample.
6. The meta-learning-based self-supervised domain adaptation method according to claim 5, wherein the specific method of step 5 is as follows:
The total loss function L is computed, and the parameters of the feature extraction network G, the reconstruction network D and the classification network C in the trainer and the tester are updated, i.e.:

{θ_G, θ_D, θ_C}^{t+1} = {θ_G, θ_D, θ_C}^t − β · ∇L

where β is the learning rate and {θ_G, θ_D, θ_C}^t are the network parameters at the current step; the total loss function is:

L = L_c(θ_G', θ_C) + λ · L_r(θ_G, θ_D)

where θ_C denotes the parameters of the classification network; λ is a hyperparameter controlling the influence of the image reconstruction task and the image classification task on the network parameter update; L_r(θ_G, θ_D) is the image reconstruction loss computed with network parameters θ_G and θ_D; and L_c(θ_G', θ_C) is the image classification loss computed with network parameters θ_G' and θ_C.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110727430.3A CN113537307B (en) | 2021-06-29 | 2021-06-29 | Self-supervision domain adaptation method based on meta learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113537307A true CN113537307A (en) | 2021-10-22 |
CN113537307B CN113537307B (en) | 2024-04-05 |
Family
ID=78097130
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110727430.3A Active CN113537307B (en) | 2021-06-29 | 2021-06-29 | Self-supervision domain adaptation method based on meta learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113537307B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113971746A (en) * | 2021-12-24 | 2022-01-25 | 季华实验室 | Garbage classification method and device based on single hand teaching and intelligent sorting system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111275092A (en) * | 2020-01-17 | 2020-06-12 | 电子科技大学 | Image classification method based on unsupervised domain adaptation |
CN112784790A (en) * | 2021-01-29 | 2021-05-11 | 厦门大学 | Generalization false face detection method based on meta-learning |
US20210182618A1 (en) * | 2018-10-29 | 2021-06-17 | Hrl Laboratories, Llc | Process to learn new image classes without labels |
Also Published As
Publication number | Publication date |
---|---|
CN113537307B (en) | 2024-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111814854B (en) | Target re-identification method without supervision domain adaptation | |
CN112232416B (en) | Semi-supervised learning method based on pseudo label weighting | |
CN112819065B (en) | Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information | |
CN111612051B (en) | Weak supervision target detection method based on graph convolution neural network | |
CN114912612A (en) | Bird identification method and device, computer equipment and storage medium | |
CN113128620B (en) | Semi-supervised domain self-adaptive picture classification method based on hierarchical relationship | |
Li et al. | A review of deep learning methods for pixel-level crack detection | |
CN106156805A (en) | A kind of classifier training method of sample label missing data | |
CN109829414B (en) | Pedestrian re-identification method based on label uncertainty and human body component model | |
CN114037653B (en) | Industrial machine vision defect detection method and system based on two-stage knowledge distillation | |
CN111239137B (en) | Grain quality detection method based on transfer learning and adaptive deep convolution neural network | |
CN112052818A (en) | Unsupervised domain adaptive pedestrian detection method, unsupervised domain adaptive pedestrian detection system and storage medium | |
CN112749675A (en) | Potato disease identification method based on convolutional neural network | |
CN114863091A (en) | Target detection training method based on pseudo label | |
CN117152503A (en) | Remote sensing image cross-domain small sample classification method based on false tag uncertainty perception | |
CN116310647A (en) | Labor insurance object target detection method and system based on incremental learning | |
CN115797701A (en) | Target classification method and device, electronic equipment and storage medium | |
CN116258990A (en) | Cross-modal affinity-based small sample reference video target segmentation method | |
CN112668633B (en) | Adaptive graph migration learning method based on fine granularity field | |
CN114818931A (en) | Fruit image classification method based on small sample element learning | |
CN113537307A (en) | Self-supervision domain adaptation method based on meta-learning | |
Lonij et al. | Open-world visual recognition using knowledge graphs | |
CN117574262A (en) | Underwater sound signal classification method, system and medium for small sample problem | |
CN117058394A (en) | Zero sample semantic segmentation method | |
CN113537325B (en) | Deep learning method for image classification based on extracted high-low layer feature logic |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |