CN113505701A - Variational self-encoder zero sample image identification method combined with knowledge graph - Google Patents

Variational autoencoder zero-shot image recognition method combined with a knowledge graph

Info

Publication number
CN113505701A
Authority
CN
China
Prior art keywords
semantic
category
training
encoder
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110786235.8A
Other languages
Chinese (zh)
Inventor
张海涛
苏琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Technical University
Original Assignee
Liaoning Technical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Technical University
Priority to CN202110786235.8A
Publication of CN113505701A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 - Ontology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Animal Behavior & Ethology (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a variational autoencoder zero-shot image recognition method combined with a knowledge graph, which comprises the following steps: training a visual feature learning network, in which a training image is input into a CNN convolutional neural network and the extracted image feature is encoded by a VAE into a low-dimensional feature vector Z_i that is input into a latent feature space; training a semantic feature learning network; training a decoder specific to each modality; and using the learned networks to fuse unknown visual and semantic knowledge to infer the category of a sample. The method uses a generative-model variational autoencoder to generate latent features of the corresponding categories, which reduces the data imbalance between seen and unseen categories and improves accuracy to a certain extent. By constructing a combined category hierarchy and using a hierarchically structured knowledge graph of category text descriptions and word vectors as the semantic information base, the invention incorporates the rich semantic knowledge of the knowledge graph into the generative model, thereby maximally improving classification accuracy.

Description

Variational autoencoder zero-shot image recognition method combined with a knowledge graph
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a variational autoencoder zero-shot image recognition method combined with a knowledge graph.
Background
With the wide application of deep learning in artificial intelligence in recent years, image classification accuracy has reached new heights. However, every category in a traditional classification task requires a large amount of training data that must be labeled manually one by one, which is time-consuming, labor-intensive and expensive, and for some rare objects data are simply hard to obtain. Zero-shot image recognition has therefore become one of the research hotspots in machine vision. The main idea of zero-shot learning (ZSL) is to mimic the way humans learn and reason about new things: a person can identify an animal never seen before from a few semantic descriptions alone. Such analogy-style learning can be summarized as using semantic descriptions built on common sense or prior knowledge to establish a relationship between known and unknown classes.
Current zero-shot learning work can be roughly divided into two directions. The first is based on embedding models: a compatible cross-modal mapping function is learned that embeds the features of the two modalities into a common space, and a nearest-neighbour search then predicts the unknown class labels. The second direction is zero-shot learning based on generative models, such as the recently developed Generative Adversarial Networks (GAN) and Variational Autoencoders (VAE), which generate samples (features) for unknown classes to balance the data ratio between known and unknown classes.
Zero-shot classification methods based on a mapping space fall into three broad categories: (1) embedding visual features into the semantic space, as in Frome et al. and Akata et al., who learn a mapping function from the visual space to the semantic space together with a similarity measure that compares the embedded visual and semantic features for classification; (2) embedding semantic features into the visual space, as in Kodirov et al., who use a semantic autoencoder for zero-shot classification; mapping from the semantic space to the visual space can alleviate the hubness problem; (3) jointly embedding visual and semantic features into a latent space, as in Romera-Paredes et al., who map the features of both modalities into a space in which a nearest-neighbour search predicts the category label, and Changpinyo et al., who classify by aligning the class embedding space with a weighted bipartite graph of composite classifiers.
Mapping-based zero-shot methods mostly have to match features across modalities, but the large semantic gap between the features of different modalities raises a semantic-gap problem. Moreover, the known and unknown classes may be entirely different classes, so an embedding model learned only from known classes is biased when used to predict unknown classes, for which no samples are available.
The F-CLSWGAN proposed by Xian et al. adds a classification regularizer to WGAN so that more discriminative visual features are generated, ensuring classification accuracy. The ABPZSL proposed by Zhu et al. improves GAN by optimizing the generator with a back-propagation function, raising classification accuracy. Because GAN training is unstable, however, the VAE has become the better choice: the CVAE model proposed by Mishra et al. learns to generate latent features with a VAE and then performs zero-shot classification, and the CADA-VAE model proposed by Schonfeld et al. maps the generated low-dimensional visual and semantic features into a latent space and classifies according to the latent features.
Most of these generative methods, however, rely on semantic auxiliary information such as attribute annotations or word-vector text descriptions. Such single-source information characterizes a category only weakly, and when the auxiliary information differs little between categories the generated features become ambiguous. For example, when samples are generated for "zebra" using the attribute "stripe", a tiger also labeled "stripe" may obtain synthesized samples similar to the zebra's (the domain shift problem), which can strongly affect the classification result.
Disclosure of Invention
Given the shortcomings of the prior art, the technical problem to be solved by the invention is to provide a variational autoencoder zero-shot image recognition method combined with a knowledge graph. A generative-model variational autoencoder generates latent features of the corresponding categories, which reduces the semantic gap and converts ZSL into a traditional classification task, thereby reducing the data imbalance between seen and unseen categories and improving accuracy to a certain extent.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention discloses a variational self-encoder zero sample image identification method combined with a knowledge graph, which comprises the following steps of:
Step S1: train the visual feature learning network: the training image is input into a CNN convolutional neural network, and the extracted image feature is encoded by a VAE into a low-dimensional feature vector Z_i that is input into the latent feature space;
Step S2: train the semantic feature learning network: the category semantic vectors are fed into a knowledge-graph-based deep neural network module, the nodes in the graph are aggregated and updated by a graph variational autoencoder, and the encoding produces new low-dimensional semantic vectors Z_j that are input into the latent feature space;
Step S3: train the modality-specific decoders: for a given category, the latent vector generated by one modality is decoded by the decoder of the other modality to reconstruct the original data, i.e., each modality's decoder is trained on the latent feature vectors extracted from the other modality;
Step S4: use the learned networks to fuse unknown visual and semantic knowledge and infer the category of the sample.
Preferably, in step S2, during KG embedding the model aggregates and updates each node in the KG through the learning function of the graph variational autoencoder to obtain the semantic vector encoding, and generates a set of low-dimensional semantic vectors S = {S_1, S_2, ..., S_n} that aggregate the related node information, used as the category semantic embedding.
Optionally, in step S3, two variational autoencoders, a VAE and a VGAE, are used to learn the vector representations of the two modalities respectively, and a variational alignment loss and a variational cross loss are introduced to constrain the model.
Thus, the variational autoencoder zero-shot image recognition method combined with a knowledge graph generates the latent features of the corresponding categories with a generative-model variational autoencoder, which reduces the semantic gap, converts ZSL into a traditional classification task, reduces the data imbalance between seen and unseen categories, and improves accuracy to a certain extent. To address the domain shift caused by the weak category-characterization capability of single-source information, the method constructs a combined category hierarchy, uses a hierarchically structured knowledge graph of category text descriptions and word vectors as the semantic information base, and incorporates the rich semantic knowledge of the knowledge graph into the generative model, maximally improving classification accuracy.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the present invention may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more clearly understood, the following detailed description is given in conjunction with the preferred embodiments, together with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments will be briefly described below.
FIG. 1 is a flow chart of the variational autoencoder zero-shot image recognition method combined with a knowledge graph according to the present invention.
Detailed Description
Other aspects, features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which form a part of this specification, and which illustrate, by way of example, the principles of the invention. In the referenced drawings, the same or similar components in different drawings are denoted by the same reference numerals.
The overall process of the invention is shown in FIG. 1. The method comprises a model training stage and a classification/recognition stage. The training stage has three parts. (1) Training of the visual feature learning network: a training image I_i is input into a CNN convolutional neural network, and the extracted image feature X_i is encoded by the VAE into a low-dimensional feature vector Z_i that is input into the latent feature space. (2) Training of the semantic feature learning network: the category semantic vectors (e.g. word-embedding vectors) are fed into a knowledge-graph-based deep neural network module, the nodes in the graph are aggregated and updated by a graph variational autoencoder, and the encoding produces new low-dimensional semantic vectors Z_j that are input into the latent feature space. (3) Training of the modality-specific decoders: for a given category, the latent vector generated by one modality is decoded by the decoder of the other modality to reconstruct the original data, i.e., each modality's decoder is trained on the latent feature vectors extracted from the other modality. On this basis a softmax classifier is trained. In the classification/recognition stage, the learned networks fuse unknown visual and semantic knowledge to infer the category of the sample.
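A minimal sketch of the two encoder/decoder branches described above is given below, using PyTorch. The module names, layer sizes and feature dimensions (2048-d CNN features, 300-d semantic vectors, a 64-d shared latent space) are assumptions for illustration, not values taken from the patent; the graph-structured semantic encoder is sketched separately after the knowledge-graph description below.

```python
# Illustrative sketch only (assumed PyTorch modules and dimensions, not the
# patented implementation): a Gaussian encoder and a decoder per modality,
# plus the reparameterization trick used to draw latent vectors Z_i, Z_j.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Maps an input vector to the mean / log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, z_dim)
        self.logvar = nn.Linear(hid_dim, z_dim)

    def forward(self, x):
        h = F.relu(self.fc(x))
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Maps a latent vector back to the original feature space."""
    def __init__(self, z_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, out_dim))

    def forward(self, z):
        return self.net(z)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, the standard VAE reparameterization trick
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# visual branch: 2048-d CNN features -> 64-d latent vectors Z_i
vis_enc, vis_dec = GaussianEncoder(2048, 512, 64), Decoder(64, 512, 2048)
# semantic branch: 300-d aggregated KG semantic vectors -> 64-d latent vectors Z_j
sem_enc, sem_dec = GaussianEncoder(300, 256, 64), Decoder(64, 256, 300)

x = torch.randn(32, 2048)          # a batch of CNN image features X_i
s = torch.randn(32, 300)           # the matching classes' semantic vectors
mu1, lv1 = vis_enc(x); z1 = reparameterize(mu1, lv1)
mu2, lv2 = sem_enc(s); z2 = reparameterize(mu2, lv2)
print(z1.shape, z2.shape)          # both lie in the shared latent feature space
```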
A graph consists of nodes (vertices) and edges and is denoted G = (V, E). A knowledge graph (KG) is essentially a knowledge base in the form of a semantic network; it can be interpreted as a multi-relational graph containing various types of nodes and edges, where the nodes represent semantic symbols and the edges represent the relationships between them.
The KG is built on WordNet: the relatedness between categories gives the edges and the word embeddings of the category labels give the nodes. The nodes include both the known categories of the training data and the unknown categories of the test data, and each node represents a semantic category, i.e. V = {V_1, V_2, ..., V_n}. If two nodes are related in WordNet, the corresponding nodes are connected to form the edges of the KG, i.e. the hierarchical "parent-child class" relationships between categories are modeled, E = {E_1, E_2, ..., E_n}, and the relatedness between categories is represented by an n × n adjacency matrix A.
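The sketch below illustrates how such a class graph and its adjacency matrix A might be built from WordNet with NLTK. The class names, the use of path similarity, and the 0.2 threshold are illustrative assumptions; the patent only specifies that nodes related in WordNet are connected by edges.

```python
# Illustrative only: build nodes from class labels and an n x n adjacency matrix A
# from WordNet relatedness. Requires the NLTK WordNet corpus
# (nltk.download('wordnet')); classes and threshold are example assumptions.
import numpy as np
from nltk.corpus import wordnet as wn

classes = ["zebra", "tiger", "horse", "dolphin"]      # seen + unseen class labels
synsets = {c: wn.synsets(c, pos=wn.NOUN)[0] for c in classes}

n = len(classes)
A = np.zeros((n, n))
for i, ci in enumerate(classes):
    for j, cj in enumerate(classes):
        if i == j:
            continue
        sim = synsets[ci].path_similarity(synsets[cj])
        # connect nodes that are close in the WordNet "parent-child" hierarchy
        if sim is not None and sim > 0.2:
            A[i, j] = 1.0
print(A)   # relatedness between categories as an n x n adjacency matrix
```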
During KG embedding, the model aggregates and updates each node in the KG through the learning function of a graph variational autoencoder (VGAE) to obtain the semantic vector encoding, and generates a set of low-dimensional semantic vectors S = {S_1, S_2, ..., S_n} that aggregate the related node information, used as the category semantic embedding.
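A minimal sketch of this graph variational encoding step follows: one graph-convolution-style aggregation over the normalized adjacency matrix, then Gaussian heads that produce a mean and log-variance per node, from which the low-dimensional semantic vectors S_1, ..., S_n are drawn. The layer sizes and the single-layer aggregation are assumptions; the patent does not fix the VGAE architecture.

```python
# Illustrative VGAE-style encoder (assumed architecture): aggregate each node's
# neighbours via the symmetrically normalized adjacency, then output per-node
# mu / log-variance and sample the semantic vectors S = {S_1, ..., S_n}.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphVariationalEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w_mu = nn.Linear(hid_dim, z_dim, bias=False)
        self.w_logvar = nn.Linear(hid_dim, z_dim, bias=False)

    def forward(self, node_feat, adj):
        a_hat = adj + torch.eye(adj.size(0))                 # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # D^-1/2 (A+I) D^-1/2
        h = F.relu(a_norm @ self.w1(node_feat))              # aggregate + update
        return a_norm @ self.w_mu(h), a_norm @ self.w_logvar(h)

# n class nodes (seen + unseen) with 300-d word embeddings as node features
n = 50
emb = torch.randn(n, 300)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()                          # toy symmetric adjacency
enc = GraphVariationalEncoder(300, 128, 64)
mu, logvar = enc(emb, adj)
S = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)      # semantic vectors S_i
print(S.shape)                                               # (n, 64)
```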
The KG-VAE model learns the vector representations of the two modalities (visual features and semantic features) with two variational autoencoders, a VAE and a VGAE, and, to improve the robustness of the model, introduces a variational alignment loss L_VD and a variational cross loss L_VC to constrain it.
Variational alignment loss L_VD: to alleviate the loss of decisive information caused by the semantic gap and the dimensionality differences between the features of different modalities, the distance between the distributions of the two modalities is minimized on the mean vectors μ_1, μ_2 and standard-deviation vectors σ_1, σ_2 produced during encoding, using the 2-Wasserstein distance introduced in WGAN:

W_12 = ( ||μ_1 - μ_2||_2^2 + ||σ_1 - σ_2||_F^2 )^(1/2)

The variational alignment loss is then:

L_VD = W_12
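A sketch of this term, assuming diagonal-Gaussian encoder outputs (mean and log-variance) as in the earlier encoder sketch:

```python
# Closed-form 2-Wasserstein distance between the two modalities' diagonal
# Gaussians, used as the variational alignment loss L_VD (illustrative sketch).
import torch

def wasserstein2(mu1, logvar1, mu2, logvar2):
    sigma1 = torch.exp(0.5 * logvar1)
    sigma2 = torch.exp(0.5 * logvar2)
    dist = torch.sqrt(((mu1 - mu2) ** 2).sum(dim=1)
                      + ((sigma1 - sigma2) ** 2).sum(dim=1))
    return dist.mean()

# usage with the encoder outputs from the earlier sketch:
# loss_vd = wasserstein2(mu1, lv1, mu2, lv2)
```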
Variational cross loss L_VC: to reduce the loss of feature information during generation and reconstruction, to alleviate the domain shift of information, and to strengthen the encoder's ability to fuse features of different modalities, the original data are reconstructed by decoding the latent features of the same category from the other modality, i.e., each decoder is trained on the latent feature vectors obtained from the other modality.

The variational cross loss is then:

L_VC = Σ_{m ∈ {x, s}} Σ_{k ∈ {1, 2}} | m - m′(z_k) |

which expands to:

L_VC = |x - x′(z_1)| + |x - x′(z_2)| + |s - s′(z_1)| + |s - s′(z_2)|

where x and s are the visual and semantic inputs, z_1 and z_2 are the latent vectors produced by the visual and semantic encoders, and x′(·), s′(·) are the outputs of the visual and semantic decoders.
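A sketch of the cross-reconstruction term, reusing the decoders and latent vectors from the earlier encoder sketch (the L1 form follows the expansion above):

```python
# Variational cross loss L_VC (illustrative): each modality's data are
# reconstructed from both latent vectors, including the other modality's.
import torch

def cross_reconstruction(vis_dec, sem_dec, x, s, z1, z2):
    return (torch.abs(x - vis_dec(z1)).mean()
            + torch.abs(x - vis_dec(z2)).mean()   # image features from the semantic latent
            + torch.abs(s - sem_dec(z1)).mean()   # semantic vector from the visual latent
            + torch.abs(s - sem_dec(z2)).mean())

# usage: loss_vc = cross_reconstruction(vis_dec, sem_dec, x, s, z1, z2)
```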
The overall loss function of the model is therefore:

L = L_VAE + L_VGAE + ζ·L_VD + γ·L_VC

where ζ and γ are the weights of the variational alignment loss and the variational cross loss, respectively.
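Putting the terms together, a sketch of the overall objective, with a standard L1 reconstruction plus KL regularizer standing in for the VAE/VGAE terms (the exact per-modality loss form and the values of ζ and γ are not fixed by the patent and are assumptions here):

```python
# Illustrative combination of the loss terms, using the helpers sketched above.
import torch
import torch.nn.functional as F

def kl_term(mu, logvar):
    # KL divergence of a diagonal Gaussian from the standard normal prior
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def total_loss(x, s, mu1, lv1, mu2, lv2, z1, z2, vis_dec, sem_dec,
               zeta=1.0, gamma=1.0):
    loss_vae  = F.l1_loss(vis_dec(z1), x) + kl_term(mu1, lv1)   # visual VAE
    loss_vgae = F.l1_loss(sem_dec(z2), s) + kl_term(mu2, lv2)   # semantic/graph VAE
    loss_vd   = wasserstein2(mu1, lv1, mu2, lv2)                # alignment
    loss_vc   = cross_reconstruction(vis_dec, sem_dec, x, s, z1, z2)
    return loss_vae + loss_vgae + zeta * loss_vd + gamma * loss_vc
```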
The model is evaluated on four datasets widely used for zero-shot image recognition: CUB, SUN, AWA1 and AWA2.
The results of the zero-shot image recognition experiment are shown in Table 1. In Table 1, bold marks the best value in each column and "-" indicates that the dataset was not tested in the original paper.
TABLE 1 Zero-shot classification harmonic-mean accuracy of different models (unit: %)
From the table it can be seen that, compared with the embedding models DEVISE, ALE, SYNC and SAE, KG-VAE is clearly superior on all datasets. Compared with the visual-feature generation models CVAE and F-CLSWGAN, KG-VAE improves to a certain extent on the CUB and SUN datasets. CUB and SUN are fine-grained datasets whose categories are close and whose feature differences are small, which places higher demands on the model; after KG-VAE structures the category information hierarchically through the knowledge graph, the error of the generated auxiliary semantic vectors is effectively reduced, knowledge transfer between known and unknown categories is promoted, and classification accuracy is improved. In addition, compared with the baseline model (CADA-VAE), KG-VAE improves by 0.5%, 0.7%, 0.8% and 0.6% on CUB, SUN, AWA1 and AWA2 respectively. This shows that introducing the knowledge graph effectively preserves the core features of the semantic categories, aligns the feature information between the different modalities of the same category more accurately, alleviates the domain drift problem, and improves the generalization ability of the model.
To further demonstrate the effectiveness of the model, generalized zero-shot experiments were performed against the reproduced CADA-VAE results (the baseline), and comparative experiments were carried out with 12 mainstream methods, including the classical ZSL methods DEVISE, ALE, SJE, ESZSL, LATEM, SYNC and SAE and the visual-feature generation models F-CLSWGAN, CVAE, SE and ABPZSL. The results are shown in Table 2:
TABLE 2 Generalized zero-shot classification average accuracy of different models (unit: %)
In Table 2, bold marks the best value in each column and "-" indicates that the dataset was not tested in the original paper. Among the classical ZSL methods, DEVISE, ALE, SJE, ESZSL and LATEM use linear compatibility functions or other similarity measures to compare the embedded visual and semantic features for classification; SYNC classifies by aligning the class embedding space with a weighted bipartite graph of composite classifiers; and SAE uses a semantic autoencoder for zero-shot classification. F-CLSWGAN, CVAE, SE and ABPZSL learn models that generate artificial visual data, turning the zero-shot learning problem into a generative task of augmenting sample data. Compared with these methods, KG-VAE improves classification accuracy to different degrees. In addition, compared with the baseline CADA-VAE, KG-VAE improves by 0.5%, 0.6%, 0.7% and 0.5% on CUB, SUN, AWA1 and AWA2 respectively. The experiments show that the model achieves good classification accuracy: while preserving the latent core features and effective discriminative information of the two modalities, the introduction of the knowledge graph plays a positive role, and its rich, hierarchically structured semantic information is more extensible and more effective than single attribute auxiliary information. The KG-VAE model therefore shows a definite improvement in generalized zero-shot image recognition.
When the knowledge graph is constructed, the relatedness between categories is taken as the edges, the word embeddings of the category labels as the nodes, and common-sense knowledge such as attribute information is added; besides WordNet, ConceptNet is used as a construction basis so that the background relationships between categories are embedded into the knowledge graph. Node semantics are then aggregated through a graph convolutional neural network, and a deep embedding network learns the mapping from the semantic space to the visual feature space. Second, the features obtained from the class-embedding mapping and the original image features are input into the VAE for cross-modal alignment. Third, the class embeddings and image features are converted into latent features by the trained deep embedding network and VAE encoder. Finally, a simple softmax classifier is trained on these latent features to achieve zero-shot classification, as sketched below.
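A sketch of this final stage, assuming latent features and labels are already available from the trained encoders (names and dimensions carried over from the earlier sketches):

```python
# Illustrative softmax classifier trained on latent features (assumed shapes).
import torch
import torch.nn as nn

num_classes, z_dim = 50, 64
classifier = nn.Linear(z_dim, num_classes)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

latent_feats = torch.randn(500, z_dim)                 # latent vectors from both modalities
labels = torch.randint(0, num_classes, (500,))

for _ in range(20):                                    # simple training loop
    opt.zero_grad()
    loss = ce(classifier(latent_feats), labels)
    loss.backward()
    opt.step()

# at test time an unseen-class image is encoded to z and assigned the class
# with the highest softmax score:
# pred = classifier(z).argmax(dim=1)
```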
In summary, the method constructs a combined category hierarchy and uses a hierarchically structured knowledge graph of category text descriptions and word vectors as the semantic information base, incorporating the rich semantic knowledge of the knowledge graph into the generative model. A graph variational autoencoder encodes semantic latent features for each category, a variational autoencoder generates the visual latent features of the corresponding categories, and the mapping relationship between the different modalities is learned by cross-reconstructing the latent features, after which a softmax classifier is trained to perform classification. The experimental results show that introducing KG has a positive effect on classification accuracy, and for fine-grained datasets in particular it effectively alleviates the domain drift and the semantic gap between the features of different modalities. The experiments also show that the rich semantic information in KG characterizes categories more strongly and transfers knowledge from known to unknown categories more effectively. The contributions of the method are threefold:
(1) compared with conventional semantic information, KG describes the characteristics of the classes in ZSL more efficiently and comprehensively and promotes knowledge transfer between known and unknown classes;
(2) the hierarchically structured semantic auxiliary information in KG better alleviates the domain drift problem in zero-shot learning, giving the model generalization ability;
(3) KG-VAE reduces the cross-modal semantic gap by aligning the generated latent features of the different modalities and improves the robustness of the model.
While the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (3)

1. A variational autoencoder zero-shot image recognition method combined with a knowledge graph, characterized by comprising the following steps:
Step S1: train the visual feature learning network: the training image is input into a CNN convolutional neural network, and the extracted image feature is encoded by a VAE into a low-dimensional feature vector Z_i that is input into the latent feature space;
Step S2: train the semantic feature learning network: the category semantic vectors are fed into a knowledge-graph-based deep neural network module, the nodes in the graph are aggregated and updated by a graph variational autoencoder, and the encoding produces new low-dimensional semantic vectors Z_j that are input into the latent feature space;
Step S3: train the modality-specific decoders: for a given category, the latent vector generated by one modality is decoded by the decoder of the other modality to reconstruct the original data, i.e., each modality's decoder is trained on the latent feature vectors extracted from the other modality;
Step S4: use the learned networks to fuse unknown visual and semantic knowledge and infer the category of the sample.
2. The method as claimed in claim 1, wherein in step S2, during KG embedding the model aggregates and updates each node in the KG through the learning function of the graph variational autoencoder to obtain the semantic vector encoding, and generates a set of low-dimensional semantic vectors S = {S_1, S_2, ..., S_n} that aggregate the related node information, used as the category semantic embedding.
3. The variational autoencoder zero-shot image recognition method combined with a knowledge graph as claimed in claim 1, wherein in step S3, two variational autoencoders, a VAE and a VGAE, are used to learn the vector representations of the two modalities respectively, and the model is constrained by introducing a variational alignment loss and a variational cross loss.
CN202110786235.8A 2021-07-12 2021-07-12 Variational self-encoder zero sample image identification method combined with knowledge graph Pending CN113505701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110786235.8A CN113505701A (en) 2021-07-12 2021-07-12 Variational self-encoder zero sample image identification method combined with knowledge graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110786235.8A CN113505701A (en) 2021-07-12 2021-07-12 Variational self-encoder zero sample image identification method combined with knowledge graph

Publications (1)

Publication Number Publication Date
CN113505701A true CN113505701A (en) 2021-10-15

Family

ID=78012710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110786235.8A Pending CN113505701A (en) 2021-07-12 2021-07-12 Variational self-encoder zero sample image identification method combined with knowledge graph

Country Status (1)

Country Link
CN (1) CN113505701A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240891A (en) * 2021-12-17 2022-03-25 重庆大学 Welding spot quality identification method fusing knowledge graph and graph convolution neural network
CN114334068A (en) * 2021-11-15 2022-04-12 深圳市龙岗中心医院(深圳市龙岗中心医院集团、深圳市第九人民医院、深圳市龙岗中心医院针灸研究所) Radiology report generation method, device, terminal and storage medium
CN115170704A (en) * 2022-07-06 2022-10-11 北京信息科技大学 Three-dimensional scene animation automatic generation method and system
CN117456266A (en) * 2023-11-16 2024-01-26 上海城建职业学院 Classification method and system based on knowledge extraction and convolution self-encoder

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503204A (en) * 2018-05-17 2019-11-26 国际商业机器公司 Identification is used for the migration models of machine learning task
CN110580501A (en) * 2019-08-20 2019-12-17 天津大学 Zero sample image classification method based on variational self-coding countermeasure network
CN112100380A (en) * 2020-09-16 2020-12-18 浙江大学 Generation type zero sample prediction method based on knowledge graph
CN112287641A (en) * 2020-12-25 2021-01-29 上海旻浦科技有限公司 Synonym sentence generating method, system, terminal and storage medium
CN112925920A (en) * 2021-03-23 2021-06-08 西安电子科技大学昆山创新研究院 Smart community big data knowledge graph network community detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503204A (en) * 2018-05-17 2019-11-26 国际商业机器公司 Identification is used for the migration models of machine learning task
CN110580501A (en) * 2019-08-20 2019-12-17 天津大学 Zero sample image classification method based on variational self-coding countermeasure network
CN112100380A (en) * 2020-09-16 2020-12-18 浙江大学 Generation type zero sample prediction method based on knowledge graph
CN112287641A (en) * 2020-12-25 2021-01-29 上海旻浦科技有限公司 Synonym sentence generating method, system, terminal and storage medium
CN112925920A (en) * 2021-03-23 2021-06-08 西安电子科技大学昆山创新研究院 Smart community big data knowledge graph network community detection method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114334068A (en) * 2021-11-15 2022-04-12 深圳市龙岗中心医院(深圳市龙岗中心医院集团、深圳市第九人民医院、深圳市龙岗中心医院针灸研究所) Radiology report generation method, device, terminal and storage medium
CN114334068B (en) * 2021-11-15 2022-11-01 深圳市龙岗中心医院(深圳市龙岗中心医院集团、深圳市第九人民医院、深圳市龙岗中心医院针灸研究所) Radiology report generation method, device, terminal and storage medium
CN114240891A (en) * 2021-12-17 2022-03-25 重庆大学 Welding spot quality identification method fusing knowledge graph and graph convolution neural network
CN114240891B (en) * 2021-12-17 2023-07-18 重庆大学 Welding spot quality identification method integrating knowledge graph and graph convolution neural network
CN115170704A (en) * 2022-07-06 2022-10-11 北京信息科技大学 Three-dimensional scene animation automatic generation method and system
CN115170704B (en) * 2022-07-06 2024-04-02 北京信息科技大学 Automatic generation method and system for three-dimensional scene animation
CN117456266A (en) * 2023-11-16 2024-01-26 上海城建职业学院 Classification method and system based on knowledge extraction and convolution self-encoder

Similar Documents

Publication Publication Date Title
CN113505701A (en) Variational self-encoder zero sample image identification method combined with knowledge graph
CN109598279B (en) Zero sample learning method based on self-coding countermeasure generation network
CN111563554A (en) Zero sample image classification method based on regression variational self-encoder
CN111709518A (en) Method for enhancing network representation learning based on community perception and relationship attention
CN111460201B (en) Cross-modal retrieval method for modal consistency based on generative countermeasure network
CN112597296B (en) Abstract generation method based on plan mechanism and knowledge graph guidance
Ji et al. Human-centric clothing segmentation via deformable semantic locality-preserving network
CN108985298B (en) Human body clothing segmentation method based on semantic consistency
CN114791958B (en) Zero sample cross-modal retrieval method based on variational self-encoder
CN115311605B (en) Semi-supervised video classification method and system based on neighbor consistency and contrast learning
CN114332519A (en) Image description generation method based on external triple and abstract relation
CN113920379B (en) Zero sample image classification method based on knowledge assistance
CN115115883A (en) License classification method and system based on multi-mode feature fusion
CN113836319B (en) Knowledge completion method and system for fusion entity neighbors
CN114359656A (en) Melanoma image identification method based on self-supervision contrast learning and storage device
Zhang et al. DHNet: Salient object detection with dynamic scale-aware learning and hard-sample refinement
CN114168773A (en) Semi-supervised sketch image retrieval method based on pseudo label and reordering
CN112668543B (en) Isolated word sign language recognition method based on hand model perception
CN114519107A (en) Knowledge graph fusion method combining entity relationship representation
CN112035689A (en) Zero sample image hash retrieval method based on vision-to-semantic network
CN117152504A (en) Space correlation guided prototype distillation small sample classification method
CN113723345B (en) Domain self-adaptive pedestrian re-identification method based on style conversion and joint learning network
CN116486093A (en) Information processing apparatus, information processing method, and machine-readable storage medium
CN114385845A (en) Image classification management method and system based on graph clustering
CN113297385A (en) Multi-label text classification model and classification method based on improved GraphRNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination