CN113051927B - Social network emergency detection method based on multi-modal graph convolutional neural network - Google Patents
- Publication number
- CN113051927B CN113051927B CN202110265390.5A CN202110265390A CN113051927B CN 113051927 B CN113051927 B CN 113051927B CN 202110265390 A CN202110265390 A CN 202110265390A CN 113051927 B CN113051927 B CN 113051927B
- Authority
- CN
- China
- Prior art keywords
- visual
- text
- graph structure
- graph
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9536—Search customisation based on social or collaborative filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Abstract
The invention discloses a social network emergency detection method based on a multi-modal graph convolutional neural network, comprising the following steps: for picture information, detecting key visual targets in the picture and extracting their features with a target detection model, and constructing a visual graph structure from the interrelations between the visual targets; for text information, extracting features for each word in a sentence with natural language processing, and building a text graph structure by learning the semantic dependency relations among the words; updating the features on the visual graph structure and the text graph structure respectively with a graph convolutional neural network, and mining the interrelations between different visual targets and between different words so as to learn feature representations rich in semantic information; and, using a pooling operation, aggregating the visual target features into an overall visual feature on the visual graph structure, aggregating the word features into an overall text feature on the text graph structure, and combining the visual feature and the text feature into a global feature for event type detection and recognition.
Description
Technical Field
The invention relates to the field of social network emergency detection, and in particular to a social network emergency detection method based on a multi-modal graph convolutional neural network.
Background
With the rapid development and growing popularity of the internet, social networks have risen quickly and now carry a large amount of data and information. People browse news, share viewpoints, and spread information on social media platforms, and reports of many hot topics and emergencies first appear on social networks. The diverse data on social networks contain much hidden value, so social media mining has attracted increasing attention from researchers, and event detection is one of its important tasks. Research on social network event detection helps to grasp social hotspot information quickly, promotes further management and supervision of social networks, and contributes to social development.
Conventional event detection methods[1] rely on elaborate feature engineering and a series of complex natural language processing tools; the process is cumbersome and the generalization ability is low. With the rapid development of deep learning and the wide adoption of neural networks, more and more deep learning models have been applied to machine learning tasks with good results. The dynamic multi-pooling CNN[2] and NPNs[3] apply deep neural networks to event detection, automatically extracting word-level features, and achieve good performance. However, they only consider word and sentence features and neglect the structural information among the word features; the graph convolutional neural network[4] can better learn the structural information among features and improve the accuracy of event detection. Moreover, most existing methods mine information only from the text modality, so their event detection accuracy is low.
Disclosure of Invention
The invention provides a social network emergency detection method based on a multi-modal graph convolutional neural network. It takes multi-modal data such as images and text as input, extracts fine-grained features of the visual modality and the text modality respectively, constructs graph structures according to the similarity relations between features, fully mines the structural information of each modality with a graph convolutional neural network, and integrates the structural information into a global feature rich in semantic information for event detection and recognition, improving the accuracy of event detection. The method is described in detail as follows:
a social network incident detection method based on a multimodal graph convolutional neural network, the method comprising:
for picture information, detecting key visual targets in the picture and extracting their features with a target detection model, and constructing a visual graph structure from the interrelations between the visual targets;
for text information, extracting features for each word in a sentence with natural language processing, and building a text graph structure by learning the semantic dependency relations among the words;
updating features on the visual graph structure and the text graph structure respectively with a graph convolutional neural network, and mining the interrelations between different visual targets and between different words so as to learn feature representations rich in semantic information;
and, using a pooling operation, aggregating the visual target features into an overall visual feature on the visual graph structure and the word features into an overall text feature on the text graph structure, and combining the visual feature and the text feature into a global feature for event type detection and recognition.
Wherein the visual graph structure is G1 = (V1, E1), where V1 is the node information and E1 is the edge information. The nodes are the visual target entities in the picture: V1 = IV. The edges are represented by an adjacency matrix Aij = φ(aij; θA), where aij = [vi, vj, vi − vj] captures the similarity between the two feature vectors, i denotes the i-th visual target, j denotes the j-th visual target, φ(·) is a nonlinear embedding, and θA are learnable parameters.
Wherein the text graph structure is G2 = (V2, E2), where V2 is the node information and E2 is the edge information. The nodes are the words in the sentence: V2 = IT. The edges are represented by an adjacency matrix W ∈ R^(m×m), obtained directly from the syntactic structure among the words.
The technical scheme provided by the invention has the beneficial effects that:
1. The invention jointly mines the semantic information of an event from both the visual modality and the text modality: in the visual modality, a target detection algorithm detects fine-grained visual target entities; in the text modality, feature representations of different words are extracted. By integrating the multi-modal information, a more comprehensive global feature representation can be learned for better semantic analysis and event detection;
2. The invention fully mines the interrelations between different visual targets in the visual modality with a graph convolutional neural network, propagating information according to the similarity of different visual entities; and it fully mines the semantic dependency relations between different words in the text modality, fusing information according to those dependency relations.
Therefore, the method can fully learn and mine the structural information of the data and improve the accuracy of social network emergency detection.
Drawings
FIG. 1 is a flow chart of a social network incident detection method based on a multi-modal graph convolutional neural network;
fig. 2 is a schematic diagram of social network incident detection based on a multi-modal graph convolutional neural network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Example 1
A social network emergency detection method based on a multi-modal graph convolutional neural network is disclosed; referring to FIG. 1, the method comprises the following steps:
step 101: acquiring text information and picture information from a social network as multi-modal data input;
step 102: for the picture information, detecting key visual targets in the picture and extracting their features with a target detection model, and constructing a visual graph structure from the interrelations between the visual targets;
step 103: for the text information, extracting features of each word in a sentence with a natural language processing algorithm, learning the semantic dependency relations among the words, and constructing a text graph structure from these relations;
step 104: updating features on the visual graph structure and the text graph structure respectively with a graph convolutional neural network, and mining the interrelations between different visual targets and between different words so as to learn feature representations rich in semantic information;
step 105: using a pooling operation, aggregating the visual target features into an overall visual feature on the visual graph structure and the word features into an overall text feature on the text graph structure, and combining the visual feature and the text feature into a global feature for event type detection and recognition.
In summary, the embodiment of the present invention improves the accuracy of event detection through the above steps 101 to 105, and meets various requirements in practical applications.
Example 2
The scheme of Example 1 is described in further detail below with reference to specific calculation formulas and examples:
201: acquiring a text description T from a social network as text information input; acquiring a picture V corresponding to the text description from the social network, and inputting the picture V as visual information;
202: for the picture information, detecting key visual targets in the picture and extracting their features with the Faster R-CNN[5] target detection model, and constructing a visual graph structure from the interrelations between the visual targets;
For the input picture information V, visual target detection is performed with the target detection algorithm Faster R-CNN[5]; n visual targets are detected in each picture, and feature extraction on these targets yields IV = {v1, v2, ..., vn} ∈ R^(n×d1), where d1 is the feature dimension of each visual target, R denotes the real numbers, IV is the set of visual target features of the input picture, and vi is the feature of the i-th visual target.
The interrelations among the different visual target entities are further mined, and a fully-connected visual graph structure G1 = (V1, E1) is constructed, where V1 is the node information of the visual graph structure and E1 is its edge information. The nodes of the visual graph structure are the visual target entities in the picture: V1 = IV. The edges of the visual graph structure are represented by an adjacency matrix, where the adjacency matrix of the visual graph structure G1 is defined as:
Aij = φ(aij; θA)   (1)
where A ∈ R^(n×n), n is the number of visual targets in the input picture, aij = [vi, vj, vi − vj] captures the similarity between the two feature vectors, i denotes the i-th visual target, j denotes the j-th visual target, φ(·) is a nonlinear embedding, and θA are learnable parameters; φ is implemented here as a three-layer multilayer perceptron.
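As a concrete illustration of equation (1), the following is a minimal NumPy sketch of building the visual adjacency matrix from the pairwise features [vi, vj, vi − vj]. The MLP widths, random weights, and the number of detected targets are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def mlp(x, weights):
    """Three-layer MLP phi(.; theta_A) mapping a pairwise feature to a scalar."""
    for i, (w, b) in enumerate(weights):
        x = x @ w + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on the hidden layers
    return x

def visual_adjacency(V, weights):
    """Build A_ij = phi([v_i, v_j, v_i - v_j]; theta_A) for all target pairs."""
    n, d = V.shape
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            a_ij = np.concatenate([V[i], V[j], V[i] - V[j]])  # 3*d pairwise feature
            A[i, j] = mlp(a_ij, weights).item()
    return A

rng = np.random.default_rng(0)
d1 = 8                      # feature dimension per visual target (illustrative)
n = 4                       # number of detected targets (illustrative)
V = rng.normal(size=(n, d1))
dims = [3 * d1, 16, 16, 1]  # hypothetical MLP widths
weights = [(rng.normal(scale=0.1, size=(dims[k], dims[k + 1])), np.zeros(dims[k + 1]))
           for k in range(3)]
A = visual_adjacency(V, weights)
print(A.shape)  # (4, 4)
```

In practice the MLP would be trained end to end with the rest of the network; here the random weights only demonstrate the data flow from pairwise features to the n×n adjacency matrix.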
203: for the text information, extracting features of each word in the sentence with a natural language processing algorithm, learning the semantic dependency relations among the words, and constructing a text graph structure from these relations;
For the input text information T, feature extraction is performed on the m words of the text with the existing natural language processing toolkit Stanford CoreNLP[6], yielding IT = {w1, w2, ..., wm} ∈ R^(m×d2), where d2 is the feature dimension of each word vector, IT is the set of word features of the input text information, and wi is the feature of the i-th word.
The syntactic dependency relations among the words are further mined: the syntactic structure among the words in the sentence is analyzed with the Stanford CoreNLP toolkit, and a text graph structure G2 = (V2, E2) is constructed, where V2 is the node information of the text graph structure and E2 is its edge information. The nodes of the text graph structure are the words of the sentence: V2 = IT. The edges of the text graph structure are represented by an adjacency matrix W ∈ R^(m×m), obtained directly from the syntactic structure among the words, with Wij nonzero when a syntactic dependency links the i-th and j-th words.
Wij thus encodes the semantic dependency relation between the i-th word and the j-th word of the input text information.
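The construction of W can be sketched as follows in NumPy. The example sentence and its dependency edges are hand-written assumptions for illustration; in the method described above, the edges would come from the Stanford CoreNLP dependency parse.

```python
import numpy as np

def text_adjacency(m, dependency_edges):
    """W in R^{m x m}: W_ij = 1 when a syntactic dependency links words i and j.
    The edge list would come from a parser such as Stanford CoreNLP."""
    W = np.zeros((m, m))
    for head, dep in dependency_edges:
        W[head, dep] = 1.0
        W[dep, head] = 1.0  # treat the dependency graph as undirected
    return W

# "A fire broke out downtown" -- hypothetical (head, dependent) parse edges
words = ["A", "fire", "broke", "out", "downtown"]
edges = [(1, 0), (2, 1), (2, 3), (2, 4)]
W = text_adjacency(len(words), edges)
print(W[2, 1], W[0, 4])  # 1.0 0.0
```

Whether the graph is kept directed or symmetrized is a design choice; the symmetric version above matches the undirected message passing used by the GCN update that follows.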
204: updating features on the visual graph structure and the text graph structure respectively with a graph convolutional neural network, and mining the interrelations between different visual targets and between different words so as to learn feature representations rich in semantic information.
Based on the constructed visual graph structure and text graph structure, the method updates the node features in the graph structures with a graph convolutional neural network algorithm[4], so that information can be propagated between similar nodes and the semantic structural information between nodes is better mined.
For the visual graph structure G1 = (V1, E1), the following graph convolutional neural network (GCN) is designed to update the node information:
X_V^(l+1) = σ( D^(−1/2) (I + A) D^(−1/2) X_V^(l) θ1 )   (2)
where X_V^(0) = IV denotes the initial visual features, X_V^(l) denotes the visual node features at the l-th layer, A is the adjacency matrix of the graph structure G1, I is the identity matrix, D is the degree matrix of (I + A), σ is a nonlinear activation function, and θ1 are learnable parameters; a three-layer graph convolutional network is used to aggregate and propagate the visual information.
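A single propagation step of the update rule above can be sketched in NumPy as follows. The node features, adjacency values, and weight matrix are random illustrative data; only the normalization and update structure follow the formula.

```python
import numpy as np

def gcn_layer(X, A, theta):
    """One GCN update: X' = sigma(D^{-1/2} (I + A) D^{-1/2} X theta),
    where D is the degree matrix of (I + A); sigma is ReLU here."""
    n = A.shape[0]
    A_hat = np.eye(n) + A                   # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees of (I + A)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ theta, 0.0)

rng = np.random.default_rng(1)
n, d1, d_hidden = 4, 8, 16
X = rng.normal(size=(n, d1))
A = np.abs(rng.normal(size=(n, n)))         # nonnegative similarities (illustrative)
A = (A + A.T) / 2                           # symmetric adjacency
theta = rng.normal(scale=0.1, size=(d1, d_hidden))
X1 = gcn_layer(X, A, theta)
print(X1.shape)  # (4, 16)
```

Stacking three such layers, as the method does, corresponds to composing gcn_layer three times with separate weight matrices; the same function applies unchanged to the text graph with W in place of A.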
For the text graph structure G2 = (V2, E2), the following graph convolutional neural network (GCN) is designed to update the node information:
X_T^(l+1) = ρ( D^(−1/2) (I + W) D^(−1/2) X_T^(l) θ2 )   (3)
where X_T^(0) = IT denotes the initial text features, X_T^(l) denotes the text node features at the l-th layer, W is the adjacency matrix of the graph structure G2, I is the identity matrix, D is the degree matrix of (I + W), ρ is a nonlinear activation function, and θ2 are learnable parameters; a three-layer graph convolutional network is used to aggregate and propagate the text information.
205: using a pooling operation, aggregating the visual target features into an overall visual feature on the visual graph structure and the word features into an overall text feature on the text graph structure, and then combining the visual feature and the text feature into a global feature for event type detection and recognition.
After the visual information and the text information are aggregated and updated by the graph convolutional neural network, the final visual features X_V^(L) and text features X_T^(L) are obtained, where L is the number of GCN layers.
The visual features and text features are each aggregated into overall features FV and FT with a max-pooling operation, and FV and FT are then concatenated into the multi-modal feature representation F = [FV, FT] for final event detection and recognition.
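The pooling and fusion step can be sketched as follows; the node counts and feature dimension are illustrative.

```python
import numpy as np

def fuse(XV, XT):
    """Max-pool node features into graph-level vectors, then concatenate:
    F = [F_V, F_T]."""
    F_V = XV.max(axis=0)  # overall visual feature, one max per dimension
    F_T = XT.max(axis=0)  # overall text feature
    return np.concatenate([F_V, F_T])

rng = np.random.default_rng(2)
XV = rng.normal(size=(4, 16))  # 4 visual nodes after the L GCN layers
XT = rng.normal(size=(5, 16))  # 5 word nodes after the L GCN layers
F = fuse(XV, XT)
print(F.shape)  # (32,)
```

Max pooling makes the graph-level features invariant to the number and order of detected targets and words, which is why a single fixed-size vector F can be fed to the classifier regardless of input size.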
Event type recognition is a multi-class classification problem: with the multi-modal feature F as input, events are classified by a softmax layer, and the model is trained under a cross-entropy loss:
L = − Σ_i yi log pi   (4)
where pi is the predicted probability of the i-th event type and yi is the true category label.
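The classification head and loss can be sketched as follows. The classifier weights are random and untrained, and the number of event classes is a hypothetical value; the sketch only shows the softmax/cross-entropy computation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(F, theta_c, y):
    """p = softmax(F @ theta_c); loss = -sum_i y_i * log(p_i) for one-hot y."""
    p = softmax(F @ theta_c)
    return p, -np.sum(y * np.log(p + 1e-12))

rng = np.random.default_rng(3)
num_classes = 3                 # hypothetical number of event types
F = rng.normal(size=(32,))      # fused multi-modal feature
theta_c = rng.normal(scale=0.1, size=(32, num_classes))
y = np.array([0.0, 1.0, 0.0])   # one-hot label: true event type is class 1
p, loss = cross_entropy(F, theta_c, y)
print(p.shape)  # (3,)
```

Training would then minimize this loss over a labeled dataset by gradient descent, jointly updating theta_c, the GCN parameters θ1 and θ2, and the adjacency MLP θA.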
Reference to the literature
[1] Zhang, Guo, Liu, et al. Detection of self-similarity cluster events based on trigger word guidance [J]. Computer Science, 2010(3): 212-214. (in Chinese)
[2] Chen Y, Xu L, Liu K, et al. Event extraction via dynamic multi-pooling convolutional neural networks [C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2015: 167-176.
[3] Lin H, Lu Y, Han X, et al. Nugget proposal networks for Chinese event detection [J]. arXiv preprint arXiv:1805.00249, 2018.
[4] Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks [J]. arXiv preprint arXiv:1609.02907, 2016.
[5] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(6): 1137-1149.
[6] Manning C D, Surdeanu M, Bauer J, et al. The Stanford CoreNLP natural language processing toolkit [C]//Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 2014: 55-60.
In the embodiments of the present invention, unless specifically stated, the models of the devices are not limited, as long as a device can perform the functions described above.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and that the numbering of the above embodiments is for description only and does not indicate their relative merits.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (1)
1. A social network emergency detection method based on a multi-modal graph convolutional neural network is characterized by comprising the following steps:
for picture information, detecting key visual targets in the picture and extracting their features with a target detection model, and constructing a visual graph structure from the interrelations between the visual targets;
for text information, extracting features for each word in a sentence with natural language processing, and building a text graph structure by learning the semantic dependency relations among the words;
updating features on the visual graph structure and the text graph structure respectively with a graph convolutional neural network, and mining the interrelations between different visual targets and between different words so as to learn feature representations rich in semantic information;
using a pooling operation, aggregating the visual target features into an overall visual feature on the visual graph structure and the word features into an overall text feature on the text graph structure, and combining the visual feature and the text feature into a global feature for event type detection and recognition;
wherein updating features on the visual graph structure and the text graph structure respectively with the graph convolutional neural network specifically comprises:
the visual graph structure is G1 = (V1, E1), where V1 is the node information and E1 is the edge information; the nodes are the visual target entities in the picture: V1 = IV, where IV is the set of visual target features of the input picture; the edges are represented by an adjacency matrix Aij = φ(aij; θA), where aij = [vi, vj, vi − vj] captures the similarity between the two feature vectors, i denotes the i-th visual target, j denotes the j-th visual target, φ(·) is a nonlinear embedding, and θA are learnable parameters;
the features of the visual graph structure G1 = (V1, E1) are updated by
X_V^(l+1) = σ( D^(−1/2) (I + A) D^(−1/2) X_V^(l) θ1 )
where X_V^(0) = IV denotes the initial visual features, X_V^(l) denotes the visual node features at the l-th layer, A is the adjacency matrix of the graph structure G1, I is the identity matrix, D is the degree matrix of (I + A), σ is a nonlinear activation function, θ1 are learnable parameters, R denotes the real numbers, n is the number of visual targets, and d1 is the feature dimension of each visual target; a three-layer graph convolutional network is used to aggregate and propagate the visual information;
the text graph structure is G2 = (V2, E2), where V2 is the node information and E2 is the edge information; the nodes are the words in the sentence: V2 = IT, where IT is the set of word features of the input text information; the edges are represented by an adjacency matrix W ∈ R^(m×m), obtained directly from the syntactic structure among the words;
the features of the text graph structure G2 = (V2, E2) are updated by
X_T^(l+1) = ρ( D^(−1/2) (I + W) D^(−1/2) X_T^(l) θ2 )
where X_T^(0) = IT denotes the initial text features, X_T^(l) denotes the text node features at the l-th layer, W is the adjacency matrix of the graph structure G2, D is the degree matrix of (I + W), ρ is a nonlinear activation function, θ2 are learnable parameters, d2 is the feature dimension of each word vector, and m is the number of words; a three-layer graph convolutional network is used to aggregate and propagate the text information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110265390.5A CN113051927B (en) | 2021-03-11 | 2021-03-11 | Social network emergency detection method based on multi-modal graph convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110265390.5A CN113051927B (en) | 2021-03-11 | 2021-03-11 | Social network emergency detection method based on multi-modal graph convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113051927A CN113051927A (en) | 2021-06-29 |
CN113051927B true CN113051927B (en) | 2022-06-14 |
Family
ID=76511445
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110265390.5A Active CN113051927B (en) | 2021-03-11 | 2021-03-11 | Social network emergency detection method based on multi-modal graph convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113051927B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113946683A (en) * | 2021-09-07 | 2022-01-18 | 中国科学院信息工程研究所 | Knowledge fusion multi-mode false news identification method and device |
CN113807307B (en) * | 2021-09-28 | 2023-12-12 | 中国海洋大学 | Multi-mode joint learning method for video multi-behavior recognition |
CN114357022B (en) * | 2021-12-23 | 2024-05-07 | 北京中视广信科技有限公司 | Media content association mining method based on event relation discovery |
CN114485666A (en) * | 2022-01-10 | 2022-05-13 | 北京科技大学顺德研究生院 | Blind person aided navigation method and device based on object association relationship cognitive inference |
CN116130089B (en) * | 2023-02-02 | 2024-01-02 | 湖南工商大学 | Hypergraph neural network-based multi-mode depression detection system, device and medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107480196A (en) * | 2017-07-14 | 2017-12-15 | 中国科学院自动化研究所 | A kind of multi-modal lexical representation method based on dynamic fusion mechanism |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170308790A1 (en) * | 2016-04-21 | 2017-10-26 | International Business Machines Corporation | Text classification by ranking with convolutional neural networks |
RU2662688C1 (en) * | 2017-03-16 | 2018-07-26 | Общество с ограниченной ответственностью "Аби Продакшн" | Extraction of information from sanitary blocks of documents using micromodels on basis of ontology |
CN111598214B (en) * | 2020-04-02 | 2023-04-18 | 浙江工业大学 | Cross-modal retrieval method based on graph convolution neural network |
CN112035669B (en) * | 2020-09-09 | 2021-05-14 | 中国科学技术大学 | Social media multi-modal rumor detection method based on propagation heterogeneous graph modeling |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107480196A (en) * | 2017-07-14 | 2017-12-15 | 中国科学院自动化研究所 | A kind of multi-modal lexical representation method based on dynamic fusion mechanism |
Also Published As
Publication number | Publication date |
---|---|
CN113051927A (en) | 2021-06-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||