CN116757460B - Emergency command scheduling platform construction method and system based on deep learning - Google Patents


Info

Publication number
CN116757460B
CN116757460B (application CN202311061207.5A)
Authority
CN
China
Prior art keywords
emergency
model
generator
event
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311061207.5A
Other languages
Chinese (zh)
Other versions
CN116757460A (en)
Inventor
阮峰
张文鹏
杨鹏诚
吴皓
张京生
杜周银
杜鑫
许小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhengfeng Information Technology Co ltd
Original Assignee
Nanjing Zhengfeng Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhengfeng Information Technology Co ltd filed Critical Nanjing Zhengfeng Information Technology Co ltd
Priority to CN202311061207.5A priority Critical patent/CN116757460B/en
Publication of CN116757460A publication Critical patent/CN116757460A/en
Application granted granted Critical
Publication of CN116757460B publication Critical patent/CN116757460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06Q10/06311 — Scheduling, planning or task assignment for a person or group
    • G06Q10/06312 — Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
    • G06Q10/06313 — Resource planning in a project environment
    • G06Q50/265 — Personal security, identity or safety
    • G06F16/35 — Clustering; Classification
    • G06F16/367 — Ontology
    • G06N3/0455 — Auto-encoder networks; Encoder-decoder networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/0475 — Generative networks
    • G06N3/084 — Backpropagation, e.g. using gradient descent
    • G06N3/094 — Adversarial learning
    • Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a method and system for constructing an emergency command and dispatch platform based on deep learning. The method comprises: classifying, clustering and performing anomaly detection on event atom library data; monitoring emergency event nodes with a knowledge-reasoning technique based on the graph generative adversarial network GraphGAN, generating emergency triplet information, and constructing an event knowledge graph; using a multi-modal graph autoencoder (MGA) to encode and decode the different types of nodes and edges, converting the knowledge graph into a multi-modal representation, enhancing the expressive and reasoning capability of the model, capturing the interdependence of the nodes, and assessing emergency events; and introducing multi-task learning with BERT as a pre-training model to perform user-personalized event emergency plan recommendation, event link scene recommendation and link classification tasks, computing scores by assigning different task weights to obtain interpretable event emergency plan recommendations.

Description

Emergency command scheduling platform construction method and system based on deep learning
Technical Field
The invention relates to the technical field of emergency management command scheduling, in particular to an emergency command scheduling platform construction method and system based on deep learning.
Background
The emergency command and dispatch platform serves as a command platform in sudden or emergency scenarios and carries important emergency management and emergency response functions: it can help command decision makers make correct decisions quickly, coordinate emergency resources and rescue actions, and minimize disaster losses and casualties. However, owing to the complexity of such a platform, fully realizing an autonomous and controllable platform still faces certain challenges.
At present, under chaos-engineering scenarios for a new generation of emergency command and dispatch platforms, platforms in the prior art suffer from low system reliability, poor emergency event handling capability and similar shortcomings. Therefore, applying deep learning to the construction of the emergency command and dispatch platform, and carrying out user-personalized event emergency plan recommendation, event link scene recommendation and event classification tasks, improves the platform's emergency event handling capability and efficiency, strengthens the robustness of the transaction system, and has important practical application significance.
Disclosure of Invention
The invention aims to solve the following problem: to provide an emergency command and dispatch platform construction method and system based on deep learning that improve the emergency event handling capability and efficiency of the emergency command and dispatch platform.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the emergency command scheduling platform construction method based on deep learning comprises the following steps:
step 1, establishing event atom library data, generating a knowledge reasoning technology against a network GraphGAN based on a graph, monitoring emergency nodes, generating emergency triplet information, and constructing an event knowledge graph;
step 2, based on a multi-mode diagram self-encoder MGA, encoding and decoding different emergency nodes and edges, converting a knowledge graph into a multi-mode representation form, capturing the inter-dependency relationship of the nodes, and judging the emergency;
step 3, introducing a multi-task learning model MTL-Att, and adding an attention mechanism and combining a bert pre-training model; extracting features of each task from the multi-task learning model, learning common features, and constructing an MTL-Att-bert model by using a shared bert model as a feature extractor; in the training process, the MTL-Att-bert model learns a weight vector for each task through an attention mechanism to weight the output of each task, adjusts the importance of each task through adjusting the value of the weight vector, obtains an event emergency plan recommendation result, and carries out personalized event emergency plan recommendation.
Specifically, in step 1, knowledge reasoning is based on the graph generative adversarial network GraphGAN, and emergency event nodes are acquired to expand the fault knowledge graph. GraphGAN comprises a generative model and a discriminative model. Specifically:
The generative model fits or estimates the true connectivity distribution and selects, from the node set V, the nodes most likely to be connected to v_c; it is written as generator G(v|v_c; θ_G), with G the generator. The discriminative model discriminates the connectivity of node v_c and computes the likelihood that an edge exists between output nodes v and v_c; it is written as discriminator D(v, v_c; θ_D), with D the discriminator. The minimax objective function of generator G and discriminator D is the following formula (1):
min_{θ_G} max_{θ_D} V(G, D) = ∑_{c=1}^{V} ( E_{v∼p_true(·|v_c)}[log D(v, v_c; θ_D)] + E_{v∼G(·|v_c; θ_G)}[log(1 − D(v, v_c; θ_D))] )   (1)
where D(v, v_c; θ_D) is the discriminator judging the connectivity of the node pair (v, v_c); it outputs a scalar representing the probability that an edge exists between nodes v and v_c; θ_D is the set of representation vectors of all nodes v in the discriminator; V(G, D) is the cost function; p_true(·|v_c) is the true connectivity distribution of the data, and E_{v∼p_true(·|v_c)}[·] is the expectation of the random variable v under the true distribution; G(·|v_c; θ_G) is the generator model, v_c is the generator's input condition, θ_G is the generator's parameter, and E_{v∼G(·|v_c; θ_G)}[·] is the expectation over samples v produced by generator G under condition v_c. The parameters of the generator and the discriminator are updated by continuous alternating training: the discriminator D is trained on positive samples from p_true(v|v_c) and negative samples from the generator G, while the generator G is updated according to a gradient policy under the guidance of the discriminator D.
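The alternating training described above can be sketched numerically. The following is a toy illustration, not the patent's implementation: it assumes a 4-node example graph, 2-dimensional embeddings, plain SGD with hand-coded gradients, and a softmax generator as in formula (5); all names and hyperparameters are illustrative.

```python
import math
import random

random.seed(0)
nodes = [0, 1, 2, 3]
edges = {(0, 1), (1, 2), (2, 3)}  # the "true" connectivity p_true

# theta_D and theta_G: one k-dimensional vector per node (k = 2 here)
d = {v: [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)] for v in nodes}
g = {v: [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)] for v in nodes}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def D(v, vc):                      # formula (2): edge probability via sigmoid
    return sigmoid(dot(d[v], d[vc]))

def G_dist(vc):                    # formula (5): softmax over candidate nodes
    scores = {v: math.exp(dot(g[v], g[vc])) for v in nodes if v != vc}
    z = sum(scores.values())
    return {v: s / z for v, s in scores.items()}

lr = 0.05
for _ in range(200):
    for vc in nodes:
        # discriminator step: positive sample from p_true, negative from G
        pos = [v for v in nodes if (vc, v) in edges or (v, vc) in edges]
        vp = random.choice(pos)
        dist = G_dist(vc)
        cands = list(dist)
        vn = random.choices(cands, weights=[dist[v] for v in cands])[0]
        for v, label in ((vp, 1.0), (vn, 0.0)):
            err = label - D(v, vc)           # gradient of the log-likelihood
            dv, dvc = d[v][:], d[vc][:]
            for i in range(2):
                d[v][i] += lr * err * dvc[i]
                d[vc][i] += lr * err * dv[i]
        # generator step: policy-gradient-style update guided by D
        reward = math.log(max(1e-9, 1.0 - D(vn, vc)))
        for i in range(2):
            g[vn][i] -= lr * reward * g[vc][i] * (1.0 - dist[vn])

p_linked = D(0, 1)     # pair connected in the toy graph
p_unlinked = D(0, 3)   # pair not connected
print(round(p_linked, 3), round(p_unlinked, 3))
```

After training, the discriminator's edge probabilities for linked and unlinked pairs can be compared to inspect what it has learned.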
Further, the discriminator D is a sigmoid function, as the following formula (2):
D(v, v_c) = σ(d_v⊤ d_{v_c}) = 1 / (1 + exp(−d_v⊤ d_{v_c}))   (2)
where d_v and d_{v_c} are the k-dimensional representation vectors of nodes v and v_c in the discriminator, so θ_D can be regarded as the set of all d_v; d_v⊤ d_{v_c} is the inner product operation, transformed by the sigmoid function; −d_v⊤ d_{v_c} takes the negative of the inner product for the exponential operation; D(v, v_c) is the discriminator's prediction for the given real sample v and condition node v_c.
For a given node pair (v, v_c), the corresponding node representation vectors are updated by the gradient descent method, as the following formula (3):
∇_{θ_D} V(G, D) = ∇_{θ_D} log D(v, v_c),  if v ∼ p_true(·|v_c)
∇_{θ_D} V(G, D) = ∇_{θ_D} log(1 − D(v, v_c)),  if v ∼ G(·|v_c)
where ∇_{θ_D} denotes the gradient with respect to θ_D; v ∼ p_true denotes a sample drawn from the true distribution of the variable v, and v ∼ G denotes a sample generated by the generator; the discriminator parameter θ_D is updated with gradient descent.
Further, the generator G is optimized and updated by the gradient descent method; the gradient of the generator is the following formula (4):
∇_{θ_G} V(G, D) = ∑_{c=1}^{V} E_{v∼G(·|v_c)}[ ∇_{θ_G} log G(v|v_c) · log(1 − D(v, v_c; θ_D)) ]   (4)
where the gradient ∇_{θ_G} V(G, D) is computed as the expectation, over samples v generated by G under condition v_c, of the gradients ∇_{θ_G} log G(v|v_c) weighted by log(1 − D(v, v_c; θ_D)); E_{v∼G(·|v_c)}[·] denotes the expectation, under generator G conditioned on v_c, of the value produced by the discriminator for the generated sample v.
The implementation of generator G is defined by a softmax function, as the following formula (5):
G(v|v_c) = exp(g_v⊤ g_{v_c}) / ∑_{v′≠v_c} exp(g_{v′}⊤ g_{v_c})   (5)
where g_v and g_{v_c} are the k-dimensional representation vectors of nodes v and v_c in the generator, and θ_G is the set of all g_v; exp(g_v⊤ g_{v_c}) applies the exponential operation to the inner product; G(v|v_c) is the probability that generator G generates sample v under the specified condition v_c.
Then, on the basis of formula (5), Graph Softmax is proposed to estimate the connectivity distribution G(v|v_c; θ_G). The method is specifically: a BFS (breadth-first search) of the original network ω is performed to expand a tree T_c with v_c as the root node; N_c(v) denotes the set of neighbor nodes of node v in T_c. Given a node v and a neighbor node v_i ∈ N_c(v), the relevance probability of v_i with respect to v is defined as the following formula (6):
p_c(v_i|v) = exp(g_{v_i}⊤ g_v) / ∑_{v_j∈N_c(v)} exp(g_{v_j}⊤ g_v)   (6)
where each node v is reached by the unique path starting from the root node v_c; the unique path is defined as P_{v_c→v} = (v_{r_0}, v_{r_1}, …, v_{r_m}) with v_{r_0} = v_c and v_{r_m} = v.
G(v|v_c; θ_G) defined by Graph Softmax is the following formula (7):
G(v|v_c) = ( ∏_{j=1}^{m} p_c(v_{r_j}|v_{r_{j−1}}) ) · p_c(v_{r_{m−1}}|v_{r_m})   (7)
where p_c(v_{r_j}|v_{r_{j−1}}) is the probability of predicting the next node v_{r_j} given the current node v_{r_{j−1}} along the path, and p_c(v_{r_{m−1}}|v_{r_m}) is the probability of stepping from v_{r_m} back to its parent v_{r_{m−1}}, which makes the distribution normalized.
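Formulas (6) and (7) can be illustrated on a small example. The sketch below assumes a 4-node toy graph and fixed 2-dimensional generator embeddings (all values illustrative): it builds the BFS tree T_c, then computes G(v|v_c) as the product of relevance probabilities along the unique root-to-v path, including the final parent-return factor of formula (7).

```python
import math
from collections import deque

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}           # toy network
g = {0: [0.5, 0.1], 1: [0.4, 0.2], 2: [-0.3, 0.6], 3: [0.2, -0.5]}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bfs_tree(root):
    """BFS from the root v_c; each node gets a unique parent, hence a unique path."""
    parent, q = {root: None}, deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    return parent

def p_rel(vi, v, tree_neighbors):
    """Formula (6): relevance probability of v_i among v's neighbors in T_c."""
    z = sum(math.exp(dot(g[vj], g[v])) for vj in tree_neighbors)
    return math.exp(dot(g[vi], g[v])) / z

def graph_softmax(v, vc):
    """Formula (7): product of relevance probabilities along the unique path."""
    parent = bfs_tree(vc)
    children = {u: [w for w in parent if parent[w] == u] for u in parent}
    path, u = [], v                     # recover the path v_c -> v
    while u is not None:
        path.append(u)
        u = parent[u]
    path.reverse()
    prob = 1.0
    for a, b in zip(path, path[1:]):    # tree neighbors = children + parent
        nbrs = children[a] + ([parent[a]] if parent[a] is not None else [])
        prob *= p_rel(b, a, nbrs)
    if len(path) > 1:                   # final parent-return factor
        last = path[-1]
        prob *= p_rel(parent[last], last, children[last] + [parent[last]])
    return prob

probs = {v: graph_softmax(v, 0) for v in adj if v != 0}
print(probs)
```

The parent-return factor is what makes the probabilities over all v ≠ v_c sum to one, so the result is a valid connectivity distribution.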
Further, in step 2, based on the multi-modal graph autoencoder MGA, the encoder encodes the different emergency nodes and edges and the decoder performs the decoding operation, specifically as follows:
The multi-modal autoencoder is a feedforward neural network comprising an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer. The input layer takes inputs in two modalities, the image feature vector u_I and the text feature vector u_T. The input layer is followed by the first hidden layer, which for each modality maps the input to v_I^(2) and v_T^(2); v_I^(2) and v_T^(2) are connected, and the joint mapping to the second hidden layer is expressed as the joint multi-modal entity embedding v^(3).
The decoder stage has a completely symmetric structure to the encoder: the joint embedding v^(3) is mapped as input to the third hidden layer, denoted v_I^(4) and v_T^(4), where v_I^(4) and v_T^(4) have the same dimensions as v_I^(2) and v_T^(2); the decoder then maps v_I^(4) and v_T^(4) to the output layer û_I and û_T, where the output layer of each modality has the same dimension as its input layer.
The overall architecture of the multi-modal autoencoder is defined as:
v_I^(2) = f(W_I^(1) u_I + b_I^(1)),  v_T^(2) = f(W_T^(1) u_T + b_T^(1))
v^(3) = f(W^(2) (v_I^(2) ⊕ v_T^(2)) + b^(2))
v_I^(4) = f(W_I^(3) v^(3) + b_I^(3)),  v_T^(4) = f(W_T^(3) v^(3) + b_T^(3))
û_I = f(W_I^(4) v_I^(4) + b_I^(4)),  û_T = f(W_T^(4) v_T^(4) + b_T^(4))
where f is the sigmoid activation function; W_I^(j) and W_T^(j) denote the mapping weight matrices of the image and text modalities from layer j to layer (j+1), respectively; b_I^(j) and b_T^(j) are the bias terms of image and text at the j-th hidden layer, and b^(j) is the bias term for the embedding v^(j+1); the symbol ⊕ represents the concatenation operation; j = 1, 2, 3, 4.
The multi-modal autoencoder is trained with the objective of minimizing the reconstruction error; the reconstruction error L_a is the sum of the dissimilarities between the input and output layers of the two modalities, as the following formula (15):
L_a = ‖u_I − û_I‖² + ‖u_T − û_T‖²   (15)
then, the knowledge graph is converted into a multi-modal representation using a transform model that maps entities and relationships in a low-dimensional continuous vector space, wherein the embedding of the head entity h, the tail entity t, and the relationship r satisfies the following triplet formula (16):
h+r≈t (16)
for triples in the knowledge-graph, the TransE model attempts to minimize the distance between h+r and t, while for triples outside the knowledge-graph, it attempts to maximize the distance between them.
Minimizing loss L when training TransE e The following formula (17):
L e =∑ (h,r,t)∈S(h′,r′,t′)∈S′ max(0,|[γ+d(h,r,t)-d(h′,r′,t′)]) (17)
where γ is the spacing hyper-parameter, max (0, x) is intended to obtain the positive part of x, d is the dissimilarity function, h ', r', t 'represent symbolic representations of the triplet head, relation and tail entities at the time of prediction, S represents the original dataset consisting of triples present in the knowledge graph, and S' represents the corrupted dataset consisting of triples not present in the knowledge graph; s' is considered a negative sample in training, constructed by replacing the head or tail entity of each triplet as shown in equation (18) below:
S′ h,r,t =(h′,r,t)|h′∈E∪(h,r,t′)|t′∈E (18)
wherein E is an entity set.
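The margin loss of formula (17) with a corrupted triple from formula (18) can be computed directly. The entity names and embedding values below are hypothetical, chosen only to illustrate the calculation; d is the L2 dissimilarity ‖h + r − t‖.

```python
# TransE margin loss, formulas (17)-(18), on toy 2-D embeddings.
def d(h, r, t):
    """L2 dissimilarity between h + r and t."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

emb = {"flood": [0.1, 0.2],            # hypothetical emergency entities
       "levee_breach": [0.5, 0.1],
       "pump_station": [0.9, 0.9]}
rel = {"causes": [0.4, -0.1]}
gamma = 1.0                            # margin hyperparameter

pos = ("flood", "causes", "levee_breach")   # triple present in the graph (S)
neg = ("flood", "causes", "pump_station")   # corrupted tail, formula (18) (S')

def margin_loss(pos, neg):
    h, r, t = emb[pos[0]], rel[pos[1]], emb[pos[2]]
    h2, r2, t2 = emb[neg[0]], rel[neg[1]], emb[neg[2]]
    return max(0.0, gamma + d(h, r, t) - d(h2, r2, t2))

L_e = margin_loss(pos, neg)
print(round(L_e, 4))
```

Here the positive triple satisfies h + r = t exactly, so the loss reduces to max(0, γ − d(neg)); training would decrease it further by pushing the corrupted tail away.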
A TransAE model is constructed by combining the two models above, the multi-modal autoencoder and TransE, learning multi-modal knowledge and structural knowledge simultaneously. Each entity in the knowledge graph corresponds to several pictures and a sentence of description; its visual and textual feature vectors are extracted and input into the multi-modal autoencoder, whose joint embedding serves as the entity representation, while the relation embeddings are randomly initialized at the start of training; the entity and relation representations are used to train the model. The structural loss L′_e for (h, r, t) is expressed as the following formula (19):
L′_e = ∑_{(h,r,t)∈S} ∑_{(h′,r,t′)∈S′} max(0, γ + ‖v_h + r − v_t‖² − ‖v_{h′} + r − v_{t′}‖²)   (19)
where v_h and v_t denote the joint embedding representations of h and t, respectively, and v_{h′} and v_{t′} denote the embedding representations of h′ and t′, respectively.
A regularizer Ω(θ) is added to regularize the parameter set θ, weighted by α; the model is trained by minimizing the total loss L, as shown in formula (20):
L = L_a + βL′_e + αΩ(θ)   (20)
The parameters in the autoencoder and the relation embeddings are randomly initialized at the beginning of training; the weight parameters β and α are selected to balance the magnitude and importance of the loss terms L_a, L′_e and the regularization term Ω(θ).
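The composition of the total loss in formula (20) is a simple weighted sum. The sketch below assumes an L2 regularizer for Ω(θ) and illustrative values for the component losses, parameters and weights α, β; none of these numbers come from the patent.

```python
# TransAE total loss, formula (20): L = L_a + beta * L'_e + alpha * Omega(theta)
def total_loss(L_a, L_struct, params, alpha=0.01, beta=1.0):
    omega = sum(p * p for p in params)   # assumed L2 regularizer over theta
    return L_a + beta * L_struct + alpha * omega

L = total_loss(L_a=0.25, L_struct=0.106, params=[0.5, -0.1, 0.2])
print(round(L, 4))
```

In practice α and β would be tuned so that no single term dominates the gradient during joint training.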
Further, in step 3, the multi-task learning model MTL-Att is introduced, adding an attention mechanism combined with the BERT pre-training model. Specifically, within the multi-task learning model, the features of each task are extracted and common features are learned, with a shared BERT model used as the feature extractor to form the MTL-Att-BERT model. During training, the MTL-Att-BERT model learns a weight vector for each task through the attention mechanism to weight the output of each task; the importance of each task is adjusted by tuning the values of the weight vector, yielding the event emergency plan recommendation result, which is used for user-personalized event emergency plan recommendation, scene recommendation of related event links, and link classification tasks.
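The attention-based task weighting can be sketched as follows. This is a toy illustration under stated assumptions: the shared feature vector stands in for a BERT encoder output, each task has a hypothetical linear head, and the task weight vector is derived here by a softmax over task scores rather than learned by backpropagation.

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

shared_features = [0.2, 0.7, 0.1]     # placeholder for shared BERT output
task_heads = {                        # hypothetical linear head per task
    "plan_recommendation":       [0.9, 0.1, 0.0],
    "link_scene_recommendation": [0.1, 0.8, 0.1],
    "link_classification":       [0.3, 0.3, 0.4],
}
task_scores = {k: sum(w * f for w, f in zip(ws, shared_features))
               for k, ws in task_heads.items()}
attn = softmax(list(task_scores.values()))        # task weight vector
weighted = {k: a * s for (k, s), a in zip(task_scores.items(), attn)}
best_task = max(weighted, key=weighted.get)
print(best_task, {k: round(v, 3) for k, v in weighted.items()})
```

Adjusting the attention weights changes each task's contribution to the final score, which is the mechanism the text describes for tuning task importance.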
The invention also provides a system for constructing the emergency command and dispatch platform based on deep learning, built with any of the above construction methods, comprising the following modules:
an event atom library acquisition module, for acquiring input data and constructing the data atom library;
an event knowledge graph construction module, for monitoring emergency event nodes, generating emergency triplet information and constructing the event knowledge graph with the knowledge reasoning technique based on the graph generative adversarial network GraphGAN;
a multi-modal graph autoencoder module, for encoding and decoding the different emergency nodes and edges, converting the knowledge graph into a multi-modal representation, capturing the interdependence of the nodes, and assessing emergency events;
an MTL-Att-BERT model building module, for extracting the features of each task using the attention mechanism combined with the BERT pre-training model;
an event emergency plan recommendation module, for personalized event emergency plan recommendation.
Compared with the prior art, the technical scheme provided by the invention has the following technical effects:
1. The invention generates sufficient emergency triplet information based on knowledge reasoning with the graph generative adversarial network (GraphGAN) to construct the event knowledge graph.
2. The model of the invention uses a multi-modal graph autoencoder combined with a multi-task learning strategy, thereby improving the robustness and efficiency of the system when an emergency occurs under chaos-engineering scenarios of the emergency command and dispatch platform, and remedying the deficiencies of the prior art for new-generation emergency command and dispatch services.
Drawings
FIG. 1 is a flow chart of the emergency command and dispatch platform construction method based on deep learning.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the application are further elaborated below in conjunction with the accompanying drawings; the described embodiments are only a part of the embodiments to which the present invention relates. All non-innovative embodiments obtained from this example by others skilled in the art are intended to fall within the scope of the invention.
The invention discloses a method for constructing an emergency command and dispatch platform based on deep learning, which is shown in fig. 1 and comprises the following steps:
step 1, establishing event atom library data, generating a knowledge reasoning technology against a network GraphGAN based on a graph, monitoring emergency nodes, generating emergency triplet information, and constructing an event knowledge graph;
step 2, based on a multi-mode diagram self-encoder MGA, encoding and decoding different emergency nodes and edges, converting a knowledge graph into a multi-mode representation form, capturing the inter-dependency relationship of the nodes, and judging the emergency;
and 3, introducing a multi-task learning model MTL-Att, adding an attention mechanism and combining a bert pre-training model to construct the MTL-Att-bert model, and performing personalized event emergency plan recommendation by distributing different task weight scoring calculations.
First, in step 1, the knowledge reasoning technique is based on the graph generative adversarial network GraphGAN, and emergency event nodes are acquired to expand the fault knowledge graph. GraphGAN comprises a generative model and a discriminative model. Specifically,
the generative model fits or estimates the true connectivity distribution and selects, from the node set V, the nodes most likely to be connected to v_c, written as generator G(v|v_c; θ_G), with G the generator;
the discriminative model discriminates the connectivity of node v_c and computes the likelihood that an edge exists between output nodes v and v_c, written as discriminator D(v, v_c; θ_D), with D the discriminator.
The minimax objective function of generator G and discriminator D is as follows:
min_{θ_G} max_{θ_D} V(G, D) = ∑_{c=1}^{V} ( E_{v∼p_true(·|v_c)}[log D(v, v_c; θ_D)] + E_{v∼G(·|v_c; θ_G)}[log(1 − D(v, v_c; θ_D))] )
The implementation of generator G is defined by a softmax function, as follows:
G(v|v_c) = exp(g_v⊤ g_{v_c}) / ∑_{v′≠v_c} exp(g_{v′}⊤ g_{v_c})
where g_v and g_{v_c} are the k-dimensional representation vectors of nodes v and v_c in the generator, and θ_G is the set of all g_v; G(v|v_c) is the probability that generator G generates sample v under the specified condition v_c.
The parameters of the generator and the discriminator are updated by continuous alternating training: the discriminator D is trained on positive samples from p_true(v|v_c) and negative samples from the generator G, while the generator G is updated according to a gradient policy under the guidance of the discriminator D.
In order to optimize the technical solution, in a specific embodiment of the present invention, the generator G is updated according to the gradient policy under the guidance of the discriminator D.
Specifically, the discriminator is optimized as follows. The discriminator D is a sigmoid function, as follows:
D(v, v_c) = σ(d_v⊤ d_{v_c}) = 1 / (1 + exp(−d_v⊤ d_{v_c}))
where d_v and d_{v_c} are the k-dimensional representation vectors of nodes v and v_c in the discriminator, so θ_D can be regarded as the set of all d_v.
It should be noted that the discriminator may also use other methods, such as the SDNE model.
For a given node pair (v, v_c), the corresponding node representation vectors are updated by the gradient descent method, as follows:
∇_{θ_D} V(G, D) = ∇_{θ_D} log D(v, v_c),  if v ∼ p_true(·|v_c)
∇_{θ_D} V(G, D) = ∇_{θ_D} log(1 − D(v, v_c)),  if v ∼ G(·|v_c)
further, the generator is optimized as follows:
for a generator, its objective function is to minimize the minimum function, so the update is optimized by gradient descent, and the gradient of the generator is calculated as follows:
it should be noted that the gradientIs of weight log (1-D (v, v) c ;θ D ) Gradient->That is to say if a generating node is identified as a negative sample node, the probability D (v, v c ;θ D ) The weight corresponding to the gradient of the generation node is very large, so that the whole gradient is large.
The implementation of the production model is defined by a softmax function, as follows:
wherein,is node v and v c K-dimensional vector representation, θ, in generator G All->Is a collection of (3);
for the inner product and performing an exponential operation, G (v|v c ) Generator G according to specified conditions v c A sample v is generated.
Based on this setting, the generative model computes the estimated connectivity distribution G(v|v_c; θ_G), performs random sampling according to these probability values to obtain sample pairs (v, v_c), and finally updates θ_G by the SGD method.
Further, the estimated connectivity distribution in the generative model is realized through softmax. However, the traditional softmax model computes softmax values over all nodes in the whole graph, which is time-consuming. At the same time, the topology of the network itself carries rich information, which the softmax function ignores entirely: it looks at every node "fairly" and merely completes the normalization task.
Common practice is to use hierarchical softmax or negative sampling to mitigate the computational overhead, but neither considers the structural information of the graph. Algorithms such as DeepWalk and node2vec already acquire the structural information of the graph during (biased) random walks, so the subsequent softmax can be accelerated by negative sampling alone; GraphGAN, however, would not at this point take the network structure into account, and the network embedding learned would be meaningless.
GraphGAN is intended to embed network structure information via softmax, so the softmax is modified here and Graph Softmax is proposed.
Graph Softmax is still used to calculate the estimated connection distribution G(v|v_c; θ_G); the conditions it needs to satisfy are:

(1) Normalized: it should be a valid probability distribution, i.e. Σ_{v≠v_c} G(v|v_c; θ_G) = 1;

(2) Graph-structure-aware: it should exploit the network structure information; a simple intuition is that, for two vertices in the graph, their connection probability should decrease as their shortest distance increases;

(3) Computationally efficient: the computation of G(v|v_c; θ_G) should cover only a small number of nodes in the graph, such as the nodes closer to node v_c.
To construct such a Graph Softmax, a BFS (breadth-first search) of the original network is performed, expanding it into a tree T_c with v_c as the root node. N_c(v) represents the set of neighbor nodes of node v in T_c. Given a node v and a neighbor node v_i ∈ N_c(v), the relevance probability of v_i with respect to v is defined as follows:

p_c(v_i | v) = exp(g_{v_i} · g_v) / Σ_{v_j ∈ N_c(v)} exp(g_{v_j} · g_v)

where each node v is reached by the unique path starting from the root node v_c, defined as P_{v_c→v} = (v_{r_0}, v_{r_1}, …, v_{r_m}), with v_{r_0} = v_c and v_{r_m} = v.
The G(v|v_c; θ_G) defined by Graph Softmax is then the following formula:

G(v|v_c; θ_G) = ( ∏_{j=1}^{m} p_c(v_{r_j} | v_{r_{j−1}}) ) · p_c(v_{r_{m−1}} | v_{r_m})

where p_c(v_{r_j} | v_{r_{j−1}}) is the probability of predicting the next sample v_{r_j} given the sample v_{r_{j−1}}, and p_c(v_{r_{m−1}} | v_{r_m}) is the probability of predicting the sample v_{r_{m−1}} given the sample v_{r_m}.
What the generative model finally has to do is sampling. Specifically, a simple method calculates all Graph Softmax values G(v|v_c; θ_G) and then performs weighted random sampling according to the probability values. Another, online, sampling strategy starts a random walk from the root node v_c of the tree T_c; if the next step of the node v at which the current walk has arrived is to walk back to its parent node, v is selected as the sample.
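The online strategy just described can be sketched in plain Python as follows. Uniform transition probabilities stand in for the relevance probabilities p_c(·|v), and the small adjacency list is a hypothetical example network.

```python
import collections
import random

def bfs_tree(adj, root):
    """Expand the network into a BFS tree T_c rooted at v_c."""
    parent = {root: None}
    q = collections.deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    children = collections.defaultdict(list)
    for w, p in parent.items():
        if p is not None:
            children[p].append(w)
    return parent, children

def sample_by_walk(parent, children, root, rng):
    """Random-walk sampling: start from the root; the node whose next step
    would walk back to where the walk came from is selected as the sample."""
    prev, cur = None, root
    while True:
        nbrs = ([parent[cur]] if parent[cur] is not None else []) + children[cur]
        nxt = rng.choice(nbrs)   # uniform here; GraphGAN would use p_c(.|cur)
        if nxt == prev:
            return cur           # the walk turned back: cur is the sample
        prev, cur = cur, nxt

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
parent, children = bfs_tree(adj, 0)
sample = sample_by_walk(parent, children, 0, random.Random(7))
```

Because the walk moves forward through the tree until it first turns back, "the node it came from" coincides with the parent node, matching the description above.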
In a specific embodiment of the invention, a multi-modal graph self-encoder (Multimodal Graph Autoencoder, MGA) is used; through encoding and decoding operations on different emergency nodes and edges, the knowledge graph is converted into a multi-modal representation form, so that the expressive and reasoning capabilities of the model are enhanced, the dependency relationships between nodes are captured, and emergency judgment is performed.
First, the multi-modal self-encoder is a feedforward neural network comprising an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer. The input layer obtains its input from two modalities, an image feature vector v_i^(1) and a text feature vector v_t^(1). The input layer is followed by the first hidden layer; for each modality the mapped input is denoted v_i^(2) and v_t^(2), respectively. v_i^(2) and v_t^(2) are concatenated, and the joint mapping to the second hidden layer is expressed as the joint multi-modal entity embedding v^(3).

The decoder stage has a completely symmetrical structure to the encoder: the embedding v^(3) is mapped as input to the third hidden layer, denoted v_i^(4) and v_t^(4), where v_i^(4) and v_t^(4) have the same dimensions as the corresponding v_i^(2) and v_t^(2); the decoder then maps v_i^(4) and v_t^(4) to the output layer v_i^(5) and v_t^(5).
The output layer and the input layer of each modality have the same dimensions, the output layer also being called reconstruction layer, the purpose of which is to reconstruct the input feature vectors.
The overall architecture of the proposed multi-modal self-encoder is defined as:

v_i^(j+1) = f(W_i^(j) v_i^(j) + b_i^(j)), v_t^(j+1) = f(W_t^(j) v_t^(j) + b_t^(j)), v^(3) = f(W_i^(2) v_i^(2) ⊕ W_t^(2) v_t^(2) + b^(2))

where f is the sigmoid activation function, W_i^(j) and W_t^(j) respectively denote the mapping weight matrices of the image and text modalities from layer j to layer (j+1), b_i^(j) and b_t^(j) are respectively the bias terms for image and text in the j-th hidden layer, b^(j) is the bias term for the embedding v^(j+1), and the symbol ⊕ represents the concatenation operation; j = 1, 2, 3, 4.
Finally, the whole multi-modal self-encoder is trained with the aim of minimizing the reconstruction error. The reconstruction error L_a is the sum of the dissimilarities between the input and output layers of the two modalities, as follows:

L_a = d(v_i^(1), v_i^(5)) + d(v_t^(1), v_t^(5))

where d(·,·) is a dissimilarity measure between an input feature vector and its reconstruction.
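A minimal NumPy sketch of this encoder/decoder pass follows. The bias terms are omitted and all layer sizes are chosen arbitrarily for illustration; a real implementation would train the weights by backpropagation rather than use random ones.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid activation

d_img, d_txt, d_hid, d_joint = 8, 6, 5, 4
# encoder weights: per-modality (layer 1 -> 2) and joint layer (2 -> 3)
W1_i, W1_t = rng.normal(size=(d_hid, d_img)), rng.normal(size=(d_hid, d_txt))
W2 = rng.normal(size=(d_joint, 2 * d_hid))
# decoder mirrors the encoder (3 -> 4 -> 5)
W3 = rng.normal(size=(2 * d_hid, d_joint))
W4_i, W4_t = rng.normal(size=(d_img, d_hid)), rng.normal(size=(d_txt, d_hid))

def encode_decode(v_img, v_txt):
    h_i, h_t = f(W1_i @ v_img), f(W1_t @ v_txt)       # first hidden layer
    v3 = f(W2 @ np.concatenate([h_i, h_t]))            # joint embedding v(3)
    h = f(W3 @ v3)                                     # third hidden layer
    h4_i, h4_t = h[:d_hid], h[d_hid:]
    out_i, out_t = f(W4_i @ h4_i), f(W4_t @ h4_t)      # reconstruction layer
    return v3, out_i, out_t

v_img, v_txt = rng.random(d_img), rng.random(d_txt)
v3, out_i, out_t = encode_decode(v_img, v_txt)
# reconstruction error L_a: sum of dissimilarities between input and output layers
L_a = np.sum((v_img - out_i) ** 2) + np.sum((v_txt - out_t) ** 2)
```

The reconstruction layer has the same dimensions as the input layer of each modality, as the text requires.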
In another embodiment of the present invention, the knowledge graph is converted into a multi-modal representation using the TransE model, which embeds entities and relations into a low-dimensional continuous vector space, where the embeddings of the head entity h, the tail entity t and the relation r satisfy the following formula:
h+r≈t;
For triples in the knowledge graph, TransE attempts to minimize the distance between h+r and t, while for triples outside the knowledge graph it attempts to maximize the distance between them.

In training TransE, we minimize the loss L_e:

L_e = Σ_{(h,r,t)∈S} Σ_{(h′,r′,t′)∈S′} max(0, γ + d(h, r, t) − d(h′, r′, t′));

where γ is the margin hyper-parameter, max(0, x) takes the positive part of x, and d is the dissimilarity function, which may be the L1 norm or the L2 norm. S represents the original dataset consisting of triples present in the knowledge graph, and S′ represents the corrupted dataset consisting of triples not present in the knowledge graph. S′ is treated as the negative samples in training, and is constructed by replacing the head or tail entity of each triple as follows:

S′_{h,r,t} = {(h′, r, t) | h′ ∈ E} ∪ {(h, r, t′) | t′ ∈ E}

where E is the entity set.
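The margin loss and head/tail corruption above can be sketched as follows, using random toy embeddings; the entity and relation counts are arbitrary, and the L2 norm plays the role of d.

```python
import numpy as np

rng = np.random.default_rng(2)
n_entities, n_relations, k = 10, 3, 16
E = rng.normal(scale=0.1, size=(n_entities, k))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_relations, k))  # relation embeddings

def d(h, r, t):
    """Dissimilarity d(h, r, t) = ||h + r - t||, here the L2 norm."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def corrupt(h, r, t):
    """Build a negative triple by replacing the head or the tail entity."""
    if rng.random() < 0.5:
        return int(rng.integers(n_entities)), r, t
    return h, r, int(rng.integers(n_entities))

def transe_loss(triples, gamma=1.0):
    """Margin loss L_e = sum of max(0, gamma + d(h,r,t) - d(h',r',t'))."""
    total = 0.0
    for h, r, t in triples:
        h2, r2, t2 = corrupt(h, r, t)
        total += max(0.0, gamma + d(h, r, t) - d(h2, r2, t2))
    return total

S = [(0, 0, 1), (2, 1, 3)]   # hypothetical positive triples
loss = transe_loss(S)
```

Gradient descent on this loss would pull h + r toward t for positive triples while pushing corrupted triples at least γ farther away.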
The TransAE model is a knowledge-graph representation learning method that combines the multi-modal self-encoder with the TransE model, TransE being a simple and effective knowledge-graph representation learning method.

In the TransAE model, we combine the two models and learn multi-modal knowledge and structural knowledge simultaneously. Each entity in the knowledge graph corresponds to several pictures and a sentence of description; first, the visual and text feature vectors of the entity are extracted, and then these feature vectors are input into the multi-modal self-encoder, whose joint embedding is used as the entity representation. The relation embeddings are randomly initialized at the beginning of training. These entity and relation representations are used to train our model.
The structural loss L′_e for (h, r, t) is given by the formula:

L′_e = Σ_{(h,r,t)∈S} Σ_{(h′,r′,t′)∈S′} max(0, γ + d(emb(h) + r, emb(t)) − d(emb(h′) + r′, emb(t′)))

where emb(h) and emb(t) respectively denote the embedded representations of h and t produced by the multi-modal self-encoder. For better generalization, we also add a regularizer Ω(θ) for the parameter set θ, whose weight is α.
Thus, we can train our model by minimizing the total loss L, as follows:
L = L_a + βL′_e + αΩ(θ);

The parameters in the self-encoder and the relation embeddings r are randomly initialized at the beginning of training, and the weight parameters β and α are selected to balance the losses L_a and L′_e and the size and importance of the regularization term Ω(θ).
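For illustration, the total objective can be assembled as below; an L2 sum of squares stands in for Ω(θ), and the loss values and weights are hypothetical.

```python
import numpy as np

def total_loss(L_a, L_e_struct, params, alpha=1e-4, beta=1.0):
    """Combine reconstruction loss, structural loss and an L2 regularizer,
    as in L = L_a + beta * L_e' + alpha * Omega(theta)."""
    omega = sum(float(np.sum(p ** 2)) for p in params)  # Omega(theta)
    return L_a + beta * L_e_struct + alpha * omega

params = [np.ones((2, 2)), np.ones(3)]   # stand-in parameter set theta
L = total_loss(0.5, 1.2, params, alpha=0.01, beta=2.0)
# Omega = 4 + 3 = 7, so L = 0.5 + 2 * 1.2 + 0.01 * 7 = 2.97
```

Raising β emphasizes the structural (TransE-style) loss over the reconstruction loss, while α controls how strongly the parameters are shrunk.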
Finally, in step 3, in order to improve the effect and efficiency of event emergency plan recommendation, multi-task learning is introduced on the basis of step 2, BERT is used as a pre-training model to perform personalized event emergency plan recommendation, and scores are computed by assigning weights to the different tasks, so as to obtain event emergency plan recommendations with interpretability.

Specifically, aiming at the shortcomings of single-task learning (low data utilization, poor generalization ability, high computational complexity and poor interpretability), this embodiment uses a multi-task learning strategy in which multiple tasks share one model, so that the data can be better utilized, generalization performance is improved, and computational complexity is reduced.

In this task, the multi-task learning method can improve the effect and efficiency of event emergency plan recommendation; it can perform personalized event emergency plan recommendation, scene recommendation for related event links, and link classification tasks, and scores are computed by assigning weights to the different tasks, so as to obtain fault emergency plan recommendations with interpretability. The knowledge entities can also provide varying degrees of assistance: some information can be transferred between different tasks, and labels are transferred between tasks through the knowledge entities, thereby promoting all tasks.
This embodiment then selects a multi-task learning model with an attention mechanism (Multi-Task Learning with Attention, MTL-Att). The model extracts the features of each task by using an attention mechanism combined with a BERT pre-training model, thereby learning common features, and uses a shared BERT model as the feature extractor in multi-task learning. During training, the model learns the relations among the tasks, adds a task-specific output layer for each task, and dynamically adjusts the weights of the attention mechanism, thereby improving the performance of each task.

Through the attention mechanism, the MTL-Att-BERT model can learn how to assign importance to the three different tasks: user-personalized event emergency plan recommendation, scene recommendation for related event links, and link classification. Specifically, the model learns a weight vector for each task and uses these weight vectors to weight the output of each task to arrive at the final prediction result. The model learns the three tasks simultaneously during training, realizing the goal of multi-task learning. Finally, by adjusting the values of the weight vectors, the model better tunes the importance of each task, so as to obtain a better event emergency plan recommendation result.
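The attention-based weighting described here can be sketched as follows; the task scores are hypothetical, and in the actual model the weight vector would be produced by the attention layers over shared BERT features rather than given directly.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def combine_tasks(task_scores, attention_logits):
    """Weight the per-task outputs (plan recommendation, scene recommendation,
    link classification) by an attention-derived weight vector."""
    w = softmax(attention_logits)              # one learned weight per task
    return w, sum(wi * si for wi, si in zip(w, task_scores))

# hypothetical scores produced by three task heads on shared features
scores = [0.8, 0.4, 0.6]
weights, final = combine_tasks(scores, np.array([2.0, 0.5, 1.0]))
```

Training would adjust the attention logits jointly with the task heads, so the final score leans on whichever task is most informative for the recommendation.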
In summary, the idea of the invention is as follows. First, aiming at the problems of the knowledge-graph entity and relation lists, a text classification algorithm based on deep learning is used to mine more emergency nodes, construct more relevant triple information, and generate an emergency knowledge graph. Then, using a multi-modal graph self-encoder, the knowledge graph is converted into a multi-modal representation by performing encoding and decoding operations on different types of nodes and edges, thereby enhancing the expressive and inferential capabilities of the model, capturing the dependency relationships between nodes, and judging the emergency event. Finally, in order to improve the effect and efficiency of event emergency plan recommendation, multi-task learning is introduced with BERT as a pre-training model to perform user-personalized event emergency plan recommendation, scene recommendation for related event links and link classification tasks, and scores are computed by assigning weights to the different tasks, so as to obtain event emergency plan recommendations with interpretability.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention, but is illustrative of the present invention as defined by the appended claims. Any modification, equivalent replacement or improvement made by those skilled in the art without departing from the principles of the present invention shall be included in the scope of the present invention.

Claims (6)

1. The emergency command scheduling platform construction method based on deep learning is characterized by comprising the following steps of:
step 1, establishing event atom library data, using a knowledge reasoning technology based on the graph generative adversarial network GraphGAN, monitoring emergency nodes, generating emergency triple information, and constructing an event knowledge graph;
step 2, based on the multi-modal graph self-encoder MGA, encoding and decoding different emergency nodes and edges, converting the knowledge graph into a multi-modal representation form, capturing the inter-dependency relationships of the nodes, and judging the emergency;
first, based on the multi-modal map self-encoder MGA, different emergency nodes and edges are encoded using encoders and decoded using decoders, as follows:
the multi-modal self-encoder is a feedforward neural network comprising an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer; the input layer obtains its input from two modalities, an image feature vector v_i^(1) and a text feature vector v_t^(1); the input layer is followed by the first hidden layer, and for each modality the mapped input is denoted v_i^(2) and v_t^(2), respectively; v_i^(2) and v_t^(2) are concatenated, and the joint mapping to the second hidden layer is expressed as the joint multi-modal entity embedding v^(3);

the decoder stage has a completely symmetrical structure to the encoder: the embedding v^(3) is mapped as input to the third hidden layer, denoted v_i^(4) and v_t^(4), where v_i^(4) and v_t^(4) have the same dimensions as the corresponding v_i^(2) and v_t^(2); the decoder maps v_i^(4) and v_t^(4) to the output layer v_i^(5) and v_t^(5); the output layer and the input layer of each modality have the same dimensions;

the overall architecture of the multi-modal self-encoder is defined as:

v_i^(j+1) = f(W_i^(j) v_i^(j) + b_i^(j)), v_t^(j+1) = f(W_t^(j) v_t^(j) + b_t^(j)), v^(3) = f(W_i^(2) v_i^(2) ⊕ W_t^(2) v_t^(2) + b^(2)) (7)

where f is the sigmoid activation function, W_i^(j) and W_t^(j) respectively denote the mapping weight matrices of the image and text modalities from layer j to layer (j+1); b_i^(j) and b_t^(j) are respectively the bias terms for image and text in the j-th hidden layer, b^(j) is the bias term for the embedding v^(j+1), and the symbol ⊕ represents the concatenation operation, j = 1, 2, 3, 4;

the multi-modal self-encoder is trained with the aim of minimizing the reconstruction error; the reconstruction error L_a is the sum of the dissimilarities between the input and output layers of the two modalities, as in the following formula (8):

L_a = d(v_i^(1), v_i^(5)) + d(v_t^(1), v_t^(5)) (8)
then, the knowledge graph is converted into a multi-modal representation using the TransE model, which embeds entities and relations into a low-dimensional continuous vector space, wherein the embeddings of the head entity h, the tail entity t and the relation r satisfy the following triple formula (9):

h + r ≈ t (9)

for triples in the knowledge graph, the TransE model attempts to minimize the distance between h+r and t, while for triples outside the knowledge graph it attempts to maximize the distance between them;

when training TransE, the loss L_e is minimized, as in the following formula (10):

L_e = Σ_{(h,r,t)∈S} Σ_{(h′,r′,t′)∈S′} max(0, γ + d(h, r, t) − d(h′, r′, t′)) (10)

where γ is the margin hyper-parameter, max(0, x) takes the positive part of x, d is the dissimilarity function, h′, r′ and t′ denote the head entity, relation and tail entity of a corrupted triple, S represents the original dataset consisting of triples present in the knowledge graph, and S′ represents the corrupted dataset consisting of triples not present in the knowledge graph; S′ is treated as the negative samples in training, constructed by replacing the head or tail entity of each triple as shown in the following formula (11):

S′_{h,r,t} = {(h′, r, t) | h′ ∈ E} ∪ {(h, r, t′) | t′ ∈ E} (11)

where E is the entity set;
then, constructing a TransAE model, combining the multi-modal self-encoder of formula (8) with the TransE model of formula (10), and learning multi-modal knowledge and structural knowledge simultaneously, wherein each entity in the knowledge graph corresponds to several pictures and a sentence of description; the visual and text feature vectors of the entity are extracted and input into the multi-modal self-encoder to obtain the joint embedding as the entity representation; the relation embeddings are randomly initialized at the beginning of training, and the entity and relation representations are used to train the model;
the structural loss L′_e for (h, r, t) is expressed as the following formula (12):

L′_e = Σ_{(h,r,t)∈S} Σ_{(h′,r′,t′)∈S′} max(0, γ + d(emb(h) + r, emb(t)) − d(emb(h′) + r′, emb(t′))) (12)

where emb(h) and emb(t) respectively denote the embedded representations of h and t, and emb(h′) and emb(t′) respectively denote the embedded representations of h′ and t′;
a regularizer Ω (θ) is added for regularizing the parameter set θ, weighted α, training the model by minimizing the total loss L, as shown in equation (13) below:
L=L a +βL′ e +αΩ(θ) (13)
parameters in self-encoder and relational embeddingRandomly initializing at training beginning, selecting weight parameters beta and alpha to balance loss L a 、L′ e And the size and importance of the regularization term Ω (θ);
step 3, introducing the multi-task learning model MTL-Att, which adds an attention mechanism and combines a BERT pre-training model; the multi-task learning model extracts the features of each task and learns common features, and an MTL-Att-BERT model is constructed by using a shared BERT model as the feature extractor; during training, the MTL-Att-BERT model learns a weight vector for each task through the attention mechanism to weight the output of each task, adjusts the importance of each task by adjusting the values of the weight vectors, obtains an event emergency plan recommendation result, and performs personalized event emergency plan recommendation.
2. The emergency command and dispatch platform construction method based on deep learning according to claim 1, wherein in step 1, knowledge reasoning is based on the graph generative adversarial network GraphGAN, the graph generative adversarial network GraphGAN comprising a generative model and a discriminative model; specifically,

the generative model is used for fitting or estimating the true connectivity distribution and selecting, from the node set V, the nodes most likely to be connected with v_c, expressed as the generator G(v|v_c; θ_G), G being the generator; the discriminative model is used for discriminating the connectivity of the node pair and calculating the possibility that an edge exists between the output nodes v and v_c, expressed as the discriminator D(v|v_c; θ_D), D being the discriminator; the minimax objective function of generator G and discriminator D is the following formula (14):

min_{θ_G} max_{θ_D} V(G, D) = Σ_{c=1}^{V} ( E_{v∼p_true(·|v_c)} [log D(v, v_c; θ_D)] + E_{v∼G(·|v_c; θ_G)} [log(1 − D(v, v_c; θ_D))] ) (14)

where D(v, v_c; θ_D) is the discriminator, which discriminates the connectivity of the node pair (v, v_c) and outputs a scalar representing the probability that an edge exists between nodes v and v_c;

θ_D is the set of vectors of all nodes v; V(G, D) is the cost function;

p_true(·|v_c) is the true data distribution, and E_{v∼p_true(·|v_c)} computes the expected value of the random variable v under the true distribution;

G(·|v_c; θ_G) is the generator model, v_c is the input condition of the generator, θ_G is the set of parameters of the generator, and E_{v∼G(·|v_c; θ_G)} is the expected value over samples v generated by generator G under the given condition v_c; the parameters of the generator and the discriminator are updated by continuous alternating training: in each iteration, the discriminator D is trained with positive samples drawn from p_true(·|v_c) and negative samples from generator G, and the generator G is updated according to a policy gradient under the guidance of the discriminator D.
3. The emergency command and dispatch platform construction method based on deep learning according to claim 2, wherein the discriminator D is a sigmoid function, as in the following formula (15):

D(v, v_c) = σ(d_v · d_{v_c}) = 1 / (1 + exp(−d_v · d_{v_c})) (15)

where d_v and d_{v_c} are the k-dimensional vector representations of nodes v and v_c in the discriminator, and θ_D is thus the set of all d_v; d_v · d_{v_c} is the inner product, converted by the sigmoid function; exp(−d_v · d_{v_c}) takes the negative of the inner product and performs an exponential operation; and D(v, v_c) is the prediction result of the discriminator for the given true sample v and predicted sample v_c;

for a given node pair (v, v_c), the corresponding node representation vectors are updated by the gradient descent method, as in the following formula (16):

∇_{θ_D} V(G, D) = ∇_{θ_D} log D(v, v_c), if v ∼ p_true(·|v_c); ∇_{θ_D} log(1 − D(v, v_c)), if v ∼ G(·|v_c) (16)

where ∇_{θ_D} denotes taking the gradient with respect to θ_D, v ∼ p_true indicates that the variable v follows the true distribution, v ∼ G indicates that the sample v is generated by the generator, and the discriminator parameters θ_D are updated with these gradients.
4. The emergency command and dispatch platform construction method based on deep learning according to claim 3, wherein the generator G is optimized and updated by the gradient descent method, the gradient of the generator being the following formula (17):

∇_{θ_G} V(G, D) = Σ_{c=1}^{V} E_{v∼G(·|v_c)} [ ∇_{θ_G} log G(v|v_c) · log(1 − D(v, v_c; θ_D)) ] (17)

where the gradient ∇_{θ_G} V(G, D) is calculated as the expectation of the gradients ∇_{θ_G} log G(v|v_c) weighted by log(1 − D(v, v_c; θ_D)), and E_{v∼G(·|v_c)} represents the expected value, evaluated with the discriminator, over samples v generated by the generator G under the condition v_c;

the implementation of generator G is defined by a softmax function, as in the following formula (18):

G(v|v_c) = exp(g_v · g_{v_c}) / Σ_{v′≠v_c} exp(g_{v′} · g_{v_c}) (18)

where g_v and g_{v_c} are the k-dimensional vector representations of nodes v and v_c in the generator, and θ_G is the set of all g_v; g_v · g_{v_c} is the inner product, on which an exponential operation is performed, and G(v|v_c) is the probability that generator G generates a sample v under the specified condition v_c.
5. The emergency command and dispatch platform construction method based on deep learning according to claim 4, wherein, based on the above formula (18), Graph Softmax is proposed, the network structure information being embedded through softmax; computing the estimated connection distribution G(v|v_c; θ_G) specifically comprises: performing a BFS (breadth-first search) on the original network, expanding it into a tree T_c with v_c as the root node; N_c(v) represents the set of neighbor nodes of node v; given a node v and a neighbor node v_i ∈ N_c(v), the relevance probability p_c(v_i|v) of v_i with respect to v is defined as the following formula (19):

p_c(v_i | v) = exp(g_{v_i} · g_v) / Σ_{v_j ∈ N_c(v)} exp(g_{v_j} · g_v) (19)

wherein each node v is reached by the unique path starting from the root node v_c, defined as P_{v_c→v} = (v_{r_0}, v_{r_1}, …, v_{r_m}), where v_{r_0} = v_c, v_{r_m} = v, and m is the number of steps of the path; the G(v|v_c; θ_G) defined by Graph Softmax is the following formula (20):

G(v|v_c; θ_G) = ( ∏_{j=1}^{m} p_c(v_{r_j} | v_{r_{j−1}}) ) · p_c(v_{r_{m−1}} | v_{r_m}) (20)

where p_c(v_{r_j} | v_{r_{j−1}}) is the probability of predicting the next sample v_{r_j} given the sample v_{r_{j−1}}, and p_c(v_{r_{m−1}} | v_{r_m}) is the probability of predicting the sample v_{r_{m−1}} given the sample v_{r_m}.
6. An emergency command and dispatch platform construction system based on deep learning, characterized in that it is constructed by using the emergency command and dispatch platform construction method according to any one of claims 1-5, and comprises the following modules:

the event atom library acquisition module, used for acquiring input data and constructing the data atom library;

the event knowledge graph construction module, used for monitoring emergency nodes, generating emergency triple information, and constructing the event knowledge graph with the knowledge reasoning technology based on the graph generative adversarial network GraphGAN;

the multi-modal graph self-encoder module, used for performing encoding and decoding operations on different emergency nodes and edges, converting the knowledge graph into a multi-modal representation form, capturing the mutual dependency relationships of the nodes, and performing emergency judgment;

the MTL-Att-BERT model construction module, used for the multi-task learning model combining the BERT pre-training model with the attention mechanism, and for extracting the features of each task;

and the event emergency plan recommendation module, used for recommending personalized event emergency plans.
CN202311061207.5A 2023-08-23 2023-08-23 Emergency command scheduling platform construction method and system based on deep learning Active CN116757460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311061207.5A CN116757460B (en) 2023-08-23 2023-08-23 Emergency command scheduling platform construction method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN116757460A CN116757460A (en) 2023-09-15
CN116757460B true CN116757460B (en) 2024-01-09

Family

ID=87953779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311061207.5A Active CN116757460B (en) 2023-08-23 2023-08-23 Emergency command scheduling platform construction method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN116757460B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379053A (en) * 2020-12-17 2021-09-10 中国人民公安大学 Emergency response decision-making method and device and electronic equipment
WO2022116548A1 (en) * 2020-12-03 2022-06-09 全球能源互联网研究院有限公司 Power emergency command system
CN115712709A (en) * 2022-11-18 2023-02-24 哈尔滨工业大学 Multi-modal dialog question-answer generation method based on multi-relationship graph model
CN115828863A (en) * 2022-11-29 2023-03-21 南京争锋信息科技有限公司 Automatic generation method of emergency plan in chaotic engineering test scene
CN115840853A (en) * 2022-12-13 2023-03-24 黑龙江大学 Course recommendation system based on knowledge graph and attention network
CN116161087A (en) * 2023-01-03 2023-05-26 重庆交通大学 Train emergency driving control method for distributed deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on the intelligent decision support system of the urban emergency command in the cloud computing environment; Yici Mao et al.; 2015 IEEE 10th Conference on Industrial Electronics and Applications (ICIEA); 902-907 *
Research on salient object detection methods based on multi-modal and multi-task learning; Xiang Qian; China Masters' Theses Full-text Database, Information Science and Technology (No. 08); I138-457 *
Research on a metro early-warning emergency linkage system based on structured emergency plans; Zhu Haotian; Intelligent City; Vol. 7 (No. 04); 29-30 *



Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant