CN108833173A - Deep network characterization method for enriching structural information - Google Patents

Deep network characterization method for enriching structural information

Info

Publication number
CN108833173A
CN108833173A (application CN201810653420.8A)
Authority
CN
China
Prior art keywords
matrix
represent
node
network
same order
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810653420.8A
Other languages
Chinese (zh)
Other versions
CN108833173B (en)
Inventor
Qiao Lisheng (乔立升)
Chen Enhong (陈恩红)
Liu Qi (刘淇)
Zhao Hongke (赵洪科)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC
Priority to CN201810653420.8A priority Critical patent/CN108833173B/en
Publication of CN108833173A publication Critical patent/CN108833173A/en
Application granted granted Critical
Publication of CN108833173B publication Critical patent/CN108833173B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/142 Network analysis or design using statistical or mathematical methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a deep network characterization method that enriches structural information. It performs network characterization from a comprehensive perspective using relatively rich multi-order structural information: for example, it introduces direction-adjusting control parameters for the transition probabilities of the transition matrices of different orders, applies nonlinear dimensionality reduction to the transition matrices of different orders with a stacked denoising autoencoder, and fuses the multi-order information with an attention mechanism, thereby effectively improving the network characterization.

Description

Deep network characterization method for enriching structural information
Technical field
The present invention relates to the fields of machine learning and network characterization optimization, and in particular to a deep network characterization method that enriches structural information.
Background technique
Network characterization is a recently popular technique: it can substantially improve the predictive performance of neural networks and is applicable in many other settings. Network characterization is an important method for learning low-dimensional representations of network nodes, whose purpose is to capture and preserve effective structural information. Low-dimensional network representations benefit a variety of network-related studies, such as influence analysis, community discovery, node classification, and economic decision support.
Among the many existing network characterization methods that exploit network topology information, the most effective and widely applied are those that map the network from a high-dimensional space into a low-dimensional vector space, such as DeepWalk, node2vec, GraRep, DNGR, and SDNE.
However, existing algorithms of this kind were each designed from a single perspective, emphasizing one aspect such as neighbor-node type, noise robustness, use of order information, or nonlinear structural relationships, rather than a comprehensive multi-perspective solution. As a result, existing algorithms suffer from problems such as lacking selectivity over neighbor-node types or being sensitive to noisy data.
Summary of the invention
The object of the present invention is to provide a deep network characterization method that enriches structural information, one that can fully exploit effective structural topology information to characterize network nodes and thus provide strong support for downstream applications such as classification and prediction.
The object of the present invention is achieved through the following technical solution:
A deep network characterization method enriching structural information, comprising:
obtaining initial feature matrices of different orders from the network topology, so as to capture necessary network structure information;
applying a minor adjustment to the initial feature matrices of different orders to obtain positive bias optimization matrices of different orders;
applying dimensionality reduction to the positive bias optimization matrices of different orders to obtain latent features of different orders, the latent features of different orders reflecting different aspects and levels of the network structure;
computing a fusion weight for each latent feature with an attention mechanism;
combining all latent features and the corresponding fusion weights to predict the probability-distribution output of a relevant task.
As can be seen from the technical solution provided above, the invention performs network characterization from a comprehensive perspective using relatively rich multi-order structural information: for example, introducing direction-adjusting control parameters for the transition probabilities of the transition matrices of different orders, performing nonlinear dimensionality reduction on the transition matrices of different orders with a stacked denoising autoencoder, and fusing the multi-order information with an attention mechanism, thereby effectively improving the network characterization.
Detailed description of the invention
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a deep network characterization method enriching structural information provided by an embodiment of the present invention;
Fig. 2 is an overall block diagram of the model framework of a deep network characterization method enriching structural information provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the present invention.
An embodiment of the present invention provides a deep network characterization method enriching structural information. As shown in Fig. 1, it mainly comprises the following steps:
Step 1: obtain initial feature matrices of different orders from the network topology, to capture the necessary network structure information.
In this embodiment, when obtaining the initial feature matrices of different orders, a k-th order bias hyperparameter is introduced to adjust the neighbor-search direction of network nodes, so as to capture effective network structure information. In this step, two control parameters (p and q) are also introduced to adjust the transition probabilities, and the concept of a biased transition matrix is proposed; through this biased transition matrix, the model can flexibly select and regulate the type of network structure. Since the motivation of the model is to acquire the necessary information to better characterize the network, introducing control parameters to flexibly select the necessary information is essential.
Step 2: apply a minor adjustment to the initial feature matrices of different orders to obtain positive bias optimization matrices of different orders.
In this embodiment, to improve the sparsity and consistency of the representation, the initial feature matrices of different orders need to be adjusted, yielding the positive bias optimization matrices of different orders.
Step 3: apply dimensionality reduction to the positive bias optimization matrices of different orders to obtain latent features of different orders; the latent features of different orders reflect different aspects and levels of the network structure.
In this embodiment, a stacked denoising autoencoder is used to reduce the dimensionality of the positive bias optimization matrices of different orders, yielding latent features in the corresponding subspaces. Since network structure types are diverse (e.g., structural homogeneity, structural equivalence, and subgraph size factors), the latent representations of different orders reflect different aspects and levels of the network structure.
Step 4: compute the fusion weight of each latent feature with an attention mechanism.
To combine the multi-order information, target label information is introduced and the fusion weights of the different latent features are computed with an attention mechanism. Because the weights of the features of different orders are optimized, rather than the features being merged by simple addition or concatenation, the invention can focus on the more important information for a specific characterization task.
Step 5: combine all latent features and the corresponding fusion weights to predict the probability-distribution output of the relevant task.
In this embodiment, the prediction tasks mainly include node classification, link prediction, and economic decision prediction.
For ease of understanding, the above scheme of the present invention is elaborated below.
The deep network characterization method provided by this embodiment is realized by a framework model that integrates network topology information from multiple perspectives and aspects; the model framework is shown in Fig. 2.
The purpose of the framework model is, when mapping each node from the high-dimensional space to the low-dimensional space, to explicitly capture and preserve as far as possible the following information: 1) the required type-specific structural information; 2) multi-order structural information; 3) nonlinear structural information, so as to adequately characterize network nodes with effective structural topology information and thereby provide more reliable support for applications such as classification and prediction.
1. Obtaining the initial feature matrices of different orders from the network topology.
In this embodiment, a biased transition probability matrix U_k is introduced to adjust the neighbor-search direction of network nodes. The biased transition probability matrix U_k is expressed as:

U_k = B A^k,

where k denotes the k-th order, k = 1, 2, ..., K, K being the total number of orders, and B = [α_pq(v_i, v_j)] is the bias-coefficient matrix. Here p and q are control parameters; v_i and v_j denote the nodes corresponding to i and j in the input sample sequence (i and j are index variables, and the subscript i used in many places below has the same meaning); d(v_i, v_j) denotes the distance between v_i and v_j; and N is the number of network nodes, i.e., the total number of input samples. A = D^(-1) S, where S is the adjacency matrix of the network and D is the degree matrix (a diagonal matrix) of the network; the two are related by D_ii = Σ_{j ∈ H} S_ij, where H denotes the set of neighbor nodes of node v_i and S_ij denotes the adjacency value between v_i and v_j.
Based on the biased transition probability matrix U_k, the theoretical loss function L_k(v_i, v_j) for learning the network representation can be expressed as:

L_k(v_i, v_j) = #(v_i, v_j)_k · log σ(w_i · c_j) + λ · (#(v_i)_k · #(v_j)_k / |D_k|) · log σ(−w_i · c_j),

where w_i and c_j denote the vectors corresponding to nodes v_i and v_j respectively; D_k denotes the set of node pairs with k-order sample paths formed by the observed node and other nodes; #(v_i, v_j)_k denotes the number of times the node pair (v_i, v_j) appears in D_k; #(v_i)_k and #(v_j)_k denote the numbers of times nodes v_i and v_j appear in D_k respectively; λ denotes the number of negative samples; and σ(·) is the sigmoid function, σ(x) = (1 + e^(−x))^(−1).

By derivation one obtains:

M^k_ij = w_i · c_j = log( U^k_ij / Σ_t U^k_tj ) − log λ,

where w_i denotes the row of the network node representation matrix corresponding to node v_i, c_j denotes the transpose of the column of the network context representation matrix corresponding to node v_j, M^k_ij denotes the entry of matrix M_k corresponding to nodes v_i and v_j, and U^k_ij denotes the entry of matrix U_k corresponding to nodes v_i and v_j.

As the above formula shows, the optimization of the loss function is thus turned into a matrix factorization problem on the bias optimization matrix M_k. In this way the initial feature matrices of different orders are obtained, where the k-th order initial feature matrix is denoted M_k; the matrix M_k is also a bias optimization matrix. Moreover, through suitable neighbor selection, M_k can learn representations based on the roles or communities that nodes belong to.
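For illustration only, the construction of the biased transition matrices in this step can be sketched as follows. The exact form of the bias coefficient α_pq(v_i, v_j) is not reproduced above, so the sketch assumes a hypothetical node2vec-style coefficient (1/p on the diagonal, 1 for adjacent pairs, 1/q otherwise) and applies B to A^k elementwise; both choices are assumptions, not the claimed construction.

```python
import numpy as np

def biased_transition_matrices(S, p=1.0, q=1.0, K=3):
    """Sketch of the k-th order biased transition matrices U_k = B * A^k.

    S    : adjacency matrix of the network (N x N, nonnegative).
    p, q : control parameters steering the neighbor-search direction
           (hypothetical node2vec-style bias; the exact alpha_pq is assumed).
    K    : total number of orders.
    """
    N = S.shape[0]
    A = S / S.sum(axis=1, keepdims=True)      # A = D^{-1} S, row-stochastic
    adjacent = S > 0
    # Assumed bias coefficients alpha_pq(v_i, v_j): 1/p for staying on the
    # same node, 1 for distance-1 pairs, 1/q for all other pairs.
    B = np.where(np.eye(N, dtype=bool), 1.0 / p,
                 np.where(adjacent, 1.0, 1.0 / q))
    Us, Ak = [], np.eye(N)
    for _ in range(K):
        Ak = Ak @ A                            # A^k
        Us.append(B * Ak)                      # elementwise reading of B A^k
    return Us
```

With p = q = 1 the bias matrix B is all ones, so U_k reduces to the plain k-step transition matrix A^k.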
2. Applying a minor adjustment to the initial feature matrices of different orders.

In this embodiment, to promote the sparsity and consistency of the network representation feature matrices, all negative values in the k-th order initial feature matrix M_k are replaced with 0, yielding the corresponding positive k-th order bias optimization matrix X_k, expressed as:

X^k_ij = max(M^k_ij, 0),

where X^k_ij denotes the entry of matrix X_k corresponding to nodes v_i and v_j.
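The adjustment described above amounts to an elementwise clipping of negative entries, which can be sketched in one line:

```python
import numpy as np

def positive_bias_matrix(M_k):
    """Replace every negative entry of the k-th order initial feature
    matrix M_k with 0, yielding the positive bias optimization matrix X_k."""
    return np.maximum(M_k, 0.0)
```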
3. Applying dimensionality reduction to the positive bias optimization matrices of different orders.

The positive k-th order bias optimization matrix X_k obtained in the preceding step is high-dimensional and requires nonlinear dimensionality reduction.

In this embodiment, the positive bias optimization matrices of different orders are reduced in dimension by a stacked denoising autoencoder, in which the values at some positions of the representation vectors in the positive bias optimization matrices are randomly set to 0 with a certain probability; finally, the latent features of the subspaces corresponding to the positive bias optimization matrices of different orders are obtained.

In the field of deep learning, the stacked denoising autoencoder is a popular deep learning model that can compress and reduce the dimensionality of high-dimensional vectors; it learns more robust representations through layer-by-layer pre-training of the weights of a deep network. In effect, a stacked denoising autoencoder introduces random noise at the input layer of the deep network and then reconstructs the original data from the corrupted input, so that the learned parameters are more robust. In the concrete realization, for each input sample vector x, the values at some positions of the vector are randomly set to 0 with a certain probability; the rest of the process is the same as for a standard autoencoder. In Fig. 2, the indicated symbols denote the outputs of the hidden nodes of the stacked denoising autoencoder, and n1 denotes the total number of nodes of that hidden layer.
The loss function of the dimensionality-reduction process is:

L_first = Σ_{i=1}^{N} d( X^k_i , g( s( x̃^k_i ) ) ),

where θ = {W_e, b_e, W_d, b_d} denotes the unified parameters of the denoising autoencoder, with W_e, b_e and W_d, b_d respectively the weights and offset parameters of the encoding and decoding processes; d(·,·) denotes the distance function, here chosen as the Euclidean distance; s(·) and g(·) respectively denote the nonlinear mapping functions of the encoding and decoding processes, here chosen as the sigmoid function; N denotes the total number of input samples; X^k_i denotes the i-th row of X_k; and x̃^k_i denotes its randomly corrupted version.
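A minimal single-layer sketch of the denoising forward pass and reconstruction loss described above. The layer sizes, corruption probability, and squared-error distance are illustrative assumptions; a real stacked model would chain several such layers with layer-wise pre-training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def corrupt(X, drop_prob=0.2, rng=rng):
    """Masking noise: randomly set entries to 0 with probability drop_prob."""
    mask = rng.random(X.shape) >= drop_prob
    return X * mask

def dae_forward(X, We, be, Wd, bd):
    """One encode/decode pass of a single-layer denoising autoencoder:
    corrupt the input, encode with s(), decode with g(), and return the
    latent features plus the squared reconstruction error against the
    CLEAN input (the Euclidean-distance loss of the text)."""
    X_noisy = corrupt(X)
    H = sigmoid(X_noisy @ We + be)        # latent features (one order's Y_k)
    X_hat = sigmoid(H @ Wd + bd)          # reconstruction
    loss = np.mean((X - X_hat) ** 2)
    return H, loss
```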
4. Computing the fusion weight of each latent feature and obtaining the fused feature.

The different latent-space representations obtained from the bias optimization matrices of different orders represent network topology information at different levels, and they embody the richness and diversity of that information. However, more information does not necessarily yield better performance: if this information is used indiscriminately, information that is irrelevant or only weakly relevant to the task may harm the final performance. Therefore an attention mechanism is introduced to fuse the information more effectively.
Specifically, in this embodiment, a gate unit with an attention mechanism is used to learn the fusion weight of each latent feature according to the introduced target information (the true information of the object, including the category label information mentioned below), so that the model focuses better on the effective information. The formula is:

α_t^k = σ( G_tk · Y_k + b_tgk ),

where α_t^k denotes the fusion weight of the k-th order latent feature Y_k when the target-information sequence value is t, and G_tk and b_tgk denote the weight and bias used to compute the k-th order latent-feature weight when the target-information sequence value is t.

All latent features are then combined to represent the network node; the formula is:

Y_t^final = Σ_{k=1}^{K} α_t^k · Y_k,

where Y_t^final denotes the node fusion feature value computed when the target-information sequence value is t, α_t^k denotes the fusion weight of the k-th order latent feature Y_k when the target-information sequence value is t, and K denotes the total number of orders.
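An illustrative sketch of the attention fusion. Since the normalization of the fusion weights is not fully specified above, the sketch assumes a softmax over the K orders so that each node's fusion weights sum to 1; the shapes of the parameters G_tk and b_tgk are likewise assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(Ys, G, bg):
    """Fuse K order-specific latent features Y_k into one representation.

    Ys : list of K latent feature matrices, each (N x d).
    G  : assumed per-order attention weight vectors, shape (K, d).
    bg : assumed per-order attention biases, shape (K,).
    """
    # One scalar score per (node, order), then softmax over the K orders.
    scores = np.stack([Y @ g + b for Y, g, b in zip(Ys, G, bg)], axis=1)  # (N, K)
    alpha = softmax(scores, axis=1)                                       # fusion weights
    Y_final = sum(alpha[:, k:k + 1] * Ys[k] for k in range(len(Ys)))      # weighted sum
    return Y_final, alpha
```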
5. Predicting the probability-distribution output of the relevant task.

In this embodiment, the prediction tasks mainly include node classification, link prediction, and economic decision prediction.

In this embodiment, the prediction output for the relevant task is computed by the following formula:

P(label_t | S_i) = σ( M_t · Y_t^final + b_tm ),

where P(label_t | S_i) denotes the output probability that the target information is t given the input S_i, S_i denotes the i-th row of matrix S, and M_t and b_tm respectively denote the weight and bias of the σ(·) function when the target-information sequence value is t.
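The prediction formula above can be sketched directly; M_t and b_tm are the trained parameters, here passed in as arguments.

```python
import numpy as np

def predict_proba(Y_final, M_t, b_tm):
    """P(label_t | S_i) = sigma(M_t . Y_t^final + b_tm), applied to each
    node row of the fused feature matrix Y_final (N x d)."""
    z = Y_final @ M_t + b_tm
    return 1.0 / (1.0 + np.exp(-z))
```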
In addition, to improve the effect of the network characterization, the relevant parameters of the gate unit and of the prediction task also need to be optimized and updated.

1. The training optimization loss function of the model framework is as follows:

L = L_first − Σ_{i=1}^{N} Σ_{t=1}^{T} log P(label_t | S_i),

where the first term is the loss function of the deep network and the second term is the loss function of the gate unit and the prediction task; L_first denotes the loss function of applying dimensionality reduction to the positive bias optimization matrices of different orders; N denotes the total number of input samples; i indexes the i-th sample; T denotes the total number of target-information items; t denotes the t-th item in the target-information sequence; and label_t denotes the label value of the t-th target-information item. The two terms in the formula are trained separately; they are written in a single formula only for uniformity of presentation.
2. Model initialization and optimization update.

The parameters to be trained are {M_t, b_tm, G_tk, b_tgk}. When training the model, the values of the weight vectors M_t and G_tk are randomly drawn from a set interval, while the biases b_tm and b_tgk are initialized to 0.

After the parameters are initialized, the model is trained with the BP (back-propagation) algorithm, i.e., the loss function is minimized by stochastic gradient descent. Specifically, minibatches are used to accelerate training: the batch size is set within a preset range (for example, between 10 and 50); the learning rate is initialized within a set interval (for example, within [5-20]); and the learning rate is dynamically updated after a certain number of iterations. Illustratively, after a certain amount of batch data has been processed, the learning rate can be halved.
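The minibatch construction and the halving learning-rate schedule described above can be sketched as follows; the batch size, seed, and decay period are illustrative values, not prescribed by the text.

```python
import random

def lr_schedule(base_lr, step, decay_every=1000):
    """Halve the learning rate after every `decay_every` minibatches --
    one illustrative reading of 'dynamically updated after certain iterations'."""
    return base_lr * (0.5 ** (step // decay_every))

def minibatches(samples, batch_size=32, seed=0):
    """Yield shuffled minibatches; the text suggests a batch size between 10 and 50."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    for start in range(0, len(idx), batch_size):
        yield [samples[i] for i in idx[start:start + batch_size]]
```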
The above scheme of the embodiment of the present invention performs network characterization from a comprehensive perspective using relatively rich multi-order structural information: for example, introducing transition-probability direction-adjusting parameters for the transition matrices of different orders, performing nonlinear dimensionality reduction on the transition matrices of different orders with a stacked denoising autoencoder, and fusing multi-order information with an attention mechanism, thereby effectively improving the network characterization.
Through the above description of the embodiments, those skilled in the art can clearly understand that the above embodiments can be implemented in software, or in software plus a necessary general hardware platform. Based on this understanding, the technical solutions of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (a CD-ROM, USB flash drive, removable hard disk, etc.) and includes instructions that cause a computer device (a personal computer, server, network device, etc.) to execute the methods described in the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions readily conceivable by anyone skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A deep network characterization method enriching structural information, characterized by comprising:
obtaining initial feature matrices of different orders from the network topology, so as to capture necessary network structure information;
applying a minor adjustment to the initial feature matrices of different orders to obtain positive bias optimization matrices of different orders;
applying dimensionality reduction to the positive bias optimization matrices of different orders to obtain latent features of different orders, the latent features of different orders reflecting different aspects and levels of the network structure;
computing a fusion weight for each latent feature with an attention mechanism;
combining all latent features and the corresponding fusion weights to predict the probability-distribution output of a relevant task.
2. The deep network characterization method enriching structural information according to claim 1, characterized in that obtaining the initial feature matrices of different orders from the network topology comprises:
introducing a biased transition probability matrix U_k to adjust the neighbor-search direction of network nodes, the biased transition probability matrix U_k being expressed as:
U_k = B A^k,
where k denotes the k-th order, k = 1, 2, ..., K, K being the total number of orders, and B = [α_pq(v_i, v_j)] is the bias-coefficient matrix, where p and q are control parameters, v_i and v_j denote the nodes corresponding to i and j in the input sample sequence, d(v_i, v_j) denotes the distance between v_i and v_j, and N is the number of network nodes, i.e., the total number of input samples; A = D^(-1) S, where S is the adjacency matrix of the network, D is the degree matrix and a diagonal matrix of the network, and the two are related by D_ii = Σ_{j ∈ H} S_ij, where H denotes the set of neighbor nodes of node v_i and S_ij denotes the adjacency value between v_i and v_j;
based on the biased transition probability matrix U_k, the theoretical loss function L_k(v_i, v_j) for learning the network representation is expressed as:
L_k(v_i, v_j) = #(v_i, v_j)_k · log σ(w_i · c_j) + λ · (#(v_i)_k · #(v_j)_k / |D_k|) · log σ(−w_i · c_j),
where w_i and c_j denote the vectors corresponding to nodes v_i and v_j respectively, D_k denotes the set of node pairs with k-order sample paths formed by the observed node and other nodes, #(v_i, v_j)_k denotes the number of times the node pair (v_i, v_j) appears in D_k, #(v_i)_k and #(v_j)_k denote the numbers of times nodes v_i and v_j appear in D_k respectively, λ denotes the number of negative samples, and σ(·) is the sigmoid function;
by derivation one obtains:
M^k_ij = w_i · c_j = log( U^k_ij / Σ_t U^k_tj ) − log λ,
where w_i denotes the row of the network node representation matrix corresponding to node v_i, c_j denotes the transpose of the column of the network context representation matrix corresponding to node v_j, M^k_ij denotes the entry of matrix M_k corresponding to nodes v_i and v_j, and U^k_ij denotes the entry of matrix U_k corresponding to nodes v_i and v_j;
thereby obtaining the initial feature matrices of different orders, where the k-th order initial feature matrix is denoted M_k, the matrix M_k also being a bias optimization matrix.
3. The deep network characterization method enriching structural information according to claim 1, characterized in that applying a minor adjustment to the initial feature matrices of different orders to obtain the positive bias optimization matrices of different orders comprises:
denoting the k-th order initial feature matrix as M_k and replacing all negative values in M_k with 0, so as to obtain the corresponding positive k-th order bias optimization matrix X_k, expressed as:
X^k_ij = max(M^k_ij, 0),
where X^k_ij denotes the entry of matrix X_k corresponding to nodes v_i and v_j.
4. The deep network characterization method enriching structural information according to claim 1, characterized in that applying dimensionality reduction to the positive bias optimization matrices of different orders to obtain the latent features of different orders comprises:
applying dimensionality reduction to the positive bias optimization matrices of different orders by a stacked denoising autoencoder, wherein the values at some positions of the vectors in the positive bias optimization matrices of different orders are randomly set to 0 with a certain probability, so as finally to obtain the latent features of the subspaces corresponding to the positive bias optimization matrices of different orders;
the loss function of the dimensionality-reduction process being:
L_first = Σ_{i=1}^{N} d( X^k_i , g( s( x̃^k_i ) ) ),
where θ = {W_e, b_e, W_d, b_d} denotes the unified parameters of the denoising autoencoder, with W_e, b_e and W_d, b_d respectively the weights and offset parameters of the encoding and decoding processes; d(·,·) denotes the distance function; s(·) and g(·) respectively denote the nonlinear mapping functions of the encoding and decoding processes; N denotes the total number of input samples; X^k_i denotes the i-th row of the positive k-th order bias optimization matrix X_k; and x̃^k_i denotes its randomly corrupted version.
5. The deep network characterization method enriching structural information according to claim 1, characterized in that computing the fusion weight of each latent feature with an attention mechanism comprises:
using a gate unit with an attention mechanism to learn the fusion weight of each latent feature according to the introduced target information, the formula being:
α_t^k = σ( G_tk · Y_k + b_tgk ),
where α_t^k denotes the fusion weight of the k-th order latent feature Y_k when the target-information sequence value is t, and G_tk and b_tgk denote the weight and bias used to compute the k-th order latent-feature weight when the target-information sequence value is t;
all latent features are combined to represent the network node, the formula being:
Y_t^final = Σ_{k=1}^{K} α_t^k · Y_k,
where Y_t^final denotes the node fusion feature value computed when the target-information sequence value is t, α_t^k denotes the fusion weight of the k-th order latent feature Y_k when the target-information sequence value is t, and K denotes the total number of orders.
6. The deep network characterization method enriching structural information according to claim 5, characterized in that combining all latent features and the corresponding fusion weights to predict the probability-distribution output of the relevant task comprises:
the prediction tasks comprising node classification, link prediction, and economic decision prediction;
combining all latent features to represent the network node and, with the corresponding fusion weights, computing the prediction output of the relevant task by the following formula:
P(label_t | S_i) = σ( M_t · Y_t^final + b_tm ),
where P(label_t | S_i) denotes the output probability that the target information is t given the input S_i, S_i denotes the i-th row of matrix S, and M_t and b_tm respectively denote the weight and bias of the σ(·) function when the target-information sequence value is t.
7. a kind of depth network characterisation method of abundant structural information according to claim 6, which is characterized in that this method Further include:To relevant parameter in gate cell, and when prediction inter-related task, relevant parameter optimizes update;
Wherein, depth network characterisation method is realized based on multi-angle various aspects integrated network topology information frame model, model The training optimization loss function of framework is as follows:
In above formula, first item is the loss function of depth network, and Section 2 is gate cell and the loss function for predicting inter-related task; Wherein, LfirstThe loss function that dimension-reduction treatment is carried out to the positive bias optimization matrix of not same order is represented, N represents the total of input sample Number, i represent the sum that i-th, T represents target information, and t represents t-th in target information sequence, labeltRepresent corresponding t The label value of a target information;Two are separately trained in above formula;
The parameters to be trained are: {M_t, b_tm, G_tk, b_tgk}; when training the model, the values of the weight vectors M_t and G_tk are randomly initialized within a set interval, while the biases b_tm and b_tgk are initialized to 0;
after the parameters are initialized, the model is trained with the back-propagation (BP) algorithm, i.e., the loss function is minimized by stochastic gradient descent; specifically, mini-batches are used to accelerate training, the batch size is set within a preset range, the learning rate is initialized within a set interval, and the learning rate is dynamically updated after a certain number of iterations.
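The training procedure above (mini-batch SGD with a periodically decayed learning rate) can be sketched as follows. The batch size, epoch count, initial learning rate, and decay schedule are illustrative placeholders, since the patent only states that these lie within preset ranges; `loss_grad` is a hypothetical callback standing in for back-propagation through the model.

```python
import numpy as np

def sgd_train(params, samples, loss_grad, epochs=10,
              batch_size=32, lr=0.01, decay_every=5, decay=0.5):
    """Mini-batch SGD sketch of the described training procedure.

    params    : dict of trainable parameters (updated in place)
    samples   : list of training samples
    loss_grad : callback returning {name: gradient} for one mini-batch
    """
    rng = np.random.default_rng(0)
    for epoch in range(epochs):
        rng.shuffle(samples)                      # new sample order each epoch
        for start in range(0, len(samples), batch_size):
            batch = samples[start:start + batch_size]
            grads = loss_grad(params, batch)
            for name in params:                   # plain SGD step
                params[name] -= lr * grads[name]
        if (epoch + 1) % decay_every == 0:
            lr *= decay                           # dynamic learning-rate update
    return params

def quad_grad(p, batch):
    """Toy gradient: minimizing w^2, independent of batch contents."""
    return {"w": 2.0 * p["w"]}

trained = sgd_train({"w": 5.0}, list(range(64)), quad_grad)
```

In the toy run the parameter w shrinks steadily toward 0, showing the update loop and decay schedule working together.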
CN201810653420.8A 2018-06-22 2018-06-22 Deep network characterization method for enriching structure information Active CN108833173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810653420.8A CN108833173B (en) 2018-06-22 2018-06-22 Deep network characterization method for enriching structure information

Publications (2)

Publication Number Publication Date
CN108833173A 2018-11-16
CN108833173B CN108833173B (en) 2020-10-27

Family

ID=64137910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810653420.8A Active CN108833173B (en) 2018-06-22 2018-06-22 Deep network characterization method for enriching structure information

Country Status (1)

Country Link
CN (1) CN108833173B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598681A (en) * 2015-01-14 2015-05-06 清华大学 Method and system for monitoring process based on slow feature analysis
CN107273438A (en) * 2017-05-24 2017-10-20 深圳大学 A kind of recommendation method, device, equipment and storage medium
US20180012251A1 (en) * 2016-07-11 2018-01-11 Baidu Usa Llc Systems and methods for an attention-based framework for click through rate (ctr) estimation between query and bidwords
CN107784318A (en) * 2017-09-12 2018-03-09 天津大学 The learning method that a kind of robustness similar diagram for being applied to various visual angles cluster represents
CN108108771A (en) * 2018-01-03 2018-06-01 华南理工大学 Image answering method based on multiple dimensioned deep learning

Also Published As

Publication number Publication date
CN108833173B (en) 2020-10-27

Similar Documents

Publication Publication Date Title
CN111291836B (en) Method for generating student network model
CN113299354B (en) Small molecule representation learning method based on transducer and enhanced interactive MPNN neural network
CN106897254A (en) A kind of network representation learning method
CN113505855B (en) Training method for challenge model
CN106789320A (en) A kind of multi-species cooperative method for optimizing wireless sensor network topology
CN109921936A (en) Multiple target dynamic network community division method based on memetic frame
CN109978074A (en) Image aesthetic feeling and emotion joint classification method and system based on depth multi-task learning
Zhang et al. Evolving neural network classifiers and feature subset using artificial fish swarm
Sun et al. A tent marine predators algorithm with estimation distribution algorithm and Gaussian random walk for continuous optimization problems
CN105426959B (en) Aluminium electroloysis energy-saving and emission-reduction method based on BP neural network Yu adaptive M BFO algorithms
Chai et al. Correlation Analysis-Based Neural Network Self-Organizing Genetic Evolutionary Algorithm
CN108833173A (en) The depth network characterisation method of abundant structural information
CN115116139A (en) Multi-granularity human body action classification method based on graph convolution network
Zhang et al. Tree-shaped multiobjective evolutionary CNN for hyperspectral image classification
Hu et al. A classification surrogate model based evolutionary algorithm for neural network structure learning
Jiang et al. ATSA: An Adaptive Tree Seed Algorithm based on double-layer framework with tree migration and seed intelligent generation
Huang et al. Generalized regression neural network optimized by genetic algorithm for solving out-of-sample extension problem in supervised manifold learning
Shuai et al. A Self-adaptive neuroevolution approach to constructing Deep Neural Network architectures across different types
CN112508170A (en) Multi-correlation time sequence prediction system and method based on generation countermeasure network
Burguillo Playing with complexity: From cellular evolutionary algorithms with coalitions to self-organizing maps
Li et al. Smoothed deep neural networks for marine sensor data prediction
Jora et al. Evolutionary community detection in complex and dynamic networks
Antonelli et al. Exploiting a coevolutionary approach to concurrently select training instances and learn rule bases of Mamdani fuzzy systems
Santos et al. Neuroevolution with box mutation: An adaptive and modular framework for evolving deep neural networks
Ota et al. Kansei clothing retrieval system using features extracted by autoencoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant