CN107786958A - Data fusion method based on a deep learning model - Google Patents

Data fusion method based on a deep learning model

Info

Publication number
CN107786958A
CN107786958A
Authority
CN
China
Prior art keywords
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710949767.2A
Other languages
Chinese (zh)
Inventor
吴越
周林立
宋良图
刘磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN201710949767.2A priority Critical patent/CN107786958A/en
Publication of CN107786958A publication Critical patent/CN107786958A/en
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/142 Network analysis or design using statistical or mathematical methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a data fusion method based on a deep learning model, comprising: first training a constructed feature extraction model at the aggregation node, i.e. the Sink node, where the network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers, and the training is completed before the model is used to fuse node data; each terminal node then extracts features from its raw data with the model; finally, the fused data are sent to the Sink node. By training the feature extraction model at the Sink node, extracting raw-data features at each terminal node, and transmitting only the fused data, the method reduces the volume of transmitted data and prolongs network lifetime. Compared with similar data fusion methods, the present invention greatly reduces network energy consumption for the same amount of data and effectively improves the efficiency and accuracy of data fusion.

Description

A Data Fusion Method Based on a Deep Learning Model

Technical Field

The present invention relates to the technical field of data fusion, and in particular to a data fusion method based on a deep learning model.

Background Art

With the rapid development of Internet of Things (IoT) technology, wireless sensor networks (WSNs), as the core component of the IoT perception layer, have been widely applied in many kinds of environmental monitoring. In practice, however, most sensor nodes are battery-powered, so resources within the network are severely limited. Because large numbers of nodes are unevenly distributed geographically, the collected data contain excessive redundant information, which increases energy consumption and transmission delay. In addition, the interference that is common in IoT application environments directly weakens data communication capability and reduces data acquisition accuracy, degrading the overall performance of the IoT system.

Summary of the Invention

The purpose of the present invention is to provide a data fusion method based on a deep learning model that eliminates redundancy and reduces the volume of transmitted data, thereby improving network performance, prolonging network lifetime and lowering energy consumption.

To achieve the above purpose, the present invention adopts the following technical solution: a data fusion method based on a deep learning model, comprising the following steps in order:

(1) First, the constructed feature extraction model is trained at the aggregation node, i.e. the Sink node. The network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers; training of the model is completed before it is used to fuse node data (a sketch of such a network follows the list below);

(2) each terminal node extracts features from its raw data through the model;

(3) the fused data are sent to the Sink node.
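For illustration, the following is a minimal sketch of a feature extraction network with the stated layout of 3 convolutional layers, 1 pooling layer and 2 fully connected layers. The input length, channel counts, kernel sizes and number of classes are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

class FeatureExtractionModel(nn.Module):
    """3 convolutional layers + 1 pooling layer + 2 fully connected layers."""
    def __init__(self, in_channels=1, seq_len=128, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),  # the single pooling layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (seq_len // 2), 64), nn.ReLU(),
            nn.Linear(64, num_classes),  # classification at the cluster head
        )

    def forward(self, x):
        # x: (batch, channels, seq_len) raw sensor readings
        fused = self.features(x)       # terminal node output: fused feature data
        return self.classifier(fused)

model = FeatureExtractionModel()
out = model(torch.randn(8, 1, 128))  # 8 windows of 128 sensor samples each
```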

In step (1), the loss function of the model training is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)} \ln h_\theta\!\left(x^{(i)}\right) + \left(1 - y^{(i)}\right) \ln\!\left(1 - h_\theta\!\left(x^{(i)}\right)\right)\right]$$

The training objective is given by:

$$\theta_i = \theta_i - \alpha \frac{\partial}{\partial \theta} J(\theta)$$

The parameters are updated iteratively to minimize the loss function J(θ), where θ denotes the trainable parameters, including the convolution kernel weights and biases, and α is the learning rate.
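A minimal numpy sketch of this loss and gradient step follows; h stands for the model output h_θ(x) in (0, 1), y holds 0/1 labels, and the default learning rate is an assumption.

```python
import numpy as np

def loss_J(h, y):
    # J(theta) = -(1/m) * sum_i [ y_i * ln(h_i) + (1 - y_i) * ln(1 - h_i) ]
    return -np.mean(y * np.log(h) + (1.0 - y) * np.log(1.0 - h))

def update(theta, grad_J, alpha=0.01):
    # theta_i := theta_i - alpha * dJ/dtheta_i
    return theta - alpha * grad_J
```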

To obtain the partial derivative ∂J/∂θ, for the convolutional layer:

$$\delta_j^l = \beta_j^{l+1}\left(f'\!\left(u_j^l\right) \cdot \mathrm{up}\!\left(\delta_j^{l+1}\right)\right)$$

where $\delta_j^l$ is the sensitivity of the j-th feature map of layer l and $\beta_j^{l+1}$ is the parameter of the j-th feature map of layer l+1. Substituting $\delta_j^l$ into the following formulas gives the derivatives of the convolution kernel weight ω and the bias b:

$$\frac{\partial J}{\partial \omega_{ij}} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}, \qquad \frac{\partial J}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$

where $p_i^{l-1}$ is the result of convolving the layer l-1 feature map with the layer-l convolution kernel; combining these with the update rule above completes one parameter update of the convolutional layer.
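The following numpy sketch applies these formulas to one 1-D feature map. The sigmoid activation and the 2x upsampling factor of up(·) are assumptions, and u_l, delta_next, beta_next, p_prev are illustrative names.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def up(delta, factor=2):
    # up(.): undo a pooling step by repeating each element `factor` times
    return np.repeat(delta, factor)

def conv_backward(u_l, delta_next, beta_next, p_prev):
    s = sigmoid(u_l)
    delta = beta_next * (s * (1.0 - s)) * up(delta_next)  # delta_j^l
    grad_w = np.sum(delta * p_prev)  # dJ/dw_ij = sum_uv (delta_j^l)_uv (p_i^{l-1})_uv
    grad_b = np.sum(delta)           # dJ/db_j  = sum_uv (delta_j^l)_uv
    return delta, grad_w, grad_b
```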

In step (1), for the pooling layer:

$$z_j^l = f\!\left(\beta_j^l \,\mathrm{down}\!\left(z_j^{l-1} + b_j^l\right)\right), \qquad \delta_l^l = \sum_{j=1}^{M} \beta_l^{l+1} * k_{lj}$$

where $z_j^l$ denotes the j-th feature map of layer l and down(·) denotes one pooling operation; substituting the result into the update rule completes one parameter update of the pooling layer.
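A numpy sketch of this pooling-layer forward pass follows, taking down(·) to be 2x non-overlapping mean pooling and f to be a sigmoid; both choices are assumptions not fixed by the patent.

```python
import numpy as np

def down(z, factor=2):
    # mean-pool non-overlapping windows of length `factor` (len(z) divisible by factor)
    return z.reshape(-1, factor).mean(axis=1)

def pool_forward(z_prev, beta, b):
    # z_j^l = f( beta_j^l * down( z_j^{l-1} + b_j^l ) )
    return 1.0 / (1.0 + np.exp(-beta * down(z_prev + b)))
```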

In step (1), the fully connected layers are trained using the backpropagation algorithm; the forward propagation pass then completes the training of the model and yields the final model parameters. The specific steps are as follows:

(5a) According to the type of data to be processed, the Sink node extracts data containing label information from the corresponding database;

(5b) the training data are fed into the constructed model and training begins; the Sink node then sends the trained parameters to each terminal node through the cluster heads;

(5c) each terminal node uses the pre-trained model to perform multi-layer convolutional feature extraction and pooling on the collected sensor data, and then sends the fused feature data to its cluster head node; the convolution and pooling process is itself the data fusion process;

(5d) the cluster head node classifies the fused data produced in step (5c) with a logistic regression classifier, obtains the classification result, and sends the fused data to the Sink node;

(5e) once the network completes one round of data collection, fusion and transmission, the Sink node re-clusters the nodes and selects new cluster head nodes, and the process returns to step (5c). An end-to-end sketch of one such round is given below.
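The following self-contained sketch illustrates one round of steps (5b) to (5e). The sensor model, feature sizes, logistic-regression weights and the re-clustering rule are all illustrative assumptions, and a single convolution plus mean pooling stands in for the multi-layer feature extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(x, kernel):
    # (5c) terminal node: convolution + 2x mean pooling = the data fusion step
    conv = np.convolve(x, kernel, mode="same")
    return conv.reshape(-1, 2).mean(axis=1)

def classify(feature, w, b):
    # (5d) cluster head: logistic regression on the fused feature vector
    return (1.0 / (1.0 + np.exp(-(feature @ w + b)))) > 0.5

# (5b) parameters "trained at the Sink node" -- random stand-ins here
kernel, w, b = rng.normal(size=5), rng.normal(size=64), 0.0

readings = rng.normal(size=(10, 128))              # 10 terminal nodes, 128 samples each
fused = np.array([fuse(x, kernel) for x in readings])
labels = [classify(f, w, b) for f in fused]        # forwarded to the Sink with the fused data
new_heads = rng.choice(10, size=2, replace=False)  # (5e) Sink re-selects cluster heads
print(labels, new_heads)
```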

It can be seen from the above technical solution that the present invention first trains the constructed feature extraction model at the aggregation node; each terminal node then extracts raw-data features through the model and finally sends the fused data to the aggregation node, thereby reducing the volume of transmitted data and prolonging network lifetime. Compared with similar data fusion methods, the present invention greatly reduces network energy consumption for the same amount of data and effectively improves the efficiency and accuracy of data fusion.

Brief Description of the Drawings

Figure 1 is the node routing diagram of the present invention;

Figure 2 is the flow chart of the method of the present invention.

Detailed Description of the Embodiments

As shown in Figures 1 and 2, a data fusion method based on a deep learning model comprises the following steps in order:

(1) First, the constructed feature extraction model is trained at the aggregation node, i.e. the Sink node. The network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers; training of the model is completed before it is used to fuse node data;

(2) each terminal node extracts features from its raw data through the model;

(3) the fused data are sent to the Sink node.

In step (1), the loss function of the model training is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)} \ln h_\theta\!\left(x^{(i)}\right) + \left(1 - y^{(i)}\right) \ln\!\left(1 - h_\theta\!\left(x^{(i)}\right)\right)\right]$$

The training objective is given by:

$$\theta_i = \theta_i - \alpha \frac{\partial}{\partial \theta} J(\theta)$$

The parameters are updated iteratively to minimize the loss function J(θ), where θ denotes the trainable parameters, including the convolution kernel weights and biases, and α is the learning rate.

To obtain the partial derivative ∂J/∂θ, for the convolutional layer:

$$\delta_j^l = \beta_j^{l+1}\left(f'\!\left(u_j^l\right) \cdot \mathrm{up}\!\left(\delta_j^{l+1}\right)\right)$$

where $\delta_j^l$ is the sensitivity of the j-th feature map of layer l and $\beta_j^{l+1}$ is the parameter of the j-th feature map of layer l+1. Substituting $\delta_j^l$ into the following formulas gives the derivatives of the convolution kernel weight ω and the bias b:

$$\frac{\partial J}{\partial \omega_{ij}} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}, \qquad \frac{\partial J}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$

where $p_i^{l-1}$ is the result of convolving the layer l-1 feature map with the layer-l convolution kernel; combining these with the update rule above completes one parameter update of the convolutional layer.

In step (1), for the pooling layer:

$$z_j^l = f\!\left(\beta_j^l \,\mathrm{down}\!\left(z_j^{l-1} + b_j^l\right)\right), \qquad \delta_l^l = \sum_{j=1}^{M} \beta_l^{l+1} * k_{lj}$$

where $z_j^l$ denotes the j-th feature map of layer l and down(·) denotes one pooling operation; substituting the result into the update rule completes one parameter update of the pooling layer.

As shown in Figure 1, in step (1) the fully connected layers are trained using the backpropagation algorithm; the forward propagation pass then completes the training of the model and yields the final model parameters. The specific steps are as follows:

(5a) According to the type of data to be processed, the Sink node extracts data containing label information from the corresponding database;

(5b) the training data are fed into the constructed model and training begins; the Sink node then sends the trained parameters to each terminal node through the cluster heads;

(5c) each terminal node uses the pre-trained model to perform multi-layer convolutional feature extraction and pooling on the collected sensor data, and then sends the fused feature data to its cluster head node; the convolution and pooling process is itself the data fusion process;

(5d) the cluster head node classifies the fused data produced in step (5c) with a logistic regression classifier, obtains the classification result, and sends the fused data to the Sink node;

(5e) once the network completes one round of data collection, fusion and transmission, the Sink node re-clusters the nodes and selects new cluster head nodes, and the process returns to step (5c).

In summary, the present invention first trains the constructed feature extraction model at the aggregation node; each terminal node then extracts raw-data features through the model and finally sends the fused data to the aggregation node, thereby reducing the volume of transmitted data and prolonging network lifetime. Compared with similar data fusion methods, the present invention greatly reduces network energy consumption for the same amount of data and effectively improves the efficiency and accuracy of data fusion.

Claims (5)

  1. A data fusion method based on a deep learning model, characterized in that the method comprises the following steps in order:
    (1) first, the constructed feature extraction model is trained at the aggregation node, i.e. the Sink node; the network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers, and the training of the model is completed before the feature extraction model is used to fuse node data;
    (2) each terminal node extracts raw-data features through the model;
    (3) the fused data are sent to the Sink node.
  2. The data fusion method based on a deep learning model according to claim 1, characterized in that, in step (1), the loss function of the model training is:

    $$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)} \ln h_\theta\!\left(x^{(i)}\right) + \left(1 - y^{(i)}\right) \ln\!\left(1 - h_\theta\!\left(x^{(i)}\right)\right)\right]$$

    The target of training is given by the following formula:

    $$\theta_i = \theta_i - \alpha \frac{\partial}{\partial \theta} J(\theta)$$

    The parameters are updated iteratively to minimize the loss function J(θ), where θ denotes the trainable parameters, including the convolution kernel weights and biases, and α is the learning rate.
  3. The data fusion method based on a deep learning model according to claim 2, wherein the convolutional neural network structure implements data fusion for a wireless sensor network, characterized in that, to obtain the partial derivative ∂J/∂θ, for the convolutional layer:

    $$\delta_j^l = \beta_j^{l+1}\left(f'\!\left(u_j^l\right) \cdot \mathrm{up}\!\left(\delta_j^{l+1}\right)\right)$$

    where $\delta_j^l$ is the sensitivity of the j-th feature map of layer l and $\beta_j^{l+1}$ is the parameter of the j-th feature map of layer l+1; substituting $\delta_j^l$ into the following formulas gives the derivatives of the convolution kernel weight ω and the bias b:

    $$\frac{\partial J}{\partial \omega_{ij}} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}$$

    $$\frac{\partial J}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$

    where $p_i^{l-1}$ is the result of convolving the layer l-1 feature map with the layer-l convolution kernel; combining these with the update rule completes one parameter update of the convolution.
  4. The data fusion method based on a deep learning model according to claim 1, characterized in that, in step (1), for the pooling layer:

    $$z_j^l = f\!\left(\beta_j^l \,\mathrm{down}\!\left(z_j^{l-1} + b_j^l\right)\right)$$

    $$\delta_l^l = \sum_{j=1}^{M} \beta_l^{l+1} * k_{lj}$$

    where $z_j^l$ denotes the j-th feature map of layer l and down denotes one pooling operation; substituting the result into the update rule completes one parameter update of the pooling layer.
  5. The data fusion method based on a deep learning model according to claim 1, characterized in that, in step (1), the fully connected layers are trained using the backpropagation algorithm, the forward propagation pass completes the training of the model, and the model parameters are finally obtained; the specific steps are as follows:
    (5a) according to the type of data to be processed, the Sink node extracts data containing label information from the corresponding database;
    (5b) the training data are fed into the constructed model and training begins; the Sink node then sends the trained parameters to each terminal node through the cluster heads;
    (5c) each terminal node uses the pre-trained model to perform multi-layer convolutional feature extraction and pooling on the collected sensor data, and then sends the fused feature data to the corresponding cluster head node, the convolution and pooling process being the data fusion process;
    (5d) the cluster head node classifies the fused data produced in step (5c) with a logistic regression classifier, obtains the classification result, and sends the fused data to the Sink node;
    (5e) once the network completes one round of data collection, fusion and transmission, the Sink node re-clusters and selects cluster head nodes again, and then jumps to step (5c).
CN201710949767.2A 2017-10-12 2017-10-12 Data fusion method based on a deep learning model Pending CN107786958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710949767.2A CN107786958A (en) Data fusion method based on a deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710949767.2A CN107786958A (en) Data fusion method based on a deep learning model

Publications (1)

Publication Number Publication Date
CN107786958A true CN107786958A (en) 2018-03-09

Family

ID=61434718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710949767.2A Pending CN107786958A (en) Data fusion method based on a deep learning model

Country Status (1)

Country Link
CN (1) CN107786958A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558909A (en) * 2018-12-05 2019-04-02 清华大学深圳研究生院 Combined depth learning method based on data distribution
CN109558909B (en) * 2018-12-05 2020-10-23 清华大学深圳研究生院 Machine deep learning method based on data distribution
CN110222750A (en) * 2019-05-27 2019-09-10 北京品友互动信息技术股份公司 The determination method and device of target audience's concentration
CN111814774A (en) * 2020-09-10 2020-10-23 熵智科技(深圳)有限公司 5D texture grid data structure
WO2022052893A1 (en) * 2020-09-10 2022-03-17 熵智科技(深圳)有限公司 5d texture grid data structure
CN113078958A (en) * 2021-03-29 2021-07-06 河海大学 Network node distance vector synchronization method based on transfer learning

Similar Documents

Publication Publication Date Title
CN107786958A Data fusion method based on a deep learning model
CN106529818B (en) Water quality assessment Forecasting Methodology based on Fuzzy Wavelet Network
CN112036512A (en) Image classification neural network architecture search method and device based on network cropping
CN107885853A (en) A kind of combined type file classification method based on deep learning
CN116579417A (en) Hierarchical personalized federated learning method, device and medium in edge computing network
CN112115377A (en) A Graph Neural Network Link Prediction Recommendation Method Based on Social Relationships
CN110263841A (en) A kind of dynamic, structured network pruning method based on filter attention mechanism and BN layers of zoom factor
CN113065649A (en) A complex network topology graph representation learning method, prediction method and server
CN104361393A (en) Method for using improved neural network model based on particle swarm optimization for data prediction
CN103489033A (en) Incremental type learning method integrating self-organizing mapping and probability neural network
CN107295453A (en) A kind of wireless sensor network data fusion method
CN114265954B (en) Graph representation learning method based on position and structure information
CN105228159A (en) Based on the wireless sense network coverage enhancement algorithm of gridding and improve PSO algorithm
CN113923123B (en) A Topology Control Method for Underwater Wireless Sensor Networks Based on Deep Reinforcement Learning
CN113128689A (en) Entity relationship path reasoning method and system for regulating knowledge graph
CN115526316A (en) A Knowledge Representation and Prediction Method Combined with Graph Neural Network
CN114254738A (en) Construction method and application of dynamic graph convolutional neural network model with two-layer evolution
CN115456093A (en) High-performance graph clustering method based on attention-graph neural network
CN105760549B (en) Nearest Neighbor based on attribute graph model
CN117933473A (en) Mixed unmanned aerial vehicle multi-task demand power prediction method based on meta-learning
CN118097982A (en) A method, system and device for predicting short-term urban traffic flow
CN111498148A (en) FDNN-based intelligent spacecraft controller and control method
CN112529188A (en) Knowledge distillation-based industrial process optimization decision model migration optimization method
CN114116692B (en) Mask and bidirectional model-based missing POI track completion method
CN110717260A (en) Unmanned aerial vehicle maneuvering capability model establishing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180309