CN107786958A - Data fusion method based on a deep learning model - Google Patents

Data fusion method based on a deep learning model

Info

Publication number
CN107786958A
CN107786958A (application CN201710949767.2A)
Authority
CN
China
Prior art keywords
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710949767.2A
Other languages
Chinese (zh)
Inventor
吴越
周林立
宋良图
刘磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN201710949767.2A priority Critical patent/CN107786958A/en
Publication of CN107786958A publication Critical patent/CN107786958A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/142: Network analysis or design using statistical or mathematical methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a data fusion method based on a deep learning model, comprising: first, a feature-extraction model is built and trained at the aggregation node (Sink node); the network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers in total, and training of the model is completed before the feature-extraction model is used to fuse node data; each terminal node then extracts features from its raw data through the model; finally, the fused data are sent to the Sink node. Because the feature-extraction model is trained at the aggregation node and each terminal node extracts raw-data features through that model before transmitting the fused result, the amount of transmitted data is reduced and the network lifetime is extended. Compared with homogeneous data fusion methods, for the same data volume the present invention greatly reduces network energy consumption and effectively improves data fusion efficiency and accuracy.

Description

Data fusion method based on a deep learning model
Technical field
The present invention relates to the field of data fusion technology, and in particular to a data fusion method based on a deep learning model.
Background art
With the rapid development of Internet of Things (IoT) technology, wireless sensor networks (WSNs), as the core component of the IoT sensing layer, are widely applied in all kinds of environmental monitoring. In practice, however, most sensor nodes are battery powered, so network resources are very limited. Because large numbers of nodes are unevenly distributed, the collected data contain excessive redundancy, which increases energy consumption and transmission delay. In addition, interference is generally present in IoT application environments; it directly weakens the data communication capability and reduces the accuracy of data acquisition, degrading the overall performance of the IoT system.
Summary of the invention
The object of the present invention is to provide a data fusion method based on a deep learning model that can eliminate redundancy and reduce the amount of transmitted data, thereby improving network performance, extending the network lifetime and reducing energy consumption.
To achieve the above object, the present invention adopts the following technical scheme: a data fusion method based on a deep learning model, the method comprising the following steps in order:
(1) first, a feature-extraction model is built and trained at the aggregation node (Sink node); the network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers in total, and training of the model is completed before the feature-extraction model is used to fuse node data (a layer-level sketch follows these steps);
(2) each terminal node extracts features from its raw data through the model;
(3) the fused data are sent to the Sink node.
In step (1), the loss function for training the model is:
$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\ln h_{\theta}\!\left(x^{(i)}\right) + \left(1-y^{(i)}\right)\ln\!\left(1-h_{\theta}\!\left(x^{(i)}\right)\right)\right]$$
The training objective is given by the following formula:
$$\theta_i = \theta_i - \alpha\frac{\partial}{\partial\theta}J(\theta)$$
The parameters are updated iteratively to minimize the loss function J(θ), where θ denotes the trainable parameters, including the convolution kernel weights and biases, and α is the learning rate.
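As an illustration that is not part of the patent text, the update rule above is plain batch gradient descent on the logistic cross-entropy loss. A minimal NumPy sketch for a single linear-logistic layer follows; the data x, labels y, learning rate alpha and iteration count are all assumed values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(x, y, alpha=0.1, iters=1000):
    """Minimize J(theta) = -(1/m) * sum(y*ln h + (1-y)*ln(1-h))
    via the update theta <- theta - alpha * dJ/dtheta."""
    m, n = x.shape
    theta = np.zeros(n)
    for _ in range(iters):
        h = sigmoid(x @ theta)       # h_theta(x)
        grad = x.T @ (h - y) / m     # dJ/dtheta for sigmoid + cross-entropy
        theta -= alpha * grad        # one gradient-descent step
    return theta

# illustrative usage on random data
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(float)
theta = train_logistic(x, y)
```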
To obtain the partial derivatives ∂J/∂ω and ∂J/∂b, for a convolutional layer we have:
$$\delta_j^l = \beta_j^{l+1}\left(f'\!\left(u_j^l\right)\cdot \mathrm{up}\!\left(\delta_j^{l+1}\right)\right)$$
where δ_j^l is the sensitivity of the j-th feature map of layer l and β_j^{l+1} is the parameter of the j-th feature map of layer l+1. Substituting into the following formulas yields the derivatives with respect to the convolution kernel weights ω and the bias b:
$$\frac{\partial J}{\partial \omega_{ij}} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}$$
$$\frac{\partial J}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$
where p_i^{l-1} is the result of convolving the layer-(l-1) feature map with the layer-l convolution kernel; combining these with the update rule θ_i = θ_i - α(∂/∂θ)J(θ) completes one convolutional-layer parameter update.
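A minimal NumPy sketch of the two gradient formulas above, purely illustrative: the sensitivity map delta (δ_j^l) and the convolved input map p (p_i^{l-1}) are assumed to be supplied by an upstream backward pass, and the 4x4 shapes are arbitrary:

```python
import numpy as np

def conv_layer_grads(delta, p):
    """Gradients of J w.r.t. one convolution kernel weight w_ij and bias b_j:
    dJ/dw_ij = sum over positions (u, v) of delta[u, v] * p[u, v]
    dJ/db_j  = sum over positions (u, v) of delta[u, v]"""
    dw = np.sum(delta * p)   # elementwise product summed over all (u, v)
    db = np.sum(delta)
    return dw, db

# illustrative usage with assumed 4x4 sensitivity and input maps
rng = np.random.default_rng(1)
delta = rng.normal(size=(4, 4))   # delta_j^l, sensitivity of feature map j
p = rng.normal(size=(4, 4))       # p_i^{l-1}, convolved input feature map
dw, db = conv_layer_grads(delta, p)
```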
In step (1), for the pooling layer we have:
$$z_j^l = f\!\left(\beta_j^l\,\mathrm{down}\!\left(z_j^{l-1}\right) + b_j^l\right)$$
$$\delta_l^l = \sum_{j=1}^{M}\beta_l^{l+1} * k_{lj}$$
where z_j^l denotes the j-th feature map of layer l and down denotes one pooling (down-sampling) operation; substituting the result into the update rule θ_i = θ_i - α(∂/∂θ)J(θ) completes one parameter update of the pooling layer.
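A minimal NumPy sketch of the pooling-layer forward pass above; the patent does not specify the pooling type, window size or activation, so 2x2 mean pooling and a tanh activation f are assumptions made for illustration:

```python
import numpy as np

def down(z, k=2):
    """One pooling (down-sampling) step: non-overlapping k x k mean pooling."""
    h, w = z.shape
    return z[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def pool_forward(z_prev, beta, b, f=np.tanh):
    """z_j^l = f(beta_j^l * down(z_j^{l-1}) + b_j^l), with f assumed to be tanh."""
    return f(beta * down(z_prev) + b)

# illustrative usage on an assumed 4x4 feature map
z_prev = np.arange(16, dtype=float).reshape(4, 4)
z = pool_forward(z_prev, beta=1.0, b=0.0)   # pooled output, shape (2, 2)
```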
In step (1), the fully connected layers are trained using the back-propagation algorithm; the forward-propagation pass then completes the training of the model and finally yields the model parameters. The specific steps are as follows (a sketch of one complete round follows this list):
(5a) according to the data type to be processed, the Sink node extracts data containing label information from an associated database;
(5b) the training data are input to the constructed model and training begins; the Sink node then sends the trained parameters to each terminal node through the cluster heads;
(5c) each terminal node uses the pre-trained model to perform multi-layer convolutional feature extraction and pooling on the collected sensor data, and then sends the resulting fused features to its corresponding cluster-head node; the convolution and pooling process is precisely the data fusion process;
(5d) the cluster-head node classifies the fused data produced in step (5c) using a logistic regression classifier, obtains the classification result, and sends the fused data to the Sink node;
(5e) the network completes one round of data acquisition, fusion and transmission; the Sink node re-clusters the network and selects new cluster-head nodes, and the procedure then jumps back to step (5c).
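The following Python sketch shows how steps (5c) to (5e) compose into one round. Every class and method here (TerminalNode, ClusterHead, run_round, the stand-in fusion and classifier) is hypothetical scaffolding invented for illustration; the patent specifies only the sequence of steps:

```python
import random

class TerminalNode:
    """Hypothetical terminal node holding raw sensor readings."""
    def __init__(self, n_samples=8):
        self.readings = [random.random() for _ in range(n_samples)]

    def fuse(self):
        # stand-in for step (5c): multi-layer convolution + pooling,
        # reduced here to a two-value feature for illustration only
        return [sum(self.readings) / len(self.readings), max(self.readings)]

class ClusterHead:
    """Hypothetical cluster head with a trivial stand-in classifier for (5d)."""
    def classify(self, feature):
        return 1 if feature[0] > 0.5 else 0

def run_round(clusters):
    """One acquisition/fusion/transmission round, steps (5c)-(5e)."""
    sink_inbox = []
    for head, members in clusters:
        for node in members:
            fused = node.fuse()                # (5c) fusion at the terminal node
            label = head.classify(fused)       # (5d) classification at the head
            sink_inbox.append((fused, label))  # (5d) fused data sent to the Sink
    return sink_inbox                          # (5e) Sink re-clusters for next round

clusters = [(ClusterHead(), [TerminalNode() for _ in range(3)]) for _ in range(2)]
results = run_round(clusters)
```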
As can be seen from the above technical scheme, the present invention first trains the constructed feature-extraction model at the aggregation node; each terminal node then extracts raw-data features through that model, and finally the fused data are sent to the aggregation node, thereby reducing the amount of transmitted data and extending the network lifetime. Compared with homogeneous data fusion methods, for the same data volume the present invention greatly reduces network energy consumption and effectively improves data fusion efficiency and accuracy.
Brief description of the drawings
Fig. 1 is the node-routing diagram of the present invention;
Fig. 2 is the flow chart of the method of the present invention.
Embodiment
As shown in Figs. 1 and 2, a data fusion method based on a deep learning model comprises the following steps in order:
(1) first, a feature-extraction model is built and trained at the aggregation node (Sink node); the network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers in total, and training of the model is completed before the feature-extraction model is used to fuse node data;
(2) each terminal node extracts features from its raw data through the model;
(3) the fused data are sent to the Sink node.
In step (1), the loss function for training the model is:
$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\ln h_{\theta}\!\left(x^{(i)}\right) + \left(1-y^{(i)}\right)\ln\!\left(1-h_{\theta}\!\left(x^{(i)}\right)\right)\right]$$
The training objective is given by the following formula:
$$\theta_i = \theta_i - \alpha\frac{\partial}{\partial\theta}J(\theta)$$
The parameters are updated iteratively to minimize the loss function J(θ), where θ denotes the trainable parameters, including the convolution kernel weights and biases, and α is the learning rate.
To obtain the partial derivatives ∂J/∂ω and ∂J/∂b, for a convolutional layer we have:
$$\delta_j^l = \beta_j^{l+1}\left(f'\!\left(u_j^l\right)\cdot \mathrm{up}\!\left(\delta_j^{l+1}\right)\right)$$
where δ_j^l is the sensitivity of the j-th feature map of layer l and β_j^{l+1} is the parameter of the j-th feature map of layer l+1. Substituting into the following formulas yields the derivatives with respect to the convolution kernel weights ω and the bias b:
$$\frac{\partial J}{\partial \omega_{ij}} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}$$
$$\frac{\partial J}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$
where p_i^{l-1} is the result of convolving the layer-(l-1) feature map with the layer-l convolution kernel; combining these with the update rule θ_i = θ_i - α(∂/∂θ)J(θ) completes one convolutional-layer parameter update.
In step (1), for the pooling layer we have:
$$z_j^l = f\!\left(\beta_j^l\,\mathrm{down}\!\left(z_j^{l-1}\right) + b_j^l\right)$$
$$\delta_l^l = \sum_{j=1}^{M}\beta_l^{l+1} * k_{lj}$$
where z_j^l denotes the j-th feature map of layer l and down denotes one pooling (down-sampling) operation; substituting the result into the update rule θ_i = θ_i - α(∂/∂θ)J(θ) completes one parameter update of the pooling layer.
As shown in Fig. 1, in step (1), the fully connected layers are trained using the back-propagation algorithm; the forward-propagation pass then completes the training of the model and finally yields the model parameters. The specific steps are as follows:
(5a) according to the data type to be processed, the Sink node extracts data containing label information from an associated database;
(5b) the training data are input to the constructed model and training begins; the Sink node then sends the trained parameters to each terminal node through the cluster heads;
(5c) each terminal node uses the pre-trained model to perform multi-layer convolutional feature extraction and pooling on the collected sensor data, and then sends the resulting fused features to its corresponding cluster-head node; the convolution and pooling process is precisely the data fusion process;
(5d) the cluster-head node classifies the fused data produced in step (5c) using a logistic regression classifier, obtains the classification result, and sends the fused data to the Sink node;
(5e) the network completes one round of data acquisition, fusion and transmission; the Sink node re-clusters the network and selects new cluster-head nodes, and the procedure then jumps back to step (5c).
In summary, the present invention first trains the constructed feature-extraction model at the aggregation node; each terminal node then extracts raw-data features through that model, and finally the fused data are sent to the aggregation node, thereby reducing the amount of transmitted data and extending the network lifetime. Compared with homogeneous data fusion methods, for the same data volume the present invention greatly reduces network energy consumption and effectively improves data fusion efficiency and accuracy.

Claims (5)

  1. A data fusion method based on a deep learning model, characterized in that the method comprises the following steps in order:
    (1) first, a feature-extraction model is built and trained at the aggregation node (Sink node); the network structure contains 3 convolutional layers, 1 pooling layer and 2 fully connected layers in total, and training of the model is completed before the feature-extraction model is used to fuse node data;
    (2) each terminal node extracts features from its raw data through the model;
    (3) the fused data are sent to the Sink node.
  2. The data fusion method based on a deep learning model according to claim 1, characterized in that, in step (1), the loss function for training the model is:
    $$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\ln h_{\theta}\!\left(x^{(i)}\right) + \left(1-y^{(i)}\right)\ln\!\left(1-h_{\theta}\!\left(x^{(i)}\right)\right)\right]$$
    The training objective is given by the following formula:
    $$\theta_i = \theta_i - \alpha\frac{\partial}{\partial\theta}J(\theta)$$
    The parameters are updated iteratively to minimize the loss function J(θ), where θ denotes the trainable parameters, including the convolution kernel weights and biases, and α is the learning rate.
  3. The method according to claim 2, in which a convolutional neural network structure based on the deep learning model realizes wireless sensor network data fusion, characterized in that, to obtain the partial derivatives ∂J/∂ω and ∂J/∂b, for a convolutional layer:
    $$\delta_j^l = \beta_j^{l+1}\left(f'\!\left(u_j^l\right)\cdot \mathrm{up}\!\left(\delta_j^{l+1}\right)\right)$$
    where δ_j^l is the sensitivity of the j-th feature map of layer l and β_j^{l+1} is the parameter of the j-th feature map of layer l+1; substituting into the following formulas yields the derivatives with respect to the convolution kernel weights ω and the bias b:
    $$\frac{\partial J}{\partial \omega_{ij}} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}$$
    $$\frac{\partial J}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$
    where p_i^{l-1} is the result of convolving the layer-(l-1) feature map with the layer-l convolution kernel; combining these with the update rule $\theta_i = \theta_i - \alpha\frac{\partial}{\partial\theta}J(\theta)$ completes one convolutional-layer parameter update.
  4. The data fusion method based on a deep learning model according to claim 1, characterized in that, in step (1), for the pooling layer:
    $$z_j^l = f\!\left(\beta_j^l\,\mathrm{down}\!\left(z_j^{l-1}\right) + b_j^l\right)$$
    $$\delta_l^l = \sum_{j=1}^{M}\beta_l^{l+1} * k_{lj}$$
    where z_j^l denotes the j-th feature map of layer l and down denotes one pooling (down-sampling) operation; substituting the result into the update rule $\theta_i = \theta_i - \alpha\frac{\partial}{\partial\theta}J(\theta)$ completes one parameter update of the pooling layer.
  5. The data fusion method based on a deep learning model according to claim 1, characterized in that, in step (1), the fully connected layers are trained using the back-propagation algorithm; the forward-propagation pass then completes the training of the model and finally yields the model parameters, with the following specific steps:
    (5a) according to the data type to be processed, the Sink node extracts data containing label information from an associated database;
    (5b) the training data are input to the constructed model and training begins; the Sink node then sends the trained parameters to each terminal node through the cluster heads;
    (5c) each terminal node uses the pre-trained model to perform multi-layer convolutional feature extraction and pooling on the collected sensor data, and then sends the resulting fused features to its corresponding cluster-head node; the convolution and pooling process is precisely the data fusion process;
    (5d) the cluster-head node classifies the fused data produced in step (5c) using a logistic regression classifier, obtains the classification result, and sends the fused data to the Sink node;
    (5e) the network completes one round of data acquisition, fusion and transmission; the Sink node re-clusters the network and selects new cluster-head nodes, and the procedure then jumps back to step (5c).
CN201710949767.2A 2017-10-12 2017-10-12 Data fusion method based on a deep learning model Pending CN107786958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710949767.2A CN107786958A (en) 2017-10-12 2017-10-12 Data fusion method based on a deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710949767.2A CN107786958A (en) 2017-10-12 2017-10-12 Data fusion method based on a deep learning model

Publications (1)

Publication Number Publication Date
CN107786958A true CN107786958A (en) 2018-03-09

Family

ID=61434718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710949767.2A Pending CN107786958A (en) 2017-10-12 2017-10-12 Data fusion method based on a deep learning model

Country Status (1)

Country Link
CN (1) CN107786958A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558909A (en) * 2018-12-05 2019-04-02 清华大学深圳研究生院 Combined depth learning method based on data distribution
CN109558909B (en) * 2018-12-05 2020-10-23 清华大学深圳研究生院 Machine deep learning method based on data distribution
CN110222750A (en) * 2019-05-27 2019-09-10 北京品友互动信息技术股份公司 The determination method and device of target audience's concentration
CN111814774A (en) * 2020-09-10 2020-10-23 熵智科技(深圳)有限公司 5D texture grid data structure
WO2022052893A1 (en) * 2020-09-10 2022-03-17 熵智科技(深圳)有限公司 5d texture grid data structure
CN113078958A (en) * 2021-03-29 2021-07-06 河海大学 Network node distance vector synchronization method based on transfer learning

Similar Documents

Publication Publication Date Title
Alzubaidi et al. A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications
CN107786958A (en) A kind of data fusion method based on deep learning model
CN109492099B (en) Cross-domain text emotion classification method based on domain impedance self-adaption
CN103324628B (en) A kind of trade classification method and system for issuing text
Guo et al. Multi-source temporal data aggregation in wireless sensor networks
CN113095439A (en) Heterogeneous graph embedding learning method based on attention mechanism
CN103902775B (en) Multilayer obstacle-avoiding Steiner minimal tree construction method for very large scale integration
CN107103113A (en) Towards the Automation Design method, device and the optimization method of neural network processor
CN107016175A (en) It is applicable the Automation Design method, device and the optimization method of neural network processor
CN107330446A (en) A kind of optimization method of depth convolutional neural networks towards image classification
CN113962358B (en) Information diffusion prediction method based on time sequence hypergraph attention neural network
CN103297983A (en) Wireless sensor network node dynamic deployment method based on network flow
CN108985342A (en) A kind of uneven classification method based on depth enhancing study
CN104598611A (en) Method and system for sequencing search entries
CN109101629A (en) A kind of network representation method based on depth network structure and nodal community
CN107662617A (en) Vehicle-mounted interactive controlling algorithm based on deep learning
CN102915448B (en) A kind of three-dimensional model automatic classification method based on AdaBoost
CN109063719A (en) A kind of image classification method of co-ordinative construction similitude and category information
CN108805151A (en) A kind of image classification method based on depth similitude network
CN108496188A (en) Method, apparatus, computer system and the movable equipment of neural metwork training
Zhang et al. Representation learning of knowledge graphs with entity attributes
Hussain et al. Enabling smart cities with cognition based intelligent route decision in vehicles empowered with deep extreme learning machine
CN114238524B (en) Satellite frequency-orbit data information extraction method based on enhanced sample model
CN105160598A (en) Power grid service classification method based on improved EM algorithm
Yang Construction of graduate behavior dynamic model based on dynamic decision tree algorithm

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2018-03-09)