CN107273509A - Neural network data memory, data storage method and data search method - Google Patents

Neural network data memory, data storage method and data search method

Info

Publication number
CN107273509A
CN107273509A (application CN201710470753.2A)
Authority
CN
China
Prior art keywords
node
hidden node
input
hidden
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710470753.2A
Other languages
Chinese (zh)
Other versions
CN107273509B (en)
Inventor
孙建业
吴宏伟
程世杰
辛士光
王华林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN201710470753.2A priority Critical patent/CN107273509B/en
Publication of CN107273509A publication Critical patent/CN107273509A/en
Application granted granted Critical
Publication of CN107273509B publication Critical patent/CN107273509B/en
Legal status: Expired - Fee Related (Current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval of structured data, e.g. relational data; Database structures therefor; File system structures therefor
    • G06F16/22 - Indexing; Data structures therefor; Storage structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to the field of data storage, and in particular to a neural network data memory, a data storage method, and a data search method. It addresses two shortcomings of existing data storage methods: the computational cost of a data search grows with the amount of stored data, and the primary key must be used as the search field. The neural network data memory comprises: n input nodes; n output nodes; a first hidden-node group connected between the input nodes and the output nodes, consisting of 2 layers with 2 hidden nodes per layer, whose network weights are random numbers in the interval (-1, 1); and a second hidden-node group connected between the input nodes and the output nodes, consisting of 1 layer whose number of hidden nodes depends on the scale of the data to be stored and whose network weights are adjustable. The invention is applicable to data storage.

Description

Neural network data memory, data storage method and data search method
Technical field
The present invention relates to the field of data storage, and in particular to a neural network data memory, a data storage method, and a data search method.
Background technology
Existing memories and storage methods use address-based storage, so the larger the amount of stored data, the longer a data search takes. Moreover, when retrieving from a database, the search field must contain the primary key content, otherwise the search time becomes very long. A new memory and storage method are therefore needed to overcome these defects of the prior art.
Summary of the invention
The purpose of the invention is to overcome two shortcomings of existing data storage devices and methods, namely that the computational cost of a data search grows with the amount of stored data and that the primary key must be used as the search field, and to propose a neural network data memory, a data storage method, and a data search method.
A neural network data memory, wherein the neural network memory is a neural network model with the following features:
n input nodes;
n output nodes;
a first hidden-node group connected between the input nodes and the output nodes, consisting of 2 layers with 2 hidden nodes per layer; the network weights of the hidden nodes in the first hidden-node group are random numbers in the interval (-1, 1);
a second hidden-node group connected between the input nodes and the output nodes, consisting of 1 layer whose number of hidden nodes depends on the scale of the data to be stored; the network weights of the hidden nodes in the second hidden-node group are adjustable.
A neural network data storage method, comprising:
Step one: dividing the data to be stored into Q groups, each group of values representing one field to be stored;
Step two: inputting the data through the input nodes into the neural network data memory described above;
Step three: using a gradient method to adjust the network weights of the second hidden-node group in the neural network memory until the network output is 0 within a set precision, i.e. y = {y1, y2, …, yn} = {0, 0, …, 0}.
A neural network data query method, used to query data stored by the neural network data storage method of the present invention, the neural network data query method comprising:
Step one: inputting an input vector k = {k1, k2, …, km} to the input nodes; the input vector k represents the known fields, and the input nodes that receive no input represent the unknown fields;
Step two: adjusting, by a gradient method, the vector values of the input nodes corresponding to the unknown fields so that the values of the output nodes become 0;
Step three: taking the resulting vector values of the input nodes corresponding to the unknown fields as the queried data.
Beneficial effects of the present invention are: 1. With this storage structure, the computational cost of a data search does not grow with the amount of stored data, and queries are fast: with two fields, querying one field from the known other field takes as little as 10 ms. 2. Data can be retrieved from only part of their content, which simulates the associative-memory function of the human brain, a capability that existing computer memories lack. The neural network data memory can therefore be used to store the knowledge and information of artificial-intelligence systems.
Brief description of the drawings
Fig. 1 is a structural diagram of the neural network data memory of the present invention;
Fig. 2 is a schematic diagram of the neural network data query method in embodiment five of the present invention.
Embodiment
Embodiment one: This embodiment provides a neural network data memory which, as shown in Fig. 1, is a neural network model with the following features:
n input nodes.
n output nodes.
A first hidden-node group connected between the input nodes and the output nodes; the first hidden-node group consists of 2 layers with 2 hidden nodes per layer, and the network weights of its hidden nodes are random numbers in the interval (-1, 1).
A second hidden-node group connected between the input nodes and the output nodes; the second hidden-node group consists of 1 layer whose number of hidden nodes depends on the scale of the data to be stored, for example 10 or 100 nodes, and the network weights of its hidden nodes are adjustable.
The neural network data memory of this embodiment is essentially a neural network model. In use, the data to be stored are fed to the input nodes, processed by the hidden nodes, and the result is emitted by the output nodes. This process resembles the training of a neural network model: the data to be stored correspond to training data, the processing by the hidden nodes corresponds to the training process, and the output corresponds to the classification result. If the structure of the hidden nodes were not improved, a search could return data that were never stored. The present invention therefore constrains the structure of the hidden nodes to avoid this situation.
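The structure of embodiment one can be sketched in Python. This is a minimal sketch under stated assumptions: the description fixes the sizes and weight ranges of the two groups but not how the groups are wired together, so the names `group1` and `group2`, the zero initialisation of the adjustable weights, and the concrete sizes `n = 4`, `h2 = 10` are illustrative choices, not the patent's implementation.

```python
import random

random.seed(0)

n = 4    # the memory has n input nodes and n output nodes
h2 = 10  # second-group size is chosen from the data scale, e.g. 10 or 100 nodes

# First hidden-node group: 2 layers of 2 hidden nodes each; its network
# weights are fixed random numbers in the interval (-1, 1).
group1 = [
    [[random.uniform(-1, 1) for _ in range(n)] for _ in range(2)],  # layer 1
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)],  # layer 2
]

# Second hidden-node group: a single layer; these weights are the
# adjustable part that the gradient method tunes during storage.
# Each node also carries a coordinate vector d in the input space.
group2 = [
    {"a": [0.0] * n,                                   # connection weights a_i
     "d": [random.uniform(-1, 1) for _ in range(n)]}   # coordinate vector d
    for _ in range(h2)
]
```

The fixed first group and the adjustable second group are kept separate here because, in the description, only the second group's weights change when data are stored.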
Embodiment two: This embodiment differs from embodiment one as follows.
The action function of an input node is y(1) = x(1), where y(1) is the output quantity of the input node and x(1) is its input quantity.
The action function of an output node produces its output quantity y(2) from its input quantities, where x(2)i is the i-th input quantity of the output node.
The action function of a hidden node in the first hidden-node group produces its output quantity y(3) from its input quantities, where x(3)i is the i-th input quantity of the hidden node in the first hidden-node group, ai is the connection weight from the i-th input quantity to the hidden node, c is the network weight of the hidden node, and m is the number of input quantities in the first hidden-node group.
The action function of a hidden node in the second hidden-node group produces its output quantity y(4) from its input quantities, where x(4)i is an input quantity of the hidden node in the second hidden-node group, ai is the connection weight from x(4)i to the hidden node, b and c are network weights of the hidden node, X = {x1, x2, …, xm} is the input vector of the hidden node, and d is the coordinate vector of the hidden node in its input-vector space. The expression of the window function f is:
f(x) = 0 for x > r0, and f(x) = 1 - x/r0 for x ≤ r0
where r0 is the window radius.
The effect of the window function is that only hidden nodes in the second hidden-node group whose distance to the input data is less than the radius r0 are activated, because only those hidden nodes have a non-zero output. This guarantees that, no matter how many nodes the second hidden-node group contains, learning or searching one datum activates only the hidden nodes within distance r0 of it. The data-search computation of this memory therefore does not grow with the amount of stored data, making it suitable for storing and querying mass data.
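The window function f with radius r0 is simple enough to state directly in code. In this sketch, the argument `x` is assumed to be the distance from the input data to the hidden node's coordinate vector d:

```python
def window(x: float, r0: float) -> float:
    """Window function f of the second hidden-node group:
    0 outside the radius r0, decaying linearly from 1 at x = 0
    to 0 at x = r0 inside it."""
    if x > r0:
        return 0.0
    return 1.0 - x / r0

# A hidden node farther than r0 from the input contributes nothing,
# so a store or search only ever touches the nodes inside the window.
```

This locality is exactly why the search cost is independent of the stored volume: increasing the number of stored records adds hidden nodes, but any one query still activates only the few nodes within r0 of it.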
Other steps and parameters are the same as in embodiment one.
Embodiment three: This embodiment provides a neural network data storage method, comprising:
Step one: dividing the data to be stored into Q groups, each group of values representing one field to be stored.
Step two: inputting the data through the input nodes into the neural network data memory of embodiment two. It should be noted that which kind of information each input node receives must be fixed in advance. For example, if a neural network memory is used to store student information and input nodes 1 to 5 receive a student's student-ID information, then all subsequent storage should follow the same layout; in this way the meaning of a query result can later be determined from the sequence numbers of the input nodes.
Step three: using a gradient method to adjust the network weights of the second hidden-node group in the neural network memory until the network output is 0 within the set precision, i.e. y(2) = {y1, y2, …, yn} = {0, 0, …, 0}.
Other steps and parameters are the same as in embodiment one or two.
Embodiment four: This embodiment differs from one of embodiments one to three as follows.
In step three, the formula of the gradient descent method is:
w = w - r · ∂J/∂w
where w denotes a network weight of a hidden node in the second hidden-node group, J = y1² + y2² + … + yn² is the objective function of the neural network memory, and r is the step size.
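The update rule w = w - r · ∂J/∂w can be demonstrated on a toy objective. The objective here is a stand-in (a single weight with J(w) = (w - 3)², not the patent's J = y1² + … + yn² over a trained network); the point is only that repeated application of the rule drives J toward 0:

```python
def gradient_step(w, r, dJ_dw):
    """One gradient-descent update: w <- w - r * dJ/dw."""
    return w - r * dJ_dw(w)

# Toy stand-in objective J(w) = (w - 3)^2 with dJ/dw = 2 * (w - 3).
dJ_dw = lambda w: 2.0 * (w - 3.0)

w = 0.0
for _ in range(100):
    w = gradient_step(w, 0.1, dJ_dw)  # step size r = 0.1
# w converges to the minimiser w = 3, where J = 0
```

In the memory itself, the same rule is applied to every adjustable weight of the second hidden-node group, and thanks to the window function only the weights of nodes within r0 of the input data receive non-zero gradients.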
Other steps and parameters are the same as in one of embodiments one to three.
Embodiment five: This embodiment provides a neural network data query method which, as shown in Fig. 2, is used to query data stored by the method of embodiment four. The neural network data query method comprises:
Step one: inputting an input vector k = {k1, k2, …, km} to the input nodes; the input vector k represents the known fields, and the input nodes that receive no input represent the unknown fields (denoted z in Fig. 2).
Step two: adjusting, by a gradient method, the vector values of the input nodes corresponding to the unknown fields so that the values of the output nodes become 0.
Step three: taking the resulting vector values of the input nodes corresponding to the unknown fields as the queried data.
For example, suppose the known field is a student's student-ID information and the unknown field is the student's name. The student-ID information is input to its input nodes, and the vector values of the other fields (the fields to be queried) are adjusted until the output nodes are 0; the resulting vector values of those input nodes are the queried student-name information.
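The query procedure of steps one to three can be sketched as follows. The trained memory is replaced here by a hypothetical stand-in `toy_net` whose output is 0 exactly when the known field and the unknown field form a stored record (here the pair student-ID 7, name code 42); the gradient ∂J/∂z is estimated numerically rather than by backpropagation through the real network:

```python
def query_unknown(z0, known, net, r=0.1, steps=200, eps=1e-6):
    """Recover an unknown field by the query rule z <- z - r * dJ/dz
    with J(z) = net(known, z)**2, using a central-difference
    estimate of the gradient dJ/dz."""
    z = z0
    for _ in range(steps):
        J = lambda v: net(known, v) ** 2
        grad = (J(z + eps) - J(z - eps)) / (2.0 * eps)
        z = z - r * grad
    return z

# Hypothetical stand-in for the trained memory: output is 0 only at
# the stored association z = known + 35 (so ID 7 stores name code 42).
toy_net = lambda known, z: z - (known + 35.0)

name_code = query_unknown(0.0, 7.0, toy_net)  # converges to 42
```

Note the symmetry with storage: storing adjusts the second-group weights with the inputs fixed, while querying adjusts the unknown inputs with the weights fixed, both by descending the same objective toward a zero network output.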
Embodiment six: The neural network data query method of this embodiment differs from embodiment five in that, in step two, the formula used by the gradient method to adjust the vector values of the input nodes corresponding to the unknown fields is:
zi = zi - r · ∂J/∂zi
where i = 1, 2, 3, …, p, p is the number of input nodes corresponding to the unknown fields, and zi is the value of the i-th such input node.
Other steps and parameters are the same as in embodiment five.
The present invention may have various other embodiments. Without departing from the spirit and essence of the invention, those skilled in the art can make corresponding changes and variations according to the present invention, and all such changes and variations fall within the protection scope of the appended claims of the present invention.

Claims (6)

1. A neural network data memory, characterized in that the neural network memory is a neural network model with the following features:
n input nodes;
n output nodes;
a first hidden-node group connected between the input nodes and the output nodes, consisting of 2 layers with 2 hidden nodes per layer, the network weights of whose hidden nodes are random numbers in the interval (-1, 1);
a second hidden-node group connected between the input nodes and the output nodes, consisting of 1 layer whose number of hidden nodes depends on the scale of the data to be stored, the network weights of whose hidden nodes are adjustable.
2. The neural network data memory according to claim 1, characterized in that:
the action function of an input node is y(1) = x(1), where y(1) is the output quantity of the input node and x(1) is its input quantity;
the action function of an output node produces its output quantity y(2) from its input quantities, where x(2)i is the i-th input quantity of the output node;
the action function of a hidden node in the first hidden-node group produces its output quantity y(3) from its input quantities, where x(3)i is the i-th input quantity of the hidden node in the first hidden-node group, ai is the connection weight from the i-th input quantity to the hidden node, c is the network weight of the hidden node, and m is the number of input quantities in the first hidden-node group;
the action function of a hidden node in the second hidden-node group produces its output quantity y(4) from its input quantities, where x(4)i is an input quantity of the hidden node in the second hidden-node group, b and c are network weights of the hidden node, X = {x1, x2, …, xm} is the input vector of the hidden node, and d is the coordinate vector of the hidden node in its input-vector space; the expression of the window function f is:
f(x) = 0 for x > r0, and f(x) = 1 - x/r0 for x ≤ r0
where r0 is the window radius.
3. A neural network data storage method, characterized by comprising:
step one: dividing the data to be stored into Q groups, each group of values representing one field to be stored;
step two: inputting the data through the input nodes into the neural network data memory according to claim 2;
step three: using a gradient method to adjust the network weights of the second hidden-node group in the neural network memory until the output vector of the output nodes is 0 within a set precision, i.e. y(2) = {y1, y2, …, yn} = {0, 0, …, 0}.
4. The neural network data storage method according to claim 3, characterized in that:
in step three, the formula of the gradient descent method is:
w = w - r · ∂J/∂w
where w denotes a network weight of a hidden node in the second hidden-node group (w being ai, b, or c); J = y1² + y2² + … + yn² is the objective function of the neural network memory; and r is the step size.
5. A neural network data query method, characterized in that the method is used to query data stored by the method according to claim 4, the neural network data query method comprising:
step one: inputting an input vector k = {k1, k2, …, km} to the input nodes, the input vector k representing the known fields, and the input nodes that receive no input representing the unknown fields;
step two: adjusting, by a gradient method, the vector values of the input nodes corresponding to the unknown fields so that the values of the output nodes become 0;
step three: taking the resulting vector values of the input nodes corresponding to the unknown fields as the queried data.
6. The neural network data query method according to claim 5, characterized in that, in step two, the formula used by the gradient method to adjust the vector values of the input nodes corresponding to the unknown fields is:
zi = zi - r · ∂J/∂zi
where i = 1, 2, 3, …, p, p is the number of input nodes corresponding to the unknown fields, and zi is the value of the i-th such input node.
CN201710470753.2A 2017-06-20 2017-06-20 Neural network data memory, data storage method and data search method Expired - Fee Related CN107273509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710470753.2A CN107273509B (en) 2017-06-20 2017-06-20 Neural network data memory, data storage method and data search method


Publications (2)

Publication Number Publication Date
CN107273509A true CN107273509A (en) 2017-10-20
CN107273509B CN107273509B (en) 2020-06-05

Family

ID=60068107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710470753.2A Expired - Fee Related CN107273509B (en) 2017-06-20 2017-06-20 Neural network data memory, data storage method and data search method

Country Status (1)

Country Link
CN (1) CN107273509B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0385436A2 (en) * 1989-02-28 1990-09-05 Fujitsu Limited An error absorbing system in a neuron computer
CN101968832A (en) * 2010-10-26 2011-02-09 东南大学 Coal ash fusion temperature forecasting method based on construction-pruning mixed optimizing RBF (Radial Basis Function) network
CN102651088A (en) * 2012-04-09 2012-08-29 南京邮电大学 Classification method for malicious code based on A_Kohonen neural network
CN104932267A (en) * 2015-06-04 2015-09-23 曲阜师范大学 Neural network learning control method adopting eligibility trace
EP2923307A1 (en) * 2012-11-23 2015-09-30 Universite De Bretagne Sud Neural network architecture, production method, and programmes corresponding thereto
CN105787592A (en) * 2016-02-26 2016-07-20 河海大学 Wind turbine generator set ultra-short period wind power prediction method based on improved RBF network
US9430735B1 (en) * 2012-02-23 2016-08-30 Micron Technology, Inc. Neural network in a memory device
WO2017006104A1 (en) * 2015-07-07 2017-01-12 Touchtype Ltd. Improved artificial neural network for language modelling and prediction
CN106485205A (en) * 2016-09-20 2017-03-08 北京工业大学 Transfinited the Mental imagery Method of EEG signals classification of learning machine based on multilamellar
CN106650922A (en) * 2016-09-29 2017-05-10 清华大学 Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mark S. Goldman et al., "Memory without Feedback in a Neural Network", Elsevier *
Ping Chi et al., "PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory", 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture *
Wu Hongwei et al., "Design of a Distributed Intrusion Detection Model Based on Intelligent Agents", Journal of Harbin University of Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111435939A (en) * 2019-01-14 2020-07-21 百度在线网络技术(北京)有限公司 Method and device for dividing storage space of node
CN111090673A (en) * 2019-12-20 2020-05-01 北京百度网讯科技有限公司 Cache unit searching method and related equipment
CN111090673B (en) * 2019-12-20 2023-04-18 北京百度网讯科技有限公司 Cache unit searching method and related equipment

Also Published As

Publication number Publication date
CN107273509B (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN102622515B (en) A kind of weather prediction method
CN108287881A (en) A kind of optimization method found based on random walk relationship
CN104598611B (en) The method and system being ranked up to search entry
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN103226741A (en) Urban water supply network tube explosion prediction method
CN108830325A (en) A kind of vibration information classification of landform recognition methods based on study
CN110807509A (en) Depth knowledge tracking method based on Bayesian neural network
CN109101717A (en) Solid propellant rocket Reliability Prediction Method based on reality with the study of fuzzy data depth integration
CN108304674A (en) A kind of railway prediction of soft roadbed settlement method based on BP neural network
CN107886158A (en) A kind of bat optimized algorithm based on Iterated Local Search and Stochastic inertia weight
CN109492075A (en) A kind of transfer learning sort method generating confrontation network based on circulation
CN106777402A (en) A kind of image retrieval text method based on sparse neural network
CN106844738A (en) The sorting technique of Junker relation between food materials based on neutral net
CN107506350A (en) A kind of method and apparatus of identification information
CN105550747A (en) Sample training method for novel convolutional neural network
CN108280207A (en) A method of the perfect Hash of construction
CN107945534A (en) A kind of special bus method for predicting based on GMDH neutral nets
CN107273509A (en) A kind of Neural Network Data memory, date storage method and data search method
CN108073978A (en) A kind of constructive method of the ultra-deep learning model of artificial intelligence
CN103679269A (en) Method and device for selecting classifier sample based on active learning
CN108416483A (en) RBF type teaching quality evaluation prediction techniques based on PSO optimizations
Zhang Predicting model of traffic volume based on Grey-Markov
CN107798380A (en) Ephemeris computational methods and computing system on detector device
CN107273971A (en) Architecture of Feed-forward Neural Network self-organizing method based on neuron conspicuousness
CN106844626A (en) Using microblogging keyword and the method and system of positional information simulated air quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200605

Termination date: 20210620