CN105740950B - Template matching method for neural networks based on the sliding-tooth method - Google Patents

Template matching method for neural networks based on the sliding-tooth method

Info

Publication number
CN105740950B
CN105740950B CN201610035536.6A CN201610035536A CN105740950B
Authority
CN
China
Prior art keywords
error
data
template matching
error range
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610035536.6A
Other languages
Chinese (zh)
Other versions
CN105740950A (en)
Inventor
王堃
张明翔
岳东
孙雁飞
吴蒙
亓晋
陈思光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Star Innovation Technology Co., Ltd.
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201610035536.6A
Publication of CN105740950A
Application granted
Publication of CN105740950B
Expired - Fee Related (current legal status)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a template matching method for a neural network based on the sliding-tooth method, comprising the following steps: according to the construction rules of an error back-propagation neural network, the network structure is divided into an input layer, a hidden layer and an output layer; a first error range, a maximum number of training iterations and a second error range are set in the hidden layer and initialized; according to a data-blocking mechanism, the input data is subjected to similarity detection and divided into multiple data blocks; the processed data is matched using the sliding-tooth method; it is then judged whether the error of the network nodes and the error of the template matching fall within the first error range and the second error range respectively, or whether the error of the template matching is within the second error range and the maximum number of training iterations has been reached; if so, the result is output; otherwise the sliding-tooth weights are corrected and the above steps are repeated until a result is output. The invention further improves template matching accuracy and improves running time and algorithm stability.

Description

Template matching method for neural networks based on the sliding-tooth method
Technical field
The present invention relates to the field of big data processing, and in particular to a template matching method for neural networks based on the sliding-tooth method.
Background art
The rapid development of emerging technologies such as cloud computing and the Internet of Things is driving data to grow at an unprecedented rate, and the era of big data has arrived. In massive, complex and highly redundant data, however, it is difficult to obtain valuable information quickly and effectively. What is needed is a way to find valuable information quickly and accurately, extract the effective information, and organize it into an effective solution. A neural-network-based template matching method for big data intelligently analyzes and classifies the data, sets up the required information model according to known demands, and then uses this algorithm to perform template matching on massive data and quickly obtain the effective information. However, most current template matching algorithms search in order of event importance; because a node must record multiple identical events to keep the sequence consistent, it has to process similar information repeatedly, which places great pressure on computation and caching and makes such algorithms unsuitable for scenarios with higher real-time requirements.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes a template matching method for a neural network based on the sliding-tooth method, which uses a weight-corrected sliding-tooth error back-propagation neural network template matching algorithm to classify the input data, learn from it, and match it against template data.
The template matching method for a neural network based on the sliding-tooth method comprises the following steps:
Step 1: according to the construction rules of an error back-propagation neural network, divide the network structure into an input layer, a hidden layer and an output layer;
Step 2: set a first error range, a maximum number of training iterations and a second error range in the hidden layer and initialize them, where the first error range is the acceptable error range of the neural network and the second error range is the acceptable error range of the template matching value;
Step 3: according to a data-blocking mechanism, perform similarity detection on the input data and divide it into multiple data blocks;
Step 4: match the processed data using the sliding-tooth method;
Step 5: judge whether the error of the network nodes and the error of the template matching fall within the first error range and the second error range respectively, or whether the error of the template matching is within the second error range and the maximum number of training iterations has been reached; if so, output the result; otherwise execute step 6;
Step 6: correct the network weights of the input layer;
Step 7: repeat steps 4 to 6 until a result is output (a minimal illustrative sketch of this loop is given below).
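Purely to illustrate the control flow of steps 4 to 7, the following Python sketch runs a toy matching loop on a single data block. The thresholds, the learning rate, the toy linear "matching" step and the gradient-style weight correction are all assumptions of the sketch and merely stand in for the sliding-tooth correction described below; they are not taken from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: one data block and a target template of the same shape.
data_block = rng.normal(size=8)
template = 0.5 * data_block + 0.1            # the pattern the loop should recover

weights = rng.uniform(-1.0, 1.0, size=data_block.shape)   # step 2: initialise
first_error = 1e-3        # acceptable network-node error (illustrative value)
second_error = 1e-3       # acceptable template-matching error (illustrative value)
max_epochs = 10_000       # maximum number of training iterations
lr = 0.1                  # learning rate of the weight correction (assumed)

for epoch in range(1, max_epochs + 1):
    output = weights * data_block                       # step 4: "match" the block
    node_error = output - template                      # per-node error
    match_error = float(np.mean(np.abs(node_error)))    # template-matching error
    # Step 5: stop when both errors are acceptable, or when the matching error
    # is acceptable and the iteration budget is exhausted.
    if (np.all(np.abs(node_error) <= first_error) and match_error <= second_error) \
            or (match_error <= second_error and epoch == max_epochs):
        break
    # Step 6: correct the weights (a plain gradient step stands in for the
    # sliding-tooth correction of the patent).
    weights -= lr * node_error * data_block

print(f"stopped after {epoch} iterations, matching error {match_error:.2e}")
```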
The detailed process of step 3 is as follows: using a similarity detection algorithm, calculate the absolute error between corresponding points of the input node data and the template, and accumulate the absolute errors of the input data points until the sum exceeds a given threshold; then stop accumulating and compute the similarity mean of this data block. When the similarity mean is greater than the expected value, stop segmenting the data.
The detailed process of step 4 is as follows: in the network training stage, the output signal is compared with the desired output to obtain an error signal; the error signal is fed back step by step from the output layer to the hidden layer and the input layer, and the weights of the hidden layer and the input layer are dynamically corrected.
The present invention performs similarity detection on massive data to determine the size of the segmented data blocks; uses the template matching algorithm of a weight-corrected sliding-tooth error back-propagation neural network to classify the input data, learn from it, and match it against template data; and uses the mean absolute difference as the parameter that determines the degree of data matching, so as to obtain the data information of the template match. This accelerates the matching process in both speed and accuracy and improves the stability of the algorithm. Template matching accuracy is further improved, and running time and algorithm stability are improved.
Brief description of the drawings
Fig. 1 is the network model of the neural-network template matching based on the sliding-tooth method;
Fig. 2 is a flow chart of an embodiment of the template matching method for a neural network based on the sliding-tooth method;
Fig. 3 shows the relationship between the threshold and the convergence count for the embodiment of Fig. 2;
Fig. 4 shows the relationship between the connection weights and the convergence count for the embodiment of Fig. 2;
Fig. 5 shows the relationship between the learning index and the convergence count for the embodiment of Fig. 2;
Fig. 6 shows the relationship between the convergence count and the number of iterations for the embodiment of Fig. 2.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
A neural-network-based template matching method for big data first splits the data to be matched and processes it in modules, which helps accelerate the matching of the required template data information. The Sequential Similarity Detection Algorithm (SSDA) is used to perform similarity detection on the input data, as in formula (1): the absolute error between corresponding points of the input data and the template is calculated and the errors of the input data points are accumulated. Whenever the accumulated error exceeds the given threshold, accumulation stops and the similarity mean E of this data block is calculated; if the similarity mean E is greater than the set expected value, segmentation of this data block is terminated; as long as the similarity mean E is less than the expected value, this partial data segment is set as a small data block. Processing real-time data in this way gives good data-processing results.
E = Σ | (S(i, j) - S̄) - (T(i, j) - T̄) |    (1)
Here S̄ and T̄ respectively denote the mean of the matching (real-time) data and the mean of the comparison template, and S(i, j) and T(i, j) respectively denote the matching result value of the real-time data and the comparison template value.
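As an illustration only, the following Python sketch mimics the block segmentation of step 3 on one-dimensional data: it accumulates the mean-corrected absolute error between the incoming data and the template until the given threshold is exceeded, then closes the block once the block's similarity mean exceeds the expected value. The function name, the one-dimensional indexing and the cyclic reuse of the template are assumptions of the sketch, not part of the disclosure.

```python
import numpy as np

def ssda_segment(stream, template, threshold, expected_mean):
    """Hedged sketch of the SSDA-style data blocking described above."""
    s_mean, t_mean = stream.mean(), template.mean()
    blocks, start, acc = [], 0, 0.0
    for k in range(len(stream)):
        # accumulate the absolute error of corresponding points (formula (1) style)
        acc += abs((stream[k] - s_mean) - (template[k % len(template)] - t_mean))
        if acc > threshold:                        # accumulation stops here
            block = stream[start:k + 1]
            if acc / len(block) > expected_mean:   # similarity mean E of this block
                blocks.append(block)               # terminate segmentation of the block
                start, acc = k + 1, 0.0
    if start < len(stream):
        blocks.append(stream[start:])              # remainder becomes a small data block
    return blocks

# Example: segment 200 noisy samples against a short reference template.
rng = np.random.default_rng(1)
data, ref = rng.normal(size=200), rng.normal(size=20)
print([len(b) for b in ssda_segment(data, ref, threshold=15.0, expected_mean=0.5)])
```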
Template matching is then carried out. The nonlinear mapping characteristic of a general error back-propagation neural network is suitable for mapping large amounts of unstructured, out-of-order data, and at the same time the neural network model is one of the principal factors that influence network learning performance. The present embodiment uses numerical information in place of data packets, which creates the conditions for fast parameter correction and fast learning of the neural network. The data form of the input layer is set to (x_1, x_2, x_3, …, x_k), 1 ≤ k ≤ n, where n is the number of input-layer nodes, and the data function of the output layer is set to y_k = f(x_k), where f(x_k) denotes the network output function.
The network adjusts the connection weights between neurons through sample learning and training so as to approximate the model parameters, so that while the connection weights are adjusted the network also learns its structure. The error is first calculated using formula (2). The hidden-layer error calculation uses the Sigmoid activation function; the Sigmoid expression and its derivative on the closed interval [0, 1] are defined as formula (3), where λ determines the compression degree of the Sigmoid function and of the data, and is generally taken as 1 to avoid falling into a local minimum.
J = x_k - y_k    (2)
Assume the output function of the j-th node in the network is y_k, denoted as formula (4), where ω_ij is the connection weight, h(s_i) is the output of the i-th hidden-layer neuron, and s_i is the algebraic sum of its inputs, given by formula (5), where a_n is the weight connecting the input unit to the hidden layer and b_m is the connection weight between the hidden layer and the output unit.
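Because formulas (3) to (5) are not reproduced in this text, the following Python sketch assumes the standard λ-parameterised Sigmoid for formula (3) and an ordinary single-hidden-layer forward pass for formulas (4) and (5); the weight shapes follow the 20-50-1 structure used later in the embodiment, and all names and initial values are illustrative.

```python
import numpy as np

LAMBDA = 1.0   # λ: compression degree of the Sigmoid; the text suggests taking 1

def sigmoid(x, lam=LAMBDA):
    # assumed form of formula (3): f(x) = 1 / (1 + exp(-λx))
    return 1.0 / (1.0 + np.exp(-lam * x))

def sigmoid_deriv(x, lam=LAMBDA):
    # derivative of the assumed Sigmoid: f'(x) = λ f(x) (1 - f(x))
    fx = sigmoid(x, lam)
    return lam * fx * (1.0 - fx)

def forward(x, a, b):
    """Assumed forward pass: a is the input-to-hidden weight matrix (a_n),
    b the hidden-to-output weight matrix (b_m)."""
    s = x @ a                  # s_i: algebraic sum of the inputs of hidden neuron i
    h = sigmoid(s)             # h(s_i): hidden-layer outputs
    y = sigmoid(h @ b)         # y_j = f(Σ_i ω_ij h(s_i)), assumed form of formula (4)
    return h, y

# Tiny usage example with the 20-50-1 structure of the embodiment.
rng = np.random.default_rng(2)
x = rng.normal(size=(1, 20))
a = rng.normal(scale=0.1, size=(20, 50))
b = rng.normal(scale=0.1, size=(50, 1))
h, y = forward(x, a, b)
print(h.shape, y.shape)   # (1, 50) (1, 1)
```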
The characteristic parameters of the neural network need to be adjusted dynamically, and the sliding-tooth method can adjust the neural network quickly and dynamically over a very wide range of applications without introducing additional parameters. However, the sliding-tooth method requires the least-squares adjustment (hereinafter "adjustment") to follow a normal distribution, and actual conditions are often too complicated to satisfy this. At the same time, computing the pre-adjustment matrix requires averaging the difference-vector sample values of the pre-adjustment, which raises the question of how much data to select. If too little data is selected, the estimated pre-adjustment matrix has a large error; conversely, if too much data is selected, the estimated pre-adjustment matrix can hardly reflect the transient response of the system. When the number of feature values of the target data to be learned is large, both the obtained pre-adjustment and the pre-mean are larger, so the amplitude by which the neural network adjusts its weights should increase accordingly.
The state of the network system is approximately calculated from the position of the neural network's center and the magnitude of the weights. The state X̄ of the discrete system to be optimized is taken as the input state of the network system. Assuming the input data is M-dimensional, the mean of the input can be calculated as X̄, the least-squares adjustment is A_k, and the system state X_{i,k} of the neural network is then calculated by the following formula,
where δ is a scalar determined by the spread of X_{i,k} relative to X̄, and generally takes a value between 0.001 and 1, which helps to estimate the state of the whole network.
From the total probability and the conditional probability it follows that X_{i,k|k-1} = X_{i,k-1}; the pre-mean and pre-adjustment of the system are then calculated as follows,
where X̄_{k|k-1} is the value of X̄_k relative to X̄_{k-1}, and A_{k|k-1} is the value of A_k relative to A_{k-1}.
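Since the exact forms of the pre-mean and pre-adjustment formulas are not reproduced in this text, the following Python sketch assumes the common choice of a weighted sample mean for X̄_{k|k-1} and a weighted scatter matrix for A_{k|k-1} over the propagated points X_{i,k|k-1} = X_{i,k-1}; both choices, and the equal default weights, are assumptions of the sketch.

```python
import numpy as np

def predict_pre_mean_and_adjustment(points_prev, weights=None):
    """Hedged sketch of the prediction step: pre-mean and pre-adjustment
    estimated from the propagated points X_{i,k|k-1} = X_{i,k-1}."""
    pts = np.asarray(points_prev, dtype=float)        # shape: (n_points, M)
    n = pts.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    pre_mean = w @ pts                                # assumed X̄_{k|k-1}
    centred = pts - pre_mean
    pre_adjust = centred.T @ (w[:, None] * centred)   # assumed A_{k|k-1}
    return pre_mean, pre_adjust

# Example with M = 3 dimensional data and 7 propagated points.
rng = np.random.default_rng(3)
m, A = predict_pre_mean_and_adjustment(rng.normal(size=(7, 3)))
print(m.shape, A.shape)   # (3,) (3, 3)
```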
When the output error fails to meet the requirement, the value information of the input data is fed back and modified in back-propagation, dynamically adjusting the learning capability. Each neuron node of the hidden layer receives only the feedback of its own unit; there is no feedback connection between nodes, so they do not interfere with each other, and each hidden-layer node receives only the adjustment information of certain specific data (data units). Weight correction is an approximation method that follows the trajectory obtained by steepest descent in weight space; the network state and the weight update are calculated as in formulas (10) to (13), and the weight correction of the network is performed by the sliding-tooth method, dynamically adjusting the learning direction of the network and accelerating the convergence rate of the algorithm.
Akk=h (xk)Ak|k-1h(xk)T (10)
Aik=Ak|k-1h(xk)T (11)
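The following Python sketch applies formulas (10) and (11) as written and, because formulas (12) and (13) are not reproduced in this text, assumes they take the usual Kalman-type form of a gain followed by a state correction; h(x_k) is treated here as a p × M measurement matrix, and the final covariance correction is likewise an assumption of the sketch.

```python
import numpy as np

def sliding_update(x_pred, A_pred, h_jac, z, z_pred):
    """Sketch of the state/weight update built on formulas (10)-(11)."""
    A_kk = h_jac @ A_pred @ h_jac.T          # formula (10)
    A_ik = A_pred @ h_jac.T                  # formula (11)
    K = A_ik @ np.linalg.inv(A_kk)           # assumed formula (12): gain
    x_new = x_pred + K @ (z - z_pred)        # assumed formula (13): corrected state
    A_new = A_pred - K @ A_kk @ K.T          # assumed correction of the adjustment
    return x_new, A_new

# Dimensions: M = 4 state components, p = 2 observed components.
rng = np.random.default_rng(4)
H = rng.normal(size=(2, 4))
x_new, A_new = sliding_update(np.zeros(4), np.eye(4), H,
                              z=rng.normal(size=2), z_pred=np.zeros(2))
print(x_new.shape, A_new.shape)   # (4,) (4, 4)
```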
The neural network uses an error back-propagation structure. In the network training stage, the output signal of the BP neural network is compared with the desired output to obtain the error signal, and the error is propagated from the output layer through each intermediate layer by the sliding-tooth method, correcting the connection weights layer by layer. Mainly from the angle of compensation, the mapping approximation capability and self-learning capability of the neural network are used to continuously adjust the network weights and correct the result, thereby improving the algorithm and increasing the computational accuracy and convergence speed of the template matching algorithm.
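As a concrete illustration of the layer-by-layer correction just described, the following Python sketch performs one error back-propagation step for the 20-50-1 network: the output error is propagated back through the hidden layer and both weight matrices are corrected. A plain delta-rule gradient step stands in for the sliding-tooth correction, and the learning rate and initial weights are assumptions of the sketch.

```python
import numpy as np

def backprop_step(x, target, a, b, lr=0.05, lam=1.0):
    """One illustrative back-propagation correction of the network weights."""
    f = lambda v: 1.0 / (1.0 + np.exp(-lam * v))
    s = x @ a; h = f(s)                                   # hidden layer
    y = f(h @ b)                                          # output layer
    err = y - target                                      # error signal at the output
    delta_out = err * lam * y * (1.0 - y)                 # output-layer delta
    delta_hid = (delta_out @ b.T) * lam * h * (1.0 - h)   # propagated to the hidden layer
    b_new = b - lr * h.T @ delta_out                      # correct hidden-to-output weights
    a_new = a - lr * x.T @ delta_hid                      # correct input-to-hidden weights
    return a_new, b_new, float(np.mean(err ** 2))

rng = np.random.default_rng(5)
a = rng.normal(scale=0.1, size=(20, 50))
b = rng.normal(scale=0.1, size=(50, 1))
x, t = rng.normal(size=(1, 20)), np.array([[0.7]])
for _ in range(200):
    a, b, mse = backprop_step(x, t, a, b)
print(round(mse, 6))   # the squared error shrinks toward the acceptable range
```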
In the template matching method, the feature abstraction algorithm abstracts the data to be matched into the required template; the template data is then matched against the original massive data to obtain the data information that matches. In the matching process, a very important parameter that determines matching quality is the mean absolute difference, which is a standard value for calculating the similarity of a template match. Using formula (14), the set data template can be matched quickly and data information similar to the template data can be obtained.
Here a_{i,j} is the data to be matched and b_{i,j} is the set template data. When matching massive data, the data can be divided into many small data blocks for processing; S × K is the total size (data volume and dimension) of each data block in the template data.
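Formula (14) itself is not reproduced in this text; the following Python sketch assumes the usual definition of the mean absolute difference, i.e. the absolute differences between a_{i,j} and b_{i,j} averaged over the S × K entries of the block.

```python
import numpy as np

def mean_absolute_difference(a, b):
    """Assumed form of formula (14): mean of |a[i, j] - b[i, j]| over S x K entries."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    s, k = a.shape                      # S x K: size of the data block
    return float(np.sum(np.abs(a - b)) / (s * k))

block = np.array([[1.0, 2.0], [3.0, 4.0]])
template = np.array([[1.1, 1.9], [3.2, 4.1]])
print(mean_absolute_difference(block, template))   # 0.125
```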
The calculation formula of the mean absolute difference for template matching is then modified using polar coordinates, converting the data into data information in polar coordinates,
where a(ρ, θ) is the data to be matched and b(ρ, θ) is the set template data. Template data and matched data of different types are continuously corrected according to the starting data information, with the following formulas:
a′(ρ, θ) = a(ρ+1, θ) - a(ρ, θ), ρ = 0, 1, …, r-1    (16)
b′(ρ, θ) = b(ρ+1, θ) - b(ρ, θ), ρ = 0, 1, …, r-1    (17)
Here a′(ρ, θ) is the differential value of the data to be matched and b′(ρ, θ) is the differential value of the template data set; the differential values are obtained by differentiating with the polar radius as the parameter. Because the input data is diverse and complex when calculating the absolute difference, the differential method is needed to calculate the degree of data matching accurately. In this way, template-matching data with higher accuracy can be obtained, the algorithm converges more stably, and the number of learning iterations can be significantly reduced.
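A minimal sketch of the polar-coordinate variant follows: the radial differentials of formulas (16) and (17) are computed directly, while the criterion of formula (15), which is not reproduced here, is assumed to be the mean absolute difference of those differentials. Storing the data as an array indexed by (ρ, θ) is also an assumption of the sketch.

```python
import numpy as np

def radial_differential(g):
    # formulas (16)-(17): g'(ρ, θ) = g(ρ+1, θ) - g(ρ, θ) for ρ = 0, 1, ..., r-1
    return g[1:, :] - g[:-1, :]

def polar_mad(a_polar, b_polar):
    """Assumed polar-coordinate criterion: mean absolute difference of the
    radial differentials of the data to be matched and of the set template."""
    da, db = radial_differential(a_polar), radial_differential(b_polar)
    return float(np.mean(np.abs(da - db)))

rng = np.random.default_rng(6)
a = rng.normal(size=(16, 36))              # 16 radii, 36 angular samples (illustrative)
b = a + 0.01 * rng.normal(size=a.shape)    # a closely matching template
print(round(polar_mad(a, b), 4))           # small value: a and b match closely
```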
The template matching method for a neural network based on the sliding-tooth method was analyzed in simulation on an emulation platform and compared experimentally with the traditional template matching algorithm IEBP.
The experimental environment of the simulation analysis is a PC with a 3.2 GHz CPU and 4 GB of memory, and the software environment is an implementation of the algorithm programmed on Eclipse 4.3.2. The structure of the EBP algorithm is set to 20-50-1 (input layer - hidden layer - output layer); for each sample, the initial threshold varies randomly in [1, 2]. Through repeated learning of 10000 groups of data, the optimal threshold, connection weights and learning index of the algorithm structure are found, and the optimal solution is applied to the template matching.
The convergence count, learning count, matching accuracy and running time of the algorithm are used as its performance evaluation indices. When one group of data among the large data set has been matched once, this is called one iteration; if the error is within the set range, it is judged as one convergence, and under the same conditions a higher convergence count means higher algorithm accuracy. Obtaining one near-optimal solution is called one learning; under the same number of algorithm runs, the fewer the learning count, the higher the efficiency of the algorithm. For the same data volume, the matching accuracy equals the ratio of the number of correct matches to the number of matches, and a higher value means higher accuracy.
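For completeness, the accuracy index defined above can be written as a one-line helper; the counts in the usage line are illustrative, not experimental results.

```python
def matching_accuracy(correct_matches: int, total_matches: int) -> float:
    """Ratio of correctly matched runs to the total number of matching runs."""
    return correct_matches / total_matches

print(matching_accuracy(9, 10))   # illustrative counts only, e.g. 0.9
```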
For 10000 randomly generated groups of data, under the given initial threshold, connection weights and learning index, the variables are dynamically adjusted, the improved algorithm is run multiple times, and the mean convergence count of the algorithm experiments is observed, giving the simulation results of Fig. 3 to Fig. 6.
Fig. 3 shows the mean convergence count obtained by the method of the present embodiment after 30 runs when the threshold varies randomly within [0.5, 21.5]. As the threshold changes continuously, better convergence counts are obtained on the interval [0.95, 1.1], though small fluctuations remain; when the threshold is 1, the maximum convergence count is obtained, after which a gradually declining trend appears. Therefore, when the threshold is 1, the mean convergence count obtained by the method of the present embodiment, MTA-IEBP, has a larger advantage than at other settings.
In Fig. 4, as the data volume keeps increasing, the method of the present embodiment does not show large fluctuations in the matching process, so the convergence count of the algorithm stays at a relatively stable level and does not decrease as the network load increases. Compared with IEBP, the method of the present embodiment not only obtains a higher convergence count but also guarantees the stability and reliability of the algorithm as the data load increases.
In Fig. 5, as the data volume keeps increasing, the learning count of IEBP fluctuates considerably, whereas the method of the present embodiment, having deleted redundant neurons, can satisfy the originally set error range with a smaller learning count.
In Fig. 6, because the method of the present embodiment determines the optimal learning index, the algorithm does not crash into a local minimum, so it can stably achieve an accuracy above 90%, which greatly improves the matching accuracy; compared with IEBP it also has better stability, i.e. the range in which the accuracy fluctuates is small.
The technical means disclosed in the embodiments of the present invention are not limited to the technical means disclosed in the above embodiments, but also include technical solutions consisting of any combination of the above technical features.

Claims (2)

1. A template matching method for a neural network based on the sliding-tooth method, characterized by comprising the following steps:
Step 1: according to the construction rules of an error back-propagation neural network, divide the network structure into an input layer, a hidden layer and an output layer;
Step 2: set a first error range, a maximum number of training iterations and a second error range in the hidden layer and initialize them, where the first error range is the acceptable error range of the neural network and the second error range is the acceptable error range of the template matching value;
Step 3: according to a data-blocking mechanism, perform similarity detection on the input data and divide it into multiple data blocks;
Step 4: match the processed data using the sliding-tooth method, wherein in the network training stage the output signal is compared with the desired output to obtain an error signal, the error signal is fed back step by step by the sliding-tooth method from the output layer to the hidden layer and the input layer, and the network weights of the hidden layer and the input layer are dynamically corrected;
Step 5: judge whether the error of the network nodes and the error of the template matching fall within the first error range and the second error range respectively, or whether the error of the template matching is within the second error range and the maximum number of training iterations has been reached; if so, output the result; otherwise execute step 6;
Step 6: correct the network weights of the input layer;
Step 7: repeat steps 4 to 6 until a result is output.
2. The template matching method for a neural network based on the sliding-tooth method according to claim 1, characterized in that the detailed process of step 3 is: using a similarity detection algorithm, calculate the absolute error between corresponding points of the input node data and the template, and accumulate the absolute errors of the input data points until the sum exceeds a given threshold; then stop accumulating and compute the similarity mean of this data block; when the similarity mean is greater than the expected value, stop segmenting the data.
CN201610035536.6A 2016-01-19 2016-01-19 Template matching method for neural networks based on the sliding-tooth method Expired - Fee Related CN105740950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610035536.6A CN105740950B (en) 2016-01-19 2016-01-19 Template matching method for neural networks based on the sliding-tooth method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610035536.6A CN105740950B (en) 2016-01-19 2016-01-19 Template matching method for neural networks based on the sliding-tooth method

Publications (2)

Publication Number Publication Date
CN105740950A CN105740950A (en) 2016-07-06
CN105740950B true CN105740950B (en) 2019-03-29

Family

ID=56247495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610035536.6A Expired - Fee Related CN105740950B (en) 2016-01-19 2016-01-19 Template matching method for neural networks based on the sliding-tooth method

Country Status (1)

Country Link
CN (1) CN105740950B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686385B (en) * 2016-12-30 2018-09-25 平安科技(深圳)有限公司 Video compress sensing reconstructing method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951833A (en) * 2015-07-02 2015-09-30 上海电机学院 Neutral network and learning method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020077756A1 (en) * 1999-11-29 2002-06-20 Scott Arouh Neural-network-based identification, and application, of genomic information practically relevant to diverse biological and sociological problems, including drug dosage estimation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951833A (en) * 2015-07-02 2015-09-30 上海电机学院 Neutral network and learning method thereof

Also Published As

Publication number Publication date
CN105740950A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
US10984308B2 (en) Compression method for deep neural networks with load balance
US20180046915A1 (en) Compression of deep neural networks with proper use of mask
Hinton Deterministic Boltzmann learning performs steepest descent in weight-space
Chen et al. FedSA: A staleness-aware asynchronous federated learning algorithm with non-IID data
CN109284406B (en) Intention identification method based on difference cyclic neural network
KR102413028B1 (en) Method and device for pruning convolutional neural network
CN103593538A (en) Fiber optic gyroscope temperature drift modeling method by optimizing dynamic recurrent neural network through genetic algorithm
CN111310965A (en) Aircraft track prediction method based on LSTM network
CN109472345A (en) A kind of weight update method, device, computer equipment and storage medium
CN114330644B (en) Neural network model compression method based on structure search and channel pruning
CN109062040B (en) PID (proportion integration differentiation) predicting method based on system nesting optimization
Song et al. Data-driven finite-horizon optimal tracking control scheme for completely unknown discrete-time nonlinear systems
CN106896724B (en) Tracking system and tracking method for sun tracker
CN106407932B (en) Handwritten Digit Recognition method based on fractional calculus Yu generalized inverse neural network
CN109886405A (en) It is a kind of inhibit noise based on artificial neural network structure's optimization method
CN105740950B (en) The template matching method of neural network based on slip teeth method
CN117523291A (en) Image classification method based on federal knowledge distillation and ensemble learning
CN107273971B (en) Feed-forward neural network structure self-organization method based on neuron significance
CN115793456A (en) Lightweight sensitivity-based power distribution network edge side multi-mode self-adaptive control method
CN106803233B (en) The optimization method of perspective image transformation
CN104537224A (en) Multi-state system reliability analysis method and system based on self-adaptive learning algorithm
CN115751441A (en) Heat supply system heating station heat regulation method and system based on secondary side flow
CN115601578A (en) Multi-view clustering method and system based on self-walking learning and view weighting
AU2019101145A4 (en) Method for determining ore grade using artificial neural network in a reserve estimation
CN113777965A (en) Spraying quality control method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 66, New Model Road, Gulou District, Nanjing City, Jiangsu Province, 210000

Applicant after: NANJING University OF POSTS AND TELECOMMUNICATIONS

Address before: 210023 9 Wen Yuan Road, Ya Dong new town, Nanjing, Jiangsu.

Applicant before: NANJING University OF POSTS AND TELECOMMUNICATIONS

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200421

Address after: 610000 no.1402, block a, No.199, Tianfu 4th Street, Chengdu high tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu

Patentee after: Chengdu Star Innovation Technology Co.,Ltd.

Address before: 210000, 66 new model street, Gulou District, Jiangsu, Nanjing

Patentee before: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Template matching method for neural networks based on the sliding-tooth method

Effective date of registration: 20220526

Granted publication date: 20190329

Pledgee: Industrial Bank Limited by Share Ltd. Chengdu branch

Pledgor: Chengdu Star Innovation Technology Co.,Ltd.

Registration number: Y2022510000141

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190329

CF01 Termination of patent right due to non-payment of annual fee