CN105740950B - Template matching method for a neural network based on the sliding-tooth method - Google Patents

Template matching method for a neural network based on the sliding-tooth method

Info

Publication number
CN105740950B
CN105740950B (application CN201610035536.6A)
Authority
CN
China
Prior art keywords
error
data
template matching
error range
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610035536.6A
Other languages
Chinese (zh)
Other versions
CN105740950A (en)
Inventor
王堃
张明翔
岳东
孙雁飞
吴蒙
亓晋
陈思光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Star Innovation Technology Co ltd
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201610035536.6A priority Critical patent/CN105740950B/en
Publication of CN105740950A publication Critical patent/CN105740950A/en
Application granted granted Critical
Publication of CN105740950B publication Critical patent/CN105740950B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract


The invention discloses a template matching method for a neural network based on the sliding-tooth method, comprising the following steps: divide the network structure into an input layer, a hidden layer, and an output layer according to the construction rules of an error back-propagation neural network; set and initialize a first error range, a maximum number of training iterations, and a second error range in the hidden layer; according to the data-blocking mechanism, split the input data into multiple data blocks after similarity detection; match the processed data using the sliding-tooth method; judge whether the error of the network nodes and the template-matching error fall within the first error range and the second error range respectively, or whether the template-matching error is within the second error range and the maximum number of training iterations has been reached; if so, output the result; otherwise, correct the sliding-tooth weights and repeat the steps above until a result is output. The invention further improves template-matching accuracy, running time, and algorithm stability.

Description

Template matching method for a neural network based on the sliding-tooth method
Technical field
The present invention relates to the field of big data processing, and in particular to a template matching method for a neural network based on the sliding-tooth method.
Background art
The rapid development of emerging technologies such as cloud computing and the Internet of Things has driven data volumes to grow at an unprecedented rate; the era of big data has arrived. In massive, complex, and highly redundant data, however, it is difficult to obtain valuable information quickly and effectively. What is needed is a way to find valuable information fast and accurately, extract it, and organize it into an effective solution. Neural-network-based template matching for big data analyzes and classifies data intelligently: an information template is defined according to known requirements, and the algorithm then matches the template against massive data to extract effective information quickly. Most current template-matching algorithms, however, search in order of event importance. Because a node must record multiple identical events to keep the sequence consistent, it has to process similar information repeatedly, which places great pressure on computation and caching and makes these algorithms unsuitable for scenarios with high real-time requirements.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes a template matching method for a neural network based on the sliding-tooth method, which uses a weight-corrected sliding-tooth error back-propagation neural network template matching algorithm to classify input data, learn, and match the data against template data.
The template matching method for a neural network based on the sliding-tooth method comprises the following steps:
Step 1: divide the network structure into an input layer, a hidden layer, and an output layer according to the construction rules of an error back-propagation neural network.
Step 2: set and initialize, in the hidden layer, a first error range, a maximum number of training iterations, and a second error range; the first error range is the acceptable error range of the neural network, and the second error range is the acceptable error range of the template-matching value.
Step 3: according to the data-blocking mechanism, split the input data into multiple data blocks after similarity detection.
Step 4: match the processed data using the sliding-tooth method.
Step 5: judge whether the error of the network nodes and the template-matching error fall within the first error range and the second error range respectively, or whether the template-matching error is within the second error range and the maximum number of training iterations has been reached; if so, output the result; otherwise, execute step 6.
Step 6: correct the network weights of the input layer.
Step 7: repeat steps 4 to 6 until a result is output.
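As a hedged illustration, the seven steps above can be sketched as the following training loop. This is a minimal sketch under stated assumptions: the patent only outlines the sliding-tooth weight correction, so the weight update here is a plain gradient step standing in for it, and the error thresholds, network size, and scalar matching target are illustrative, not values from the patent.

```python
import numpy as np

def train_template_matcher(blocks, template, n_hidden=50,
                           err1=1e-3, err2=1e-2, max_iters=1000, lr=0.1):
    """Steps 1-7 sketch: a one-hidden-layer BP network trained until the
    node error is within err1 and the matching error within err2, or the
    matching error is within err2 and max_iters is reached."""
    rng = np.random.default_rng(0)
    n_in = blocks.shape[1]
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))    # input -> hidden
    W2 = rng.normal(scale=0.1, size=(n_hidden, 1))       # hidden -> output

    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
    target = template.mean()                             # illustrative scalar target

    for it in range(1, max_iters + 1):
        h = sigmoid(blocks @ W1)                         # step 4: forward pass
        y = (h @ W2).ravel()
        node_err = np.abs(target - y).mean()             # network-node error
        match_err = np.abs(blocks.mean(axis=1) - y).mean()  # template-matching error

        # step 5: stopping conditions
        if (node_err < err1 and match_err < err2) or \
           (match_err < err2 and it == max_iters):
            return y, it

        # step 6: correct weights (gradient step standing in for sliding-tooth)
        delta_out = (y - target)[:, None]                # output-layer error signal
        W2 -= lr * h.T @ delta_out / len(blocks)
        delta_hid = (delta_out @ W2.T) * h * (1 - h)     # signal fed back to hidden layer
        W1 -= lr * blocks.T @ delta_hid / len(blocks)    # step 7 via the loop
    return y, max_iters
```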
The detailed process of step 3 is as follows: using a similarity detection algorithm, compute the absolute error between the input node data and the corresponding points in the template, and accumulate the absolute errors of the input data points until the sum exceeds a set threshold; then stop accumulating and compute the similarity mean of this data block. When the similarity mean exceeds the expected value, stop splitting the data.
The detailed process of step 4 is as follows: in the network training stage, the output signal is compared with the desired output to obtain an error signal; the error signal is fed back stage by stage from the output layer to the hidden layer and the input layer, and the weights of the hidden layer and the input layer are corrected dynamically.
The present invention performs similarity detection on massive data to determine the size of the split data blocks; uses the weight-corrected sliding-tooth error back-propagation neural network template matching algorithm to classify, learn, and match the input data against template data; and uses the mean absolute difference to determine the degree of data matching and obtain the template-matched data. This accelerates the matching process in both speed and precision and improves the stability of the algorithm: template-matching accuracy is further improved, and running time and algorithm stability are enhanced.
Description of the drawings
Fig. 1 is the network model of template matching for a neural network based on the sliding-tooth method;
Fig. 2 is the flowchart of an embodiment of the template matching method;
Fig. 3 plots threshold value against convergence count for the embodiment of Fig. 2;
Fig. 4 plots connection weight against convergence count for the embodiment of Fig. 2;
Fig. 5 plots learning index against convergence count for the embodiment of Fig. 2;
Fig. 6 plots convergence count against iteration count for the embodiment of Fig. 2.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and do not limit it.
In big-data template matching, the data to be matched are first split into modules, which speeds up access to the template data to be matched. The sequential similarity detection algorithm (SSDA) performs similarity detection on the input data, as in formula (1). The absolute error between each input point and the corresponding template point is computed, and the errors of the input data points are accumulated. Whenever the accumulated error exceeds a set threshold, accumulation stops and the similarity mean E of this data block is computed: if E exceeds the set expected value, splitting of this data block ends; while E is below the expected value, the partial data segment is set as one small data block. Processing real-time data in this way yields good results.
Here S̄ and T̄ respectively denote the mean of the matching values and the mean of the comparison values, and S(i, j) and T(i, j) respectively denote the real-time matching result value and the comparison end value.
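The SSDA-style blocking just described can be sketched as follows. This is a minimal sketch: the threshold and expected value are illustrative assumptions (the patent leaves them as set parameters), and the one-dimensional stream stands in for the general input data.

```python
def ssda_split(data, template, threshold=5.0, expected=0.5):
    """Split a 1-D data stream into blocks by accumulating absolute errors
    against the template until the sum exceeds `threshold`; close the block
    when its similarity mean E exceeds the `expected` value."""
    blocks, start, acc = [], 0, 0.0
    for i, x in enumerate(data):
        acc += abs(x - template[i % len(template)])  # absolute error vs. template point
        if acc > threshold:                          # stop accumulating
            E = acc / (i - start + 1)                # similarity mean of this block
            if E > expected:                         # E above expected value: close block
                blocks.append(data[start:i + 1])
                start, acc = i + 1, 0.0
    if start < len(data):
        blocks.append(data[start:])                  # remaining segment as a small block
    return blocks
```

Run on a ramp against a zero template, the first block closes as soon as the accumulated error clears the threshold, and later (larger-error) points form progressively smaller blocks.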
Template matching is then carried out. The nonlinear mapping property of a general error back-propagation neural network is well suited to mapping large volumes of unstructured, out-of-order data, and the network model is one of the main factors that determines learning performance. This embodiment replaces data packets with numerical information, which creates the conditions for fast parameter correction and learning in the neural network. The input-layer data are set as (x1, x2, x3, ..., xk), 1 ≤ k ≤ n, where n is the number of input-layer nodes; the output-layer data are set as yk = f(xk), where f(xk) denotes the network output function.
Through sample training, the network adjusts the connection weights between neurons to approximate the model parameters, so that the network structure is learned at the same time as the connection weights are adjusted. The error is first calculated with formula (2). The hidden-layer error calculation uses the Sigmoid activation function; the Sigmoid function on the closed interval [0, 1] and its derivative are defined as formula (3), where λ determines the compression degree of the Sigmoid function and of the data; to avoid falling into a local minimum, λ is generally taken as 1.
J=xk-yk (2)
Assume the output function of the j-th node in the network is yk, denoted as formula (4):
In the formula, ωij is the connection weight, h(si) is the output of the i-th hidden-layer neuron, and si is the algebraic sum of its inputs:
In the formula, an is the weight connecting the input unit to the hidden layer, and bm is the connection weight between the hidden layer and the output unit.
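Formulas (3) through (5) amount to a standard one-hidden-layer forward pass; a minimal sketch, with λ = 1 as the text recommends (the function and variable names are illustrative, not from the patent):

```python
import math

def sigmoid(s, lam=1.0):
    """Sigmoid activation of formula (3); lam controls the compression degree."""
    return 1.0 / (1.0 + math.exp(-lam * s))

def forward(x, a, b):
    """Network output of formula (4): hidden sums s_i = sum_n a[n][i] * x[n]
    (formula (5)), hidden outputs h(s_i), then y = sum_i b[i] * h(s_i)."""
    n_hidden = len(b)
    s = [sum(a[n][i] * x[n] for n in range(len(x))) for i in range(n_hidden)]
    h = [sigmoid(si) for si in s]                # hidden-layer neuron outputs
    return sum(b[i] * h[i] for i in range(n_hidden))
```

With zero inputs every hidden unit outputs 0.5, so the network output reduces to half the sum of the output-layer weights.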
The characteristic parameters of the neural network need to be adjusted dynamically, and the sliding-tooth method can adjust the neural network quickly and dynamically over a wide range of applications without introducing additional parameters. However, the sliding-tooth method requires the least-squares adjustment (hereinafter, the adjustment) to follow a normal distribution, which actual conditions are often too complex to satisfy. Moreover, computing the pre-adjustment matrix requires averaging the difference-vector sample values, which raises the question of how much data to select: if too little data is selected, the estimated pre-adjustment matrix has a large error; conversely, if too much is selected, the pre-adjustment matrix estimate cannot reflect the transient response of the system. When the target data have many feature values to learn, both the pre-adjustment and the pre-mean become large, so the amplitude by which the neural network adjusts its weights should increase accordingly.
The state of the network system is approximated from the center position of the neural network and the magnitude of the weights. The state of the discrete system to be optimized is taken as the input state of the network system. Assuming the input data are M-dimensional, the mean of the input can be computed; with the least-squares adjustment Ak, the system state Xi,k of the neural network is calculated by the following formula:
δ is a scalar that determines the spread of Xi,k relative to the mean; its value is generally taken between 0.001 and 1, which facilitates estimating the state of the whole network.
From the laws of total and conditional probability, Xi,k|k-1 = Xi,k-1; the formulas for the system pre-mean and pre-adjustment are then as follows:
where the pre-mean is the value of the mean at step k relative to step k-1, and Ak|k-1 is the value of Ak relative to Ak-1.
When the output error fails to meet the target, the value information of the input data is fed back and modified through back-propagation, dynamically adjusting the learning ability. Each hidden-layer neuron receives feedback only from its own unit; there is no feedback link between nodes, so they do not interfere with each other, and each hidden-layer node receives adjustment information only for certain specific data (data units). Weight correction is an approximation of the trajectory obtained by steepest descent in weight space; the network state updates the weights as in formulas (10) to (13), and the weight correction is carried out with the sliding-tooth method, dynamically adjusting the learning direction of the network and accelerating convergence.
Akk=h (xk)Ak|k-1h(xk)T (10)
Aik=Ak|k-1h(xk)T (11)
The neural network uses an error back-propagation structure. In the training stage, the BP network's output signal is compared with the desired output to obtain an error signal, and the sliding-tooth method corrects the network connection weights layer by layer from the output layer through each intermediate layer. Mainly from the angle of compensation, the mapping approximation ability and self-learning ability of the neural network are used to adjust the network weights continually and so correct the result, improving the computational accuracy and convergence speed of the template-matching algorithm.
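The layer-by-layer correction just described is, in effect, back-propagation of the error signal; a minimal sketch of one correction step follows. The sliding-tooth-specific terms of formulas (10) to (13) are not fully reproducible from the text, so a plain gradient correction stands in for them, and `lr` is an illustrative learning rate.

```python
import numpy as np

def correct_weights(x, d, W1, W2, lr=0.5):
    """One BP correction step: compare the output with the desired output d,
    then feed the error signal back from the output layer through the hidden
    layer, correcting W2 (hidden->output) and W1 (input->hidden)."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))      # hidden-layer output
    y = W2 @ h                               # network output signal
    e = y - d                                # error signal at the output layer
    grad_W2 = np.outer(e, h)                 # output-layer correction
    e_hidden = (W2.T @ e) * h * (1 - h)      # error fed back to the hidden layer
    grad_W1 = np.outer(e_hidden, x)          # input-layer correction
    return W1 - lr * grad_W1, W2 - lr * grad_W2
```

Iterating this step drives the output toward the desired value, which is the "continual adjustment to correct the result" described above.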
In the template matching method, the feature-abstraction algorithm abstracts the data to be matched into the required template, and the template data are then matched against the original mass data to obtain the mutually matched data information. In the matching process, a critical parameter for judging match quality is the mean absolute difference, a standard measure of template-matching similarity. Using formula (14), the set data template can be matched rapidly to obtain the data information similar to the template data.
Here ai,j is the template data required and bi,j is the set template data. When matching mass data, the data can be divided into many small data blocks for processing; S × K is the total data volume and dimension of each data block in the template data.
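The mean absolute difference of formula (14) can be sketched directly, normalizing by the S × K block size as the text describes (the list-of-lists block representation is an illustrative assumption):

```python
def mean_abs_diff(a, b):
    """Mean absolute difference (formula (14)) between a data block `a` and a
    set template block `b`, both S x K lists; lower values mean a better match."""
    S, K = len(a), len(a[0])
    total = sum(abs(a[i][j] - b[i][j]) for i in range(S) for j in range(K))
    return total / (S * K)  # normalize by block volume S x K
```

A perfect match yields 0; the value grows with the average per-point discrepancy.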
The mean-absolute-difference formula for template matching is then recast in polar coordinates, converting the data inside it into polar-coordinate data information:
where a(ρ, θ) is the template data required and b(ρ, θ) is the set template data. Different types of template data and matched data are continually corrected according to the starting data information, as follows:
a′(ρ, θ) = a(ρ+1, θ) − a(ρ, θ), ρ = 0, 1, ..., r−1 (16)
b′(ρ, θ) = b(ρ+1, θ) − b(ρ, θ), ρ = 0, 1, ..., r−1 (17)
where a′(ρ, θ) is the differential value of the required template data and b′(ρ, θ) is the differential value of the data set to be matched; the differential is taken with respect to the polar radius as parameter. Because the input data are diverse and complex, the differential method is needed to compute the degree of data matching accurately when calculating the absolute difference. In this way, template-matched data of higher accuracy are obtained, the algorithm converges more stably, and the number of learning iterations is significantly reduced.
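Formulas (16) and (17) differentiate along the polar radius before matching; a sketch, assuming the data are already sampled on an r × nθ polar grid (the grid representation and function names are illustrative):

```python
def radial_diff(p):
    """Forward difference along the polar radius, formulas (16)-(17):
    p'[rho][theta] = p[rho+1][theta] - p[rho][theta]."""
    return [[p[rho + 1][t] - p[rho][t] for t in range(len(p[0]))]
            for rho in range(len(p) - 1)]

def polar_mad(a, b):
    """Mean absolute difference of the radial differentials of template a
    and matched data b, per the modified matching criterion."""
    da, db = radial_diff(a), radial_diff(b)
    n = len(da) * len(da[0])
    return sum(abs(da[i][j] - db[i][j])
               for i in range(len(da)) for j in range(len(da[0]))) / n
```

Note that differencing along the radius makes the criterion invariant to a constant offset between a and b, which is consistent with the text's claim of more stable convergence.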
The template matching method for a neural network based on the sliding-tooth method was analyzed in simulation and compared experimentally with the traditional template matching algorithm IEBP.
The simulation environment was a PC with a 3.2 GHz CPU and 4 GB of memory; the software environment was Eclipse 4.3.2, in which the algorithm was implemented. The EBP algorithm structure was set to 20-50-1 (input layer-hidden layer-output layer); for each sample, the initial threshold varied randomly within [1, 2]. Through repeated learning on 10000 groups of data, the optimal threshold, connection weights, and learning index of the algorithm structure were found, and the optimal solution was applied to template matching.
The convergence count, learning count, matching accuracy, and running time of the algorithm serve as its performance indicators. When one group of data in the mass data passes one matching, this is called one iteration; if the error is within the set range, it counts as one convergence, and under identical conditions a higher convergence count means higher accuracy. Obtaining a near-optimal solution is called one learning; for the same number of algorithm runs, fewer learnings mean higher efficiency. For the same data volume, the matching accuracy equals the ratio of correct matches to total matches; the higher the value, the higher the accuracy.
For 10000 randomly generated groups of data, the improved algorithm was run repeatedly while the given initial threshold, connection weights, and learning index were adjusted dynamically, and the mean convergence count was observed, yielding the simulation results of Figs. 3 to 6.
Fig. 3 shows the mean convergence count obtained by the present method over 30 runs as the threshold varies randomly within [0.5, 21.5]. As the threshold varies, good convergence counts are obtained on the interval [0.95, 1.1], though small fluctuations remain; the maximum convergence count is obtained at a threshold of 1, after which a gradual decline appears. Therefore, at a threshold of 1 the present method, MTA-IEBP, obtains a mean convergence count with a large advantage over other settings.
In Fig. 4, as the data volume keeps increasing, the present method shows no large fluctuation during matching, so its convergence count stays at a stable level and does not decrease as the network load grows. Compared with IEBP, the present method not only obtains more convergences but also guarantees the stability and reliability of the algorithm as the data load increases.
In Fig. 5, as the data volume keeps increasing, the learning count of IEBP fluctuates considerably, whereas the present method, having deleted redundant neurons, satisfies the original error range with fewer learning iterations.
In Fig. 6, because the present method determines the optimal learning index, the algorithm does not crash into a local minimum and can therefore stably achieve accuracy above 90%, greatly improving matching accuracy; compared with IEBP it also has better stability, i.e., the accuracy fluctuates within a small range.
The technical means disclosed in the embodiments of the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions consisting of any combination of the above technical features.

Claims (2)

1. A template matching method for a neural network based on the sliding-tooth method, characterized by comprising the following steps:
Step 1: divide the network structure into an input layer, a hidden layer, and an output layer according to the construction rules of an error back-propagation neural network;
Step 2: set and initialize, in the hidden layer, a first error range, a maximum number of training iterations, and a second error range, wherein the first error range is the acceptable error range of the neural network and the second error range is the acceptable error range of the template-matching value;
Step 3: according to the data-blocking mechanism, split the input data into multiple data blocks after similarity detection;
Step 4: match the processed data using the sliding-tooth method, wherein, in the network training stage, the output signal is compared with the desired output to obtain an error signal, the error signal is fed back stage by stage from the output layer to the hidden layer and the input layer through the sliding-tooth method, and the network weights of the hidden layer and the input layer are corrected dynamically;
Step 5: judge whether the error of the network nodes and the template-matching error fall within the first error range and the second error range respectively, or whether the template-matching error is within the second error range and the maximum number of training iterations has been reached; if so, output the result; otherwise, execute step 6;
Step 6: correct the network weights of the input layer;
Step 7: repeat steps 4 to 6 until a result is output.
2. The template matching method for a neural network based on the sliding-tooth method according to claim 1, characterized in that the detailed process of step 3 is: using a similarity detection algorithm, compute the absolute error between the input node data and the corresponding points in the template, accumulate the absolute errors of the input data points until the sum exceeds a set threshold, then stop accumulating and compute the similarity mean of this data block; when the similarity mean exceeds the expected value, stop splitting the data.
CN201610035536.6A 2016-01-19 2016-01-19 The template matching method of neural network based on slip teeth method Expired - Fee Related CN105740950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610035536.6A CN105740950B (en) 2016-01-19 2016-01-19 The template matching method of neural network based on slip teeth method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610035536.6A CN105740950B (en) 2016-01-19 2016-01-19 The template matching method of neural network based on slip teeth method

Publications (2)

Publication Number Publication Date
CN105740950A CN105740950A (en) 2016-07-06
CN105740950B true CN105740950B (en) 2019-03-29

Family

ID=56247495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610035536.6A Expired - Fee Related CN105740950B (en) 2016-01-19 2016-01-19 The template matching method of neural network based on slip teeth method

Country Status (1)

Country Link
CN (1) CN105740950B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686385B (en) 2016-12-30 2018-09-25 平安科技(深圳)有限公司 Video compress sensing reconstructing method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951833A (en) * 2015-07-02 2015-09-30 上海电机学院 Neutral network and learning method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020077756A1 (en) * 1999-11-29 2002-06-20 Scott Arouh Neural-network-based identification, and application, of genomic information practically relevant to diverse biological and sociological problems, including drug dosage estimation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951833A (en) * 2015-07-02 2015-09-30 上海电机学院 Neutral network and learning method thereof

Also Published As

Publication number Publication date
CN105740950A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN110880036B (en) Neural network compression method, device, computer equipment and storage medium
US10762426B2 (en) Multi-iteration compression for deep neural networks
US10984308B2 (en) Compression method for deep neural networks with load balance
KR102413028B1 (en) Method and device for pruning convolutional neural network
Qiao et al. Constructive algorithm for fully connected cascade feedforward neural networks
CN108490965A (en) Rotor craft attitude control method based on Genetic Algorithm Optimized Neural Network
CN110232448A (en) It improves gradient and promotes the method that the characteristic value of tree-model acts on and prevents over-fitting
CN109472345A (en) A kind of weight update method, device, computer equipment and storage medium
Song et al. Data-driven finite-horizon optimal tracking control scheme for completely unknown discrete-time nonlinear systems
Putra et al. Estimation of parameters in the SIR epidemic model using particle swarm optimization
EP3948677A1 (en) Residual semi-recurrent neural networks
CN110059439A (en) A kind of spacecraft orbit based on data-driven determines method
CN114330644A (en) Neural network model compression method based on structure search and channel pruning
CN105469142A (en) Neural network increment-type feedforward algorithm based on sample increment driving
CN116680969A (en) Filler evaluation parameter prediction method and device for PSO-BP algorithm
Waheeb et al. Nonlinear autoregressive moving-average (narma) time series forecasting using neural networks
CN114282478A (en) A Method for Correcting the Dot Product Error of Variable Resistor Device Array
CN116432539A (en) A method, system, device and medium for time-coherent cooperative guidance
CN105740950B (en) The template matching method of neural network based on slip teeth method
CN110083676B (en) Short text-based field dynamic tracking method
CN115793456A (en) Lightweight sensitivity-based power distribution network edge side multi-mode self-adaptive control method
JP2023550921A (en) Weight-based adjustment in neural networks
CN112069370B (en) Neural network structure search method, device, medium and equipment
CN113051820A (en) Cross-basin pneumatic parameter simulation method based on convolutional neural network
CN108960406B (en) MEMS gyroscope random error prediction method based on BFO wavelet neural network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 66, New Model Road, Gulou District, Nanjing City, Jiangsu Province, 210000

Applicant after: NANJING University OF POSTS AND TELECOMMUNICATIONS

Address before: 210023 9 Wen Yuan Road, Ya Dong new town, Nanjing, Jiangsu.

Applicant before: NANJING University OF POSTS AND TELECOMMUNICATIONS

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200421

Address after: 610000 no.1402, block a, No.199, Tianfu 4th Street, Chengdu high tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu

Patentee after: Chengdu Star Innovation Technology Co.,Ltd.

Address before: 210000, 66 new model street, Gulou District, Jiangsu, Nanjing

Patentee before: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Template matching method of neural network based on sliding tooth method

Effective date of registration: 20220526

Granted publication date: 20190329

Pledgee: Industrial Bank Limited by Share Ltd. Chengdu branch

Pledgor: Chengdu Star Innovation Technology Co.,Ltd.

Registration number: Y2022510000141

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190329

CF01 Termination of patent right due to non-payment of annual fee