CN110390386B - Sensitive long-short term memory method based on input change differential


Info

Publication number
CN110390386B
Authority
CN
China
Prior art keywords
input
function
time
information
tanh
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910572594.6A
Other languages
Chinese (zh)
Other versions
CN110390386A (en)
Inventor
胡凯
郑翡
夏旻
翁理国
张彦雯
王文晋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2019-06-28
Publication date: 2022-07-29
Application filed by Nanjing University of Information Science and Technology
Priority to CN201910572594.6A
Publication of CN110390386A
Application granted
Publication of CN110390386B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Abstract

The invention discloses a sensitive long-short term memory method based on the differential of input change. To improve the response of the traditional LSTM neural network to short-term information, a neural unit with increased information sensitivity is added to the long-short term memory network. This markedly increases the response capability to short-term information and improves the real-time applicability of the long-short term memory network, so that more complete real-time analysis can be carried out, content such as micro-actions can be analyzed, and the application value is raised.

Description

Sensitive long-short term memory method based on input change differential
Technical Field
The invention relates to the field of long-short term memory networks, and in particular to a sensitive long-short term memory method based on the differential of input change.
Background
Artificial intelligence is one of the three important disciplines of the 21st century and an important support for national science, the economy and people's livelihood. The long-short term memory network (LSTM) is an important memory-based recognition algorithm; it has achieved recognition results in many areas, including semantics, actions and text, and has good value.
The existing long-short term memory network still has one main problem: the long-short term memory mechanism improves the ability to analyze information over the long time sequence of a whole video, but the network has no response capability for short-term information. Consequently, the existing long-short term memory network can only be used for after-the-fact analysis and cannot achieve good real-time performance or recognize content such as micro-actions.
If the structure of the long-short term memory network can be adjusted to improve its response capability to short-term information and its real-time applicability, then real-time analysis can be performed well, content such as micro-actions can be analyzed, and the application value of the long-short term memory network is further improved.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a sensitive long-short term memory method based on the differential of input change, addressing the defects described in the background art.
The invention adopts the following technical scheme to solve the above technical problem:
The sensitive long-short term memory method based on the differential of input change comprises the following specific steps:
Step 1), establish a neural unit of an LSTM neural network, the neural unit containing three structures: an input gate i_t, a forget gate f_t and an output gate o_t; for each step t, the corresponding input sequence is X = {x_1, x_2, ..., x_t};
Step 2), determining information needing to be discarded from the state of the nerve unit through a forgetting gate:
let the last time output value be h t-1 Input value x at the present time t H is to be t-1 And x t Inputting the value into a Sigmoid function to obtain a value output to a unit state between 0 and 1, wherein 0 represents that all information is forgotten, 1 represents that all information is reserved, and the value is multiplied by the unit state to determine discarded information; output value f of forgetting gate t The calculation formula of (2) is as follows:
f t =σ(w f *[h t-1 ,x t ]+b f )
wherein, w f 、b f Respectively are a weight matrix and a bias vector in a forgetting gate Sigmoid function, and sigma is a Sigmoid activation function;
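For illustration, a minimal NumPy sketch of this forget-gate computation is given below. It is not part of the patent text: reading [h_{t-1}, x_t] as vector concatenation is an assumed convention.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(h_prev, x_t, w_f, b_f):
    # f_t = sigma(w_f * [h_{t-1}, x_t] + b_f), with [.,.] read as concatenation
    z = np.concatenate([h_prev, x_t])
    return sigmoid(w_f @ z + b_f)
```

The same pattern, with different weights and activations, realizes the input gate of step 3) and the output gate of step 5).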
Step 3), determine through the input gate which information is stored in the neural unit state:
Feed h_{t-1} and x_t into a Sigmoid function to obtain the output value i_t; feed h_{t-1} and x_t into a tanh function to obtain the output value k_t. i_t and k_t are computed as:
i_t = σ(w_i * [h_{t-1}, x_t] + b_i)
k_t = tanh(w_k * [h_{t-1}, x_t] + b_k)
where w_i and w_k are the weight matrices in the Sigmoid and tanh functions of the input gate, respectively, and b_i and b_k are the corresponding bias vectors;
Step 4), to increase the response capability to short-term information, add a new input x_t - x_{t-1}, i.e. the difference between the input at the current time and the input at the previous time, to the cell state. Feed x_t - x_{t-1} and h_{t-1} into a Sigmoid function to obtain the output value j_t, and feed x_t - x_{t-1} and h_{t-1} into a tanh function to obtain the output value p_t. Multiplying j_t by p_t and adding the product into the cell state increases the network's response to short-term information and improves its real-time performance. j_t and p_t are computed as:
j_t = σ(w_j * [h_{t-1}, x_t - x_{t-1}] + b_j)
p_t = tanh(w_p * [h_{t-1}, x_t - x_{t-1}] + b_p)
where w_j and w_p are the weight matrices of the Sigmoid and tanh functions applied when the new input x_t - x_{t-1} is added to the cell state, and b_j and b_p are the corresponding bias vectors;
Thus the cell state at the current time is obtained as:
C_t = f_t * C_{t-1} + i_t * k_t + j_t * p_t
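Continuing the forget-gate sketch above under the same assumed conventions (NumPy, concatenation for [.,.]), the differential branch and the resulting cell-state update might read:

```python
def differential_branch(h_prev, x_t, x_prev, w_j, b_j, w_p, b_p):
    # The novel branch operates on the input difference x_t - x_{t-1}:
    # j_t = sigma(w_j * [h_{t-1}, x_t - x_{t-1}] + b_j)
    # p_t = tanh (w_p * [h_{t-1}, x_t - x_{t-1}] + b_p)
    z = np.concatenate([h_prev, x_t - x_prev])
    j_t = sigmoid(w_j @ z + b_j)
    p_t = np.tanh(w_p @ z + b_p)
    return j_t, p_t

def cell_state_update(c_prev, f_t, i_t, k_t, j_t, p_t):
    # C_t = f_t * C_{t-1} + i_t * k_t + j_t * p_t   (all products elementwise)
    return f_t * c_prev + i_t * k_t + j_t * p_t
```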
Step 5), determine the output information from the neural unit state through the output gate:
Feed h_{t-1} and x_t into a Sigmoid function to obtain the output value o_t; then process the cell state C_t through a tanh function and multiply the result by o_t to obtain the output value h_t passed to the next time. o_t and h_t are computed as:
o_t = σ(w_o * [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
where w_o and b_o are the weight matrix and bias vector in the output gate Sigmoid function, respectively;
Step 6), learn by adopting the learning algorithm of the LSTM algorithm to complete the sensitive long-short term memory.
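Putting steps 2) to 5) together, one forward step of such a sensitive LSTM cell could be sketched in full as below. This is an illustrative reading of the method, not a prescribed implementation: the hidden size, the random initialization and the concatenation convention are assumptions, and training in step 6) would follow the usual LSTM procedure (e.g. backpropagation through time).

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

class SensitiveLSTMCell:
    """Illustrative forward step of the sensitive LSTM unit (steps 2 to 5)."""

    def __init__(self, input_size, hidden_size, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        def mat():
            # Each gate weight acts on the concatenation [h_{t-1}, v], v of input_size.
            return rng.normal(0.0, 0.1, size=(hidden_size, hidden_size + input_size))
        zeros = lambda: np.zeros(hidden_size)
        self.w_f, self.b_f = mat(), zeros()   # forget gate (step 2)
        self.w_i, self.b_i = mat(), zeros()   # input gate (step 3)
        self.w_k, self.b_k = mat(), zeros()   # candidate values (step 3)
        self.w_j, self.b_j = mat(), zeros()   # differential gate (step 4)
        self.w_p, self.b_p = mat(), zeros()   # differential candidate (step 4)
        self.w_o, self.b_o = mat(), zeros()   # output gate (step 5)

    def step(self, x_t, x_prev, h_prev, c_prev):
        z  = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
        zd = np.concatenate([h_prev, x_t - x_prev])  # [h_{t-1}, x_t - x_{t-1}]
        f_t = sigmoid(self.w_f @ z  + self.b_f)
        i_t = sigmoid(self.w_i @ z  + self.b_i)
        k_t = np.tanh(self.w_k @ z  + self.b_k)
        j_t = sigmoid(self.w_j @ zd + self.b_j)
        p_t = np.tanh(self.w_p @ zd + self.b_p)
        c_t = f_t * c_prev + i_t * k_t + j_t * p_t   # C_t = f_t*C_{t-1} + i_t*k_t + j_t*p_t
        o_t = sigmoid(self.w_o @ z  + self.b_o)
        h_t = o_t * np.tanh(c_t)                     # h_t = o_t * tanh(C_t)
        return h_t, c_t
```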
Compared with the prior art, the invention, by adopting the above technical scheme, has the following technical effects:
Compared with the original classical LSTM method, the invention adds a neural unit of the long-short term memory network with increased information sensitivity. This markedly increases the response capability to short-term information, improves the real-time applicability, enables more complete real-time analysis, allows content such as micro-actions to be analyzed, and raises the application value.
Drawings
Fig. 1 is a structural explanatory diagram of an embodiment of the invention.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings:
The principle of the invention is as follows: the core of the LSTM neural network is an added memory module that learns the current information and extracts the associated information and regularities in the data so as to pass information onward. One neural unit of the LSTM neural network contains three structures: an input gate i_t, a forget gate f_t and an output gate o_t; for each step t, the corresponding input sequence is X = {x_1, x_2, ..., x_t}. To improve the capability of responding to short-term information, the invention adds an input differential sequence with a differentiation-like effect:
ΔX = {x_2 - x_1, x_3 - x_2, ..., x_t - x_{t-1}}
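Concretely, if the input sequence is stacked as an array with one row per time step, the differential sequence can be formed in one call (a sketch; the toy sizes are arbitrary):

```python
import numpy as np

X = np.random.default_rng(0).normal(size=(10, 4))  # toy sequence x_1 ... x_10, 4 features each
dX = np.diff(X, axis=0)                            # rows are x_2 - x_1, x_3 - x_2, ..., x_10 - x_9
```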
The invention provides a neural unit of a long-short term memory network with increased information sensitivity. The state information of the previous node enters at the input c_{t-1}; whenever data enters the neural unit, the corresponding operations determine which information needs to be retained. The key to the network is the cell state, i.e. the horizontal line at the top of the cell in the figure, which passes information from the previous cell to the next.
The invention has two state chains transmitted over time, the state h and the cell state c: h_{t-1} is the output value at the previous time, x_t is the input value at the current time, c_{t-1} is the memorized cell state value at the previous time, and c_t is the cell state value at the current time.
The sensitive long-short term memory method based on the differential of input change then proceeds according to steps 1) to 6) as set out above in the Disclosure of the Invention.
An embodiment of the invention is explained below with reference to its application to recognizing a video of an arm being lifted.
Fig. 1 shows the units j and p of the long-short term memory network with increased information sensitivity.
In this embodiment, the so-called state information c_{t-1} is the state of the neural units of the whole neural network at time t-1, mainly their weight matrices and bias vectors; specifically, it is the state of the whole network at the moment of recognizing the arm-lifting action for the (t-1)-th frame of the arm-lifting video.
In this embodiment, h_{t-1} is typically the result of recognizing the arm-lifting action for the (t-1)-th frame of the arm-lifting video, and x_t is the t-th frame of the arm-lifting video.
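As an illustrative sketch of this embodiment, the SensitiveLSTMCell class sketched in the Disclosure above could be driven by per-frame features of the arm-lifting video roughly as follows. The feature extractor, frame count and layer sizes are hypothetical; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(42)
frames = rng.normal(size=(30, 64))   # stand-in features for 30 frames of the arm-lifting video

cell = SensitiveLSTMCell(input_size=64, hidden_size=32)  # class from the sketch above
h, c = np.zeros(32), np.zeros(32)
x_prev = np.zeros(64)                # assumed convention: zero predecessor for the first frame

for x_t in frames:
    h, c = cell.step(x_t, x_prev, h, c)  # h now mixes long-term and short-term (differential) cues
    x_prev = x_t
# h could then feed a classifier that decides whether the arm-lifting action occurred.
```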
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing further describes the objects, technical solutions and advantages of the invention in detail. It should be understood that the above are only specific embodiments of the invention and are not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (1)

1. A sensitive long-short term memory method based on the differential of input change, characterized by comprising the following steps:
Step 1), establish a neural unit of an LSTM neural network, the neural unit containing three structures: an input gate i_t, a forget gate f_t and an output gate o_t; for each step t, the corresponding input sequence is X = {x_1, x_2, ..., x_t}; the LSTM neural network is used to recognize the arm-lifting action in a video, and the corresponding input sequence consists of frames 1 to t of the video;
Step 2), determine through the forget gate which information is to be discarded from the neural unit state:
Let the output value at the previous time be h_{t-1} and the input value at the current time be x_t. Feed h_{t-1} and x_t into a Sigmoid function to obtain a value between 0 and 1 that is output to the cell state, where 0 means all information is forgotten and 1 means all information is retained; this value is multiplied by the cell state to determine the discarded information. The output value f_t of the forget gate is computed as:
f_t = σ(w_f * [h_{t-1}, x_t] + b_f)
where w_f and b_f are the weight matrix and bias vector in the forget gate Sigmoid function, respectively, and σ is the Sigmoid activation function;
Step 3), determine through the input gate which information is stored in the neural unit state:
Feed h_{t-1} and x_t into a Sigmoid function to obtain the output value i_t; feed h_{t-1} and x_t into a tanh function to obtain the output value k_t. i_t and k_t are computed as:
i_t = σ(w_i * [h_{t-1}, x_t] + b_i)
k_t = tanh(w_k * [h_{t-1}, x_t] + b_k)
where w_i and w_k are the weight matrices in the Sigmoid and tanh functions of the input gate, respectively, and b_i and b_k are the corresponding bias vectors;
Step 4), to increase the response capability to short-term information, add a new input x_t - x_{t-1}, i.e. the difference between the input at the current time and the input at the previous time, to the cell state. Feed x_t - x_{t-1} and h_{t-1} into a Sigmoid function to obtain the output value j_t, and feed x_t - x_{t-1} and h_{t-1} into a tanh function to obtain the output value p_t. Multiplying j_t by p_t and adding the product into the cell state increases the network's response to short-term information and improves its real-time performance. j_t and p_t are computed as:
j_t = σ(w_j * [h_{t-1}, x_t - x_{t-1}] + b_j)
p_t = tanh(w_p * [h_{t-1}, x_t - x_{t-1}] + b_p)
where w_j and w_p are the weight matrices of the Sigmoid and tanh functions applied when the new input x_t - x_{t-1} is added to the cell state, and b_j and b_p are the corresponding bias vectors;
Thus the cell state at the current time is obtained as:
C_t = f_t * C_{t-1} + i_t * k_t + j_t * p_t
Step 5), determine the output information from the neural unit state through the output gate:
Feed h_{t-1} and x_t into a Sigmoid function to obtain the output value o_t; then process the cell state C_t through a tanh function and multiply the result by o_t to obtain the output value h_t passed to the next time. o_t and h_t are computed as:
o_t = σ(w_o * [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
where w_o and b_o are the weight matrix and bias vector in the output gate Sigmoid function, respectively;
Step 6), learn by adopting the learning algorithm of the LSTM algorithm to complete the sensitive long-short term memory.
CN201910572594.6A, filed 2019-06-28: Sensitive long-short term memory method based on input change differential (granted as CN110390386B, Active)

Priority Applications (1)

CN201910572594.6A, priority and filing date 2019-06-28: Sensitive long-short term memory method based on input change differential

Publications (2)

CN110390386A: published 2019-10-29
CN110390386B: published 2022-07-29

Family

Family ID: 68285905

Family Applications (1)

CN201910572594.6A (Active, granted as CN110390386B), priority and filing date 2019-06-28: Sensitive long-short term memory method based on input change differential

Country Status (1)

CN: CN110390386B




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant