CN108875601A - Action identification method and LSTM neural network training method and relevant apparatus - Google Patents

Action identification method and LSTM neural network training method and relevant apparatus

Info

Publication number
CN108875601A
Authority
CN
China
Prior art keywords
neural network
lstm neural
training
improvement
lstm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810548634.9A
Other languages
Chinese (zh)
Inventor
刘栩辰
程云
赵雅倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201810548634.9A priority Critical patent/CN108875601A/en
Publication of CN108875601A publication Critical patent/CN108875601A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses an action identification method, the LSTM neural network training method, system and device it uses, and a computer-readable storage medium. The training method includes: adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network, wherein the second-order derivative term is the second-order derivative of the cell state with respect to time; and obtaining training samples and training the improved LSTM neural network on them, to obtain a trained improved LSTM neural network. Because the forward and back-propagation algorithms contain the second-order derivative of the cell state with respect to time, recognizing action sequences with the improved LSTM neural network preserves the temporal information of the action sequences well and avoids temporal misalignment in the recognition results.

Description

Action identification method and LSTM neural network training method and relevant apparatus
Technical field
The present application relates to the technical field of image processing, and more specifically to an action identification method, the LSTM neural network training method, system and device it uses, and a computer-readable storage medium.
Background technique
In recent years, research on human action recognition has received great attention from industry, with important applications in fields such as video surveillance, gaming and robotics. Efficient action recognition remains very challenging, however. First, different movement speeds cause the same action to fluctuate in time; second, many actions resemble one another, such as a high throw and a wave; finally, differences in height and body shape between people also make recognition difficult. In the prior art, recognizing action sequences with an LSTM neural network can produce temporal misalignment in the recognition results.
Therefore, how to preserve the temporal information of recognized action sequences and avoid temporal misalignment in the recognition results is a problem to be solved by those skilled in the art.
Summary of the invention
The purpose of the present application is to provide an action identification method, the LSTM neural network training method, system and device it uses, and a computer-readable storage medium that preserve the temporal information of recognized action sequences and avoid temporal misalignment in the recognition results.
To achieve the above object, the present application provides an LSTM neural network training method, including:
adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
obtaining training samples, and training the improved LSTM neural network on the training samples, to obtain a trained improved LSTM neural network.
Wherein, the method further includes:
obtaining test samples, and inputting the test samples into the trained improved LSTM neural network to obtain action sequence recognition results;
calculating the average recognition rate of the test samples according to the recognition rate of each frame image in the test samples.
Wherein, obtaining training samples includes:
obtaining raw image data, and performing preprocessing operations on the raw images to obtain the training samples; wherein the preprocessing operations include any one or a combination of flipping, down-sampling and cropping.
Wherein, training the improved LSTM neural network on the training samples includes:
inputting each frame image of the training samples into the improved LSTM neural network, and adjusting the key parameters of the improved LSTM neural network until the recognition rate output by the improved LSTM neural network reaches a preset value, to obtain the trained improved LSTM neural network.
Wherein, adjusting the key parameters of the improved LSTM neural network includes:
adjusting the key parameters of the improved LSTM neural network using cross-validation and a pair-wise algorithm.
Wherein, the key parameters include any one or a combination of the number of epochs, the learning rate and the learning rate decay.
To achieve the above object, the present application provides an action identification method, including:
obtaining raw image data, and performing preprocessing operations on the raw images to obtain samples to be identified;
inputting the samples to be identified into the trained improved LSTM neural network according to claim 1, to obtain action recognition results.
To achieve the above object, the present application provides an LSTM neural network training system, including:
a construction module, configured to add a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and to update the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
a training module, configured to obtain training samples and to train the improved LSTM neural network on them, to obtain a trained improved LSTM neural network.
To achieve the above object, the present application provides an LSTM neural network training device, including:
a memory for storing a computer program;
a processor that, when executing the computer program, implements the steps of the above LSTM neural network training method.
To achieve the above object, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above LSTM neural network training method.
It can be seen from the above scheme that the LSTM neural network training method provided by the present application includes: adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network, wherein the second-order derivative term is the second-order derivative of the cell state with respect to time; and obtaining training samples and training the improved LSTM neural network on them, to obtain a trained improved LSTM neural network.
The LSTM neural network training method provided by the present application improves the original LSTM neural network by adding a second-order derivative term of the cell state with respect to time to the original forward propagation algorithm and modifying the back-propagation algorithm accordingly. Because the forward and back-propagation algorithms contain this second-order derivative, recognizing action sequences with the improved LSTM neural network preserves the temporal information of the action sequences well and avoids temporal misalignment in the recognition results. The present application also discloses an LSTM neural network training system and device, an action identification method and a computer-readable storage medium, which achieve the same technical effect.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a kind of flow chart of LSTM neural network training method disclosed in the embodiment of the present application;
Fig. 2 is the structure chart for the LSTM neural network that training is completed disclosed in the embodiment of the present application;
Fig. 3 is the flow chart of another kind LSTM neural network training method disclosed in the embodiment of the present application;
Fig. 4 is a kind of flow chart of action identification method disclosed in the embodiment of the present application;
Fig. 5 is a kind of structure chart of LSTM neural metwork training system disclosed in the embodiment of the present application;
Fig. 6 is a kind of structure chart of LSTM neural metwork training equipment disclosed in the embodiment of the present application;
Fig. 7 is the structure chart of another kind LSTM neural metwork training equipment disclosed in the embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
The embodiment of the present application discloses an LSTM neural network training method that preserves the temporal information of action sequences and avoids temporal misalignment in the recognition results.
Referring to Fig. 1, a flowchart of an LSTM neural network training method disclosed in an embodiment of the present application; as shown in Fig. 1, the method includes:
S101: adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
In specific implementation, on the basis of the forward propagation algorithm of the original LSTM (Long Short-Term Memory) neural network, a second-order derivative term of the cell state with respect to time is added as shown in Fig. 2, and those skilled in the art can derive the corresponding back-propagation algorithm from the improved forward propagation algorithm. Because the forward and back-propagation algorithms contain the second-order derivative of the cell state with respect to time, the temporal information of action sequences is preserved well and temporal misalignment in the recognition results is avoided.
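The patent does not publish the modified cell equations, so the sketch below is only one plausible reading: a toy single-unit LSTM cell whose state update adds a weighted discrete second-order difference of the recent cell states, a finite-difference stand-in for the second-order derivative of the cell state with respect to time. The scalar weights, the combination `x + h`, and the coefficient `lam` are illustrative assumptions, not the patented formulation.

```python
import math
from collections import deque


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class SecondOrderLSTMCell:
    """Toy single-unit LSTM cell with a hypothetical second-order cell/time term.

    The extra term lam * (c_{t-1} - 2*c_{t-2} + c_{t-3}) is a discrete
    approximation of d^2 c / d t^2, added on top of the usual cell update.
    """

    def __init__(self, wi, wf, wo, wg, lam=0.1):
        self.wi, self.wf, self.wo, self.wg = wi, wf, wo, wg
        self.lam = lam
        # Most recent cell states first: c_{t-1}, c_{t-2}, c_{t-3}.
        self.c_hist = deque([0.0, 0.0, 0.0], maxlen=3)
        self.h = 0.0

    def step(self, x):
        z = x + self.h                  # toy mix of input and recurrent signal
        i = sigmoid(self.wi * z)        # input gate
        f = sigmoid(self.wf * z)        # forget gate
        o = sigmoid(self.wo * z)        # output gate
        g = math.tanh(self.wg * z)      # candidate cell value
        c1, c2, c3 = self.c_hist
        d2c = c1 - 2.0 * c2 + c3        # discrete second-order difference
        c = f * c1 + i * g + self.lam * d2c
        self.c_hist.appendleft(c)
        self.h = o * math.tanh(c)
        return self.h


cell = SecondOrderLSTMCell(0.5, 0.5, 0.5, 0.5, lam=0.1)
outputs = [cell.step(x) for x in [1.0, 0.5, -0.5, 1.0]]
```

In an automatic-differentiation framework, updating the back-propagation algorithm would follow mechanically from this modified forward pass, which matches the patent's remark that the backward algorithm can be derived from the improved forward one.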
S102: obtaining training samples, and training the improved LSTM neural network on the training samples, to obtain a trained improved LSTM neural network.
In specific implementation, the raw image data of the training samples is obtained first, and preprocessing operations are performed on the raw images to obtain the training samples. This embodiment does not limit the specific preprocessing operations; those skilled in the art can choose them flexibly according to the actual situation. As a preferred implementation, the preprocessing operations here include any one or a combination of flipping, down-sampling and cropping, where down-sampling means keeping one sample out of every several samples of a sequence.
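As an illustration of the preprocessing operations described above, the following minimal sketch implements flipping, down-sampling and cropping on frames represented as nested Python lists. The function names and the choice of a centre crop are assumptions for illustration; the patent leaves the details open.

```python
def flip_horizontal(frame):
    """Mirror a frame (a list of pixel rows) left-to-right."""
    return [row[::-1] for row in frame]


def downsample(frames, step):
    """Keep one frame out of every `step` frames of a sequence."""
    return frames[::step]


def center_crop(frame, size):
    """Cut a `size` x `size` window from the centre of a frame."""
    h, w = len(frame), len(frame[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in frame[top:top + size]]


frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
sequence = [frame] * 10
```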
The training process of the above improved LSTM neural network is specifically: inputting the training samples into the improved LSTM neural network, and adjusting the key parameters of the improved LSTM neural network until the output recognition rate reaches a preset value, to obtain the trained improved LSTM neural network.
As a preferred implementation, cross-validation and a pair-wise algorithm can be used to adjust the key parameters. The key parameters referred to here may include Batch_video (the number of videos input at a time), Batch_frame (the number of frames per video), the number of epochs (one pass over all the data), the learning rate, and the learning rate decay. The initial values of these key parameters are not specifically limited here; those skilled in the art can set them flexibly according to the actual situation, for example Batch_video = 6, Batch_frame = 24, epoch = 5000-8000, learning rate Learning_rate = 0.1, and learning rate decay lr_decay = 0.1 per 1000 iterations.
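The following sketch collects the example hyper-parameter values from the text into a configuration and applies a step-decay learning rate schedule. The exact meaning of "lr_decay = 0.1 per 1000 iterations" is not specified in the text, so a multiplicative decay of 0.1 every 1000 epochs is assumed here for illustration only.

```python
# Initial values taken from the embodiment; the decay schedule is an assumption.
CONFIG = {
    "batch_video": 6,      # Batch_video: videos input at a time
    "batch_frame": 24,     # Batch_frame: frames per video
    "epochs": 5000,        # the text suggests 5000-8000
    "learning_rate": 0.1,  # Learning_rate
    "lr_decay": 0.1,       # multiplicative decay factor (assumed)
    "decay_every": 1000,   # epochs between decays (assumed)
}


def learning_rate_at(epoch, cfg=CONFIG):
    """Step-decayed learning rate at a given epoch under the assumed schedule."""
    steps = epoch // cfg["decay_every"]
    return cfg["learning_rate"] * (cfg["lr_decay"] ** steps)
```

Under this schedule the learning rate starts at 0.1 and drops by a factor of ten every 1000 epochs; cross-validation would then compare such configurations against held-out folds to pick the key parameters.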
The LSTM neural network training method provided by the embodiment of the present application improves the original LSTM neural network by adding a second-order derivative term of the cell state with respect to time to the original forward propagation algorithm and modifying the back-propagation algorithm accordingly. Because the forward and back-propagation algorithms contain this second-order derivative, recognizing action sequences with the improved LSTM neural network preserves the temporal information of the action sequences well and avoids temporal misalignment in the recognition results.
The embodiment of the present application discloses another LSTM neural network training method. Compared with the previous embodiment, this embodiment further explains and optimizes the technical solution. Specifically:
Referring to Fig. 3, a flowchart of another LSTM neural network training method provided by an embodiment of the present application; as shown in Fig. 3, the method includes:
S301: adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
S302: obtaining training samples, and training the improved LSTM neural network on the training samples, to obtain a trained improved LSTM neural network;
S303: obtaining test samples, and inputting the test samples into the trained improved LSTM neural network to obtain action sequence recognition results;
S304: calculating the average recognition rate of the test samples according to the recognition rate of each frame image in the test samples.
It can be understood that after the above LSTM neural network has been trained, it can also be tested with test samples. Specifically, all image frames of a test sample are input into the trained improved LSTM neural network to obtain action sequence recognition results, and the average recognition rate over all image frames of the test sample is calculated, to obtain the action recognition accuracy of the LSTM neural network. It should be noted that each frame image in the test sample here has also undergone the preprocessing operations, i.e. flipping, down-sampling, cropping, etc.
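Following the evaluation described above, the average recognition rate of a test sample can be computed by averaging per-frame accuracy. The sketch below assumes per-frame predicted action-class indices and ground-truth labels; the representation is an assumption, since the patent does not fix one.

```python
def average_recognition_rate(frame_predictions, frame_labels):
    """Fraction of frames whose predicted action class matches the label.

    `frame_predictions` and `frame_labels` are equal-length sequences of
    action-class indices, one entry per frame of the test sample.
    """
    if len(frame_predictions) != len(frame_labels):
        raise ValueError("one prediction per frame is required")
    correct = sum(p == y for p, y in zip(frame_predictions, frame_labels))
    return correct / len(frame_labels)
```

For example, a four-frame sample with three correctly recognized frames yields an average recognition rate of 0.75.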
An action identification method provided by this embodiment, which applies the improved LSTM neural network trained by the above embodiments, is described below. Specifically:
Referring to Fig. 4, a flowchart of an action identification method disclosed in an embodiment of the present application; as shown in Fig. 4, the method includes:
S401: obtaining raw image data, and performing preprocessing operations on the raw images to obtain samples to be identified;
In specific implementation, after the raw image data is obtained, preprocessing operations need to be performed on it, i.e. the samples to be identified are obtained after these enhancement operations. As before, the preprocessing operations here may include flipping, down-sampling, cropping, etc.
S402: inputting the samples to be identified into the improved LSTM neural network trained as provided by the above embodiments, to obtain action recognition results.
In specific implementation, each frame image of the above samples to be identified is input into the improved LSTM neural network trained as provided by the above embodiments, to obtain the action sequence recognition results of the samples to be identified. It can be understood that the action sequence recognition results obtained in this step include not only the recognized action sequence but may also include the recognition rate, i.e. the recognition rate of each frame image is calculated, and the average recognition rate of the sample to be identified is calculated from the recognition rates of the frame images.
An LSTM neural network training system provided by an embodiment of the present application is introduced below; the LSTM neural network training system described below and the LSTM neural network training method described above may be cross-referenced.
Referring to Fig. 5, a structure diagram of an LSTM neural network training system provided by an embodiment of the present application; as shown in Fig. 5, the system includes:
a construction module 501, configured to add a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and to update the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
a training module 502, configured to obtain training samples and to train the improved LSTM neural network on them, to obtain a trained improved LSTM neural network.
The LSTM neural network training system provided by the embodiment of the present application improves the original LSTM neural network by adding a second-order derivative term of the cell state with respect to time to the original forward propagation algorithm and modifying the back-propagation algorithm accordingly. Because the forward and back-propagation algorithms contain this second-order derivative, recognizing action sequences with the improved LSTM neural network preserves the temporal information of the action sequences well and avoids temporal misalignment in the recognition results.
On the basis of the above embodiment, as a preferred implementation, the system further:
obtains test samples, and inputs the test samples into the trained improved LSTM neural network to obtain action sequence recognition results;
calculates the average recognition rate of the test samples according to the recognition rate of each frame image in the test samples.
On the basis of the above embodiment, as a preferred implementation, the training module 502 includes:
an acquisition unit, configured to obtain raw image data and to perform preprocessing operations on the raw images to obtain the training samples; wherein the preprocessing operations include any one or a combination of flipping, down-sampling and cropping;
a training unit, configured to train the improved LSTM neural network on the training samples, to obtain a trained improved LSTM neural network.
On the basis of the above embodiment, as a preferred implementation, the training unit is specifically a unit that inputs each frame image of the training samples into the improved LSTM neural network and adjusts the key parameters of the improved LSTM neural network until the recognition rate output by the improved LSTM neural network reaches a preset value, to obtain the trained improved LSTM neural network.
On the basis of the above embodiment, as a preferred implementation, the training unit is specifically a unit that inputs each frame image of the training samples into the improved LSTM neural network and adjusts the key parameters of the improved LSTM neural network using cross-validation and a pair-wise algorithm.
On the basis of the above embodiment, as a preferred implementation, the key parameters include any one or a combination of the number of epochs, the learning rate and the learning rate decay.
The present application also provides an LSTM neural network training device. Referring to Fig. 6, a structure diagram of an LSTM neural network training device provided by an embodiment of the present application; as shown in Fig. 6, the device includes:
a memory 100 for storing a computer program;
a processor 200 that, when executing the computer program, can implement the steps provided by the above embodiments.
Specifically, the memory 100 includes a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The processor 200 provides computing and control capability for the LSTM neural network training device; when executing the computer program stored in the memory 100, it can implement the following steps: adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back-propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, so as to construct an improved LSTM neural network, wherein the second-order derivative term is the second-order derivative of the cell state with respect to time; and obtaining training samples, and training the improved LSTM neural network on the training samples, to obtain a trained improved LSTM neural network.
The embodiment of the present application improves the original LSTM neural network by adding a second-order derivative term of the cell state with respect to time to the original forward propagation algorithm and modifying the back-propagation algorithm accordingly. Because the forward and back-propagation algorithms contain this second-order derivative, recognizing action sequences with the improved LSTM neural network preserves the temporal information of the action sequences well and avoids temporal misalignment in the recognition results.
Preferably, when the processor 200 executes a computer subprogram stored in the memory 100, the following steps can be implemented: obtaining test samples, and inputting the test samples into the trained improved LSTM neural network to obtain action sequence recognition results; and calculating the average recognition rate of the test samples according to the recognition rate of each frame image in the test samples.
Preferably, when the processor 200 executes a computer subprogram stored in the memory 100, the following steps can be implemented: obtaining raw image data, and performing preprocessing operations on the raw images to obtain the training samples; wherein the preprocessing operations include any one or a combination of flipping, down-sampling and cropping.
Preferably, when the processor 200 executes a computer subprogram stored in the memory 100, the following steps can be implemented: inputting each frame image of the training samples into the improved LSTM neural network, and adjusting the key parameters of the improved LSTM neural network until the recognition rate output by the improved LSTM neural network reaches a preset value, to obtain the trained improved LSTM neural network.
Preferably, when the processor 200 executes a computer subprogram stored in the memory 100, the following step can be implemented: adjusting the key parameters of the improved LSTM neural network using cross-validation and a pair-wise algorithm.
On the basis of the above embodiment, preferably, referring to Fig. 7, the LSTM neural network training device further includes:
an input interface 300, connected to the processor 200, for obtaining externally imported computer programs, parameters and instructions and saving them into the memory 100 under the control of the processor 200. The input interface 300 can be connected to an input device to receive parameters or instructions entered manually by a user. The input device can be a touch layer covering a display screen, keys, a trackball or a trackpad arranged on a terminal housing, or a keyboard, trackpad or mouse, etc.;
a display unit 400, connected to the processor 200, for displaying data sent by the processor 200. The display unit 400 can be the display screen of a PC, a liquid crystal display, an electronic ink display, etc. Specifically, in this embodiment, the action sequence recognition results of the samples to be identified can be displayed through the display unit 400;
a network port 500, connected to the processor 200, for communicating with external terminal devices. The communication technology used by the communication connection can be wired or wireless, such as Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), wireless fidelity (WiFi), Bluetooth, Bluetooth Low Energy, or IEEE 802.11s-based communication, etc. Specifically, in this embodiment, the original LSTM neural network model can be imported to the processor 200 through the network port 500;
a video collector 600, connected to the processor 200, for obtaining video data and sending it to the processor 200 for data analysis and processing. The processor 200 can then send the processing results to the display unit 400 for display, or transmit them to the memory 100 for saving, or send them to a preset data receiving end through the network port 500. Specifically, in this embodiment, the samples to be identified, the training samples and the test samples, etc., can be obtained with the video collector 600.
Present invention also provides a kind of computer readable storage medium, which may include:USB flash disk, mobile hard disk, Read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic The various media that can store program code such as dish or CD.Computer program, the calculating are stored on the storage medium Machine program realizes following steps when being executed by processor:Increase second level derivative in the propagated forward algorithm of LSTM neural network , and the back-propagation algorithm of the LSTM neural network is updated according to the propagated forward algorithm after increase, it is improved with constructing LSTM neural network;Wherein, the second level derivative term is second level derivative of the cell to the time;Training sample is obtained, and according to institute The training sample training improvement LSTM neural network is stated, to obtain the improvement LSTM neural network of training completion.
The embodiment of the present application improves original LSTM neural network, increases cell in original propagated forward algorithm to the time Second level derivative term, and according to the corresponding modification Back Propagation Algorithm of improved propagated forward algorithm.Utilize improved LSTM Neural network carries out the identification of action sequence, since there are cell to the two of the time in propagated forward algorithm and Back Propagation Algorithm Grade derivative can be fine saves the temporal information of action sequence, avoids the time misalignment of recognition result.
Preferably, when the computer program stored in the computer-readable storage medium is executed by a processor, the following steps may specifically be implemented: obtaining a test sample, and inputting the test sample into the trained improved LSTM neural network to obtain an action sequence recognition result; and calculating the average recognition rate of the test sample according to the recognition rate of each frame image in the test sample.
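As a minimal illustration of the averaging step: the patent does not specify how the per-frame recognition rates are obtained, so treating each frame's rate as a 0/1 correctness check against a ground-truth label is an assumption.

```python
def average_recognition_rate(frame_preds, frame_labels):
    """Average per-frame recognition rate over a test clip:
    the fraction of frames whose predicted action label is correct."""
    correct = sum(int(p == y) for p, y in zip(frame_preds, frame_labels))
    return correct / len(frame_labels)
```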
Preferably, when the computer program stored in the computer-readable storage medium is executed by a processor, the following steps may specifically be implemented: obtaining raw image data, and performing a preprocessing operation on the raw image to obtain the training samples; wherein the preprocessing operation includes any one or a combination of a flipping operation, a down-sampling operation, and a cropping operation.
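A minimal sketch of the three preprocessing operations on a single frame, using plain NumPy indexing. The patent does not fix the down-sampling method or the crop geometry, so the stride-based down-sampling and the `(y, x, h, w)` crop tuple below are assumptions.

```python
import numpy as np

def preprocess_frame(frame, flip=True, downsample=2, crop=None):
    """Apply any combination of flip / down-sampling / crop to one frame."""
    out = frame
    if flip:
        out = out[:, ::-1]                         # horizontal flip
    if downsample and downsample > 1:
        out = out[::downsample, ::downsample]      # naive stride down-sampling
    if crop is not None:
        y, x, h, w = crop                          # assumed crop layout
        out = out[y:y + h, x:x + w]
    return out
```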
Preferably, when the computer program stored in the computer-readable storage medium is executed by a processor, the following steps may specifically be implemented: inputting each frame image of the training samples into the improved LSTM neural network, and adjusting the key parameters of the improved LSTM neural network until the recognition rate output by the improved LSTM neural network reaches a preset value, so as to obtain a trained improved LSTM neural network.
Preferably, when the computer program stored in the computer-readable storage medium is executed by a processor, the following step may specifically be implemented: adjusting the key parameters of the improved LSTM neural network using a cross-validation method and a pairwise algorithm.
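The "pairwise algorithm" is not elaborated in the description. A common way to combine cross-validation with a search over key parameters (epochs, learning rate, learning rate decay) is the grid search sketched below, which scores each candidate setting by k-fold cross-validation and stops once the mean recognition rate reaches the preset value. The helper names and the `train_eval` callback are assumptions for illustration, not the patent's procedure.

```python
import itertools
import numpy as np

def kfold_indices(n, k):
    """Split range(n) into k (train_idx, val_idx) folds."""
    idx = np.arange(n)
    folds = []
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k
        folds.append((np.concatenate([idx[:lo], idx[hi:]]), idx[lo:hi]))
    return folds

def tune_key_parameters(train_eval, grid, n_samples, k=5, target=1.0):
    """Grid-search key parameters with k-fold cross-validation.
    train_eval(params, train_idx, val_idx) must return the recognition
    rate achieved on the validation fold."""
    best_params, best_rate = None, -1.0
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        rates = [train_eval(params, tr, va)
                 for tr, va in kfold_indices(n_samples, k)]
        mean_rate = float(np.mean(rates))
        if mean_rate > best_rate:
            best_params, best_rate = params, mean_rate
        if mean_rate >= target:        # stop once the preset value is reached
            break
    return best_params, best_rate
```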
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
For the system disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant parts may be found in the description of the method. It should be pointed out that those of ordinary skill in the art can make further improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

Claims (10)

1. An LSTM neural network training method, characterized by comprising:
adding a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and updating the back propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
obtaining training samples, and training the improved LSTM neural network with the training samples, to obtain a trained improved LSTM neural network.
2. The LSTM neural network training method according to claim 1, characterized by further comprising:
obtaining a test sample, and inputting the test sample into the trained improved LSTM neural network to obtain an action sequence recognition result;
calculating the average recognition rate of the test sample according to the recognition rate of each frame image in the test sample.
3. The LSTM neural network training method according to claim 1, wherein the obtaining training samples comprises:
obtaining raw image data, and performing a preprocessing operation on the raw image to obtain the training samples; wherein the preprocessing operation comprises any one or a combination of a flipping operation, a down-sampling operation, and a cropping operation.
4. The LSTM neural network training method according to any one of claims 1 to 3, wherein training the improved LSTM neural network with the training samples comprises:
inputting each frame image of the training samples into the improved LSTM neural network, and adjusting the key parameters of the improved LSTM neural network until the recognition rate output by the improved LSTM neural network reaches a preset value, to obtain a trained improved LSTM neural network.
5. The LSTM neural network training method according to claim 4, wherein adjusting the key parameters of the improved LSTM neural network comprises:
adjusting the key parameters of the improved LSTM neural network using a cross-validation method and a pairwise algorithm.
6. The LSTM neural network training method according to claim 5, wherein the key parameters comprise any one or a combination of the number of epochs, the learning rate, and the learning rate decay.
7. An action recognition method, characterized by comprising:
obtaining raw image data, and performing a preprocessing operation on the raw image to obtain a sample to be identified;
inputting the sample to be identified into the trained improved LSTM neural network according to claim 1 to obtain an action recognition result.
8. An LSTM neural network training system, characterized by comprising:
a construction module, configured to add a second-order derivative term to the forward propagation algorithm of an LSTM neural network, and to update the back propagation algorithm of the LSTM neural network according to the augmented forward propagation algorithm, to construct an improved LSTM neural network; wherein the second-order derivative term is the second-order derivative of the cell state with respect to time;
a training module, configured to obtain training samples and to train the improved LSTM neural network with the training samples, to obtain a trained improved LSTM neural network.
9. An LSTM neural network training device, characterized by comprising:
a memory for storing a computer program;
a processor, configured to implement the steps of the LSTM neural network training method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the LSTM neural network training method according to any one of claims 1 to 6.
CN201810548634.9A 2018-05-31 2018-05-31 Action identification method and LSTM neural network training method and relevant apparatus Withdrawn CN108875601A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810548634.9A CN108875601A (en) 2018-05-31 2018-05-31 Action identification method and LSTM neural network training method and relevant apparatus


Publications (1)

Publication Number Publication Date
CN108875601A true CN108875601A (en) 2018-11-23

Family

ID=64335985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810548634.9A Withdrawn CN108875601A (en) 2018-05-31 2018-05-31 Action identification method and LSTM neural network training method and relevant apparatus

Country Status (1)

Country Link
CN (1) CN108875601A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784490A (en) * 2019-02-02 2019-05-21 北京地平线机器人技术研发有限公司 Training method, device and the electronic equipment of neural network
CN110125909A (en) * 2019-05-22 2019-08-16 南京师范大学镇江创新发展研究院 A kind of multi-information fusion human body exoskeleton robot Control protection system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106901723A (en) * 2017-04-20 2017-06-30 济南浪潮高新科技投资发展有限公司 A kind of electrocardiographic abnormality automatic diagnosis method
CN107153812A (en) * 2017-03-31 2017-09-12 深圳先进技术研究院 A kind of exercising support method and system based on machine vision
WO2017185347A1 (en) * 2016-04-29 2017-11-02 北京中科寒武纪科技有限公司 Apparatus and method for executing recurrent neural network and lstm computations
CN107451552A (en) * 2017-07-25 2017-12-08 北京联合大学 A kind of gesture identification method based on 3D CNN and convolution LSTM
CN107679522A (en) * 2017-10-31 2018-02-09 内江师范学院 Action identification method based on multithread LSTM
CN107766839A (en) * 2017-11-09 2018-03-06 清华大学 Action identification method and device based on neutral net

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017185347A1 (en) * 2016-04-29 2017-11-02 北京中科寒武纪科技有限公司 Apparatus and method for executing recurrent neural network and lstm computations
CN107153812A (en) * 2017-03-31 2017-09-12 深圳先进技术研究院 A kind of exercising support method and system based on machine vision
CN106901723A (en) * 2017-04-20 2017-06-30 济南浪潮高新科技投资发展有限公司 A kind of electrocardiographic abnormality automatic diagnosis method
CN107451552A (en) * 2017-07-25 2017-12-08 北京联合大学 A kind of gesture identification method based on 3D CNN and convolution LSTM
CN107679522A (en) * 2017-10-31 2018-02-09 内江师范学院 Action identification method based on multithread LSTM
CN107766839A (en) * 2017-11-09 2018-03-06 清华大学 Action identification method and device based on neutral net

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VIVEK VEERIAH ET AL: "Differential Recurrent Neural Networks for Action Recognition", 《2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *
YANG YU ET AL: "Human Action Classification Based on LSTM Neural Networks on the TensorFlow Platform", 《智能计算机与应用》 (Intelligent Computer and Applications) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784490A (en) * 2019-02-02 2019-05-21 北京地平线机器人技术研发有限公司 Training method, device and the electronic equipment of neural network
US11645537B2 (en) 2019-02-02 2023-05-09 Beijing Horizon Robotics Technology Research And Development Co., Ltd. Neural network training method, neural network training apparatus and electronic device
CN110125909A (en) * 2019-05-22 2019-08-16 南京师范大学镇江创新发展研究院 A kind of multi-information fusion human body exoskeleton robot Control protection system
CN110125909B (en) * 2019-05-22 2022-04-22 南京师范大学镇江创新发展研究院 Multi-information fusion human body exoskeleton robot control protection system

Similar Documents

Publication Publication Date Title
EP4012706A1 (en) Training method and device for audio separation network, audio separation method and device, and medium
CN108764176A (en) A kind of action sequence recognition methods, system and equipment and storage medium
EP3660854A1 (en) Triage dialogue method, device, and system
KR101002751B1 (en) Device for education diagnosis using brain waves, education management system using the device and operating method thereof
EP2835765A2 (en) Method, apparatus and computer program product for activity recognition
CN111598160B (en) Training method and device of image classification model, computer equipment and storage medium
CN105335136A (en) Control method and device of intelligent equipment
CN107240319A (en) A kind of interactive Scene Teaching system for the K12 stages
CN110533987B (en) Real-scene interactive simulation driving system and method thereof
CN112546390A (en) Attention training method and device, computer equipment and storage medium
CN108875601A (en) Action identification method and LSTM neural network training method and relevant apparatus
CN107169427B (en) Face recognition method and device suitable for psychology
CN106774861B (en) Intelligent device and behavior data correction method and device
CN106383640A (en) Projection method
CN111858951A (en) Learning recommendation method and device based on knowledge graph and terminal equipment
CN108255962A (en) Knowledge Relation method, apparatus, storage medium and electronic equipment
CN108024763A (en) Action message provides method and supports its electronic equipment
CN111707375A (en) Electronic class card with intelligent temperature measurement attendance and abnormal behavior detection functions
CN110349066A (en) A kind of children education auxiliary system and method
WO2019080900A1 (en) Neural network training method and device, storage medium, and electronic device
CN107027072A (en) A kind of video marker method, terminal and computer-readable recording medium
CN106383809A (en) Solving method of system for solving mathematical functions
CN103488297A (en) Online semi-supervising character input system and method based on brain-computer interface
CN103297546A (en) Method and system for visual perception training and server
Alam et al. ASL champ!: a virtual reality game with deep-learning driven sign recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20181123
