CN111539495B - Recognition method based on recognition model, model training method and device - Google Patents

Recognition method based on recognition model, model training method and device

Info

Publication number
CN111539495B
Authority
CN
China
Prior art keywords
extraction module
feature extraction
hidden state
gate signal
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010659647.0A
Other languages
Chinese (zh)
Other versions
CN111539495A (en)
Inventor
赵泽宇
李科
黄宇凯
郝玉峰
邵志明
张卫强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Speechocean Technology Co ltd
Tsinghua University
Original Assignee
Beijing Speechocean Technology Co ltd
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Speechocean Technology Co ltd, Tsinghua University
Priority to CN202010659647.0A
Publication of CN111539495A
Application granted
Publication of CN111539495B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Abstract

The disclosure relates to a recognition method based on a recognition model, a model training method and a model training device. In the recognition method, the recognition model includes a feature extraction module, and the feature extraction module includes a plurality of LSTM units. The recognition method includes the following steps: acquiring a feature sequence to be recognized, wherein the feature sequence to be recognized includes continuous features; sequentially and respectively inputting a plurality of the features to the plurality of LSTM units, and obtaining a first hidden state corresponding to each feature through each LSTM unit; obtaining a current output result of the feature extraction module based on the plurality of features, the first hidden states respectively corresponding to the plurality of features, and a previous output result of the feature extraction module; and obtaining a recognition result based on the current output result of the feature extraction module. Through the method and the device, the current output result of the feature extraction module is more reasonable and accurate.

Description

Recognition method based on recognition model, model training method and device
Technical Field
The present disclosure relates to the field of recognition technologies, and in particular, to a recognition method based on a recognition model, a model training method, and an apparatus.
Background
With the development of deep learning and deep neural networks in recent years, they have been widely applied in fields such as image recognition, speech recognition, medical diagnosis and natural language processing, and have achieved many impressive results. For example, recurrent neural networks play a significant role in the processing of sequence signals in speech recognition.
However, in current sequence signal processing based on recurrent neural networks, information is only transferred between adjacent time instants, that is, only the information correlation of a single input is considered.
Disclosure of Invention
In order to overcome the related technical problems, the present disclosure provides an identification method based on an identification model, a model training method and an apparatus.
In a first aspect, an embodiment of the present disclosure provides an identification method based on an identification model. Wherein the recognition model comprises a feature extraction module, the feature extraction module comprises a plurality of LSTM units, and the recognition method comprises the following steps: acquiring a feature sequence to be identified, wherein the feature sequence to be identified comprises continuous features; inputting a plurality of features to the plurality of LSTM units in sequence, and obtaining a first hidden state corresponding to the features through each LSTM unit; obtaining a current output result of the feature extraction module based on the plurality of features, first hidden states corresponding to the plurality of features respectively, and a previous output result of the feature extraction module; and obtaining an identification result based on the current output result of the feature extraction module.
In one embodiment, the identification method further comprises: determining a previous first hidden state of the feature; the sequentially and respectively inputting the plurality of features into the plurality of LSTM units, and obtaining a first hidden state corresponding to the features through each of the LSTM units, includes: determining a first input gate signal, a first forgetting gate signal and a first original hidden state corresponding to the characteristics based on the characteristics; and obtaining a first hidden state corresponding to the feature based on the first input gate signal, the first forgetting gate signal, the first original hidden state and the previous first hidden state of the feature.
In another embodiment, the identification method further comprises: determining a previous output result of the feature; the determining, based on the feature, a first input gate signal, a first forgetting gate signal, and a first original hidden state corresponding to the feature includes: and obtaining the first input gate signal, the first forgetting gate signal and the first original hidden state corresponding to the characteristic based on the characteristic and the previous output result of the characteristic.
In yet another embodiment, the identification method further comprises: obtaining the first output gate signal corresponding to the characteristic based on the characteristic and the previous output result of the characteristic; and obtaining a current output result of the characteristic based on the first output gate signal and a first hidden state corresponding to the characteristic.
In another embodiment, the obtaining the current output result of the feature extraction module based on the plurality of features, the first hidden states corresponding to the plurality of features, and the previous output result of the feature extraction module includes: obtaining the current input features of the feature extraction module based on the plurality of features; obtaining the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module and the first hidden states corresponding to the plurality of features respectively; and obtaining the current output result of the feature extraction module based on the hidden state of the feature extraction module.
In another embodiment, the current input feature of the feature extraction module is obtained from the plurality of features as follows:
X_T = [x_1, x_2, ..., x_n]
where X_T is the current input feature of the feature extraction module, and x_1, ..., x_n are the plurality of features.
In yet another embodiment, the identification method further comprises: determining a previous hidden state of the feature extraction module; the obtaining of the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module, and the first hidden states corresponding to the plurality of features respectively includes: obtaining a second input gate signal and a second forgetting gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module; and obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module corresponding to the plurality of features respectively.
In another embodiment, the obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal, and the previous hidden state of the feature extraction module, which correspond to the plurality of features, includes: obtaining an original hidden state of the feature extraction module based on first hidden states respectively corresponding to the plurality of features; and obtaining the hidden state of the feature extraction module based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module.
In another embodiment, the original hidden state of the feature extraction module is obtained based on the first hidden states corresponding to the plurality of features, and the method is implemented as follows:
C̃(T) = [c_1(T), c_2(T), ..., c_n(T)]
where C̃(T) is the original hidden state of the feature extraction module, and c_1(T), ..., c_n(T) are the first hidden states respectively corresponding to the plurality of features.
In another embodiment, the hidden state of the feature extraction module is obtained based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal, and the previous hidden state of the feature extraction module, and is implemented by the following formula:
C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)
where C(T) is the hidden state of the feature extraction module, C̃(T) is the original hidden state of the feature extraction module, C(T-1) is the previous hidden state of the feature extraction module, I_T is the second input gate signal, F_T is the second forgetting gate signal, and ⊙ represents multiplication of co-located elements between vectors.
In yet another embodiment, the identification method further comprises: obtaining a second output gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module; obtaining a current output result of the feature extraction module based on the hidden state of the feature extraction module, including: and obtaining the current output result of the feature extraction module based on the second output gate signal and the hidden state of the feature extraction module.
In another embodiment, the obtaining the current output result of the feature extraction module based on the second output gate signal and the hidden state of the feature extraction module is implemented by the following formula:
H(T) = O_T ⊙ C(T)
where H(T) is the current output result of the feature extraction module, O_T is the second output gate signal, C(T) is the hidden state of the feature extraction module, and ⊙ represents multiplication of co-located elements between vectors.
In yet another embodiment, the method further comprises: determining a preset output result of the feature extraction module; and if the current output result of the feature extraction module is the output result of the initial time step, taking the preset output result of the feature extraction module as the previous output result of the feature extraction module.
In a second aspect, an embodiment of the present disclosure provides a model training method for a recognition model, where the recognition model is used for recognition according to the recognition method of the first aspect or any implementation manner of the first aspect. The method comprises the following steps: acquiring a training set, wherein the training set comprises a plurality of training samples and standard identifications corresponding to the training samples; obtaining the recognition result of the training sample through the recognition model; and supervising the recognition result through the standard identification, and adjusting the parameters of the recognition model.
In a third aspect, an embodiment of the present disclosure provides a recognition apparatus based on a recognition model. The recognition model comprises a feature extraction module, and the feature extraction module comprises a plurality of LSTM units. The recognition apparatus comprises: an acquiring module, configured to acquire a feature sequence to be recognized, where the feature sequence to be recognized comprises continuous features; a first hidden state obtaining module, configured to sequentially and respectively input a plurality of features to the plurality of LSTM units and obtain, through each LSTM unit, a first hidden state corresponding to the feature; a processing module, configured to obtain a current output result of the feature extraction module based on the plurality of features, the first hidden states respectively corresponding to the plurality of features, and a previous output result of the feature extraction module; and a recognition module, configured to obtain a recognition result based on the current output result of the feature extraction module.
In one embodiment, the identification apparatus further comprises a determination module configured to: determining a previous first hidden state of the feature; the obtain first hidden state module is configured to: determining a first input gate signal, a first forgetting gate signal and a first original hidden state corresponding to the characteristics based on the characteristics; and obtaining a first hidden state corresponding to the feature based on the first input gate signal, the first forgetting gate signal, the first original hidden state and the previous first hidden state of the feature.
In another embodiment, the determining module is further configured to: determining a previous output result of the feature; the obtain first hidden state module is configured to: and obtaining the first input gate signal, the first forgetting gate signal and the first original hidden state corresponding to the characteristic based on the characteristic and the previous output result of the characteristic.
In yet another embodiment, the processing module is further configured to: obtaining the first output gate signal corresponding to the characteristic based on the characteristic and the previous output result of the characteristic; and obtaining a current output result of the characteristic based on the first output gate signal and a first hidden state corresponding to the characteristic.
In yet another embodiment, the processing module is configured to: obtaining the current input features of the feature extraction module based on the plurality of features; obtaining the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module and the first hidden states corresponding to the plurality of features respectively; and obtaining the current output result of the feature extraction module based on the hidden state of the feature extraction module.
In yet another embodiment, the processing module is configured to determine the current input features of the feature extraction module based on:
X_T = [x_1, x_2, ..., x_n]
where X_T is the current input feature of the feature extraction module, and x_1, ..., x_n are the plurality of features.
In yet another embodiment, the determining module is further configured to: determining a previous hidden state of the feature extraction module; the processing module is used for: obtaining a second input gate signal and a second forgetting gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module; and obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module corresponding to the plurality of features respectively.
In yet another embodiment, the processing module is configured to: obtaining an original hidden state of the feature extraction module based on first hidden states respectively corresponding to the plurality of features; and obtaining the hidden state of the feature extraction module based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module.
In yet another embodiment, the processing module is configured to determine an original hidden state of the feature extraction module based on:
C̃(T) = [c_1(T), c_2(T), ..., c_n(T)]
where C̃(T) is the original hidden state of the feature extraction module, and c_1(T), ..., c_n(T) are the first hidden states respectively corresponding to the plurality of features.
In yet another embodiment, the processing module is configured to determine the hidden state of the feature extraction module based on:
C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)
where C(T) is the hidden state of the feature extraction module, C̃(T) is the original hidden state of the feature extraction module, C(T-1) is the previous hidden state of the feature extraction module, I_T is the second input gate signal, F_T is the second forgetting gate signal, and ⊙ represents multiplication of co-located elements between vectors.
In yet another embodiment, the processing module is configured to: obtaining a second output gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module; and obtaining the current output result of the feature extraction module based on the second output gate signal and the hidden state of the feature extraction module.
In yet another embodiment, the processing module is configured to determine the current output of the feature extraction module based on:
H(T) = O_T ⊙ C(T)
where H(T) is the current output result of the feature extraction module, O_T is the second output gate signal, C(T) is the hidden state of the feature extraction module, and ⊙ represents multiplication of co-located elements between vectors.
In another embodiment, the identification apparatus further includes a determining module, configured to: determine a preset output result of the feature extraction module; and if the current output result of the feature extraction module is the output result of the initial time step, take the preset output result of the feature extraction module as the previous output result of the feature extraction module.
In a fourth aspect, an embodiment of the present disclosure provides a model training apparatus for identifying a model, where the identification model is used for identification according to the identification method of the first aspect or any implementation manner of the first aspect. The device comprises: the training set acquisition module is used for acquiring a training set, and the training set comprises a plurality of training samples and standard marks corresponding to the training samples; the identification result obtaining module is used for obtaining the identification result of the training sample through the identification model; and the parameter adjusting module is used for supervising the recognition result through the standard identifier and adjusting the parameters of the recognition model.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, where the electronic device includes: a memory to store instructions; and a processor, configured to call the instruction stored in the memory to execute the recognition model-based recognition method according to the first aspect or any embodiment of the first aspect or to execute the model training method according to the recognition model according to the second aspect or any embodiment of the second aspect.
In a sixth aspect, the present disclosure provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when executed by a processor, the computer-executable instructions perform the recognition model-based recognition method according to the first aspect or any one of the embodiments of the first aspect, or perform the model training method according to the recognition model according to the second aspect or any one of the embodiments of the second aspect.
The disclosure provides a recognition method based on a recognition model, a model training method and a model training device. According to the identification method based on the identification model, a plurality of LSTM units are arranged in the feature extraction module, the current output result of the feature extraction module is obtained based on the first hidden state of each feature in continuous features in the feature sequence to be identified, which is obtained by each LSTM unit, and the identification result is obtained based on the current output result of the feature extraction module. Through the method and the device, the relevance of continuous features in the feature sequence to be recognized is considered in the current output result of the feature extraction module, the current output result of the feature extraction module is more reasonable and accurate, and the recognition result obtained based on the current output result of the feature extraction module is more reasonable and has reference significance.
Drawings
The above and other objects, features and advantages of the embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a flow chart illustrating a recognition method based on a recognition model according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating obtaining a first hidden state corresponding to a feature in a recognition method based on a recognition model according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating obtaining a current output result of a feature extraction module in a recognition method based on a recognition model according to an embodiment of the disclosure;
FIG. 4 is a flowchart illustrating a hidden state of a feature extraction module in a recognition method based on a recognition model according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating an application of a recognition method based on a recognition model according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating a model training method for recognizing a model according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a recognition apparatus based on a recognition model according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating a recognition model-based model training apparatus provided by an embodiment of the present disclosure;
fig. 9 shows a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The principles and spirit of the present disclosure will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
It should be noted that, although the expressions "first", "second", etc. are used herein to describe different modules, steps, data, etc. of the embodiments of the present disclosure, the expressions "first", "second", etc. are merely used to distinguish between different modules, steps, data, etc. and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable.
With the development of deep learning and deep neural networks in recent years, they have been widely applied to fields such as image recognition, speech recognition, medical diagnosis and natural language processing.
Initially, recurrent neural networks suffered from problems such as gradient vanishing and gradient explosion, resulting in poor recognition performance. In order to solve this problem, the gated recurrent unit and the Long Short-Term Memory (LSTM) unit were proposed. The LSTM unit comprises an input gate signal, an output gate signal and a forgetting gate signal, and the problems of gradient vanishing, gradient explosion and the like can be well suppressed and avoided through these gate signals.
However, the LSTM unit only transfers information between each time instant, that is, only considers the information correlation of a single input, and does not consider the influence of the correlation between adjacent multiple inputs on the output result.
The present disclosure provides an identification method based on an identification model, which allows a current output result of a feature extraction module to consider the correlation of continuous features in a feature sequence to be identified, and allows the current output result of the feature extraction module to be more reasonable and accurate, so that the identification result obtained based on the current output result of the feature extraction module is more reasonable and accurate and has reference significance.
The identification method based on the identification model is applied to identification processing of the characteristic sequence to be identified with continuous characteristics, so that the current output result of the characteristic extraction module for the characteristic sequence to be identified is more reasonable and accurate, and the identification result obtained based on the current output result is more reasonable and accurate and has reference significance.
Fig. 1 shows a flowchart of a recognition model-based recognition method according to an embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, the recognition model includes a feature extraction module, wherein the feature extraction module includes a plurality of LSTM units.
As shown in fig. 1, the recognition method based on the recognition model includes the following steps.
In step S11, a feature sequence to be recognized is acquired. Wherein the sequence of features to be identified comprises consecutive features.
In one example, a temporally sequential sequence of voice segments, video segments, text, etc. may be pre-processed to obtain a corresponding sequence of features. Wherein the sequence of features comprises consecutive features.
Taking a speech segment as an example, the speech segment may be preprocessed to obtain a feature for each frame of speech in the segment, and the features of all speech frames are arranged in time order to form the feature sequence to be recognized. The features of consecutive speech frames can be understood as continuous features in the feature sequence to be recognized.
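As an illustration, the following sketch shows one way such a frame-level feature sequence might be produced; the actual front end (filter banks, MFCCs, etc.), frame length and hop size are not specified by the disclosure, so the values and the random projection below are placeholder assumptions.

import numpy as np

def frame_features(waveform: np.ndarray, frame_len: int = 400, hop: int = 160, feat_dim: int = 40) -> np.ndarray:
    """Split a 1-D waveform into frames and map each frame to a feature vector.

    The random projection stands in for a real acoustic front end; only the
    framing and time-ordering logic reflects the description above.
    """
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop: i * hop + frame_len] for i in range(n_frames)])
    projection = np.random.RandomState(0).randn(frame_len, feat_dim) * 0.01
    return frames @ projection  # (n_frames, feat_dim): the feature sequence to be recognized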
In step S12, the plurality of features are sequentially and respectively input to the plurality of LSTM units, and a first hidden state corresponding to the feature is obtained by each LSTM unit.
The feature sequence to be recognized includes continuous features, and in the application process, a plurality of features in the continuous features can be sequentially and respectively input to the plurality of LSTM units, so that a first hidden state corresponding to the features is obtained through each LSTM unit.
In step S13, a current output result of the feature extraction module is obtained based on the plurality of features and the first hidden states corresponding to the plurality of features respectively, and the previous output result of the feature extraction module.
By the method, when the feature extraction module identifies the feature sequence to be identified, the influence of the correlation between a plurality of adjacent features in the feature sequence to be identified on the current output result can be considered, and a basis is laid for a more reasonable and accurate identification result about the feature sequence to be identified, which is obtained based on the current output result of the feature extraction module.
It should be noted that the current output result of the feature extraction module may be understood as a result that is output for a plurality of adjacent features in the feature sequence to be recognized after considering the correlation between those features and their integrity as a group.
Taking a feature sequence to be recognized from a speech segment with ninety-nine frames as an example, the plurality of features may be the features of speech corresponding to three adjacent frames (e.g., the first frame to the third frame). In the application process, the features of the speech of these three adjacent frames are sequentially input to the LSTM units to obtain a first hidden state for each of the three frames of speech; and based on the features of the three frames of speech and the first hidden states, a result is output that considers the integrity and the correlation of the features of the three frames of speech. This output may be understood as the current output result of the feature extraction module.
Furthermore, the first hidden states are calculated for the features of the next group of three adjacent frames (e.g., the fourth frame to the sixth frame) of speech. Based on the features of the speech of the fourth frame to the sixth frame and the corresponding first hidden states, the current output result of the feature extraction module is obtained.
All the features of the speech segment with ninety-nine frames are processed in the above manner to obtain the corresponding current output results of the feature extraction module.
In this way, for the speech segment with ninety-nine frames, the feature extraction module outputs thirty-three current output results. Further, the speech segment is recognized based on these thirty-three current output results to obtain a recognition result. Because the correlation of adjacent features is considered in the current output results, the recognition result is more reasonable and has reference significance.
In one example, a plurality of adjacent features of the continuous features in the feature sequence to be recognized may be sequentially and respectively input to the plurality of LSTM units. After the LSTM units complete the calculation of the first hidden states of the features, the LSTM units obtain the current output result of the feature extraction module based on the features and the corresponding first hidden states.
Since the current output result of the feature extraction module is obtained by considering the correlation of the plurality of features, the process of obtaining the current output result of the feature extraction module based on the plurality of features and the corresponding plurality of first hidden states may not be performed when the plurality of LSTM units do not completely calculate the first hidden states of the plurality of features.
In an example, in the process of "inputting a plurality of features to a plurality of LSTM units in sequence respectively, and obtaining a first hidden state corresponding to the feature by each LSTM unit", the number of features calculated by the plurality of LSTM units may be the same as the number of LSTM units, or may be the same as an integral multiple of the number of LSTM units.
In step S14, a recognition result is obtained based on the current output result of the feature extraction module.
After all the features are processed by the feature extraction module, a plurality of current output results can be obtained. The current output results are spliced and then recognized by a recognition unit, such as a softmax recognition model, to obtain a recognition result of the feature sequence to be recognized; the recognition result covers all the features and takes the correlation between adjacent features into consideration.
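A minimal sketch of this splicing-and-recognition step is given below; the linear-plus-softmax recognition unit and its parameters w and b are assumptions, since the disclosure only states that a softmax-style recognition model may be used.

import numpy as np

def recognize(output_results: list, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Splice the current output results of the feature extraction module and
    classify the spliced vector with a softmax recognition unit."""
    spliced = np.concatenate(output_results)  # e.g. thirty-three current output results
    logits = spliced @ w + b                  # linear layer of the (assumed) recognition unit
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                    # class probabilities over the recognition targets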
According to the identification method based on the identification model, a plurality of LSTM units are arranged in a feature extraction module, the current output result of the feature extraction module is obtained based on the first hidden state of each feature in continuous features in a feature sequence to be identified, which is obtained by each LSTM unit, and the identification result is obtained based on the current output result of the feature extraction module. Through the method and the device, the relevance of continuous features in the feature sequence to be recognized is considered in the current output result of the feature extraction module, and the current output result of the feature extraction module is more reasonable and accurate.
In an exemplary embodiment of the present disclosure, the recognition method based on the recognition model further includes determining a previous first hidden state of the feature.
In one example, the description continues with an example of having a ninety-nine frame speech segment. And if the feature is the feature of the third frame of voice, the previous first hidden state of the feature is the first hidden state corresponding to the feature of the second frame of voice.
Fig. 2 shows a flowchart for obtaining a first hidden state corresponding to a feature in a recognition method based on a recognition model according to an embodiment of the present disclosure.
The method for obtaining the first hidden state corresponding to the features by inputting the features to the LSTM units sequentially and respectively may include the following steps.
In step S21, based on the feature, a first input gate signal, a first forgetting gate signal, and a first original hidden state corresponding to the feature are determined.
In one example, a previous output of the feature is determined. And if the feature is the feature of the third frame of voice, the last output result of the feature is the output result corresponding to the feature of the second frame of voice.
The first input gate signal, the first forgetting gate signal, the first output gate signal and the first original hidden state corresponding to the characteristic can be obtained based on the characteristic and the previous output result of the characteristic.
Let the feature corresponding to the current time step be x_T, and let the output of the last time step, i.e. the previous output result of the feature, be h(T-1). The first input gate signal i_T, the first forgetting gate signal f_T, the first output gate signal o_T and the first original hidden state c̃_T corresponding to the feature can be calculated in the following manner:
i_T = σ(W_i · [h(T-1), x_T] + b_i)
f_T = σ(W_f · [h(T-1), x_T] + b_f)
o_T = σ(W_o · [h(T-1), x_T] + b_o)
c̃_T = tanh(W_c · [h(T-1), x_T] + b_c)
where σ(·) is the sigmoid function, tanh(·) is the hyperbolic tangent function, [h(T-1), x_T] denotes the concatenation of the previous output result and the feature, and W and b are trainable parameters.
In the application process, before the first input gate signal i_T, the first forgetting gate signal f_T, the first output gate signal o_T and the first original hidden state c̃_T corresponding to the feature are obtained through the LSTM unit, the feature extraction module may also be initialized.
In one example, a preset number of LSTM units may be provided in the feature extraction module; for example, three LSTM units may be provided, where each LSTM unit may have 128 neurons.
In the present disclosure, the number of LSTM units provided in the feature extraction module is not specifically limited, nor is the number of neurons in each LSTM unit specifically limited. In application, the adjustment can be carried out according to actual conditions.
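As a rough illustration of such an initialization, the sketch below allocates one set of gate parameters per LSTM unit plus the module-level gate parameters introduced later in this description; the concatenated-input parameterisation W·[h(T-1), x_T] + b, the separate (non-shared) per-unit weights and the feature dimension are assumptions rather than details stated in the disclosure.

import numpy as np

def init_params(feat_dim: int = 40, hidden: int = 128, n_units: int = 3, seed: int = 0) -> dict:
    """Allocate trainable W and b for three 128-neuron LSTM units and for the
    second (module-level) input/forgetting/output gates."""
    rng = np.random.RandomState(seed)
    def gate(in_dim, out_dim):
        return {"W": rng.randn(out_dim, in_dim) * 0.01, "b": np.zeros(out_dim)}
    unit_in = hidden + feat_dim                        # [h(T-1), x_T]
    module_in = n_units * (hidden + feat_dim)          # [H(T-1), X_T]
    module_out = n_units * hidden                      # dimension of C(T) and H(T)
    return {
        "units": [{g: gate(unit_in, hidden) for g in ("i", "f", "o", "c")} for _ in range(n_units)],
        "module": {g: gate(module_in, module_out) for g in ("I", "F", "O")},
    }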
In step S22, a first hidden state corresponding to the feature is obtained based on the first input gate signal, the first forgetting gate signal, the first original hidden state, and the previous first hidden state of the feature.
In one example, let the first hidden state of the last time step, i.e. the previous first hidden state of the feature, be c(T-1). In the current time step, the first hidden state c(T) corresponding to the feature can be determined by:
c(T) = f_T ⊙ c(T-1) + i_T ⊙ c̃_T
where ⊙ represents a corresponding-position element multiplication operation between vectors.
It should be noted that the first hidden state c(T) corresponding to the feature in the current time step may serve as the previous first hidden state c(T-1) for the feature in the next time step.
In an embodiment, the current output result of the feature may further be obtained based on the first output gate signal o_T and the first hidden state c(T) corresponding to the feature. The current output result h(T) of the feature can be determined by:
h(T) = o_T ⊙ c(T)
It should be noted that the current output result h(T) corresponding to the feature in the current time step may serve as the previous output result h(T-1) of the feature in the next time step.
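Putting the above together, a minimal numpy sketch of a single LSTM-unit step is given below. The concatenated-input gate parameterisation follows the reconstructed formulas above and is an assumption; note also that the output here multiplies o_T directly with c(T), whereas a textbook LSTM would apply tanh to the cell state first.

import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_unit_step(x_t: np.ndarray, h_prev: np.ndarray, c_prev: np.ndarray, p: dict):
    """One step of a single LSTM unit; p holds gate parameters "i", "f", "o", "c"
    as produced by the init_params sketch above."""
    z = np.concatenate([h_prev, x_t])                 # [h(T-1), x_T]
    i_t = sigmoid(p["i"]["W"] @ z + p["i"]["b"])      # first input gate signal
    f_t = sigmoid(p["f"]["W"] @ z + p["f"]["b"])      # first forgetting gate signal
    o_t = sigmoid(p["o"]["W"] @ z + p["o"]["b"])      # first output gate signal
    c_tilde = np.tanh(p["c"]["W"] @ z + p["c"]["b"])  # first original hidden state
    c_t = f_t * c_prev + i_t * c_tilde                # first hidden state c(T)
    h_t = o_t * c_t                                   # current output result of the feature (no tanh, per the reconstruction above)
    return h_t, c_t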
Fig. 3 shows a flowchart for obtaining a current output result of the feature extraction module in the recognition method based on the recognition model according to the embodiment of the present disclosure.
In an exemplary embodiment of the disclosure, as shown in fig. 3, obtaining a current output result of the feature extraction module based on the plurality of features and the first hidden states corresponding to the plurality of features respectively, and the previous output result of the feature extraction module may include the following steps.
In step S31, based on the plurality of features, the current input features of the feature extraction module are obtained.
In one example, based on the plurality of features, the current input feature of the feature extraction module may be determined by:
X_T = [x_1, x_2, ..., x_n]
where X_T is the current input feature of the feature extraction module, and x_1, ..., x_n are the plurality of features.
By the method, a plurality of adjacent features in the feature sequence to be recognized can be spliced to obtain the current input features of the feature extraction module considering the correlation of the plurality of adjacent features, so that the current output result of the feature extraction module obtained based on the current input features of the feature extraction module is more reasonable and accurate.
In step S32, the hidden state of the feature extraction module is obtained based on the current input feature of the feature extraction module, the last output result of the feature extraction module, and the first hidden states corresponding to the plurality of features.
In step S33, the current output result of the feature extraction module is obtained based on the hidden state of the feature extraction module.
In an exemplary embodiment of the present disclosure, the recognition method based on the recognition model further includes: and determining the previous hidden state of the feature extraction module.
In one example, the description continues with the example of the speech segment with ninety-nine frames. Based on the correlation of the features of the speech of the fourth frame to the sixth frame, the current input feature of the feature extraction module and a hidden state A of the feature extraction module can be obtained. Based on the correlation of the features of the speech of the first frame to the third frame, the current input feature of the feature extraction module and a hidden state B of the feature extraction module can be obtained. Here, the hidden state B may be understood as the previous hidden state of the feature extraction module with respect to the hidden state A.
Fig. 4 shows a flowchart for obtaining a hidden state of a feature extraction module in a recognition method based on a recognition model according to an embodiment of the present disclosure.
As shown in fig. 4, obtaining the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the previous output result of the feature extraction module, and the first hidden states corresponding to the plurality of features respectively may include the following steps.
In step S41, a second input gate signal and a second forgetting gate signal corresponding to the feature extraction module are obtained based on the current input features of the feature extraction module and the previous output result of the feature extraction module.
Let the current input feature of the feature extraction module be X_T, and let the previous output result of the feature extraction module be H(T-1). The second input gate signal I_T and the second forgetting gate signal F_T corresponding to the feature extraction module can be calculated in the following manner:
I_T = σ(W_I · [H(T-1), X_T] + b_I)
F_T = σ(W_F · [H(T-1), X_T] + b_F)
where σ is the sigmoid function, and W and b are trainable parameters.
In step S42, the hidden state of the feature extraction module is obtained based on the first hidden state, the second input gate signal, the second forgetting gate signal, and the previous hidden state of the feature extraction module, which correspond to the plurality of features, respectively.
In an example, the original hidden state of the feature extraction module may be obtained based on first hidden states corresponding to the plurality of features respectively.
The original hidden state of the feature extraction module is determined by utilizing the first hidden states corresponding to the adjacent multiple features, so that the original hidden state retains the information of the first hidden states of the adjacent multiple features, and the current output result of the feature extraction module covers the correlation between the adjacent features, so that the current output result of the feature extraction module is more reasonable and accurate.
In an exemplary embodiment of the present disclosure, the original hidden state of the feature extraction module is obtained based on the first hidden states corresponding to the plurality of features, and the method may be implemented as follows:
C̃(T) = [c_1(T), c_2(T), ..., c_n(T)]
where C̃(T) is the original hidden state of the feature extraction module, and c_1(T), ..., c_n(T) are the first hidden states respectively corresponding to the plurality of features.
In an exemplary embodiment of the present disclosure, the hidden state of the feature extraction module is obtained based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal, and the previous hidden state of the feature extraction module, and may be implemented by:
C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)
where C(T) is the hidden state of the feature extraction module, C̃(T) is the original hidden state of the feature extraction module, C(T-1) is the previous hidden state of the feature extraction module, I_T is the second input gate signal, F_T is the second forgetting gate signal, and ⊙ represents multiplication of co-located elements between vectors.
It should be noted that the hidden state C(T) of the feature extraction module in the current time step may serve as the previous hidden state C(T-1) of the feature extraction module in the next time step.
In an exemplary embodiment of the present disclosure, the recognition method based on the recognition model further includes: and obtaining a second output gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module.
Continuing with the current input feature of the feature extraction module denoted X_T and the previous output result of the feature extraction module denoted H(T-1) as an example, the second output gate signal O_T corresponding to the feature extraction module can be determined in the following manner:
O_T = σ(W_O · [H(T-1), X_T] + b_O)
where σ is the sigmoid function, and W and b are trainable parameters.
In an example, the current output result of the feature extraction module may be obtained based on the second output gate signal and the hidden state of the feature extraction module.
In an exemplary embodiment of the present disclosure, the current output result of the feature extraction module is obtained based on the second output gate signal and the hidden state of the feature extraction module, and may be determined by:
H(T) = O_T ⊙ C(T)
where H(T) is the current output result of the feature extraction module, O_T is the second output gate signal, C(T) is the hidden state of the feature extraction module, and ⊙ represents multiplication of co-located elements between vectors.
It should be noted that the current output result H(T) of the feature extraction module in the current time step may serve as the previous output result H(T-1) of the feature extraction module in the next time step.
Through the embodiment, the correlation between the adjacent features is considered in the current output result of the feature extraction module, so that the current output result of the feature extraction module is more reasonable and accurate. Furthermore, the recognition result of the feature sequence to be recognized, which is obtained based on the current output result of the feature extraction module, is more reasonable and has reference significance.
In an exemplary embodiment of the present disclosure, the recognition method based on the recognition model further includes the steps of:
determining a preset output result of the feature extraction module; and
if the current output result of the feature extraction module is the output result of the initial time step, taking the preset output result of the feature extraction module as the previous output result of the feature extraction module.
Because the last time step does not exist in the initial time step, when the current output result of the feature extraction module is the output result of the initial time step, the pseudo output setting can be carried out on the previous output result of the feature extraction module, namely the previous output result of the feature extraction module is directly assigned. In the application process, the previous output result of the feature extraction module may be assigned to 0 or 1.
The assignment processing of the previous output result of the feature extraction module can be adjusted according to the actual situation, and the assignment situation is not specifically limited in the present disclosure.
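To make these steps concrete, the sketch below performs one step of the feature extraction module for a group of adjacent features, using the splicing, gate and state-update formulas reconstructed above; the zero vectors passed for H(T-1) and C(T-1) at the initial time step correspond to the pseudo output assignment just described (assigning 0), and the parameter layout matches the earlier init_params sketch.

import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def feature_extraction_module_step(x_group, c_group, H_prev, C_prev, pm: dict):
    """One step of the feature extraction module over a group of adjacent features.

    x_group: features x_1..x_n of the group (e.g. three frames of speech)
    c_group: first hidden states c_1(T)..c_n(T) produced by the LSTM units
    H_prev : previous output result H(T-1); zeros at the initial time step
    C_prev : previous hidden state C(T-1); zeros at the initial time step
    pm     : module-level gate parameters "I", "F", "O"
    """
    X_T = np.concatenate(x_group)                     # current input feature (first splicing operation unit)
    C_tilde = np.concatenate(c_group)                 # original hidden state (second splicing operation unit)
    z = np.concatenate([H_prev, X_T])
    I_T = sigmoid(pm["I"]["W"] @ z + pm["I"]["b"])    # second input gate signal
    F_T = sigmoid(pm["F"]["W"] @ z + pm["F"]["b"])    # second forgetting gate signal
    O_T = sigmoid(pm["O"]["W"] @ z + pm["O"]["b"])    # second output gate signal
    C_T = F_T * C_prev + I_T * C_tilde                # hidden state of the feature extraction module
    H_T = O_T * C_T                                   # current output result of the feature extraction module
    return H_T, C_T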
Fig. 5 shows a flowchart of an application of a recognition method based on a recognition model according to an embodiment of the present disclosure.
In an example, as shown in fig. 5, the feature sequence to be recognized comes from a speech segment with ninety-nine frames. Assume that the feature extraction module comprises three LSTM units, and each LSTM unit has 128 neurons.
The features of the sixth frame speech to the eighth frame speech in the feature sequence to be recognized are now sequentially input to the LSTM units. Corresponding to fig. 5, input 1 may be the feature x_1 of the sixth frame speech, input 2 may be the feature x_2 of the seventh frame speech, and input 3 may be the feature x_3 of the eighth frame speech.
The previous output result of the feature of each frame of speech, i.e. h(T-1), is determined.
Based on the previous output result h(T-1) and the feature x_T (x_1, x_2, x_3) of each feature, the first input gate signal i_T (i_1, i_2, i_3), the first forgetting gate signal f_T (f_1, f_2, f_3), the first output gate signal o_T (o_1, o_2, o_3) and the first original hidden state c̃_T (c̃_1, c̃_2, c̃_3) corresponding to each feature are obtained by calculation.
Based on the determined previous first hidden state c(T-1) of each feature, the first input gate signal i_T and the first forgetting gate signal f_T, the first hidden states c(T) related to the features of the sixth frame to the eighth frame speech are obtained.
Based on the features (x_1, x_2, x_3) of the multiple frames of speech, the current input feature X_T of the feature extraction module can be obtained. At this time, the current input feature X_T of the feature extraction module takes the correlations between the features of the multiple frames of speech into account. Therefore, the current output result of the feature extraction module obtained based on the current input feature of the feature extraction module is more reasonable and accurate.
In one example, the features (x_1, x_2, x_3) of the multiple frames of speech can be passed through the first splicing operation unit to obtain the current input feature X_T of the feature extraction module. Obtaining the current input feature X_T through the first splicing operation unit can be implemented as follows:
X_T = [x_1, x_2, x_3]
first hidden state based on multiple featuresc(T) Obtaining the original hidden state of the feature extraction module
Figure 745996DEST_PATH_IMAGE005
In an example, a first hidden state of a plurality of featuresc(T) The original hidden state of the feature extraction module can be obtained through the second splicing operation unit
Figure 552278DEST_PATH_IMAGE005
. Wherein the original hidden state of the feature extraction module is obtained through the second splicing operation unit
Figure 461328DEST_PATH_IMAGE005
The method can be realized by the following steps:
Figure 203019DEST_PATH_IMAGE055
further, the previous output result of the feature extraction module, i.e., the result of the feature extraction module is determinedH(T-1). Last output result based on characteristic extraction moduleH(T-1) current input features of the feature extraction moduleX T And obtaining a second input gate signal corresponding to the feature extraction module through calculationI T Second forgetting gate signalF T And a second output gate signalO T
Wherein, the last output result based on the characteristic extraction moduleH(T-1) current input features of the feature extraction moduleX T The second input gate signal corresponding to the feature extraction module can be obtained through the second activation operation unitI T (ii) a The second input gate signal corresponding to the feature extraction module can be obtained through the first activation arithmetic unitF T (ii) a The second input gate signal corresponding to the feature extraction module can be obtained through the third activation operation unitO T
Specifically, the second input gate signal I_T, the second forgetting gate signal F_T and the second output gate signal O_T corresponding to the feature extraction module may be determined in the following manner, respectively:

I_T = σ(W_I · [H(T-1), X_T] + b_I)

F_T = σ(W_F · [H(T-1), X_T] + b_F)

O_T = σ(W_O · [H(T-1), X_T] + b_O)

wherein σ is a sigmoid function, and W_I, W_F, W_O and b_I, b_F, b_O are trainable parameters.
The previous hidden state C(T-1) of the feature extraction module is determined.
The previous hidden state C(T-1) of the feature extraction module and the second forgetting gate signal F_T are processed by a multiplication operation unit; the original hidden state C̃(T) of the feature extraction module and the second input gate signal I_T are processed by another multiplication operation unit; and the two results are then processed by an addition operation unit to obtain the hidden state C(T) of the feature extraction module.

The multiplication operation unit can be a unit for multiplying the same-position elements between vectors.

In one example, the hidden state C(T) of the feature extraction module can be determined by:

C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)
Further, based on the hidden state C(T) of the feature extraction module and the second output gate signal O_T, the current output result H(T) of the feature extraction module can be obtained.

In one example, the current output result H(T) of the feature extraction module can be determined by:

H(T) = O_T ⊙ C(T)

wherein ⊙ can be understood as a multiplication operation unit, i.e. a unit that multiplies the same-position elements between vectors.
By the above method, the current output result of the feature extraction module is obtained while taking the direct correlation among the sixth to eighth frames of speech into account. The current output result of the feature extraction module is therefore more reasonable and of greater reference value.
Further, based on the same idea, a segment with ninety-nine frames of speech can obtain thirty-three current output results of the feature extraction module through the above-mentioned embodiment. The speech segment is then recognized based on the thirty-three current output results to obtain a recognition result. Because the correlation of adjacent features is considered in the current output results, the recognition result is more reasonable and of greater reference value.
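To illustrate how a whole segment could be processed, the sketch below sweeps the feature sequence in groups of three frames and collects the current output results; the window size, the stride and the reuse of the lstm_unit_step and feature_extraction_module_step sketches above are assumptions made only for illustration.

import numpy as np

def process_segment(frames, unit_params, module_params, h0, c0, H0, C0, group=3):
    # frames: list of per-frame feature vectors of the segment to be recognized.
    outputs = []
    H_prev, C_prev = H0, C0
    for start in range(0, len(frames) - group + 1, group):
        h_prev, c_prev = h0, c0
        first_hidden_states = []
        for x in frames[start:start + group]:                # per-frame LSTM units
            h_prev, c_prev = lstm_unit_step(x, h_prev, c_prev, unit_params)
            first_hidden_states.append(c_prev)               # first hidden states c(T)
        X_T = np.concatenate(frames[start:start + group])    # first splicing operation
        C_tilde = np.concatenate(first_hidden_states)        # second splicing operation
        H_prev, C_prev = feature_extraction_module_step(X_T, H_prev, C_prev,
                                                        C_tilde, module_params)
        outputs.append(H_prev)
    return outputs  # current output results used for the recognition step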
Based on the same inventive concept, a second aspect of the present disclosure provides a model training method of a recognition model.
Fig. 6 shows a flowchart of a model training method of a recognition model according to an embodiment of the present disclosure.
In an exemplary embodiment of the disclosure, as shown in fig. 6, the recognition model is used for recognition according to the recognition method described in the first aspect or any one of the embodiments of the first aspect. The model training method of the recognition model comprises the following steps.
In step S51, a training set is acquired. The training set comprises a plurality of training samples and standard identifications corresponding to the training samples.
In step S52, the recognition result of the training sample is obtained by the recognition model.
In step S53, the recognition result is supervised by the standard identifier, and the parameters of the recognition model are adjusted.
The parameters in the recognition model may include the trainable parameters W and b.
In the application process, a gradient descent optimizer can be selected to optimize and adjust the model parameters using a back-propagation algorithm. For example, the model parameters can be optimized and adjusted by the following update rule:

θ_t = θ_{t-1} - η ∇_θ L(θ_{t-1})

wherein θ_t and θ_{t-1} are the network parameters at the current time step and after training at the previous time step, respectively, and η is the learning rate of training, which may here be taken as 0.001.
When the difference between the output result of the trained model and the standard identification is within a certain range, that is, when the loss function no longer decreases or decreases below a preset value, training of the model is finished.
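As a hedged illustration of this training procedure, the sketch below implements the plain gradient-descent update described above behind a framework-agnostic interface; the callables forward, loss_fn and gradients and the stopping rule are placeholders, since the text only fixes the update rule, the learning rate of 0.001 and the supervision by the standard identifications.

def train(model_params, training_set, forward, loss_fn, gradients,
          lr=0.001, loss_threshold=1e-3, max_steps=10000):
    # model_params : dict of trainable parameters (e.g. the W and b above)
    # training_set : list of (training sample, standard identification) pairs
    prev_loss = float("inf")
    for _ in range(max_steps):
        # Supervise the recognition results with the standard identifications.
        batch_loss = sum(loss_fn(forward(model_params, x), y)
                         for x, y in training_set) / len(training_set)
        # Stop when the loss no longer decreases or drops below a preset value.
        if batch_loss <= loss_threshold or batch_loss >= prev_loss:
            break
        prev_loss = batch_loss
        grads = gradients(model_params, training_set)
        for name in model_params:
            # theta_t = theta_{t-1} - eta * gradient (the update rule above).
            model_params[name] = model_params[name] - lr * grads[name]
    return model_params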
Based on the same inventive concept, a third aspect of the present disclosure provides a recognition apparatus based on a recognition model, wherein the recognition model includes a feature extraction module. The feature extraction module includes a plurality of LSTM units.
Fig. 7 shows a schematic diagram of a recognition apparatus based on a recognition model according to an embodiment of the present disclosure.
Based on the same inventive concept, as shown in fig. 7, the recognition apparatus based on the recognition model provided by the embodiment of the present disclosure includes a module 110 for obtaining a feature sequence to be recognized, a module 120 for obtaining a first hidden state, a processing module 130, and a recognition module 140. Each module will be described separately below.
The module 110 for obtaining a feature sequence to be recognized is configured to obtain a feature sequence to be recognized, where the feature sequence to be recognized includes consecutive features.
The module 120 for obtaining a first hidden state is configured to sequentially and respectively input the plurality of features to the plurality of LSTM units, and obtain a first hidden state corresponding to the feature through each LSTM unit.
The processing module 130 is configured to obtain a current output result of the feature extraction module based on the plurality of features, the first hidden states corresponding to the plurality of features, and the previous output result of the feature extraction module.
The recognition module 140 is configured to obtain a recognition result based on the current output result of the feature extraction module.
In an exemplary embodiment of the present disclosure, the recognition apparatus further includes a determination module.
The determination module is configured to determine a previous first hidden state of the feature.
The module 120 for obtaining a first hidden state is configured to: determine a first input gate signal, a first forgetting gate signal and a first original hidden state corresponding to the feature based on the feature; and obtain a first hidden state corresponding to the feature based on the first input gate signal, the first forgetting gate signal, the first original hidden state and the previous first hidden state of the feature.
In an exemplary embodiment of the disclosure, the determination module is further configured to determine the previous output result of the feature.
The module 120 for obtaining a first hidden state is configured to obtain the first input gate signal, the first forgetting gate signal and the first original hidden state corresponding to the feature based on the feature and the previous output result of the feature.
In an exemplary embodiment of the disclosure, the processing module 130 is further configured to: obtain a first output gate signal corresponding to the feature based on the feature and the previous output result of the feature; and obtain a current output result of the feature based on the first output gate signal and the first hidden state corresponding to the feature.
In an exemplary embodiment of the disclosure, the processing module 130 is configured to: obtaining the current input features of the feature extraction module based on the plurality of features; obtaining the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module and the first hidden states corresponding to the plurality of features respectively; and obtaining the current output result of the feature extraction module based on the hidden state of the feature extraction module.
In an exemplary embodiment of the present disclosure, the processing module 130 is configured to determine the current input feature of the feature extraction module based on the following formula:

X_T = [x_1, x_2, …, x_N]

wherein X_T is the current input feature of the feature extraction module, and x_1, …, x_N are the features.
In an exemplary embodiment of the disclosure, the determining module is further configured to: determining the previous hidden state of the feature extraction module; the processing module 130 is configured to: obtaining a second input gate signal and a second forgetting gate signal corresponding to the feature extraction module based on the current input features of the feature extraction module and the last output result of the feature extraction module; and obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module corresponding to the plurality of features respectively.
In an exemplary embodiment of the disclosure, the processing module 130 is configured to: obtaining an original hidden state of the feature extraction module based on first hidden states respectively corresponding to the plurality of features; and obtaining the hidden state of the feature extraction module based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module.
In an exemplary embodiment of the present disclosure, the processing module 130 is configured to determine the original hidden state of the feature extraction module based on the following formula:

C̃(T) = [c_1(T), c_2(T), …, c_N(T)]

wherein C̃(T) is the original hidden state of the feature extraction module, and c_1(T), …, c_N(T) are the first hidden states corresponding to each of the features.
In an exemplary embodiment of the present disclosure, the processing module 130 is configured to determine the hidden state of the feature extraction module based on the following formula:

C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)

wherein C(T) is the hidden state of the feature extraction module; C̃(T) is the original hidden state of the feature extraction module; C(T-1) is the previous hidden state of the feature extraction module; I_T is the second input gate signal; F_T is the second forgetting gate signal; and ⊙ represents multiplication of same-position elements between vectors.
In an exemplary embodiment of the disclosure, the processing module 130 is configured to: obtaining a second output gate signal corresponding to the feature extraction module based on the current input features of the feature extraction module and the last output result of the feature extraction module; and obtaining the current output result of the feature extraction module based on the second output gate signal and the hidden state of the feature extraction module.
In an exemplary embodiment of the present disclosure, the processing module 130 is configured to determine the current output result of the feature extraction module based on the following formula:

H(T) = O_T ⊙ C(T)

wherein H(T) is the current output result of the feature extraction module; O_T is the second output gate signal; C(T) is the hidden state of the feature extraction module; and ⊙ represents multiplication of same-position elements between vectors.
In an exemplary embodiment of the present disclosure, the recognition apparatus further includes a determining module, where the determining module is configured to: determine a preset output result of the feature extraction module; and, if the current output result of the feature extraction module is the output result of the initial time step, take the preset output result of the feature extraction module as the previous output result of the feature extraction module.
Based on the same inventive concept, a fourth aspect of the present disclosure provides a model training apparatus of a recognition model. The recognition model is used for recognition according to the recognition method of the first aspect or any embodiment of the first aspect.
Fig. 8 is a schematic diagram illustrating a recognition model-based model training apparatus according to an embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, as shown in fig. 8, the training apparatus includes a training set obtaining module 210, a recognition result obtaining module 220, and a parameter adjusting module 230. Each module is described separately below.
The training set obtaining module 210 is configured to acquire a training set. The training set includes a plurality of training samples and standard identifications corresponding to the training samples.
The recognition result obtaining module 220 is configured to obtain a recognition result of the training sample through the recognition model.
The parameter adjusting module 230 is configured to supervise the recognition result through the standard identification and adjust the parameters of the recognition model.
Fig. 9 shows a block diagram of an electronic device provided by an embodiment of the present disclosure.
As shown in fig. 9, an embodiment of the present disclosure provides an electronic device 30, where the electronic device 30 includes a memory 310, a processor 320, and an Input/Output (I/O) interface 330. The memory 310 is used for storing instructions. The processor 320 is used for calling the instructions stored in the memory 310 to execute the recognition model-based recognition method or the model training method of the recognition model of the embodiments of the present disclosure. The processor 320 is connected to the memory 310 and the I/O interface 330, respectively, for example, via a bus system and/or other connection mechanism (not shown). The memory 310 may be used to store programs and data, including a program of the recognition model-based recognition method or the model training method of the recognition model involved in the embodiments of the present disclosure, and the processor 320 executes various functional applications and data processing of the electronic device 30 by executing the programs stored in the memory 310.
In the embodiment of the present disclosure, the processor 320 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), and the processor 320 may be one or a combination of several Central Processing Units (CPUs) or other processing units with data processing capability and/or instruction execution capability.
Memory 310 in embodiments of the present disclosure may comprise one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile Memory may include, for example, a Random Access Memory (RAM), a cache Memory (cache), and/or the like. The nonvolatile Memory may include, for example, a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like.
In the disclosed embodiment, the I/O interface 330 may be used to receive input instructions (e.g., numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device 30, etc.), and may also output various information (e.g., images or sounds, etc.) to the outside. The I/O interface 330 in embodiments of the present disclosure may include one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a mouse, a joystick, a trackball, a microphone, a speaker, a touch panel, and the like.
In some embodiments, the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present disclosure can be accomplished with standard programming techniques with rule-based logic or other logic to accomplish the various method steps. It should also be noted that the words "means" and "module," as used herein and in the claims, are intended to encompass implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, which is executable by a computer processor for performing any or all of the described steps, operations, or procedures.
The foregoing description of the implementations of the disclosure has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the disclosure and its practical application to enable one skilled in the art to utilize the disclosure in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (28)

1. A recognition method based on a recognition model, wherein the recognition model comprises a feature extraction module, the feature extraction module comprises a plurality of LSTM units, and the recognition method comprises:
acquiring a feature sequence to be recognized, wherein the feature sequence to be recognized comprises continuous features, and the feature sequence to be recognized comprises one or more of the following sequences with time sequence: a voice clip, video clip, or text;
inputting a plurality of features to the plurality of LSTM units in sequence, and obtaining a first hidden state corresponding to the features through each LSTM unit;
obtaining a current output result of the feature extraction module based on the plurality of features, first hidden states corresponding to the plurality of features respectively, and a previous output result of the feature extraction module;
obtaining an identification result based on the current output result of the feature extraction module;
wherein, the obtaining of the current output result of the feature extraction module based on the plurality of features, the first hidden states corresponding to the plurality of features, and the previous output result of the feature extraction module includes:
obtaining the current input features of the feature extraction module based on the plurality of features;
obtaining the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module and the first hidden states corresponding to the plurality of features respectively;
and obtaining the current output result of the feature extraction module based on the hidden state of the feature extraction module.
2. The recognition model-based recognition method of claim 1, further comprising:
determining a previous first hidden state of the feature;
the sequentially and respectively inputting the plurality of features into the plurality of LSTM units, and obtaining a first hidden state corresponding to the features through each of the LSTM units, includes:
determining a first input gate signal, a first forgetting gate signal and a first original hidden state corresponding to the characteristics based on the characteristics;
and obtaining a first hidden state corresponding to the feature based on the first input gate signal, the first forgetting gate signal, the first original hidden state and the previous first hidden state of the feature.
3. The recognition model-based recognition method of claim 2, further comprising:
determining a previous output result of the feature;
the determining, based on the feature, a first input gate signal, a first forgetting gate signal, and a first original hidden state corresponding to the feature includes:
and obtaining the first input gate signal, the first forgetting gate signal and the first original hidden state corresponding to the characteristic based on the characteristic and the previous output result of the characteristic.
4. The recognition model-based recognition method of claim 3, further comprising:
obtaining a first output gate signal corresponding to the characteristic based on the characteristic and a previous output result of the characteristic;
and obtaining a current output result of the characteristic based on the first output gate signal and a first hidden state corresponding to the characteristic.
5. The recognition model-based recognition method of claim 1, wherein the current input feature of the feature extraction module is obtained based on the plurality of features and is determined by the following formula:

X_T = [x_1, x_2, …, x_N]

wherein X_T is the current input feature of the feature extraction module, and x_1, …, x_N are the features.
6. The recognition model-based recognition method of claim 1, further comprising:
determining a previous hidden state of the feature extraction module;
the obtaining of the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module, and the first hidden states corresponding to the plurality of features respectively includes:
obtaining a second input gate signal and a second forgetting gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module;
and obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module corresponding to the plurality of features respectively.
7. The recognition method based on recognition model according to claim 6, wherein the obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module corresponding to the plurality of features respectively comprises:
obtaining an original hidden state of the feature extraction module based on first hidden states respectively corresponding to the plurality of features;
and obtaining the hidden state of the feature extraction module based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module.
8. The recognition method based on the recognition model of claim 7, wherein the original hidden state of the feature extraction module is obtained based on the first hidden states respectively corresponding to the plurality of features, and is implemented by the following formula:

C̃(T) = [c_1(T), c_2(T), …, c_N(T)]

wherein C̃(T) is the original hidden state of the feature extraction module, and c_1(T), …, c_N(T) are the first hidden states corresponding to each of the features.
9. The recognition model-based recognition method of claim 7, wherein the hidden state of the feature extraction module is obtained based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module, and is realized by the following formula:

C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)

wherein C(T) is the hidden state of the feature extraction module; C̃(T) is the original hidden state of the feature extraction module; C(T-1) is the previous hidden state of the feature extraction module; I_T is the second input gate signal; F_T is the second forgetting gate signal; and ⊙ represents multiplication of same-position elements between vectors.
10. The recognition model-based recognition method of claim 1, further comprising:
obtaining a second output gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module;
obtaining a current output result of the feature extraction module based on the hidden state of the feature extraction module, including:
and obtaining the current output result of the feature extraction module based on the second output gate signal and the hidden state of the feature extraction module.
11. The recognition model-based recognition method of claim 10, wherein the current output result of the feature extraction module is obtained based on the second output gate signal and the hidden state of the feature extraction module, and is implemented by the following formula:

H(T) = O_T ⊙ C(T)

wherein H(T) is the current output result of the feature extraction module; O_T is the second output gate signal; C(T) is the hidden state of the feature extraction module; and ⊙ represents multiplication of same-position elements between vectors.
12. The recognition model-based recognition method of claim 1, wherein the method further comprises:
determining a preset output result of the feature extraction module;
and if the current output result of the feature extraction module is the output result of the initial time step, taking the preset output result of the feature extraction module as the previous output result of the feature extraction module.
13. A method for model training of a recognition model, wherein the recognition model is used for recognition by the recognition method according to any one of claims 1 to 12, the method comprising:
acquiring a training set, wherein the training set comprises a plurality of training samples and standard identifications corresponding to the training samples;
obtaining the recognition result of the training sample through the recognition model;
and monitoring the recognition result through the standard identification, and adjusting the parameters of the recognition model.
14. A recognition apparatus based on a recognition model, wherein the recognition model comprises a feature extraction module, the feature extraction module comprises a plurality of LSTM units, and the recognition apparatus comprises:
the device comprises a module for acquiring a feature sequence to be recognized, wherein the feature sequence to be recognized comprises continuous features, and the feature sequence to be recognized comprises one or more of the following sequences with time sequence: a voice clip, video clip, or text;
the acquisition module of a first hidden state is used for sequentially and respectively inputting a plurality of characteristics to the plurality of LSTM units and acquiring the first hidden state corresponding to the characteristics through each LSTM unit;
the processing module is used for obtaining a current output result of the feature extraction module based on the plurality of features, the first hidden states corresponding to the plurality of features respectively and the previous output result of the feature extraction module;
the identification module is used for obtaining an identification result based on the current output result of the feature extraction module;
wherein the processing module is configured to: obtaining the current input features of the feature extraction module based on the plurality of features; obtaining the hidden state of the feature extraction module based on the current input feature of the feature extraction module, the last output result of the feature extraction module and the first hidden states corresponding to the plurality of features respectively; and obtaining the current output result of the feature extraction module based on the hidden state of the feature extraction module.
15. The recognition model-based recognition apparatus of claim 14, further comprising a determination module configured to:
determining a previous first hidden state of the feature;
the obtain first hidden state module is configured to:
determining a first input gate signal, a first forgetting gate signal and a first original hidden state corresponding to the characteristics based on the characteristics;
and obtaining a first hidden state corresponding to the feature based on the first input gate signal, the first forgetting gate signal, the first original hidden state and the previous first hidden state of the feature.
16. A recognition model-based recognition apparatus according to claim 15, wherein the determination module is further configured to: determining a previous output result of the feature;
the obtain first hidden state module is configured to: and obtaining the first input gate signal, the first forgetting gate signal and the first original hidden state corresponding to the characteristic based on the characteristic and the previous output result of the characteristic.
17. A recognition model-based recognition apparatus according to claim 16, wherein the processing module is further configured to:
obtaining a first output gate signal corresponding to the characteristic based on the characteristic and a previous output result of the characteristic;
and obtaining a current output result of the characteristic based on the first output gate signal and a first hidden state corresponding to the characteristic.
18. A recognition model-based recognition apparatus according to claim 14, wherein the processing module is configured to determine the current input feature of the feature extraction module based on the following formula:

X_T = [x_1, x_2, …, x_N]

wherein X_T is the current input feature of the feature extraction module, and x_1, …, x_N are the features.
19. The recognition model-based recognition apparatus of claim 14, wherein the determination module is further configured to: determining a previous hidden state of the feature extraction module;
the processing module is used for: obtaining a second input gate signal and a second forgetting gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module;
and obtaining the hidden state of the feature extraction module based on the first hidden state, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module corresponding to the plurality of features respectively.
20. A recognition model-based recognition apparatus according to claim 19, wherein the processing module is configured to:
obtaining an original hidden state of the feature extraction module based on first hidden states respectively corresponding to the plurality of features;
and obtaining the hidden state of the feature extraction module based on the original hidden state of the feature extraction module, the second input gate signal, the second forgetting gate signal and the previous hidden state of the feature extraction module.
21. A recognition model-based recognition apparatus according to claim 20, wherein the processing module is configured to determine the original hidden state of the feature extraction module based on the following formula:

C̃(T) = [c_1(T), c_2(T), …, c_N(T)]

wherein C̃(T) is the original hidden state of the feature extraction module, and c_1(T), …, c_N(T) are the first hidden states corresponding to each of the features.
22. A recognition model-based recognition apparatus according to claim 20, wherein the processing module is configured to determine the hidden state of the feature extraction module based on the following formula:

C(T) = F_T ⊙ C(T-1) + I_T ⊙ C̃(T)

wherein C(T) is the hidden state of the feature extraction module; C̃(T) is the original hidden state of the feature extraction module; C(T-1) is the previous hidden state of the feature extraction module; I_T is the second input gate signal; F_T is the second forgetting gate signal; and ⊙ represents multiplication of same-position elements between vectors.
23. A recognition model-based recognition apparatus according to claim 14, wherein the processing module is configured to:
obtaining a second output gate signal corresponding to the feature extraction module based on the current input feature of the feature extraction module and the last output result of the feature extraction module;
and obtaining the current output result of the feature extraction module based on the second output gate signal and the hidden state of the feature extraction module.
24. A recognition model-based recognition apparatus according to claim 23, wherein the processing module is configured to determine the current output result of the feature extraction module based on the following formula:

H(T) = O_T ⊙ C(T)

wherein H(T) is the current output result of the feature extraction module; O_T is the second output gate signal; C(T) is the hidden state of the feature extraction module; and ⊙ represents multiplication of same-position elements between vectors.
25. The recognition model-based recognition apparatus according to claim 14, further comprising a determination module, wherein the determination module is configured to:
determining a preset output result of the feature extraction module;
and if the current output result of the feature extraction module is the output result of the initial time step, taking the preset output result of the feature extraction module as the previous output result of the feature extraction module.
26. A model training apparatus of a recognition model, the recognition model being used for recognition by the recognition method according to any one of claims 1 to 12, the training apparatus comprising:
the training set acquisition module is used for acquiring a training set, and the training set comprises a plurality of training samples and standard marks corresponding to the training samples;
the identification result obtaining module is used for obtaining the identification result of the training sample through the identification model;
and the parameter adjusting module is used for supervising the recognition result through the standard identifier and adjusting the parameters of the recognition model.
27. An electronic device, comprising:
a memory to store instructions; and
a processor for invoking the memory-stored instructions to perform a recognition model-based recognition method of any one of claims 1-12 or to perform a model training method of a recognition model of claim 13.
28. A computer-readable storage medium, wherein,
the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the recognition model-based recognition method of any one of claims 1-12 or perform the model training method of the recognition model of claim 13.
CN202010659647.0A 2020-07-10 2020-07-10 Recognition method based on recognition model, model training method and device Active CN111539495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010659647.0A CN111539495B (en) 2020-07-10 2020-07-10 Recognition method based on recognition model, model training method and device

Publications (2)

Publication Number Publication Date
CN111539495A CN111539495A (en) 2020-08-14
CN111539495B (en) 2020-11-10

Family

ID=71976449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659647.0A Active CN111539495B (en) 2020-07-10 2020-07-10 Recognition method based on recognition model, model training method and device

Country Status (1)

Country Link
CN (1) CN111539495B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011555B (en) * 2021-02-09 2023-01-31 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200151248A1 (en) * 2018-11-09 2020-05-14 Genesys Telecommunications Laboratories, Inc. System and method for model derivation for entity prediction
EP3674988A1 (en) * 2018-12-31 2020-07-01 Tata Consultancy Services Limited Method and system for prediction of correct discrete sensor data based on temporal uncertainty

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513591B (en) * 2015-12-21 2019-09-03 百度在线网络技术(北京)有限公司 The method and apparatus for carrying out speech recognition with LSTM Recognition with Recurrent Neural Network model
CN109918647A (en) * 2019-01-30 2019-06-21 中国科学院信息工程研究所 A kind of security fields name entity recognition method and neural network model
CN110335162A (en) * 2019-07-18 2019-10-15 电子科技大学 A kind of stock market quantization transaction system and algorithm based on deeply study
CN110781305B (en) * 2019-10-30 2023-06-06 北京小米智能科技有限公司 Text classification method and device based on classification model and model training method

Also Published As

Publication number Publication date
CN111539495A (en) 2020-08-14

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant