US10607602B2 - Speech recognition device and computer program - Google Patents
- Publication number
- US10607602B2 (application US15/575,512)
- Authority
- US
- United States
- Prior art keywords
- sequence
- state
- speech recognition
- probability
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/14—Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L15/142—Hidden Markov Models [HMMs]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L15/19—Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
- G10L15/193—Formal grammars, e.g. finite state automata, context free grammars or word networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Description
By modifying the right side of this equation in accordance with Bayes' theorem, we obtain
P(X1:T|W) ≅ P(X1:T|S1:T)P(S1:T|W). (3)
Here, S1:T represents a state sequence S1, . . . , ST of the HMM. The first term on the right side of Equation (3) is the output probability of the HMM. From Equations (1) to (3), the word sequence W̃ obtained as the result of speech recognition is given by
The probability P(xt|st) is calculated by a Gaussian Mixture Model (GMM).
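As a concrete illustration of the GMM output probability, the sketch below evaluates log P(xt|st) as a weighted sum of diagonal-covariance Gaussian densities. The mixture parameters here are made-up placeholders, not values from the embodiment.

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log P(x | s) under a diagonal-covariance Gaussian mixture.

    weights: (K,) mixture weights summing to 1
    means, variances: (K, D) per-component parameters
    """
    x = np.asarray(x, dtype=float)
    # Per-component log density of a diagonal Gaussian.
    log_det = np.sum(np.log(2.0 * np.pi * variances), axis=1)
    sq = np.sum((x - means) ** 2 / variances, axis=1)
    comp_ll = -0.5 * (log_det + sq)
    # Log-sum-exp over components, weighted by the mixture weights.
    a = np.log(weights) + comp_ll
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

# Toy 2-component mixture in 2 dimensions (hypothetical parameters).
w = np.array([0.6, 0.4])
mu = np.array([[0.0, 0.0], [3.0, 3.0]])
var = np.ones((2, 2))
ll = gmm_log_likelihood([0.1, -0.2], w, mu, var)
```

In a GMM-HMM system one such mixture is trained per HMM state, and the decoder queries it for every frame.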
- NPL 1: C. Weng, D. Yu, S. Watanabe, and B.-H. F. Juang, “Recurrent deep neural networks for robust speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 5532-5536.
In Equation (6), P(xt) is common to all HMM states and can therefore be ignored in the arg max operation. P(st) can be estimated by counting the occurrences of each state in aligned training data.
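A minimal sketch of estimating the state priors P(st) by counting state occurrences in frame-level alignments, as described above; the toy alignments and the additive smoothing constant are illustrative assumptions.

```python
from collections import Counter

def estimate_state_priors(alignments, smoothing=1.0):
    """Estimate P(s) from frame-level state alignments by counting.

    alignments: iterable of state-ID sequences (one per utterance).
    Additive smoothing avoids zero probability for rarely seen states.
    """
    counts = Counter()
    for seq in alignments:
        counts.update(seq)
    states = sorted(counts)
    total = sum(counts.values()) + smoothing * len(states)
    return {s: (counts[s] + smoothing) / total for s in states}

# Hypothetical alignments for two short utterances.
priors = estimate_state_priors([[0, 0, 1, 2], [1, 1, 2, 2]])
```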
Since P(X1:t|st) is not proportional to P(xt|st), it cannot be used in Equation (5). This is because the state st at time point t and the preceding observed sequence X1:t are strongly dependent. Although this score itself carries abundant information, it cannot be processed within the HMM framework.
In Equation (8), the numerator also appears in Equation (4) of the conventional method and can therefore be calculated in the conventional manner. The denominator is the language probability of the state sequence S1:T, which can be approximated by Equation (9) below; with this approximation, P(S1:T) can be calculated using an N-gram language model.
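Under an approximation of this kind, P(S1:T) factors into conditionals with limited history, so it can be scored like an ordinary N-gram language model over state IDs. A bigram sketch with made-up training sequences (the embodiment's actual N-gram order is not assumed here):

```python
import math
from collections import Counter

def train_state_bigram(sequences, smoothing=1.0):
    """Collect bigram and unigram counts over training state sequences."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for seq in sequences:
        vocab.update(seq)
        for prev, cur in zip(seq, seq[1:]):
            bigrams[(prev, cur)] += 1
            unigrams[prev] += 1
    return bigrams, unigrams, len(vocab), smoothing

def log_prob(model, seq):
    """Approximate log P(S_1:T) as a sum of smoothed bigram log terms."""
    bigrams, unigrams, v, k = model
    return sum(
        math.log((bigrams[(p, c)] + k) / (unigrams[p] + k * v))
        for p, c in zip(seq, seq[1:])
    )

# Made-up training state sequences; a real model uses frame-level alignments.
model = train_state_bigram([[0, 0, 1, 1, 2], [0, 1, 1, 2, 2]])
lp = log_prob(model, [0, 1, 2])
```

State transitions seen often in training (here 0→1→2) score higher than unseen ones, which is exactly the role P(S1:T) plays in the denominator of Equation (8).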
The first half of the upper expression holds exactly by Bayes' theorem; the approximation in the second half assumes that the state st does not depend on the future observed sequence X(t+1):T. In general, this assumption does not hold. If, however, the observed value xt sufficiently reflects the future observations, the approximation becomes reasonable. To that end, when this probability is learned, either a large feature vector formed by concatenating successive feature vectors, including one or more vectors from time points after the time point of interest (for example, the vector at the target time point together with its preceding and succeeding vectors), is used, or the labels appended to the observed sequence are shifted backward in time. In the present embodiment, both are applied: each input is the concatenation of the vector at the target time point with its preceding and succeeding vectors, and the labels are additionally shifted backward.
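The frame splicing and label shifting described above can be sketched as follows: each frame is replaced by the concatenation of itself with its neighboring frames, and the state labels are delayed by a fixed number of frames. The context width and shift amount used here are illustrative, not the embodiment's values.

```python
import numpy as np

def splice_and_shift(features, labels, context=1, shift=1):
    """Concatenate each frame with +/- `context` neighbors and delay labels.

    features: (T, D) array of frame-level feature vectors
    labels:   length-T state labels aligned to the original frames
    Returns spliced features of shape (T, (2*context+1)*D) and shifted labels.
    Edge frames are padded by repeating the first/last frame; the label
    shift repeats the first label, mirroring a delayed alignment.
    """
    T, D = features.shape
    padded = np.vstack([features[:1]] * context + [features] + [features[-1:]] * context)
    spliced = np.hstack([padded[i:i + T] for i in range(2 * context + 1)])
    shifted = [labels[0]] * shift + list(labels[:T - shift])
    return spliced, shifted

feats = np.arange(12, dtype=float).reshape(4, 3)  # T=4 frames, D=3 dims
spliced, shifted = splice_and_shift(feats, [0, 1, 2, 3], context=1, shift=1)
```

After this transform, the network predicting the label for frame t has already seen acoustic evidence from frames beyond t, which is what makes the approximation workable.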
In other words, the recognition score of each hypothesis is calculated from the value obtained by dividing the RNN output by P(S1:T). In Equation (12), the RNN output is obtained at each time point, while all other values can be calculated from prior training. The RNN output is used directly, so there is no need to forcibly convert a DNN output into the HMM output format as in the conventional DNN-HMM hybrid method. This method is referred to here as the direct decoding method.
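In the log domain, dividing the RNN output by P(S1:T) amounts to subtracting log probabilities frame by frame. A schematic sketch of scoring one hypothesis, with made-up RNN posteriors and state language-model terms:

```python
import math

def direct_decoding_score(rnn_log_posts, state_seq, state_lm_log_probs):
    """Score a hypothesis state sequence by dividing the RNN output by
    P(S_1:T), i.e. subtracting log probabilities in the log domain.

    rnn_log_posts: list of dicts, frame t -> {state: log RNN output}
    state_lm_log_probs: log P(s_t | history) per frame for this hypothesis
    """
    score = 0.0
    for t, s in enumerate(state_seq):
        score += rnn_log_posts[t][s] - state_lm_log_probs[t]
    return score

# Two frames, two states; all probability values are illustrative.
posts = [{0: math.log(0.7), 1: math.log(0.3)},
         {0: math.log(0.4), 1: math.log(0.6)}]
lm = [math.log(0.5), math.log(0.5)]
score = direct_decoding_score(posts, [0, 1], lm)
```

A real decoder accumulates this quantity along WFST arcs rather than over a fixed list, but the per-frame arithmetic is the same.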
Alternatively, the following approximation is also possible.
Various other methods of approximation may be possible.
TABLE 1
Architecture | Number of Hidden Layers | Number of Parameters | Word Error Rate (Conventional Methods) | Word Error Rate (The Present Embodiment)
---|---|---|---|---
DNN | 5 | 6M | 20.4 | — |
DNN | 5 | 13M | 18.7 | — |
DNN | 5 | 35M | 17.8 | — |
RNN | 3 | 6M | 18.8 | 18.2 |
RNN | 5 | 7M | 18.0 | 17.5 |
RNN | 5 | 35M | 17.5 | 17.1 |
- 30 word sequence
- 32 phoneme sequence
- 34 state sequence
- 36 observed sequence
- 70 DNN
- 72 input layer
- 74, 76 hidden layer
- 78 output layer
- 100 RNN
- 280 speech recognition device
- 282 input speech
- 284 text of speech recognition
- 300 A/D converter circuit
- 302 framing unit
- 304 feature extracting unit
- 306 feature storage unit
- 308 acoustic model
- 310 decoder
- 320 WFST based on S⁻¹HCLG
- 330 computer system
- 340 computer
- 354 hard disk
- 356 CPU
- 358 ROM
- 360 RAM
Claims (5)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-104336 | 2015-05-22 | ||
JP2015104336A JP6614639B2 (en) | 2015-05-22 | 2015-05-22 | Speech recognition apparatus and computer program |
PCT/JP2016/063818 WO2016190077A1 (en) | 2015-05-22 | 2016-05-10 | Speech recognition device and computer program |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180204566A1 (en) | 2018-07-19
US10607602B2 (en) | 2020-03-31
Family
ID=57393215
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/575,512 Active 2036-10-28 US10607602B2 (en) | 2015-05-22 | 2016-05-10 | Speech recognition device and computer program |
Country Status (5)
Country | Link |
---|---|
US (1) | US10607602B2 (en) |
EP (1) | EP3300075A4 (en) |
JP (1) | JP6614639B2 (en) |
CN (1) | CN107615376B (en) |
WO (1) | WO2016190077A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6614639B2 (en) | 2015-05-22 | 2019-12-04 | 国立研究開発法人情報通信研究機構 | Speech recognition apparatus and computer program |
JP6727607B2 (en) | 2016-06-09 | 2020-07-22 | 国立研究開発法人情報通信研究機構 | Speech recognition device and computer program |
KR20180080446A (en) * | 2017-01-04 | 2018-07-12 | 삼성전자주식회사 | Voice recognizing method and voice recognizing appratus |
JP6728083B2 (en) * | 2017-02-08 | 2020-07-22 | 日本電信電話株式会社 | Intermediate feature amount calculation device, acoustic model learning device, speech recognition device, intermediate feature amount calculation method, acoustic model learning method, speech recognition method, program |
US11024302B2 (en) * | 2017-03-14 | 2021-06-01 | Texas Instruments Incorporated | Quality feedback on user-recorded keywords for automatic speech recognition systems |
JP6699945B2 (en) * | 2017-04-17 | 2020-05-27 | 日本電信電話株式会社 | Acoustic model learning device, method and program |
DE112018007846B4 (en) * | 2018-08-24 | 2022-06-02 | Mitsubishi Electric Corporation | SPOKEN LANGUAGE SEPARATION EQUIPMENT, SPOKEN LANGUAGE SEPARATION METHOD, SPOKEN LANGUAGE SEPARATION PROGRAM AND SPOKEN LANGUAGE SEPARATION SYSTEM |
JP7063779B2 (en) * | 2018-08-31 | 2022-05-09 | 国立大学法人京都大学 | Speech dialogue system, speech dialogue method, program, learning model generator and learning model generation method |
US11694062B2 (en) | 2018-09-27 | 2023-07-04 | Nec Corporation | Recurrent neural networks having a probabilistic state component and state machines extracted from the recurrent neural networks |
TWI698857B (en) * | 2018-11-21 | 2020-07-11 | 財團法人工業技術研究院 | Speech recognition system and method thereof, and computer program product |
WO2020136948A1 (en) * | 2018-12-26 | 2020-07-02 | 日本電信電話株式会社 | Speech rhythm conversion device, model learning device, methods for these, and program |
CN113707135B (en) * | 2021-10-27 | 2021-12-31 | 成都启英泰伦科技有限公司 | Acoustic model training method for high-precision continuous speech recognition |
CN114267337B (en) * | 2022-03-02 | 2022-07-19 | 合肥讯飞数码科技有限公司 | Voice recognition system and method for realizing forward operation |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2996926B2 (en) * | 1997-03-11 | 2000-01-11 | 株式会社エイ・ティ・アール音声翻訳通信研究所 | Phoneme symbol posterior probability calculation device and speech recognition device |
US10867597B2 (en) * | 2013-09-02 | 2020-12-15 | Microsoft Technology Licensing, Llc | Assignment of semantic labels to a sequence of words using neural network architectures |
CN104575490B (en) * | 2014-12-30 | 2017-11-07 | 苏州驰声信息科技有限公司 | Spoken language pronunciation evaluating method based on deep neural network posterior probability algorithm |
JP6628350B2 (en) * | 2015-05-11 | 2020-01-08 | 国立研究開発法人情報通信研究機構 | Method for learning recurrent neural network, computer program therefor, and speech recognition device |
-
2015
- 2015-05-22 JP JP2015104336A patent/JP6614639B2/en active Active
-
2016
- 2016-05-10 CN CN201680029440.7A patent/CN107615376B/en not_active Expired - Fee Related
- 2016-05-10 US US15/575,512 patent/US10607602B2/en active Active
- 2016-05-10 WO PCT/JP2016/063818 patent/WO2016190077A1/en active Application Filing
- 2016-05-10 EP EP16799785.7A patent/EP3300075A4/en not_active Withdrawn
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010041978A1 (en) * | 1997-12-24 | 2001-11-15 | Jean-Francois Crespo | Search optimization for continuous speech recognition |
JP2009080309A (en) | 2007-09-26 | 2009-04-16 | Toshiba Corp | Speech recognition device, speech recognition method, speech recognition program and recording medium in which speech recognition program is recorded |
US20120065976A1 (en) * | 2010-09-15 | 2012-03-15 | Microsoft Corporation | Deep belief network for large vocabulary continuous speech recognition |
US8442821B1 (en) | 2012-07-27 | 2013-05-14 | Google Inc. | Multi-frame prediction for hybrid neural network/hidden Markov models |
US20140358545A1 (en) * | 2013-05-29 | 2014-12-04 | Nuance Communications, Inc. | Multiple Parallel Dialogs in Smart Phone Applications |
US20150039301A1 (en) | 2013-07-31 | 2015-02-05 | Google Inc. | Speech recognition using neural networks |
US20150112679A1 (en) * | 2013-10-18 | 2015-04-23 | Via Technologies, Inc. | Method for building language model, speech recognition method and electronic apparatus |
US20150269934A1 (en) * | 2014-03-24 | 2015-09-24 | Google Inc. | Enhanced maximum entropy models |
US20160093294A1 (en) * | 2014-09-25 | 2016-03-31 | Google Inc. | Acoustic model training corpus selection |
US20160140956A1 (en) * | 2014-11-13 | 2016-05-19 | Microsoft Technology Licensing, Llc | Prediction-based sequence recognition |
JP2016218309A (en) | 2015-05-22 | 2016-12-22 | 国立研究開発法人情報通信研究機構 | Voice recognition device and computer program |
US20180204566A1 (en) | 2015-05-22 | 2018-07-19 | National Institute Of Information And Communications Technology | Speech recognition device and computer program |
US20170004824A1 (en) | 2015-06-30 | 2017-01-05 | Samsung Electronics Co., Ltd. | Speech recognition apparatus, speech recognition method, and electronic device |
JP2017016131A (en) | 2015-06-30 | 2017-01-19 | Samsung Electronics Co., Ltd. | Speech recognition apparatus and method, and electronic device |
Non-Patent Citations (21)
Title |
---|
A. Graves, et al., "Towards End-to-End Speech Recognition with Recurrent Neural Networks", Proc. ICML 2014, Jun. 21, 2014, pp. 1764-1772. |
A. Graves, S. Fernandez, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks," in Proc. ICML. ACM, 2006, pp. 369-376. |
A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates et al., "Deep Speech: Scaling up end-to-end speech recognition," arXiv preprint arXiv:1412.5567, 2014. |
A. L. Maas, A. Y. Hannun, D. Jurafsky, and A. Y. Ng, "First-pass large vocabulary continuous speech recognition using bi-directional recurrent DNNs," arXiv preprint arXiv:1408.2873, 2014. |
A. L. Maas, Z. Xie, D. Jurafsky, and A. Y. Ng, "Lexicon-free conversational speech recognition with neural networks," in Proc. NAACL HLT, 2015. |
A. Senior, H. Sak, F. de Chaumont Quitry, T. N. Sainath, and K. Rao, "Acoustic modelling with CD-CTC-SMBR LSTM RNNs," in Proc. ASRU, 2015, pp. 604-609. |
C. Weng, et al., "Recurrent deep neural networks for robust speech recognition", in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 5532-5536 (Discussed in Specification). |
Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel and Yoshua Bengio, "End-to-end attention-based large vocabulary speech recognition", in Proc. ICASSP, 2016, pp. 4945-4949 (cited in Specification). |
George E. Dahl, et al., "Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition", IEEE Transactions on Audio, Speech, and Language Processing, Jan. 2012, vol. 20, No. 1, pp. 30-42. |
H. Sak, A. Senior, K. Rao, and F. Beaufays, "Fast and accurate recurrent neural network acoustic models for speech recognition," in Proc. INTERSPEECH 2015, pp. 1468-1472. |
H. Sak, A. Senior, K. Rao, O. Irsoy, A. Graves, F. Beaufays, and J. Schalkwyk, "Learning acoustic frame labeling for speech recognition with recurrent neural networks," in Proc. ICASSP, 2015, pp. 4280-4284. |
H. Sak, et al., "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition", arXiv e-prints, 2014. |
International Search Report for corresponding International Application No. PCT/JP2016/063818 dated May 5, 2016. |
J.T. Geiger, et al., "Robust speech recognition using long short-term memory recurrent neural networks for hybrid acoustic modelling", in Proceedings of the Annual Conference of International Speech Communication Association (INTERSPEECH), 2014. |
Naoyuki Kanda et al., "Maximum A Posteriori based Decoding for CTC Acoustic Models", in Proceedings of Interspeech 2016, Sep. 8-12, 2016, pp. 1868-1872. |
Sak, et al., "Sequence discriminative distributed training of long short-term memory recurrent neural networks",Interspeech 2014. |
Saon, et al., "Unfolded recurrent neural networks for speech recognition", in Fifteenth Annual Conference of the International Speech Communication Association, 2014. |
Steve Renals, et al., "Connectionist Probability Estimators in HMM Speech Recognition," IEEE Transactions on Speech and Audio Processing, Jan. 1994, vol. 2, No. 1, Part 2, pp. 161-174. |
Tatsuya Kawahara, "Onsei Ninshiki no Hohoron ni Kansuru Kosatsu-Sedai Kotai ni Mukete-", IPSJSIG Notes, Jan. 24, 2014 (Jan. 24, 2014), vol. 2014-SLP-100, No. 3, pp. 1-5 (with partial translation). |
Y. Miao, M. Gowayyed, and F. Metze, "EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding," in Proc. ASRU, 2015, pp. 167-174. |
Yotaro Kubo, et al., "Integrating Deep Neural Networks into Structured Classification Approach based on Weighted Finite-State Transducers", Proc. INTERSPEECH 2012, Sep. 9, 2012, pp. 2594-2597. |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11341958B2 (en) * | 2015-12-31 | 2022-05-24 | Google Llc | Training acoustic models using connectionist temporal classification |
US11769493B2 (en) | 2015-12-31 | 2023-09-26 | Google Llc | Training acoustic models using connectionist temporal classification |
Also Published As
Publication number | Publication date |
---|---|
EP3300075A1 (en) | 2018-03-28 |
CN107615376A (en) | 2018-01-19 |
US20180204566A1 (en) | 2018-07-19 |
WO2016190077A1 (en) | 2016-12-01 |
JP6614639B2 (en) | 2019-12-04 |
JP2016218309A (en) | 2016-12-22 |
EP3300075A4 (en) | 2019-01-02 |
CN107615376B (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10607602B2 (en) | Speech recognition device and computer program | |
US10909976B2 (en) | Speech recognition device and computer program | |
US10460721B2 (en) | Dialogue act estimation method, dialogue act estimation apparatus, and storage medium | |
US10467525B2 (en) | Recurrent neural network training method, computer program therefor and speech recognition device | |
US5050215A (en) | Speech recognition method | |
US7103544B2 (en) | Method and apparatus for predicting word error rates from text | |
US8762142B2 (en) | Multi-stage speech recognition apparatus and method | |
US6845357B2 (en) | Pattern recognition using an observable operator model | |
KR20180071029A (en) | Method and apparatus for speech recognition | |
Kanda et al. | Maximum a posteriori Based Decoding for CTC Acoustic Models. | |
US7684987B2 (en) | Segmental tonal modeling for tonal languages | |
Kuo et al. | Maximum entropy direct models for speech recognition | |
US7877256B2 (en) | Time synchronous decoding for long-span hidden trajectory model | |
WO2018066436A1 (en) | Learning device for acoustic model and computer program for same | |
Markov et al. | Integration of articulatory and spectrum features based on the hybrid HMM/BN modeling framework | |
JP2004226982A (en) | Method for speech recognition using hidden track, hidden markov model | |
Aymen et al. | Hidden Markov Models for automatic speech recognition | |
JP2002358097A (en) | Voice recognition device | |
JP3628245B2 (en) | Language model generation method, speech recognition method, and program recording medium thereof | |
JP2005156593A (en) | Method for creating acoustic model, device for creating the acoustic model, program for creating acoustic model, and voice-recognition device | |
Kumar et al. | Speech Recognition Using Hmm and Combinations: A Review | |
JP4678464B2 (en) | Voice recognition apparatus, voice recognition method, program, and recording medium | |
Gupta et al. | Noise robust acoustic signal processing using a Hybrid approach for speech recognition | |
JP2003271187A (en) | Device, method and program for recognizing voice | |
AMARENDRA BABU et al. | Data Driven Methods for Adaptation of ASR Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KANDA, NAOYUKI;REEL/FRAME:044192/0516 Effective date: 20171006 Owner name: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KANDA, NAOYUKI;REEL/FRAME:044192/0516 Effective date: 20171006 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |