CN113470650A - Operation ticket anti-error method based on voice recognition - Google Patents
- Publication number
- CN113470650A CN113470650A CN202110508820.1A CN202110508820A CN113470650A CN 113470650 A CN113470650 A CN 113470650A CN 202110508820 A CN202110508820 A CN 202110508820A CN 113470650 A CN113470650 A CN 113470650A
- Authority
- CN
- China
- Prior art keywords
- voice
- voice recognition
- method based
- operation ticket
- command
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications (all within G10L—Speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding)
- G10L15/26—Speech to text systems
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G10L15/05—Word boundary detection
- G10L15/063—Training of speech recognition systems
- G10L15/142—Hidden Markov Models [HMMs]
- G10L15/16—Speech classification or search using artificial neural networks
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
- G10L15/20—Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
- G10L25/24—Speech or voice analysis techniques characterised by the extracted parameters being the cepstrum
- G10L25/30—Speech or voice analysis techniques characterised by the analysis technique using neural networks
- G10L2015/0631—Creating reference templates; Clustering
- G10L2015/223—Execution procedure of a spoken command
Abstract
The invention relates to an operation ticket anti-error method based on voice recognition, comprising the following steps: step 1, performing voice recognition and semantic analysis on a dispatching command and converting the spoken command into dispatching-command text data; and step 2, checking operation-order writing for errors and verifying it. The method is reasonable in design: by recognizing the dispatching command, analyzing its semantics, and applying intelligent error prevention to the operation order, it automatically monitors whether a dispatching telephone order and its read-back are accurate. This makes intelligent error prevention more convenient, greatly improves the working efficiency of dispatchers, and helps ensure safe and stable operation of the power grid.
Description
Technical Field
The invention belongs to the technical field of electric power regulation and control, relates to a dispatching voice recognition method, and in particular to an operation ticket anti-error method based on voice recognition.
Background
With the comprehensive promotion of power grid regulation-and-control integration, the workload of the regulation and control center keeps growing and dispatchers are increasingly burdened. The operation order is a core element of the regulation process: the regulation and control center and the station end frequently need to complete operation-order processing by voice.
With the continuous development of speech recognition technology, real-time self-learning of voice recognition has become feasible. In recent years, electric power departments have adopted speech recognition to automatically recognize dispatching telephone voice and thereby raise the automation level of the dispatching system. Completing this automatic recognition function requires accurate recognition of the voice; only then can it bring practical convenience to safety-critical dispatching work. Because telephone voice is affected by many factors such as accents and background noise, accurately recognizing it remains difficult in the prior art, so how to effectively prevent errors in the processing of voice-recognized operation tickets is a problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a reasonably designed, accurate, and reliable operation ticket anti-error method based on voice recognition.
The invention solves the above technical problems by adopting the following technical scheme:
an operation ticket anti-error method based on voice recognition comprises the following steps:
step 1, carrying out voice recognition and semantic analysis on a scheduling command, and converting the scheduling command voice into scheduling command text data;
and step 2, writing an operation order to prevent errors and verifying.
Further, the specific implementation method of step 1 includes the following steps:
step 1.1, preprocessing a voice signal of a scheduling command;
step 1.2, obtaining a real and effective speech paragraph through endpoint detection;
step 1.3, extracting characteristic parameters changing along with time;
step 1.4, establishing an acoustic model through voice training data and noise data, and establishing a language model through text training data;
and step 1.5, matching the characteristic parameters with a parameter template in the language model to determine the voice content.
Further, the pre-processing in step 1.1 includes pre-emphasis, framing and windowing of the input speech signal, wherein the speech signal is pre-emphasized using a high-pass filter.
Further, in the step 1.2, a hidden markov model algorithm is adopted to detect the breathing and noise components in the speech signal, so as to detect a real and effective speech paragraph.
Further, the characteristic parameters varying with time include mel-frequency cepstrum coefficients and linear prediction cepstrum coefficients.
Further, the specific implementation method of step 1.4 is as follows: performing matching search by using a dynamic time warping algorithm or an algorithm based on an artificial neural network; and extracting noun phrases through matching of part-of-speech tagging and matching patterns.
Further, the specific implementation method of step 2 includes the following steps:
step 2.1, writing an operation order command based on the language model to prevent error;
step 2.2, order checking based on voice recognition;
step 2.3, repeating and checking based on voiceprint recognition and voice recognition;
and step 2.4, interfacing with real-time state data to perform intelligent error prevention on the operation ticket.
Further, the specific implementation method of step 2.1 is as follows: and automatically generating word segmentation for the operation command through a language model, and performing command writing check by combining a D5000 model and a field mode section.
Further, the step 2.2 is realized by the following method: the scheduling instruction is converted into a phoneme table and then compared with the operation command for checking.
Further, in step 2.3, before the repeat-and-check, field operators are given on-duty training and their voices are recorded for voiceprint analysis; during the repeat verification, voiceprint analysis is performed on the voice segments to confirm identity.
The invention has the advantages and positive effects that:
the intelligent anti-error management system is reasonable in design, can automatically monitor whether the order of the dispatching telephone is accurate or not and whether the repeating is accurate or not by performing voice recognition on the dispatching command, performing semantic analysis on the voice of the dispatching command and performing intelligent anti-error processing on the operation order, improves convenience for intelligent anti-error, greatly improves working efficiency of a dispatcher, and ensures safe and stable operation of a power grid.
Drawings
FIG. 1 is a schematic diagram of a model training process for speech recognition according to the present invention;
fig. 2 is a schematic diagram of a voice detection result of the operation ticket system.
Detailed Description
The embodiments of the present invention will be described in detail with reference to the accompanying drawings.
An operation ticket anti-error method based on voice recognition comprises the following steps:
step 1, carrying out voice recognition and semantic analysis on the scheduling command, and converting the scheduling command voice into scheduling command text data.
The specific implementation method of this step, as shown in fig. 1, includes the following steps:
and step 1.1, preprocessing the voice signal of the scheduling command.
In this step, the preprocessing of the voice signal is to perform pre-emphasis, framing, windowing and other processing on the input signal, so as to facilitate subsequent operations.
Studies have shown that glottal excitation during vocalization shapes the power spectrum of speech. After the signal is converted from the time domain to the frequency domain, pre-emphasis flattens the spectrum, which is very helpful for spectral analysis. Therefore, to compensate for the roll-off of the high-frequency part of the speech signal and to support frequency-domain spectral analysis, the speech signal is passed through a high-pass filter for pre-emphasis.
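As a concrete illustration of this step, the usual first-order high-pass pre-emphasis filter has the form y[n] = x[n] − α·x[n−1]. The sketch below uses the common textbook coefficient α = 0.97, which is an illustrative choice rather than a value specified by the patent:

```python
def pre_emphasis(signal, alpha=0.97):
    """First-order high-pass filter: y[n] = x[n] - alpha * x[n-1].

    Boosts the high-frequency part of the speech signal before
    framing and windowing. `alpha` near 1.0 is conventional.
    """
    if not signal:
        return []
    # The first sample has no predecessor and is passed through unchanged.
    return [signal[0]] + [signal[i] - alpha * signal[i - 1]
                          for i in range(1, len(signal))]
```

On a constant (DC) input the filtered output quickly settles near zero, which is exactly the high-pass behaviour the preprocessing step relies on.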
And step 1.2, obtaining a real and effective speech paragraph through endpoint detection.
In this step, non-speech components of the signal such as breathing and noise can be detected with a hidden Markov model algorithm, so that genuine, valid speech passages are extracted. Besides hidden-Markov-model algorithms, the many common endpoint-detection algorithms can be broadly classified into several categories: spectral analysis, pitch detection, cepstral analysis, energy thresholds, and prediction-based methods, among others.
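Of the categories listed, the energy-threshold approach is the simplest to illustrate. The sketch below is a minimal frame-energy detector, not the hidden-Markov-model detector the patent describes; the frame length and threshold are illustrative values that would need tuning to real audio:

```python
def energy_vad(signal, frame_len=256, threshold=0.01):
    """Mark each whole frame as speech (True) or silence (False)
    by comparing its mean-square energy against a fixed threshold."""
    n_frames = len(signal) // frame_len
    flags = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = sum(s * s for s in frame) / frame_len
        flags.append(energy > threshold)
    return flags
```

Consecutive True frames delimit the "real and effective speech paragraph" that later steps operate on.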
And 1.3, extracting characteristic parameters changing along with time.
In this step, two time-varying characteristic parameters are extracted: Mel-frequency cepstral coefficients (MFCC) and linear predictive cepstral coefficients (LPCC). MFCC extraction consists of four main steps: preprocessing, fast Fourier transform, filtering the spectral energy through a Mel filter bank, and computing the cepstrum with a DCT. The fast Fourier transform (FFT) is applied to each framed, windowed segment of the speech signal, converting it from time-domain data to frequency-domain data. The formula is: X(i, k) = FFT[x_i(m)]
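The framing, windowing, and per-frame transform X(i, k) = FFT[x_i(m)] that begin the MFCC pipeline can be sketched as below. A naive DFT stands in for a library FFT so the example stays self-contained, and the frame length and hop are illustrative values:

```python
import cmath
import math

def hamming(n):
    """Hamming window of length n (standard 0.54/0.46 coefficients)."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def power_spectrum(frame):
    """Naive DFT power spectrum |X(k)|^2 / N for k = 0 .. N//2."""
    n = len(frame)
    out = []
    for k in range(n // 2 + 1):
        x = sum(frame[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n))
        out.append(abs(x) ** 2 / n)
    return out

def framed_power_spectrum(signal, frame_len=64, hop=32):
    """Split the signal into overlapping frames, window each frame,
    and return the power spectrum of every frame: X(i, k)."""
    win = hamming(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + i] * win[i] for i in range(frame_len)]
        frames.append(power_spectrum(frame))
    return frames
```

A real MFCC front end would continue from this output by applying the Mel filter bank, taking logs, and computing the DCT.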
And step 1.4, establishing an acoustic model through the voice training data and the noise data, and establishing a language model through the text training data.
The method for establishing the acoustic model is as follows: an acoustic model is built from the voice training data and noise data using a gated recurrent unit network trained with connectionist temporal classification (GRU-CTC), which removes the need for manual alignment. A recurrent neural network exploits contextual information in the speech to obtain more accurate recognition results; the GRU selectively retains the long-term information it needs, and a bidirectional recurrent neural network (RNN) can make full use of context in both directions.
The method for establishing the language model is as follows: a language model is trained on the text training data so that Chinese characters can be recognized. Pinyin input, for example, is essentially a sequence-to-sequence task: the input is a pinyin sequence and the output is a Chinese-character sequence.
Step 1.5, template library matching search: the characteristic parameters extracted in step 1.3 are matched against the acoustic model and the parameter templates in the language model to determine the voice content.
Template-library matching search is typically performed with a dynamic time warping (DTW) algorithm or an artificial-neural-network-based algorithm. Noun phrases are extracted by combining part-of-speech tagging with pattern matching. The words of a sentence are usually grouped into chunks (chunking), such as the common noun-phrase chunks, verb chunks, and so on. Inspection of the labeled data set shows that most keywords are noun-phrase chunks, so keywords can be extracted efficiently by rule-based pattern matching.
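A minimal version of the dynamic time warping distance used for template matching might look like the following. It operates on 1-D sequences for clarity; a real matcher would compare multi-dimensional MFCC frame sequences with a vector distance per cell:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences.

    D[i][j] holds the minimum cumulative cost of aligning the first
    i elements of `a` with the first j elements of `b`.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow a step from the left, below, or diagonal neighbour.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW tolerates stretched or compressed timing, the template with the smallest distance to the utterance is chosen even when the speaking rate varies.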
Through the voice recognition and semantic analysis processes, the scheduling command voice can be converted into scheduling command text data.
And 2, writing an operation order command to prevent errors and verifying.
The specific implementation method of the step is as follows:
and 2.1, performing error prevention of operation order writing based on the language model.
After the language model is trained on dispatching-command terminology, it automatically segments operation commands into words, and command-writing checks are performed against the D5000 model and the field mode section. For example, if the switch is wrongly written as the "Ci'an I line 206 switch", semantic analysis combined with connection analysis of the topological model can replace switch 206 with switch 203, completing the writing check.
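The topology-based writing check can be illustrated with a toy lookup. The `TOPOLOGY` dictionary below is a hypothetical stand-in for the D5000 model, mapping each line name to the switch number the model associates with it; the names and numbers are illustrative:

```python
# Hypothetical topology lookup standing in for the D5000 model:
# line name -> switch number the model associates with that line.
TOPOLOGY = {
    "Xinji I line": "203",
    "Xinji II line": "206",
}

def check_command(line_name, switch_no):
    """Return (ok, expected): flag a mismatch between the switch number
    written in the command and the one the topology model expects."""
    expected = TOPOLOGY.get(line_name)
    if expected is None:
        return False, None  # unknown line: cannot validate
    return switch_no == expected, expected
```

When the check returns `(False, "203")`, the writing tool can propose the expected switch number as the correction, mirroring the 206-to-203 replacement described above.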
And 2.2, ordering and checking based on voice recognition.
When the dispatcher issues an order such as "Order: switch the 220 kV Xinji I line 203 switch at Xin'an station from operation to hot standby," the spoken instruction is recognized and compared against the operation command for checking. The difficulty lies in recognizing dispatching terminology — mixed Chinese-English terms or symbols such as "#2 main transformer", AVC, PT, and protection — and in the fact that the formal command style, together with slight interference, can make recognition slightly inaccurate and cause misjudgment. The spoken instruction is therefore converted into a phoneme table and matched against the recognized pronunciation, eliminating interference from the language model.
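Comparing the spoken instruction with the operation command at the phoneme level amounts to measuring the distance between two symbol sequences. A Levenshtein edit distance over phoneme tokens (the romanized tokens in the test are hypothetical examples) is one way to sketch this comparison:

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # delete all of a's first i tokens
    for j in range(m + 1):
        dp[0][j] = j  # insert all of b's first j tokens
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                            # deletion
                dp[i][j - 1] + 1,                            # insertion
                dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]),   # substitution
            )
    return dp[n][m]

def commands_match(spoken_phonemes, ticket_phonemes, tolerance=0):
    """Accept the order when the phoneme sequences are close enough."""
    return edit_distance(spoken_phonemes, ticket_phonemes) <= tolerance
```

A tolerance of zero demands an exact phoneme match; a small positive tolerance would absorb minor recognition noise at the cost of missing single-phoneme errors.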
And 2.3, repeating and checking based on voiceprint recognition and voice recognition.
When field operators receive on-duty training, their voices are recorded for voiceprint analysis. Each time an order is read back, voiceprint analysis is performed on the voice segments to confirm the speaker's identity. The execution information is checked with the same method as in step 2.2, and a verification prompt is issued if the read-back is wrong.
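Voiceprint confirmation typically reduces to comparing a stored speaker embedding with one extracted from the read-back audio. A cosine-similarity check over such embeddings can be sketched as below; the 0.8 threshold is an illustrative value, not one given by the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def same_speaker(enrolled, observed, threshold=0.8):
    """Accept the identity when similarity exceeds a tuned threshold."""
    return cosine_similarity(enrolled, observed) >= threshold
```

In practice the embeddings would come from a speaker-verification model trained on the operators' enrollment recordings; only the comparison step is shown here.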
And 2.4, intelligently preventing error of the operation ticket.
In this step, real-time state data are interfaced, and alongside the voice check the topology and real-time state are used to verify whether an operation would pull or close a grounding switch while live, remove or attach a grounding wire without an obvious disconnection interval, drop load, form a loop network, or cause heavy overload; if any such condition is detected, an error reminder is issued.
Fig. 2 shows the voice-detection result of the operation ticket system, which can effectively check whether the voice command matches the face of the operation ticket. Here the text command is "open the Diazizhu line II 251 switch at the Diazizhu station," while the voice command is "open the Diazizhai line I 251 switch at the Diazizhai station." The system correctly recognizes that the voice command is inconsistent with the ticket face, effectively realizing the anti-misoperation function of the operation ticket.
In application, for a power system with high security requirements, this voice technology depends on the security of the telephone network and the Internet. To ensure the security and integrity of network data, sound measures are needed in communication infrastructure, network transport protocols, and system management. Security protocols and standards are important issues in the development of voice applications. The voice stream may be protected with SM2 or SM4 encryption or an MD5 integrity check; although this reduces transmission efficiency, it enhances security.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.
Claims (10)
1. An operation ticket anti-error method based on voice recognition is characterized in that: the method comprises the following steps:
step 1, carrying out voice recognition and semantic analysis on a scheduling command, and converting the scheduling command voice into scheduling command text data;
and step 2, writing an operation order to prevent errors and verifying.
2. The operation ticket error prevention method based on the voice recognition as claimed in claim 1, wherein: the specific implementation method of the step 1 comprises the following steps:
step 1.1, preprocessing a voice signal of a scheduling command;
step 1.2, obtaining a real and effective speech paragraph through endpoint detection;
step 1.3, extracting characteristic parameters changing along with time;
step 1.4, establishing an acoustic model through voice training data and noise data, and establishing a language model through text training data;
and step 1.5, matching the characteristic parameters with a parameter template in the language model to determine the voice content.
3. The operation ticket error prevention method based on the voice recognition as claimed in claim 2, wherein: the pre-processing in said step 1.1 comprises pre-emphasis, framing and windowing of the input speech signal, wherein the speech signal is pre-emphasized using a high pass filter.
4. The operation ticket error prevention method based on the voice recognition as claimed in claim 2, wherein: and 1.2, detecting breathing and noise components in the voice signal by adopting a hidden Markov model algorithm, thereby detecting a real and effective voice paragraph.
5. The operation ticket error prevention method based on the voice recognition as claimed in claim 2, wherein: the characteristic parameters varying with time include mel-frequency cepstrum coefficients and linear prediction cepstrum coefficients.
6. The operation ticket error prevention method based on the voice recognition as claimed in claim 2, wherein: the specific implementation method of the step 1.4 is as follows: performing matching search by using a dynamic time warping algorithm or an algorithm based on an artificial neural network; and extracting noun phrases through matching of part-of-speech tagging and matching patterns.
7. The operation ticket error prevention method based on the voice recognition as claimed in claim 1, wherein: the specific implementation method of the step 2 comprises the following steps:
step 2.1, writing an operation order command based on the language model to prevent error;
step 2.2, order checking based on voice recognition;
step 2.3, repeating and checking based on voiceprint recognition and voice recognition;
and step 2.4, interfacing with real-time state data to perform intelligent error prevention on the operation ticket.
8. The operation ticket error prevention method based on the voice recognition as claimed in claim 7, wherein: the specific implementation method of the step 2.1 is as follows: and automatically generating word segmentation for the operation command through a language model, and performing command writing check by combining a D5000 model and a field mode section.
9. The operation ticket error prevention method based on the voice recognition as claimed in claim 7, wherein: the step 2.2 is realized by the following method: the scheduling instruction is converted into a phoneme table and then compared with the operation command for checking.
10. The operation ticket error prevention method based on the voice recognition as claimed in claim 7, wherein: in step 2.3, before the repeat-and-check, field operators are given on-duty training and their voices are recorded for voiceprint analysis;
during the repeat verification, voiceprint analysis is performed on the voice segments to confirm identity.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110508820.1A | 2021-05-11 | 2021-05-11 | Operation ticket anti-error method based on voice recognition |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110508820.1A | 2021-05-11 | 2021-05-11 | Operation ticket anti-error method based on voice recognition |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN113470650A | 2021-10-01 |
Family
ID=77870592
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date | Status |
| --- | --- | --- | --- | --- |
| CN202110508820.1A | Operation ticket anti-error method based on voice recognition | 2021-05-11 | 2021-05-11 | Pending |
Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN113470650A (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN115019809A | 2022-05-17 | 2022-09-06 | China Southern Power Grid Co., Ltd. EHV Power Transmission Company Guangzhou Bureau | Method, apparatus, device, medium, and program product for preventing false entry into an interval |
| CN116825140A | 2023-08-29 | 2023-09-29 | Beijing Longdeyuan Electric Power Technology Development Co., Ltd. | Voice interaction method and system for standardizing action flow in operation ticket |
Citations (2)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN109559737A | 2018-12-13 | 2019-04-02 | Zhu Mingzeng | Electric power system dispatching speech model method for building up |
| CN110808040A | 2019-09-24 | 2020-02-18 | State Grid Hebei Electric Power Co., Ltd. Hengshui Taocheng District Power Supply Branch | System and method for controlling flow of interlocking work tickets and operation tickets based on voice |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN115019809A | 2022-05-17 | 2022-09-06 | China Southern Power Grid Co., Ltd. EHV Power Transmission Company Guangzhou Bureau | Method, apparatus, device, medium, and program product for preventing false entry into an interval |
| CN115019809B | 2022-05-17 | 2024-04-02 | China Southern Power Grid Co., Ltd. EHV Power Transmission Company Guangzhou Bureau | Method, apparatus, device, medium and program product for monitoring false entry prevention interval |
| CN116825140A | 2023-08-29 | 2023-09-29 | Beijing Longdeyuan Electric Power Technology Development Co., Ltd. | Voice interaction method and system for standardizing action flow in operation ticket |
| CN116825140B | 2023-08-29 | 2023-10-31 | Beijing Longdeyuan Electric Power Technology Development Co., Ltd. | Voice interaction method and system for standardizing action flow in operation ticket |
Legal Events

| Code | Title | Description |
| --- | --- | --- |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-10-01 |