CN107068145B - Speech evaluating method and system - Google Patents
Speech evaluating method and system
- Publication number
- CN107068145B CN107068145B CN201611269661.XA CN201611269661A CN107068145B CN 107068145 B CN107068145 B CN 107068145B CN 201611269661 A CN201611269661 A CN 201611269661A CN 107068145 B CN107068145 B CN 107068145B
- Authority
- CN
- China
- Prior art keywords
- client
- user
- voice
- splitting
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/04—Segmentation; Word boundary detection
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
Abstract
The present invention relates to the technical field of speech recognition, and discloses a speech evaluation method and system for improving evaluation accuracy. The disclosed speech evaluation method comprises: a client collects the voice data of a user, splits the collected speech word by word at a uniform time interval, and then plays back the split speech to the user for the client user to confirm; after the client user confirms that the split is correct, the split voice data is transmitted to a server for recognition and evaluation. With the speech evaluation method and system of the invention, because the speaking rates of different users differ, the word-by-word splitting and playback performed by the client let the user verify the correctness of the split and keep each user's splitting interval within the range the server can recognize. This not only helps the server perform the corresponding speech recognition and evaluation according to the splitting information, but also improves the accuracy of recognition and evaluation.
Description
Technical field
The present invention relates to the field of speech processing technology, and more particularly to a speech evaluation method and system.
Background art
Language is the most important vehicle of communication and carrier of information, and the popularization of a common national language is an important foundation for national unification, ethnic unity, and social progress. China is a multi-ethnic, multilingual country with a relatively relaxed mother-tongue environment: the language people first acquire is mostly their own ethnic language or a local dialect, which hinders communication between people from different regions, and Mandarin (Putonghua), as the national common language, is therefore widely promoted. Actively popularizing Mandarin helps eliminate language barriers, promotes social interaction, and is of great significance to socialist economic, political, and cultural construction and to social development; it also promotes exchange among the various ethnic groups and regions, safeguards national unification, and enhances the cohesion of the Chinese nation. Putonghua proficiency testing is an important part of this popularization effort. At present it is still mostly scored manually, with three to five examiners conducting a lengthy examination of each candidate, yet every industry requires large numbers of qualified Putonghua speakers every year. This manual approach is time-consuming, labor-intensive, costly, and highly subjective, and clearly cannot meet current social demand. Meanwhile, the rapid development of mobile hardware has given intelligent mobile terminals broad application prospects: they have become an important platform through which individuals access the network and enterprises provide services, so people can attempt Putonghua testing on an intelligent mobile terminal. For example, a Mandarin evaluation and tutoring system based on an Android device can carry out Putonghua testing that is quick, inexpensive, easy to use, and objective and fair.
Summary of the invention
The present invention aims to disclose a speech evaluation method and system that improve evaluation accuracy.
To achieve the above object, the invention discloses a speech evaluation method, comprising:
a client collects the voice data of a user, splits the collected speech word by word at a uniform time interval, and then plays back the split speech to the user for the client user to confirm;
after the client user confirms that the split is correct, the split voice data is transmitted to a server for recognition and evaluation.
Corresponding to the above evaluation method, the invention also discloses a speech evaluation system comprising a client and a server:
the client is configured to collect the voice data of the user, split the collected speech word by word at a uniform time interval, and then play back the split speech to the user for the client user to confirm; and, after the client user confirms that the split is correct, to transmit the split voice data to the server for recognition and evaluation.
The invention has the following advantages:
Because the speaking rates of different users differ, the word-by-word splitting and playback performed by the client let the user verify the correctness of the split and keep each user's splitting interval within the range the server can recognize. This not only helps the server perform the corresponding speech recognition and evaluation according to the splitting information, but also improves the accuracy of recognition and evaluation. Preferably, in the speech evaluation method and system disclosed by the invention, the splitting time interval can be custom-set by the client, and the voice data packet sent by the client to the server contains the custom splitting interval information, so that the server can perform adaptive speech recognition and evaluation for different users according to their individual interval information.
The present invention is described in further detail below with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which constitute part of this application, are provided for further understanding of the invention. The schematic embodiments of the invention and their description serve to explain the invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flow diagram of the speech evaluation method disclosed in an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
Embodiment 1
The embodiment of the present invention discloses a kind of speech evaluating method, as shown in Figure 1, comprising:
Step S1: the client collects the voice data of the user, splits the collected speech word by word at a uniform time interval, and then plays back the split speech to the user for the client user to confirm.
In this step, because the speaking rates of different users differ, the word-by-word splitting and playback of the collected speech by the client let the user verify the correctness of the split and keep each user's splitting interval within the range the server can recognize. This not only helps the server perform the corresponding speech recognition and evaluation according to the splitting information, but also improves the accuracy of recognition and evaluation.
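For illustration only (this sketch is not part of the original disclosure), the uniform-interval word-by-word split of step S1 can be pictured as slicing a sampled waveform into equal-length segments; the sample rate and interval values below are assumed, not taken from the patent:

```python
def split_by_interval(samples, sample_rate, interval_s):
    """Split a mono waveform into equal-length segments, one per expected word.

    samples: sequence of audio samples
    sample_rate: samples per second, e.g. 16000 (assumed value)
    interval_s: the uniform splitting interval in seconds (user-configurable)
    """
    seg_len = int(sample_rate * interval_s)
    # Slice the waveform into consecutive fixed-length chunks
    return [samples[i:i + seg_len] for i in range(0, len(samples), seg_len)]

# Example: 2 seconds of silence at 16 kHz, split at a 0.5 s interval -> 4 segments
audio = [0.0] * 32000
segments = split_by_interval(audio, 16000, 0.5)
```

A real client would slice the recorded buffer this way before playing each segment back for confirmation; word boundaries are assumed to fall on the uniform grid, which is exactly why the interval must match the user's speaking rate.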
Step S2: after the client user confirms that the split is correct, the split voice data is transmitted to the server for recognition and evaluation. If the user considers the split incorrect, the currently recorded voice data can be deleted and the method returns to step S1 to collect the voice data again. Correspondingly, during recognition and evaluation the server can compare the feature values of the collected speech against the pre-stored feature values of the corresponding standard pronunciation by computing their correlation, and return the correlation result to the client; specifically, the correlation comparison can be based on the Pearson correlation coefficient.
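A minimal sketch of the Pearson-based comparison mentioned above (illustrative only; how the acoustic feature vectors are extracted is outside this snippet, and the example vectors merely stand in for per-utterance features):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Features that are a perfect linear scaling of the standard ones correlate at 1.0
r = pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The server would compute such a coefficient between the test speech's features and the stored standard-pronunciation features, and return the value (or a score derived from it) to the client.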
In this embodiment, environmental noise is usually mixed in, to a greater or lesser extent, at the beginning and end of the collected voice data. The embodiment therefore provides the following two processing modes:
Mode one: before the client splits the collected speech at the uniform time interval, the client removes the environmental noise at the head and tail of the collected voice data. For example, a segment of environmental audio is collected in advance and its frequency information obtained; this is then subtracted from the frequency information of the speech under test, yielding the audio information of the speech with the ambient noise removed.
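The frequency-domain subtraction in mode one can be sketched, under the assumption that magnitude spectra for both the pre-recorded environmental audio and the test speech are already available (e.g. from an FFT, which is omitted here), as per-bin subtraction floored at zero:

```python
def spectral_subtract(speech_mag, noise_mag):
    """Subtract a pre-measured noise magnitude spectrum from a speech magnitude
    spectrum, per frequency bin, flooring at zero so no bin goes negative."""
    return [max(s - n, 0.0) for s, n in zip(speech_mag, noise_mag)]

# Toy 3-bin spectra: speech magnitudes minus estimated ambient-noise magnitudes
cleaned = spectral_subtract([3.0, 0.5, 2.0], [1.0, 1.0, 0.5])
```

This is the classic spectral-subtraction idea; the patent text only states that the noise frequency information is subtracted, so the flooring and the per-bin granularity here are implementation assumptions.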
Mode two: when the client plays back the split speech to the user, it displays the playback progress and, after playback, provides an editing interface through which the user can clip off the environmental noise at the head and tail.
In addition, in embodiments of the present invention, the splitting time interval can be agreed between the client and the server as a fixed duration requiring no user setting, or, preferably, it can be user-customizable as follows: the splitting time interval is custom-set by the client, and the voice data packet sent by the client to the server contains the custom splitting interval information, so that the server can perform adaptive speech recognition and evaluation for different users according to their individual interval information.
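One possible shape for such a voice data packet, sketched as JSON; every field name here is a hypothetical assumption for illustration, not something specified by the patent:

```python
import json

# Hypothetical packet carrying the split segments (payload placeholders here),
# the custom splitting interval, and a language label for the server to identify.
packet = {
    "user_id": "u123",
    "split_interval_s": 0.5,        # custom splitting interval set on the client
    "language_label": "mandarin",   # carried so the server can pick the standard audio
    "segments": ["<seg-1 bytes>", "<seg-2 bytes>"],
}
wire = json.dumps(packet)           # serialized for transmission to the server
received = json.loads(wire)
interval = received["split_interval_s"]  # the server reads the interval back out
```

The point is simply that the interval travels with the audio, so the server can segment-align and evaluate each user's speech adaptively.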
Further, considering the diversity of existing language varieties, in this embodiment the client can also offer at least two different types of language test mode for the user to choose from, and the transmitted voice data packet carries the corresponding language label for the server to identify; the database connected to the server stores in advance, for each test topic, a mapping table between each language label and the corresponding standard audio data. The language varieties in this embodiment may be commonly used ones such as standard Mandarin, English, or French, or may be local dialects.
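The label-to-standard-audio mapping table could live in any keyed store; a dictionary keyed by (test topic, language label), with made-up entries and file names, is enough to illustrate the lookup (all names here are assumptions):

```python
# Hypothetical mapping table: (test topic, language label) -> standard audio reference
standard_audio = {
    ("topic-01", "mandarin"): "std/topic01_zh.wav",
    ("topic-01", "english"): "std/topic01_en.wav",
    ("topic-01", "cantonese"): "std/topic01_yue.wav",
}

def lookup_standard(topic, label):
    """Return the standard audio reference for a topic/label pair, or None."""
    return standard_audio.get((topic, label))

ref = lookup_standard("topic-01", "english")
```

The server would perform this lookup using the language label carried in the received voice data packet, then compare the user's speech against the audio it resolves to.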
In this embodiment, besides the word-by-word splitting of speech described above, other client-server interactions include but are not limited to the following:
The server receives the voice test request of the client user and determines which specific test the user has selected, for example Mandarin or a local dialect; it then, according to the user's request, randomly displays a paragraph or sentence for the corresponding test on the screen, and collects the audio data of the user reading that paragraph or sentence aloud.
Correspondingly, during recognition and testing the server can convert the received audio data into text, compare the converted text with the tested paragraph or sentence, and mark correctness word by word with a Boolean variable. Further, the evaluation interface returned to the client can provide a redirect link to a standard-pronunciation training interface for any character, word, or sentence, and can, where necessary, further provide standard pronunciation in both a male-voice and a female-voice version.
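The word-by-word Boolean marking can be sketched as comparing the recognized text with the reference text position by position. This is a deliberate simplification that assumes the two word sequences align one-to-one; a real system would need edit-distance alignment to handle insertions and deletions:

```python
def mark_correct(recognized, reference):
    """Mark each reference word True/False against the recognized word at the
    same position (naive positional alignment)."""
    return [ref_word == rec_word for ref_word, rec_word in zip(reference, recognized)]

# Toy example with romanized syllables standing in for recognized/reference words
marks = mark_correct(["ni", "hao", "ma"], ["ni", "hao", "me"])
```

The resulting Boolean list is what the evaluation interface could render word by word, with incorrect words linking to the standard-pronunciation training interface.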
In summary, in the speech evaluation method provided by this embodiment, because the speaking rates of different users differ, the word-by-word splitting and playback performed by the client let the user verify the correctness of the split and keep each user's splitting interval within the range the server can recognize; this not only helps the server perform the corresponding speech recognition and evaluation according to the splitting information, but also improves the accuracy of recognition and evaluation. Preferably, in the speech evaluation method disclosed by the invention, the splitting time interval can be custom-set by the client, and the voice data packet sent by the client to the server contains the custom splitting interval information, so that the server can perform adaptive speech recognition and evaluation for different users according to their individual interval information.
Embodiment 2
Corresponding to the above method embodiment, this embodiment discloses a speech evaluation system comprising a client and a server, wherein the client is configured to: collect the voice data of the user, split the collected speech word by word at a uniform time interval, and then play back the split speech to the user for the client user to confirm; and, after the client user confirms that the split is correct, transmit the split voice data to the server for recognition and evaluation.
Optionally, the client is further configured to: when playing back the split speech to the user, display the playback progress and, after playback, provide an editing interface through which the user can clip off the environmental noise at the head and tail; or: before splitting the collected speech at the uniform time interval, remove the environmental noise at the head and tail of the collected voice data.
Preferably, the client is further configured to allow custom setting of the splitting time interval, and to include the custom splitting interval information in the voice data packet sent to the server. Further, the client of this embodiment is also configured to: offer at least two different types of language test mode for the user to choose from, and carry the corresponding language label in the transmitted voice data packet for the server to identify, wherein the database connected to the server stores in advance, for each test topic, a mapping table between each language label and the corresponding standard audio data.
Similarly, in the speech evaluation system provided by this embodiment, because the speaking rates of different users differ, the word-by-word splitting and playback performed by the client let the user verify the correctness of the split and keep each user's splitting interval within the range the server can recognize; this not only helps the server perform the corresponding speech recognition and evaluation according to the splitting information, but also improves the accuracy of recognition and evaluation. Preferably, in the speech evaluation system disclosed by the invention, the splitting time interval can be custom-set by the client, and the voice data packet sent by the client to the server contains the custom splitting interval information, so that the server can perform adaptive speech recognition and evaluation for different users according to their individual interval information.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may be modified and varied in various ways. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (6)
1. A speech evaluation method, characterized by comprising:
a client collects the voice data of a user, splits the collected speech word by word at a uniform time interval, and then plays back the split speech to the user for the client user to confirm; the splitting time interval is custom-set by the client to accommodate the differing speaking rates of different evaluated users;
after the client user confirms that the split is correct, the split voice data is transmitted to a server for recognition and evaluation; the voice data packet sent by the client to the server contains the custom splitting interval information; when the user considers the split incorrect, the currently recorded voice data is deleted and the voice data is collected again;
the client offers at least two different types of language test mode for the user to choose from, and carries the corresponding language label in the transmitted voice data packet for the server to identify;
the server receives the split voice data from the client and performs adaptive speech recognition and evaluation for different users according to their individual interval information; furthermore, the server identifies the corresponding language label carried in the voice data packet, queries, in the connected database, the pre-stored mapping table between each language label and the corresponding standard audio data for the test topic in question, compares the feature values of the collected speech against the pre-stored feature values of the corresponding standard pronunciation by computing their correlation, and returns the correlation result to the client.
2. The speech evaluation method according to claim 1, characterized in that, when the client plays back the split speech to the user, the method further comprises:
displaying the playback progress of the speech at the client, and, after playback, providing an editing interface through which the user can clip off the environmental noise at the head and tail.
3. The speech evaluation method according to claim 1, characterized in that, before the collected speech is split at the uniform time interval, the method further comprises:
the client removes the environmental noise at the head and tail of the collected voice data.
4. A speech evaluation system, characterized by comprising a client and a server:
the client is configured to collect the voice data of a user, split the collected speech word by word at a uniform time interval, and then play back the split speech to the user for the client user to confirm, the splitting time interval being custom-set by the client to accommodate the differing speaking rates of different evaluated users; and, after the client user confirms that the split is correct, to transmit the split voice data to the server for recognition and evaluation, the voice data packet sent by the client to the server containing the custom splitting interval information; the client is further configured, when the user considers the split incorrect, to delete the currently recorded voice data and collect the voice data again;
the client is further configured to: offer at least two different types of language test mode for the user to choose from, and carry the corresponding language label in the transmitted voice data packet for the server to identify;
the server is configured to receive the split voice data from the client and perform adaptive speech recognition and evaluation for different users according to their individual interval information; it is further configured to identify the corresponding language label carried in the voice data packet, query, in the connected database, the pre-stored mapping table between each language label and the corresponding standard audio data for the test topic in question, compare the feature values of the collected speech against the pre-stored feature values of the corresponding standard pronunciation by computing their correlation, and return the correlation result to the client.
5. The speech evaluation system according to claim 4, characterized in that the client is further configured to:
when playing back the split speech to the user, display the playback progress of the speech, and, after playback, provide an editing interface through which the user can clip off the environmental noise at the head and tail.
6. The speech evaluation system according to claim 5, characterized in that the client is further configured to: before splitting the collected speech at the uniform time interval, remove the environmental noise at the head and tail of the collected voice data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611269661.XA CN107068145B (en) | 2016-12-30 | 2016-12-30 | Speech evaluating method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107068145A CN107068145A (en) | 2017-08-18 |
CN107068145B true CN107068145B (en) | 2019-02-15 |
Family
ID=59623400
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611269661.XA Active CN107068145B (en) | 2016-12-30 | 2016-12-30 | Speech evaluating method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107068145B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108320743A (en) * | 2018-02-07 | 2018-07-24 | 上海速益网络科技有限公司 | A kind of data entry method and device |
CN109036431A (en) * | 2018-07-11 | 2018-12-18 | 北京智能管家科技有限公司 | A kind of speech recognition system and method |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1545694A (en) * | 2001-06-19 | 2004-11-10 | 英特尔公司 | Client-server based distributed speech recognition system |
CN1815522A (en) * | 2006-02-28 | 2006-08-09 | 安徽中科大讯飞信息科技有限公司 | Method for testing mandarin level and guiding learning using computer |
CN101122900A (en) * | 2007-09-25 | 2008-02-13 | 中兴通讯股份有限公司 | Words partition system and method |
CN103076893A (en) * | 2012-12-31 | 2013-05-01 | 百度在线网络技术(北京)有限公司 | Method and equipment for realizing voice input |
CN103440253A (en) * | 2013-07-25 | 2013-12-11 | 清华大学 | Speech retrieval method and system |
CN103559880A (en) * | 2013-11-08 | 2014-02-05 | 百度在线网络技术(北京)有限公司 | Voice input system and voice input method |
CN103761975A (en) * | 2014-01-07 | 2014-04-30 | 苏州思必驰信息科技有限公司 | Method and device for oral evaluation |
CN106205635A (en) * | 2016-07-13 | 2016-12-07 | 中南大学 | Method of speech processing and system |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1100305C (en) * | 1999-03-31 | 2003-01-29 | 五邑大学 | Speech control command generator in noiseful environment |
CN100354930C (en) * | 2004-11-01 | 2007-12-12 | 英业达股份有限公司 | Speech display system and method |
CN101083798A (en) * | 2007-07-09 | 2007-12-05 | 中兴通讯股份有限公司 | Method for realizing multimedia speech SMS service |
CN103366742B (en) * | 2012-03-31 | 2018-07-31 | 上海果壳电子有限公司 | Pronunciation inputting method and system |
KR20130134195A (en) * | 2012-05-30 | 2013-12-10 | 삼성전자주식회사 | Apparatas and method fof high speed visualization of audio stream in a electronic device |
US9728201B2 (en) * | 2012-10-31 | 2017-08-08 | Nec Corporation | Playback apparatus, setting apparatus, playback method, and program |
CN103634472B (en) * | 2013-12-06 | 2016-11-23 | 惠州Tcl移动通信有限公司 | User mood and the method for personality, system and mobile phone is judged according to call voice |
CN103841268A (en) * | 2014-03-17 | 2014-06-04 | 联想(北京)有限公司 | Information processing method and information processing device |
CN104318921B (en) * | 2014-11-06 | 2017-08-25 | 科大讯飞股份有限公司 | Segment cutting detection method and system, method and system for evaluating spoken language |
JP6003971B2 (en) * | 2014-12-22 | 2016-10-05 | カシオ計算機株式会社 | Voice search device, voice search method and program |
JP6003972B2 (en) * | 2014-12-22 | 2016-10-05 | カシオ計算機株式会社 | Voice search device, voice search method and program |
JP6317281B2 (en) * | 2015-03-05 | 2018-04-25 | 日本電信電話株式会社 | Call system, call control system, and call method |
US10380242B2 (en) * | 2015-05-14 | 2019-08-13 | Nuance Communications, Inc. | System and method for processing out of vocabulary compound words |
CN105161094A (en) * | 2015-06-26 | 2015-12-16 | 徐信 | System and method for manually adjusting cutting point in audio cutting of voice |
CN104901820B (en) * | 2015-06-29 | 2018-11-23 | 广州华多网络科技有限公司 | A kind of wheat sequence controlling method, device and system |
CN205230135U (en) * | 2015-11-30 | 2016-05-11 | 刘奇 | Spoken remote testing system of foreign language |
CN106202301B (en) * | 2016-07-01 | 2019-10-08 | 武汉泰迪智慧科技有限公司 | A kind of intelligent response system based on deep learning |
- 2016-12-30 CN CN201611269661.XA patent/CN107068145B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN107068145A (en) | 2017-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107093431B (en) | Method and device for quality inspection of service quality | |
US20170169822A1 (en) | Dialog text summarization device and method | |
CN111241357A (en) | Dialogue training method, device, system and storage medium | |
JP6172769B2 (en) | Understanding support system, understanding support server, understanding support method, and program | |
JP2002125047A5 (en) | ||
CN107102990A (en) | The method and apparatus translated to voice | |
CN112507294B (en) | English teaching system and teaching method based on human-computer interaction | |
CN110135879A (en) | Customer service quality automatic scoring method based on natural language processing | |
CN111048095A (en) | Voice transcription method, equipment and computer readable storage medium | |
CN107068145B (en) | Speech evaluating method and system | |
CN107240394A (en) | A kind of dynamic self-adapting speech analysis techniques for man-machine SET method and system | |
CN110600033A (en) | Learning condition evaluation method and device, storage medium and electronic equipment | |
CN107092692A (en) | The update method and intelligent customer service system of knowledge base | |
CN111259124A (en) | Dialogue management method, device, system and storage medium | |
CN111768781A (en) | Voice interruption processing method and device | |
CN110490428A (en) | Job of air traffic control method for evaluating quality and relevant apparatus | |
CN109308578A (en) | A kind of enterprise's big data analysis system and method | |
CN111858897A (en) | Customer service staff speech guiding method and system | |
KR20070006742A (en) | Language teaching method | |
CN110751950A (en) | Police conversation voice recognition method and system based on big data | |
KR102287431B1 (en) | Apparatus for recording meeting and meeting recording system | |
CN109147792A (en) | A kind of voice resume system | |
CN112562644A (en) | Customer service quality inspection method, system, equipment and medium based on human voice separation | |
CN108717851A (en) | A kind of audio recognition method and device | |
KR20110018244A (en) | Method and system for providing lecture information associated with on-line examination |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||