CN107068145A - Speech evaluating method and system - Google Patents

Speech evaluating method and system

Info

Publication number
CN107068145A
CN107068145A (application CN201611269661.XA; granted as CN107068145B)
Authority
CN
China
Prior art keywords
client
user
splitting
speech
server
Prior art date
Application number
CN201611269661.XA
Other languages
Chinese (zh)
Other versions
CN107068145B (en)
Inventor
李淼磊
蒋直平
于健昕
崔玉杰
赵杨
党伟然
Original Assignee
中南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中南大学
Priority to CN201611269661.XA
Publication of CN107068145A
Application granted granted Critical
Publication of CN107068145B

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/02 — Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/04 — Segmentation; Word boundary detection
    • G10L 15/08 — Speech classification or search
    • G10L 2015/088 — Word spotting
    • G10L 17/00 — Speaker identification or verification
    • G10L 17/02 — Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L 25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00–G10L 21/00
    • G10L 25/48 — Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination

Abstract

The present invention relates to the field of speech recognition, and discloses a speech evaluation method and system intended to improve evaluation accuracy. In the disclosed method, a client collects a user's speech data, splits the collected speech word by word at a uniform time interval, and plays back the split speech so the user can confirm that the split is correct. After the user confirms the split, the client transmits the split speech data to a server, which recognizes and evaluates it. Because speaking rates differ between users, having the client split the collected speech word by word and play it back to the user both ensures that the split is correct and keeps each user's split time interval within the range the server can recognize. This makes it convenient for the server to perform the corresponding speech recognition and evaluation according to the split information, and also improves the accuracy of both recognition and evaluation.

Description

Speech evaluating method and system

Technical field

The present invention relates to the field of speech processing, and more particularly to a speech evaluation method and system.

Background art

Language is the most important vehicle of communication and carrier of information, and the popularization of a common national language is an important foundation for national unification, ethnic unity, and social progress. China is a multi-ethnic, multilingual country with a loose mother-tongue environment: the language people first acquire is usually their ethnic language or a regional dialect, which hinders communication between people from different regions, so Mandarin, as the national common language, is widely promoted. Actively popularizing Mandarin helps eliminate language barriers, promotes social interaction, and is of great significance to economic, political, and cultural development; it also promotes exchange among regions and ethnic groups, helps safeguard national unification, and strengthens national cohesion. Mandarin proficiency testing is an important part of this effort, yet it is still largely scored manually: a single candidate is examined by three to five examiners over a long period, while every industry needs large numbers of certified Mandarin speakers each year. Manual testing is time-consuming, labor-intensive, costly, and subjective, and clearly cannot meet current social demand. Meanwhile, the rapid development of mobile hardware has given intelligent mobile terminals broad application prospects; they have become an important platform through which individuals access the network and enterprises provide services, so Mandarin proficiency testing can be attempted on intelligent mobile terminals. For example, a Mandarin evaluation and tutoring system based on an Android device can conduct proficiency testing that is fast, low-cost, easy to use, objective, and fair.

Summary of the invention

The object of the present invention is to disclose a speech evaluation method and system that improve evaluation accuracy.

To achieve the above object, the invention discloses a speech evaluation method, including:

A client collects a user's speech data, splits the collected speech word by word at a uniform time interval, and then plays back the split speech to the user for confirmation;

after the user confirms that the split is correct, the client transmits the split speech data to a server, which recognizes and evaluates it.

Corresponding to the above evaluation method, the invention also discloses a speech evaluation system, including a client and a server:

The client is configured to collect a user's speech data, split the collected speech word by word at a uniform time interval, and then play back the split speech to the user for confirmation; and, after the user confirms that the split is correct, to transmit the split speech data to the server, which recognizes and evaluates it.

The invention has the following advantages:

Because speaking rates differ between users, having the client split the collected speech word by word and play it back to the user ensures that the split is correct, and keeps each user's split time interval within the range the server can recognize. This makes it convenient for the server to perform the corresponding speech recognition and evaluation according to the split information, and also improves the accuracy of both. Preferably, in the disclosed speech evaluation method and system, the split time interval can be set by the client in a user-defined way, and the speech data packet the client sends to the server carries this user-defined interval, so that the server can perform adaptive speech recognition and evaluation for different users according to their respective intervals.

The present invention is described in further detail below with reference to the accompanying drawings.

Brief description of the drawings

The accompanying drawings, which form a part of this application, are provided for a further understanding of the present invention. The schematic embodiments of the invention and their descriptions are used to explain the invention and do not unduly limit it. In the drawings:

Fig. 1 is a schematic flowchart of the speech evaluation method disclosed in an embodiment of the present invention.

Detailed description of the embodiments

The embodiments of the invention are described in detail below with reference to the accompanying drawings, but the invention can be implemented in many different ways as defined and covered by the claims.

Embodiment 1

This embodiment of the present invention discloses a speech evaluation method, as shown in Fig. 1, including:

Step S1: the client collects the user's speech data, splits the collected speech word by word at a uniform time interval, and then plays back the split speech to the user for confirmation.

In this step, because speaking rates differ between users, having the client split the collected speech word by word and play it back to the user ensures that the split is correct, and keeps each user's split time interval within the range the server can recognize. This makes it convenient for the server to perform the corresponding speech recognition and evaluation according to the split information, and also improves the accuracy of both.
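The uniform-interval, word-by-word split described in step S1 can be sketched as follows. This is a minimal illustration assuming a mono sample array and one word per interval; the patent does not specify an audio format or API:

```python
def split_words(samples, sample_rate, interval_s):
    """Split a mono sample array into word-sized chunks of interval_s seconds.

    Assumes (as in the method above) that the speaker utters one word per
    uniform time interval; a shorter final remainder is kept as a segment.
    """
    step = int(sample_rate * interval_s)
    return [samples[i:i + step] for i in range(0, len(samples), step)]

# Example: 3 seconds of dummy "audio" at 8 kHz, split at 1-second intervals.
audio = list(range(8000 * 3))
segments = split_words(audio, 8000, 1.0)
print(len(segments))     # 3 segments
print(len(segments[0]))  # 8000 samples each
```

Each segment would then be played back to the user in order, so the user can confirm that every interval indeed contains exactly one word.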

Step S2: after the user confirms that the split is correct, the client transmits the split speech data to the server, which recognizes and evaluates it. If the user judges the split to be incorrect, the currently recorded speech data can be deleted and the method returns to step S1 to collect speech data again. Correspondingly, during recognition and evaluation the server can compare feature values of the collected speech against the corresponding stored standard-speech feature values by correlation, and return the correlation comparison result to the client; in particular, the correlation comparison can be based on the Pearson correlation coefficient.
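The Pearson-based comparison the server may use can be sketched as a plain computation over two feature vectors. The feature values themselves (e.g. pitch or spectral statistics) are not specified by the patent; the vectors below are illustrative:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

user_features = [1.0, 2.0, 3.0, 4.0]      # features of the collected speech
standard_features = [2.0, 4.0, 6.0, 8.0]  # stored standard-speech features
print(pearson(user_features, standard_features))  # 1.0 (perfectly correlated)
```

A coefficient near 1.0 indicates the user's speech closely tracks the stored standard, which the server could map onto a score before returning the result to the client.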

In this embodiment, environmental noise is, to a greater or lesser extent, inevitably mixed into the beginning and end of the collected speech data. To address this, the embodiment can provide the following two processing modes:

Mode one: before the client splits the collected speech at the uniform time interval, it removes the leading and trailing environmental noise from the collected speech data. For example, a segment of environmental audio is collected in advance and its frequency information is obtained; this is then subtracted from the frequency information of the speech under test, yielding test audio with the ambient noise removed.
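The subtraction described in mode one can be pictured as removing a pre-recorded environment's per-frequency magnitudes from those of the test speech. Treating the "frequency information" as magnitude spectra is an assumption, and clamping at zero is a common safeguard rather than something the patent specifies:

```python
def subtract_noise(speech_mag, noise_mag):
    """Subtract a pre-recorded noise magnitude spectrum from a speech
    magnitude spectrum, clamping at zero so no bin goes negative."""
    return [max(s - n, 0.0) for s, n in zip(speech_mag, noise_mag)]

speech = [5.0, 3.0, 1.0, 4.0]  # hypothetical per-bin magnitudes of test speech
noise = [1.0, 1.0, 2.0, 0.5]   # magnitudes of the pre-recorded environment
print(subtract_noise(speech, noise))  # [4.0, 2.0, 0.0, 3.5]
```

In practice the spectra would come from an FFT of each frame and the cleaned magnitudes would be transformed back to audio; this sketch shows only the per-bin subtraction itself.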

Mode two: when the client plays back the split speech to the user, it displays the playback progress and, after playback, provides an editing interface so the user can trim off the leading and trailing environmental noise.

In addition, in embodiments of the invention, the split time interval can be agreed between the client and the server. It can be a fixed duration that the user cannot change, or it can be user-defined in the following preferred way:

The split time interval is set by the client in a user-defined way, and the speech data packet the client sends to the server carries this user-defined interval, so that the server performs adaptive speech recognition and evaluation for different users according to their respective intervals.
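One possible shape for a data packet carrying the user-defined split interval is sketched below. All field names are illustrative assumptions; the patent does not define a packet format:

```python
import json

def build_packet(segments, interval_s, language_tag):
    """Bundle split speech segments with the user-defined split interval
    (plus a language tag, used later in this embodiment) for the server."""
    return json.dumps({
        "split_interval_s": interval_s,  # user-defined split interval
        "language": language_tag,        # hypothetical tag, e.g. "mandarin"
        "segments": segments,            # base64 audio in practice; raw here
    })

packet = build_packet([[1, 2], [3, 4]], 0.8, "mandarin")
decoded = json.loads(packet)
print(decoded["split_interval_s"])  # 0.8
```

On receipt, the server would read `split_interval_s` and adapt its segmentation-aware recognition to that user's speaking rate.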

Further, considering the diversity of existing languages, the client in this embodiment can also offer at least two different language test modes for the user to choose from, and carry the corresponding language tag in the transmitted data packet so the server can identify it. A database connected to the server stores, for the relevant test items, a mapping table from each language tag to the corresponding standard audio data. The languages here can be commonly used ones such as Mandarin, English, or French, or a local dialect.
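The language-tag-to-standard-audio mapping table can be pictured as a simple lookup; the tags and file paths below are hypothetical:

```python
# Hypothetical mapping table: language tag -> standard audio reference
# for one test item, as stored in the server-side database.
STANDARD_AUDIO = {
    "mandarin": "std/mandarin_item42.wav",
    "english": "std/english_item42.wav",
    "cantonese": "std/cantonese_item42.wav",
}

def lookup_standard(tag):
    """Return the standard audio for a language tag, or None if unsupported."""
    return STANDARD_AUDIO.get(tag)

print(lookup_standard("english"))  # std/english_item42.wav
print(lookup_standard("klingon"))  # None
```

The server would resolve the tag carried in each data packet through such a table before comparing the user's speech with the matching standard audio.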

In this embodiment, besides the word-by-word splitting of speech described above, other interactions between the client and the server include, but are not limited to:

The server receives a speech-test request from the client user and determines which specific test the user has selected, for example Mandarin or a local dialect; it then displays on screen a random paragraph or sentence from the corresponding test, according to the user's request, for the user to read aloud while the client collects the audio;

Correspondingly, during recognition and testing the server can convert the received audio into text, compare the converted text against the tested paragraph or sentence, and mark correctness word by word with Boolean variables. Furthermore, the evaluation interface returned to the client can provide, for any character, word, or sentence, a link that jumps to a standard-pronunciation training interface; if needed, both a male-voice and a female-voice version of the standard speech can be provided.
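The word-by-word Boolean marking can be sketched as a positional comparison of the recognized text with the reference sentence. Positional alignment is an assumption here; the patent does not specify the matching scheme:

```python
def mark_words(recognized, reference):
    """Mark each reference word True/False by position, as the server might
    when comparing recognized text with the tested sentence."""
    rec = recognized.split()
    return [(w, i < len(rec) and rec[i] == w)
            for i, w in enumerate(reference.split())]

marks = mark_words("the quick brown fix", "the quick brown fox")
print(marks)  # [('the', True), ('quick', True), ('brown', True), ('fox', False)]
```

Each `False` entry identifies a word the client could render as incorrect and link to the standard-pronunciation training interface.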

In summary, in the speech evaluation method provided by this embodiment, because speaking rates differ between users, having the client split the collected speech word by word and play it back to the user ensures that the split is correct, and keeps each user's split time interval within the range the server can recognize. This makes it convenient for the server to perform the corresponding speech recognition and evaluation according to the split information, and also improves the accuracy of both. Preferably, in the disclosed method, the split time interval can be set by the client in a user-defined way, and the speech data packet sent to the server carries this user-defined interval, so that the server performs adaptive speech recognition and evaluation for different users according to their respective intervals.

Embodiment 2

Corresponding to the above method embodiment, this embodiment discloses a speech evaluation system including a client and a server. The client is configured to: collect a user's speech data, split the collected speech at a uniform time interval, and then play back the split speech to the user for confirmation; and, after the user confirms that the split is correct, transmit the split speech data to the server, which recognizes and evaluates it.

Optionally, the client is further configured to: when playing back the split speech to the user, display the playback progress, and after playback provide an editing interface for the user to trim off the leading and trailing environmental noise; or, before splitting the collected speech at the uniform time interval, remove the leading and trailing environmental noise from the collected speech data.

Preferably, the client is further configured to set the split time interval in a user-defined way and to carry the user-defined interval in the speech data packet sent to the server. Further, the client of this embodiment is also configured to: provide at least two different language test modes for the user to choose from, and carry the corresponding language tag in the transmitted data packet for the server to identify, wherein a database connected to the server stores, for the relevant test items, a mapping table from each language tag to the corresponding standard audio data.

Likewise, in the speech evaluation system provided by this embodiment, because speaking rates differ between users, having the client split the collected speech word by word and play it back to the user ensures that the split is correct, and keeps each user's split time interval within the range the server can recognize. This makes it convenient for the server to perform the corresponding speech recognition and evaluation according to the split information, and also improves the accuracy of both. Preferably, in the disclosed system, the split time interval can be set by the client in a user-defined way, and the speech data packet sent to the server carries this user-defined interval, so that the server performs adaptive speech recognition and evaluation for different users according to their respective intervals.

The foregoing are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent substitution, or improvement made within the spirit and principles of the invention shall be included in the scope of protection of the invention.

Claims (10)

1. A speech evaluation method, characterized by including:
a client collecting a user's speech data, splitting the collected speech word by word at a uniform time interval, and then playing back the split speech to the user for confirmation;
after the user confirms that the split is correct, transmitting the split speech data to a server, which recognizes and evaluates it.
2. The speech evaluation method according to claim 1, characterized in that, when the client plays back the split speech to the user, the method further includes:
displaying the playback progress at the client, and after playback providing an editing interface for the user to trim off the leading and trailing environmental noise.
3. The speech evaluation method according to claim 1, characterized in that, before the collected speech is split at the uniform time interval, the method further includes:
the client removing the leading and trailing environmental noise from the collected speech data.
4. The speech evaluation method according to any one of claims 1 to 3, characterized in that the split time interval is set by the client in a user-defined way, and the speech data packet the client sends to the server carries the user-defined split interval.
5. The speech evaluation method according to claim 4, characterized in that the client provides at least two different language test modes for the user to choose from, and carries the corresponding language tag in the transmitted data packet for the server to identify, wherein a database connected to the server stores, for the relevant test items, a mapping table from each language tag to the corresponding standard audio data.
6. A speech evaluation system, characterized by including a client and a server:
the client is configured to collect a user's speech data, split the collected speech word by word at a uniform time interval, and then play back the split speech to the user for confirmation; and, after the user confirms that the split is correct, transmit the split speech data to the server, which recognizes and evaluates it.
7. The speech evaluation system according to claim 6, characterized in that the client is further configured to: when playing back the split speech to the user, display the playback progress, and after playback provide an editing interface for the user to trim off the leading and trailing environmental noise.
8. The speech evaluation system according to claim 6, characterized in that the client is further configured to: before the collected speech is split at the uniform time interval, remove the leading and trailing environmental noise from the collected speech data.
9. The speech evaluation system according to any one of claims 6 to 8, characterized in that the client is further configured to set the split time interval in a user-defined way and to carry the user-defined split interval in the speech data packet sent to the server.
10. The speech evaluation system according to claim 9, characterized in that the client is further configured to: provide at least two different language test modes for the user to choose from, and carry the corresponding language tag in the transmitted data packet for the server to identify, wherein a database connected to the server stores, for the relevant test items, a mapping table from each language tag to the corresponding standard audio data.
CN201611269661.XA 2016-12-30 2016-12-30 Speech evaluating method and system CN107068145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611269661.XA CN107068145B (en) 2016-12-30 2016-12-30 Speech evaluating method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611269661.XA CN107068145B (en) 2016-12-30 2016-12-30 Speech evaluating method and system

Publications (2)

Publication Number Publication Date
CN107068145A true CN107068145A (en) 2017-08-18
CN107068145B CN107068145B (en) 2019-02-15

Family

ID=59623400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611269661.XA CN107068145B (en) 2016-12-30 2016-12-30 Speech evaluating method and system

Country Status (1)

Country Link
CN (1) CN107068145B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320743A (en) * 2018-02-07 2018-07-24 上海速益网络科技有限公司 A kind of data entry method and device
CN109036431A (en) * 2018-07-11 2018-12-18 北京智能管家科技有限公司 A kind of speech recognition system and method

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1100305C (en) * 1999-03-31 2003-01-29 五邑大学 Speech control command generator in noiseful environment
CN1545694A (en) * 2001-06-19 2004-11-10 英特尔公司 Client-server based distributed speech recognition system
CN1770262A (en) * 2004-11-01 2006-05-10 英业达股份有限公司 Speech display system and method
CN1815522A (en) * 2006-02-28 2006-08-09 安徽中科大讯飞信息科技有限公司 Method for testing mandarin level and guiding learning using computer
CN101083798A (en) * 2007-07-09 2007-12-05 中兴通讯股份有限公司 Method for realizing multimedia speech SMS service
CN101122900A (en) * 2007-09-25 2008-02-13 中兴通讯股份有限公司 Words partition system and method
CN103076893A (en) * 2012-12-31 2013-05-01 百度在线网络技术(北京)有限公司 Method and equipment for realizing voice input
CN103366742A (en) * 2012-03-31 2013-10-23 盛乐信息技术(上海)有限公司 Voice input method and system
EP2669893A2 (en) * 2012-05-30 2013-12-04 Samsung Electronics Co., Ltd Apparatus and method for high speed visualization of audio stream in an electronic device
CN103440253A (en) * 2013-07-25 2013-12-11 清华大学 Speech retrieval method and system
CN103559880A (en) * 2013-11-08 2014-02-05 百度在线网络技术(北京)有限公司 Voice input system and voice input method
CN103634472A (en) * 2013-12-06 2014-03-12 惠州Tcl移动通信有限公司 Method, system and mobile phone for judging mood and character of user according to call voice
CN103761975A (en) * 2014-01-07 2014-04-30 苏州思必驰信息科技有限公司 Method and device for oral evaluation
CN103841268A (en) * 2014-03-17 2014-06-04 联想(北京)有限公司 Information processing method and information processing device
CN104318921A (en) * 2014-11-06 2015-01-28 科大讯飞股份有限公司 Voice section segmentation detection method and system and spoken language detecting and evaluating method and system
US20150243294A1 (en) * 2012-10-31 2015-08-27 Nec Casio Mobile Communications, Ltd. Playback apparatus, setting apparatus, playback method, and program
CN104901820A (en) * 2015-06-29 2015-09-09 广州华多网络科技有限公司 System, device and method for speaking sequence control
CN105161094A (en) * 2015-06-26 2015-12-16 徐信 System and method for manually adjusting cutting point in audio cutting of voice
CN205230135U (en) * 2015-11-30 2016-05-11 刘奇 Spoken remote testing system of foreign language
CN105719643A (en) * 2014-12-22 2016-06-29 卡西欧计算机株式会社 VOICE RETRIEVAL APPARATUS and VOICE RETRIEVAL METHOD
CN105718503A (en) * 2014-12-22 2016-06-29 卡西欧计算机株式会社 Voice retrieval apparatus, and voice retrieval method
JP2016163303A (en) * 2015-03-05 2016-09-05 日本電信電話株式会社 System, control system and method for speech communication
US20160336004A1 (en) * 2015-05-14 2016-11-17 Nuance Communications, Inc. System and method for processing out of vocabulary compound words
CN106205635A (en) * 2016-07-13 2016-12-07 中南大学 Method of speech processing and system
CN106202301A (en) * 2016-07-01 2016-12-07 武汉泰迪智慧科技有限公司 A kind of intelligent response system based on degree of depth study

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1100305C (en) * 1999-03-31 2003-01-29 五邑大学 Speech control command generator in noiseful environment
CN1545694A (en) * 2001-06-19 2004-11-10 英特尔公司 Client-server based distributed speech recognition system
CN1770262A (en) * 2004-11-01 2006-05-10 英业达股份有限公司 Speech display system and method
CN1815522A (en) * 2006-02-28 2006-08-09 安徽中科大讯飞信息科技有限公司 Method for testing mandarin level and guiding learning using computer
CN101083798A (en) * 2007-07-09 2007-12-05 中兴通讯股份有限公司 Method for realizing multimedia speech SMS service
CN101122900A (en) * 2007-09-25 2008-02-13 中兴通讯股份有限公司 Words partition system and method
CN103366742A (en) * 2012-03-31 2013-10-23 盛乐信息技术(上海)有限公司 Voice input method and system
EP2669893A2 (en) * 2012-05-30 2013-12-04 Samsung Electronics Co., Ltd Apparatus and method for high speed visualization of audio stream in an electronic device
US20150243294A1 (en) * 2012-10-31 2015-08-27 Nec Casio Mobile Communications, Ltd. Playback apparatus, setting apparatus, playback method, and program
CN103076893A (en) * 2012-12-31 2013-05-01 百度在线网络技术(北京)有限公司 Method and equipment for realizing voice input
CN103440253A (en) * 2013-07-25 2013-12-11 清华大学 Speech retrieval method and system
CN103559880A (en) * 2013-11-08 2014-02-05 百度在线网络技术(北京)有限公司 Voice input system and voice input method
CN103634472A (en) * 2013-12-06 2014-03-12 惠州Tcl移动通信有限公司 Method, system and mobile phone for judging mood and character of user according to call voice
CN103761975A (en) * 2014-01-07 2014-04-30 苏州思必驰信息科技有限公司 Method and device for oral evaluation
CN103841268A (en) * 2014-03-17 2014-06-04 联想(北京)有限公司 Information processing method and information processing device
CN104318921A (en) * 2014-11-06 2015-01-28 科大讯飞股份有限公司 Voice section segmentation detection method and system and spoken language detecting and evaluating method and system
CN105719643A (en) * 2014-12-22 2016-06-29 卡西欧计算机株式会社 VOICE RETRIEVAL APPARATUS and VOICE RETRIEVAL METHOD
CN105718503A (en) * 2014-12-22 2016-06-29 卡西欧计算机株式会社 Voice retrieval apparatus, and voice retrieval method
JP2016163303A (en) * 2015-03-05 2016-09-05 日本電信電話株式会社 System, control system and method for speech communication
US20160336004A1 (en) * 2015-05-14 2016-11-17 Nuance Communications, Inc. System and method for processing out of vocabulary compound words
CN105161094A (en) * 2015-06-26 2015-12-16 徐信 System and method for manually adjusting cutting point in audio cutting of voice
CN104901820A (en) * 2015-06-29 2015-09-09 广州华多网络科技有限公司 System, device and method for speaking sequence control
CN205230135U (en) * 2015-11-30 2016-05-11 刘奇 Spoken remote testing system of foreign language
CN106202301A (en) * 2016-07-01 2016-12-07 武汉泰迪智慧科技有限公司 A kind of intelligent response system based on degree of depth study
CN106205635A (en) * 2016-07-13 2016-12-07 中南大学 Method of speech processing and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320743A (en) * 2018-02-07 2018-07-24 上海速益网络科技有限公司 A kind of data entry method and device
CN109036431A (en) * 2018-07-11 2018-12-18 北京智能管家科技有限公司 A kind of speech recognition system and method

Also Published As

Publication number Publication date
CN107068145B (en) 2019-02-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant