CN109460558B - Effect judging method of voice translation system - Google Patents

Effect judging method of voice translation system

Info

Publication number
CN109460558B
CN109460558B (application CN201811489674.7A)
Authority
CN
China
Prior art keywords
text
translation
source language
speech
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811489674.7A
Other languages
Chinese (zh)
Other versions
CN109460558A (en)
Inventor
陈巍华 (Chen Weihua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Shanghai Intelligent Technology Co Ltd
Original Assignee
Unisound Shanghai Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Shanghai Intelligent Technology Co Ltd
Priority to CN201811489674.7A
Publication of CN109460558A
Application granted
Publication of CN109460558B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G06F40/51 Translation evaluation
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/005 Language recognition
    • G10L15/01 Assessment or evaluation of speech recognition systems
    • G10L15/26 Speech to text systems

Abstract

The invention provides an effect evaluation method for a speech translation system that eliminates the need to manually align the results of speech machine translation through manual labeling and alignment before each evaluation. It removes the manual intervention otherwise required for every evaluation while keeping the evaluation results as accurate as those obtained after manual labeling and alignment, thereby greatly improving both the efficiency and the accuracy of evaluating the speech translation system.

Description

Effect judging method of voice translation system
Technical Field
The invention relates to the technical field of speech recognition and translation, and in particular to an effect evaluation method for a speech translation system.
Background
With the continued development of globalization, people from different countries communicate more and more frequently. Because of historical, regional and other factors, each country uses a different language, and no common language that is simple to learn and convenient to use yet exists, so communication between speakers of different languages usually requires dedicated translation work to proceed normally. On formal occasions, professional interpreters are often provided to assist communication between staff of different countries, but on most informal occasions and in daily life an interpreter is not available at all times, and not everyone has basic translation ability. To allow people who speak different languages to communicate without any interpreter, many artificial intelligence enterprises have invested large amounts of manpower and material resources in developing and improving speech machine translation tools. The requirements and standards for a speech machine translation tool are higher than those for an existing text machine translation tool: a text machine translation tool translates input text in one language into another language and only needs to check and recognize the input text; it operates purely at the level of text data and does not involve any intelligent interactive recognition technology.
By contrast, the working process of a speech machine translation tool mainly comprises speech extraction, speech meaning recognition, language translation conversion and spoken feedback of the translation result, and every step involves recognition of speech signals, which vary greatly across users in pronunciation, intonation, vocabulary and other respects. The accuracy of a speech machine translation tool's result therefore depends largely on how accurately the user's pronunciation, intonation, vocabulary and so on are recognized. To ensure the correctness of speech translation as far as possible, almost all speech machine translation tools automatically evaluate the translation result with algorithms such as NIST, METEOR or BLEU, each of which has its own advantages and disadvantages.
Disclosure of Invention
During the development of speech machine translation technology, the various algorithms used to automatically evaluate speech translation results are combined with a manually labeled test set to complete the corresponding test flow. Because the translation results of a speech machine translation tool do not correspond one-to-one with the data in the manually labeled test set, the two must be mapped onto each other by manual alignment in every test run; that is, every test requires manual intervention, so testing speech machine translation consumes a great deal of manpower and time and undermines the original purpose of automating the translation.
To address the shortcomings of existing speech machine translation evaluation systems, the invention provides an effect evaluation method for a speech translation system that does not require manually aligning the machine translation results through manual labeling and alignment each time. It removes the manual intervention otherwise required for every evaluation while still keeping the evaluation results as accurate as those obtained after manual labeling and alignment, thereby greatly improving both the efficiency and the accuracy of evaluating the speech translation system.
The invention provides an effect evaluation method for a speech translation system, characterized by comprising the following steps:
step (1), performing a speech recognition operation on source speech composed of N sentences of manually labeled source language text S1 to obtain M corresponding sentences of speech recognition text S2, and performing machine translation on the speech recognition text S2 to obtain M corresponding sentences of target-end text D2, wherein N and M are positive integers;
step (2), based on the source language text S1 and the speech recognition text S2, obtaining index information about the source language text S1 and the speech recognition text S2 through specific algorithmic processing;
step (3), updating the manually labeled target-end text D1 and the machine-translated target-end text D2 based on the index information, and evaluating the updated target-end text D1 and target-end text D2 through specific algorithmic processing;
further, in step (1), before the speech recognition operation is performed, N corresponding text fields are picked at random from a source language text database, a semantic-smoothness score is computed for the passage formed by the N text fields, and, if the score satisfies a preset condition, the passage is manually labeled to form the source language text S1, wherein the manual labeling includes at least one of typo correction, grammar correction and logic correction of the passage;
further, in step (1), after the speech recognition operation is performed, text correction processing is applied to the recognition result of the source language text S1 to obtain the speech recognition text S2, after which a mapping relationship exists between the source language text S1 and the speech recognition text S2 at the semantic level; the text correction processing includes at least one of text sentence-smoothing processing and text sentence-breaking processing of the recognition result;
further, in step (1), the machine translation is carried out by a predetermined set of translation engines: the recognition difficulty of the speech recognition text S2 is evaluated according to at least one of its language type, speech duration and speaking rate, and a translation engine that meets the requirements is selected from the set according to the evaluation result to produce the target-end text D2;
further, in step (2), obtaining the index information of the source language text S1 and the speech recognition text S2 through specific algorithmic processing includes applying the text evaluation algorithm BLEU to each sentence of the source language text S1 paired with each sentence of the speech recognition text S2, performing matching processing and a convolution operation in turn to obtain an M-by-N matrix A, wherein the matching processing yields the matching accuracy of each sentence of the source language text S1 against each sentence of the speech recognition text S2, and the convolution operation convolves the matching accuracy with a length penalty factor to yield each element of the matrix A and thus its complete expression;
further, in step (2), obtaining the index information through specific algorithmic processing also includes applying dynamic programming to the matrix A via the DTW (dynamic time warping) algorithm to obtain a warping path for the matrix A, wherein the dynamic programming proceeds as follows: several matrices A corresponding to several different groups of source language text S1 and speech recognition text S2 are obtained, the pairwise differences between these matrices are computed to form a difference set, the covariance value of the difference set is obtained, and, if the covariance value is smaller than a preset covariance threshold, the distribution function of the differences in the set is taken as the warping path;
further, in step (2), obtaining the index information through specific algorithmic processing also includes backtracking along the warping path to align the source language text S1 and the speech recognition text S2 so that they have the same number of sentences, while recording the index information, wherein the backtracking is implemented by iterating over the warping path with a Gaussian mixture model and a clustering algorithm, and the index information relates to the sequence distribution information in the backtracked path generated by this processing;
further, in step (3), updating the manually labeled target-end text D1 and the machine-translated target-end text D2 includes checking the target-end text D1 and the target-end text D2 against the sequence distribution information in the backtracked path and correcting the text errors found by the check, thereby updating the target-end text D1 and the target-end text D2;
further, in step (3), evaluating the updated target-end text D1' and target-end text D2' through specific algorithmic processing specifically means calculating the matching accuracy between the target-end text D1' and the target-end text D2' with the text evaluation algorithm BLEU, wherein the matching accuracy is a comprehensive evaluation parameter covering the degree of phrase symmetry, grammatical accuracy and word alignment rate between the target-end text D1' and the target-end text D2';
further, in step (3), the matching accuracy is compared with an accuracy threshold: if it is greater than or equal to the threshold, the speech translation system is instructed to take the current translation result as the final output; if it is smaller, the speech translation system is instructed to repeat steps (1), (2) and (3) until the matching accuracy satisfies the preset condition.
Compared with the prior art, the effect evaluation method of the speech translation system does not require manually aligning the machine translation results through manual labeling and alignment each time, which removes the manual intervention otherwise needed for every evaluation, while the evaluation results remain as accurate as those obtained after manual labeling and alignment, greatly improving the evaluation efficiency and accuracy of the speech translation system.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a speech translation system according to an embodiment of the present invention.
Fig. 2 is a flow chart of an effect evaluation method of a speech translation system according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative efforts fall within the protection scope of the present invention.
Referring to fig. 1, a schematic structural diagram of a speech translation system according to an embodiment of the present invention is shown. The speech translation system mainly comprises a speech receiving module, a speech recognition and translation module, a translation feedback module and a translation evaluation module. The speech receiving module receives the speech signal input by a user and may preferably be, but is not limited to, a microphone or a microphone array; it may also apply signal amplification, signal filtering, adaptive noise reduction and other processing to the speech signal to reduce its noise components, so that the signal-to-noise ratio of the speech signal meets the requirements of the speech recognition and translation module. In fact, if the noise component in the speech signal is too large and its signal-to-noise ratio therefore too low, the speech recognition and translation module cannot extract a valid speech component from it. The speech recognition and translation module sequentially recognizes and translates a speech signal that meets the signal-to-noise requirement: it first extracts the valid speech component from the signal, then recognizes language elements such as the language, speaking rate, intonation and vocabulary of that component, and then translates the component into a result in the target language according to those elements; the translation result may preferably take text form or speech form.
The translation feedback module feeds the translation result back to the user in text form and/or speech form: a text-form result is shown on the display screen of the speech translation system, and a speech-form result is synthesized and played through its loudspeaker. The translation evaluation module performs the corresponding effect evaluation of the translation result of the speech translation system to determine whether it meets the preset translation requirement.
Referring to fig. 2, a flow chart of the effect evaluation method of a speech translation system according to the present invention is shown. The effect evaluation method automatically evaluates the translation results of speech translation systems such as speech machine translation tools, and performs the evaluation based on the text evaluation algorithm BLEU or an improved variant of BLEU. The method comprises the following steps:
and (1) performing voice recognition operation on a source language composed of N manually marked source language texts S1 to obtain corresponding M sentences of voice recognition texts S2, and performing machine translation on the voice recognition texts S2 to obtain corresponding M sentences of target end texts D2, wherein N and M are positive integers.
The manually labeled source language text S1 is generally taken from a preset source language text database. To ensure that S1 is reasonably representative, before the speech recognition operation N corresponding text fields are picked at random from the database, a semantic-smoothness score is computed for the passage they form, and, if the score satisfies a preset condition, the passage is manually labeled to form the source language text S1, where the manual labeling includes at least one of typo correction, grammar correction and logic correction of the passage. Because S1 is selected at random and the resulting passage is checked for semantic smoothness, any source language data in the database can become S1, so the evaluation method covers common translation vocabulary and speech as widely as possible, which safeguards its comprehensiveness and accuracy.
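As a rough illustration of this sampling-and-gating step, the Python sketch below draws N candidate sentences at random and accepts the draw only if a smoothness scorer approves the joined passage. The scorer, the threshold value and the retry count are all assumptions for illustration; the patent does not specify how the smoothness score is computed.

```python
import random

def sample_source_sentences(corpus, n, smoothness_fn, threshold=0.8, max_tries=10):
    """Draw n candidate sentences at random from the source-language text
    database and accept the draw only if the resulting passage scores at
    or above the smoothness threshold (hypothetical scoring function)."""
    for _ in range(max_tries):
        candidate = random.sample(corpus, n)
        # higher score means a smoother, more natural passage
        if smoothness_fn(" ".join(candidate)) >= threshold:
            return candidate  # smooth enough to hand to human annotators
    raise RuntimeError("no sufficiently smooth passage found")
```

Only accepted passages would then go on to manual labeling (typo, grammar and logic correction) to become S1.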
In addition, after the speech recognition operation, text correction processing is applied to the recognition result of the source language text S1 to obtain the speech recognition text S2; after this processing, a many-to-many mapping relationship exists between the source language text S1 and the speech recognition text S2 at the semantic level. The text correction processing includes at least one of text sentence-smoothing processing and text sentence-breaking processing of the recognition result: sentence smoothing mainly edits the word order and logical fluency of the sentences of the source language text S1, and sentence breaking mainly splits its text into the constituent phrases and sentences. Through this correction, the speech recognition text S2 correctly reflects the meaning of the source language text S1 at both the grammatical and the semantic level; otherwise no mapping relationship between S2 and S1 could be formed, and the subsequent method steps could not be carried out.
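The sentence-breaking part of this correction can be sketched as follows. Raw ASR output typically has no punctuation, so this toy splitter assumes a punctuation restoration step has already run; that assumption, and the regex itself, are illustrative, since the patent does not say how sentence breaking is implemented.

```python
import re

def break_sentences(raw_text):
    """Naive text sentence-breaking: split a recognized text stream at
    terminal punctuation, keeping the punctuation with its sentence."""
    parts = re.split(r"(?<=[.!?])\s+", raw_text.strip())
    return [p for p in parts if p]
```

For example, `break_sentences("Hello there. How are you? Fine.")` yields three sentences, giving S2 sentence units that can later be matched against S1.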
In addition, the machine translation is carried out by a predetermined set of translation engines: the recognition difficulty of the speech recognition text S2 is evaluated according to at least one of its language type, speech duration and speaking rate, and a translation engine that meets the requirements is selected from the set according to the evaluation result, yielding the target-end text D2. In this way, speech signals in different languages receive a targeted evaluation, so the effect evaluation method remains effective for evaluating several widely used languages.
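A minimal sketch of this difficulty-based engine selection might look like the following. The weighting of language type, duration and speaking rate, and the idea of ranking engines by a capability ceiling, are invented for illustration; the patent gives no formula.

```python
def difficulty_score(language, duration_s, words_per_sec):
    """Hypothetical difficulty estimate: a rarer language, longer audio
    and faster speech all make recognition and translation harder."""
    rarity = {"en": 0.0, "zh": 0.2, "ja": 0.3}.get(language, 0.5)
    return (rarity
            + min(duration_s / 60.0, 1.0) * 0.3
            + min(words_per_sec / 5.0, 1.0) * 0.2)

def pick_translator(translators, language, duration_s, words_per_sec):
    """translators: list of (name, max_difficulty) pairs. Return the
    least capable engine that still covers the estimated difficulty."""
    need = difficulty_score(language, duration_s, words_per_sec)
    ranked = sorted(translators, key=lambda t: t[1])
    for name, capacity in ranked:
        if capacity >= need:
            return name
    return ranked[-1][0]  # fall back to the most capable engine
```

Choosing the least capable sufficient engine keeps cheap engines on easy inputs while routing hard inputs to stronger ones.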
Step (2): process the source language text S1 and the speech recognition text S2 with a specific algorithm to obtain index information about the source language text S1 and the speech recognition text S2.
Specifically, obtaining the index information of the source language text S1 and the speech recognition text S2 through specific algorithmic processing includes applying the text evaluation algorithm BLEU to each sentence of S1 paired with each sentence of S2, performing matching processing and a convolution operation to obtain an M-by-N matrix A. The matching processing yields the matching accuracy of each sentence of S1 against each sentence of S2, and the convolution operation convolves the matching accuracy with a length penalty factor to produce each element of the matrix A and thus its complete expression.
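A simplified reading of this step in Python: sentence-level BLEU is computed for every (S2 sentence, S1 sentence) pair to fill the M-by-N matrix A. The patent's "convolution" of match accuracy with a length penalty factor is read here as the multiplicative brevity penalty of standard BLEU; that reading, and the restriction to bigram precision, are assumptions.

```python
import math
from collections import Counter

def ngram_precision(cand, ref, n):
    """Modified n-gram precision of candidate tokens against reference."""
    c = Counter(zip(*[cand[i:] for i in range(n)]))
    r = Counter(zip(*[ref[i:] for i in range(n)]))
    overlap = sum(min(cnt, r[g]) for g, cnt in c.items())
    return overlap / max(sum(c.values()), 1)

def sentence_bleu(cand, ref, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of n-gram
    precisions, scaled by a brevity (length) penalty."""
    precisions = [ngram_precision(cand, ref, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    geo = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo

def score_matrix(s1_sentences, s2_sentences):
    """M x N matrix A: A[i][j] scores recognized sentence i of S2 against
    source sentence j of S1 (match accuracy scaled by the length penalty)."""
    return [[sentence_bleu(s2.split(), s1.split()) for s1 in s1_sentences]
            for s2 in s2_sentences]
```

Identical sentence pairs score 1.0 and unrelated pairs score 0.0, so A concentrates high values along the true S1/S2 correspondence.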
In addition, obtaining the index information through specific algorithmic processing further includes applying dynamic programming to the matrix A via the DTW (dynamic time warping) algorithm to obtain a warping path for the matrix A. The dynamic programming proceeds as follows: several matrices A corresponding to several different groups of source language text S1 and speech recognition text S2 are obtained, the pairwise differences between these matrices are computed to form a difference set, the covariance value of the difference set is obtained, and, if that value is smaller than a preset covariance threshold, the distribution function of the differences in the set is taken as the warping path. After the warping path is obtained, backtracking is performed along it to align the source language text S1 and the speech recognition text S2 so that they have the same number of sentences, and the index information is recorded. The backtracking iterates over the warping path using a Gaussian mixture model and a clustering algorithm, and the index information relates to the sequence distribution information in the backtracked path that this processing generates.
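The warping-path idea can be illustrated with the textbook DTW recurrence. This sketch does not reproduce the patent's multi-matrix covariance variant or the Gaussian mixture backtracking; it assumes a single cost matrix (for example 1 minus the BLEU score from matrix A) and returns the minimum-cost path with a standard backtrace.

```python
def dtw_path(cost):
    """Standard DTW dynamic programming over an M x N cost matrix,
    returning the minimum-cost warping path as (i, j) index pairs."""
    m, n = len(cost), len(cost[0])
    inf = float("inf")
    acc = [[inf] * n for _ in range(m)]
    acc[0][0] = cost[0][0]
    for i in range(m):
        for j in range(n):
            if i == j == 0:
                continue
            best = min(
                acc[i - 1][j] if i > 0 else inf,
                acc[i][j - 1] if j > 0 else inf,
                acc[i - 1][j - 1] if i > 0 and j > 0 else inf,
            )
            acc[i][j] = cost[i][j] + best
    # backtrack from (m-1, n-1) to (0, 0) along the cheapest predecessor
    path, i, j = [(m - 1, n - 1)], m - 1, n - 1
    while (i, j) != (0, 0):
        steps = []
        if i > 0 and j > 0:
            steps.append((acc[i - 1][j - 1], i - 1, j - 1))
        if i > 0:
            steps.append((acc[i - 1][j], i - 1, j))
        if j > 0:
            steps.append((acc[i][j - 1], i, j - 1))
        _, i, j = min(steps)
        path.append((i, j))
    return path[::-1]
```

On a matrix whose diagonal is cheap the path follows the diagonal, i.e. a one-to-one S1/S2 alignment; insertions and deletions in the recognized text show up as horizontal or vertical steps.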
Step (3): update the manually labeled target-end text D1 and the machine-translated target-end text D2 based on the index information, and evaluate the updated target-end text D1 and target-end text D2 through specific algorithmic processing.
Specifically, updating the manually labeled target-end text D1 and the machine-translated target-end text D2 includes checking the target-end text D1 and the target-end text D2 against the sequence distribution information in the backtracked path and correcting the text errors found by the check, thereby updating the target-end text D1 and the target-end text D2.
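One plausible way to use the path's sequence distribution information when updating the two texts is to merge the sentences that the warping path maps onto the same reference unit, so both sides end up with equal sentence counts. The merge rule below is an illustrative assumption, not the patent's procedure.

```python
def regroup_by_path(ref_sents, hyp_sents, path):
    """Merge hypothesis sentences that the warping path (list of (i, j)
    pairs, i indexing hyp and j indexing ref) maps onto the same
    reference sentence, yielding aligned (reference, hypothesis) units."""
    merged = []
    for j, ref in enumerate(ref_sents):
        hyp_ids = sorted({i for i, jj in path if jj == j})
        merged.append((ref, " ".join(hyp_sents[i] for i in hyp_ids)))
    return merged
```

After regrouping, each (D1, D2) unit can be checked and corrected in place, which is what makes a sentence-by-sentence BLEU comparison meaningful in the next step.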
In addition, evaluating the updated target-end text D1' and target-end text D2' through specific algorithmic processing specifically comprises calculating the matching accuracy between D1' and D2' with the text evaluation algorithm BLEU; the matching accuracy is a comprehensive evaluation parameter covering the degree of phrase symmetry, grammatical accuracy and word alignment rate between the target-end text D1' and the target-end text D2'.
Preferably, step (3) may further compare the matching accuracy with an accuracy threshold: if the matching accuracy is greater than or equal to the threshold, the speech translation system is instructed to take the current translation result as the final output; if it is smaller, the system is instructed to repeat steps (1), (2) and (3) until the matching accuracy satisfies the preset condition.
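This threshold gate amounts to a bounded retry loop around the whole pipeline. The function names, the return shape, the default threshold and the round cap below are illustrative assumptions; the patent specifies only the comparison and the repeat-until-pass behavior.

```python
def evaluate_until_pass(run_pipeline, match_accuracy, threshold=0.6, max_rounds=5):
    """Repeat steps (1)-(3) until the matching accuracy between the
    updated reference D1' and hypothesis D2' reaches the threshold.
    run_pipeline re-runs recognition, translation and updating, and
    returns (d1_updated, d2_updated)."""
    for round_no in range(1, max_rounds + 1):
        d1, d2 = run_pipeline()
        acc = match_accuracy(d1, d2)
        if acc >= threshold:
            return {"accepted": True, "rounds": round_no, "accuracy": acc}
    return {"accepted": False, "rounds": max_rounds, "accuracy": acc}
```

The round cap is a practical guard the patent does not mention: without it, a systematically bad translation would loop forever.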
As the above embodiment shows, the effect evaluation method of the speech translation system does not require manually aligning the machine translation results through manual labeling and alignment each time, which removes the manual intervention otherwise needed for every evaluation, while the evaluation results remain as accurate as those obtained after manual labeling and alignment, greatly improving the evaluation efficiency and accuracy of the speech translation system.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

1. An effect evaluation method of a speech translation system, characterized in that the effect evaluation method comprises:
step (1), performing a speech recognition operation on source speech composed of N sentences of manually labeled source language text S1 to obtain M corresponding sentences of speech recognition text S2, and performing machine translation on the speech recognition text S2 to obtain M corresponding sentences of target-end text D2, wherein N and M are positive integers;
step (2), based on the source language text S1 and the speech recognition text S2, obtaining index information about the source language text S1 and the speech recognition text S2 through specific algorithmic processing;
step (3), updating the manually labeled target-end text D1 and the machine-translated target-end text D2 based on the index information, and evaluating the updated target-end text D1 and target-end text D2 through specific algorithmic processing;
wherein updating the manually labeled target-end text D1 and the machine-translated target-end text D2 comprises checking the target-end text D1 and the target-end text D2 respectively against sequence distribution information in a backtracked path, and correcting text errors found by the check, thereby updating the target-end text D1 and the target-end text D2;
evaluating the updated target-end text D1' and target-end text D2' through specific algorithmic processing means calculating the matching accuracy between the target-end text D1' and the target-end text D2' with the text evaluation algorithm BLEU, wherein the matching accuracy is a comprehensive evaluation parameter covering the degree of phrase symmetry, grammatical accuracy and word alignment rate between the target-end text D1' and the target-end text D2'; and
in step (3), the matching accuracy is compared with an accuracy threshold: if the matching accuracy is greater than or equal to the accuracy threshold, the speech translation system is instructed to take the current translation result as the final output; if the matching accuracy is smaller than the accuracy threshold, the speech translation system is instructed to perform steps (1), (2) and (3) again until the matching accuracy satisfies a preset condition.
2. The method according to claim 1, wherein in step (1), before the speech recognition operation is performed, N corresponding text fields are picked at random from a source language text database, a semantic-smoothness score is then computed for the passage composed of the N text fields, and, if the score satisfies a preset condition, the passage is manually labeled to form the source language text S1, wherein the manual labeling comprises at least one of typo correction, grammar correction and logic correction of the passage.
3. The method according to claim 1, wherein in step (1), after the speech recognition operation is performed, text correction processing is further applied to the recognition result of the source language text S1 to obtain the speech recognition text S2, after which a semantic many-to-many mapping relationship exists between the source language text S1 and the speech recognition text S2; wherein the text correction processing comprises at least one of text sentence-smoothing processing and text sentence-breaking processing of the recognition result.
4. The method according to claim 1, wherein in step (1), performing the machine translation includes evaluating the recognition difficulty of the speech recognition text S2 according to at least one of its language type, speech text duration and speech speed, and selecting, according to the evaluation result, a translation engine meeting the requirement from a predetermined translation engine set to translate S2 and obtain the target text D2.
5. The method of claim 1, wherein in step (2), obtaining the index information of the source language text S1 and the speech recognition text S2 through specific algorithm processing includes obtaining an M×N matrix A by sequentially performing a matching process and a convolution operation on each sentence in the source language text S1 against each sentence in the speech recognition text S2 using the text evaluation algorithm BLEU, wherein the matching process obtains the matching accuracy between each sentence of the source language text S1 and each sentence of the speech recognition text S2, and the convolution operation applies a length penalty factor to the matching accuracy to obtain each element of the matrix A, thereby obtaining the complete matrix A.
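The per-sentence score matrix A of claim 5 can be sketched roughly as below. The patent does not specify its sentence similarity or the form of the length penalty factor, so this sketch substitutes a simple unigram-overlap score and an exponential length penalty; `unigram_overlap`, `score_matrix`, and the penalty formula are all illustrative assumptions, not the claimed BLEU-plus-convolution computation itself.

```python
import math

def unigram_overlap(a, b):
    """Toy stand-in for a per-sentence BLEU score: fraction of shared words."""
    sa, sb = a.split(), b.split()
    if not sa or not sb:
        return 0.0
    return len(set(sa) & set(sb)) / max(len(sa), len(sb))

def score_matrix(src_sents, rec_sents):
    """Build the M x N matrix A: A[i][j] scores sentence i of S1 against
    sentence j of S2, weighted by an (assumed) length penalty factor."""
    M, N = len(src_sents), len(rec_sents)
    A = [[0.0] * N for _ in range(M)]
    for i, s in enumerate(src_sents):
        for j, r in enumerate(rec_sents):
            match = unigram_overlap(s, r)
            ls, lr = len(s.split()), len(r.split())
            penalty = math.exp(-abs(ls - lr) / max(ls, lr, 1))
            A[i][j] = match * penalty
    return A
```

Each element of A then encodes both content similarity and length mismatch, which is the property the subsequent dynamic-programming step relies on.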
6. The method for evaluating the effect of a speech translation system according to claim 5, wherein in step (2), obtaining the index information of the source language text S1 and the speech recognition text S2 through specific algorithm processing further includes applying dynamic programming to the matrix A through the DTW (dynamic time warping) algorithm to obtain a warping path over the matrix A; the dynamic programming specifically comprises obtaining a plurality of matrices A corresponding to the source language text S1 and the speech recognition text S2 over a plurality of different groups, calculating the pairwise differences between the matrices to form a corresponding difference set, obtaining the covariance value of the difference set, and, if the covariance value is smaller than a preset covariance threshold, taking the distribution function of the differences in the difference set as the warping path.
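The claim describes a covariance-gated variant of DTW whose details are not fully disclosed; the sketch below shows only the standard DTW core it builds on — accumulating a local cost over the score matrix A and backtracking to recover the warping path. The cost `1 - A[i][j]` (turning a similarity into a distance) and the function name `dtw_path` are assumptions for illustration.

```python
def dtw_path(A):
    """Standard DTW over a score matrix: treat (1 - score) as local cost,
    accumulate the minimal cost, then backtrack to recover the warping path."""
    M, N = len(A), len(A[0])
    INF = float("inf")
    D = [[INF] * N for _ in range(M)]
    D[0][0] = 1 - A[0][0]
    for i in range(M):
        for j in range(N):
            if i == j == 0:
                continue
            prev = min(
                D[i - 1][j - 1] if i > 0 and j > 0 else INF,
                D[i - 1][j] if i > 0 else INF,
                D[i][j - 1] if j > 0 else INF,
            )
            D[i][j] = (1 - A[i][j]) + prev
    # backtrack from (M-1, N-1) to (0, 0) along the cheapest predecessors
    path, i, j = [(M - 1, N - 1)], M - 1, N - 1
    while (i, j) != (0, 0):
        steps = []
        if i > 0 and j > 0:
            steps.append((D[i - 1][j - 1], i - 1, j - 1))
        if i > 0:
            steps.append((D[i - 1][j], i - 1, j))
        if j > 0:
            steps.append((D[i][j - 1], i, j - 1))
        _, i, j = min(steps)
        path.append((i, j))
    return path[::-1]
```

When A scores matching sentences highly along the diagonal, the recovered path follows that diagonal, which is exactly the monotone sentence correspondence the next claim consumes.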
7. The method according to claim 6, wherein in step (2), obtaining the index information of the source language text S1 and the speech recognition text S2 through specific algorithm processing further comprises aligning the source language text S1 and the speech recognition text S2 to the same number of sentences by performing backtracking over the warping path, and recording the aligned sentences to obtain the index information, wherein the backtracking is implemented by iterating over the warping path based on a Gaussian mixture model and a clustering algorithm, and the index information is the corresponding sequence distribution information in the warping path generated after the backtracking.
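The alignment step of claim 7 — collapsing both texts to the same number of sentence units along the warping path — can be sketched as below. The claimed Gaussian-mixture/clustering backtracking is not disclosed in implementable detail, so this sketch simply merges all recognized sentences mapped to the same source sentence; `align_by_path` and the merge-by-concatenation rule are illustrative assumptions.

```python
def align_by_path(src_sents, rec_sents, path):
    """Merge recognized sentences that the warping path maps to the same
    source sentence, so both sides end up with equal sentence counts."""
    groups = {}
    for i, j in path:
        groups.setdefault(i, []).append(rec_sents[j])
    # one (source sentence, merged recognized text) pair per source index
    return [(src_sents[i], " ".join(groups[i])) for i in sorted(groups)]
```

The list of aligned pairs (the "index information") can then be fed sentence-by-sentence into the two machine-translation passes that produce D1' and D2'.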
CN201811489674.7A 2018-12-06 2018-12-06 Effect judging method of voice translation system Active CN109460558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811489674.7A CN109460558B (en) 2018-12-06 2018-12-06 Effect judging method of voice translation system

Publications (2)

Publication Number Publication Date
CN109460558A CN109460558A (en) 2019-03-12
CN109460558B true CN109460558B (en) 2023-04-21

Family

ID=65612631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811489674.7A Active CN109460558B (en) 2018-12-06 2018-12-06 Effect judging method of voice translation system

Country Status (1)

Country Link
CN (1) CN109460558B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109830230B (en) * 2019-03-27 2020-09-01 深圳平安综合金融服务有限公司上海分公司 Data labeling method and device based on self-learning algorithm
CN111680527B (en) * 2020-06-09 2023-09-19 语联网(武汉)信息技术有限公司 Man-machine co-interpretation system and method based on dedicated machine turning engine training
CN111767743B (en) * 2020-09-01 2020-11-27 浙江蓝鸽科技有限公司 Machine intelligent evaluation method and system for translation test questions

Family Cites Families (2)

US20090192782A1 (en) * 2008-01-28 2009-07-30 William Drewes Method for increasing the accuracy of statistical machine translation (SMT)
CN104050160B (en) * 2014-03-12 2017-04-05 北京紫冬锐意语音科技有限公司 Interpreter's method and apparatus that a kind of machine is blended with human translation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant