CN112818707B - Reverse text consensus-based multi-turn engine collaborative speech translation system and method - Google Patents

Reverse text consensus-based multi-turn engine collaborative speech translation system and method

Info

Publication number
CN112818707B
CN112818707B (application CN202110054103.6A)
Authority
CN
China
Prior art keywords
translation
text
voice
subsystem
translated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110054103.6A
Other languages
Chinese (zh)
Other versions
CN112818707A (en)
Inventor
何征宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Transn Iol Technology Co ltd
Original Assignee
Transn Iol Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Transn Iol Technology Co ltd
Priority to CN202110054103.6A
Publication of CN112818707A
Application granted
Publication of CN112818707B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/55 Rule-based translation
    • G06F40/56 Natural language generation
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a multi-turn engine collaborative speech translation system and method based on reverse text consensus. The system comprises: a voice input subsystem for inputting the speech to be translated; a first speech translation subsystem for translating the speech to be translated and outputting a first speech translation result; and a second text translation subsystem, a consensus judging subsystem and an output subsystem. The second text translation subsystem includes a plurality of text translation engines. Each first speech translation result output by the first speech translation subsystem is taken as the input of the plurality of text translation engines of the second text translation subsystem, and the second text translation subsystem outputs a plurality of text translation results. The consensus judging subsystem performs a consensus judgment on the translation results based on the plurality of text translation results, and the output subsystem outputs a translation record file corresponding to the speech to be translated based on the translation result consensus judgment. The invention also provides a speech translation method implemented on the basis of the system.

Description

Reverse text consensus-based multi-turn engine collaborative speech translation system and method
Technical Field
The invention belongs to the technical field of intelligent speech translation, and particularly relates to a multi-turn engine collaborative speech translation system and method based on reverse text consensus.
Background
With the increasing globalization of human society, economy and culture, people with different native languages communicate more and more frequently on occasions such as travel, meetings, commerce and sports, and the language barrier becomes ever more apparent. There is therefore an urgent need for computers with higher intelligence to act as intermediaries that overcome human language barriers and enable free communication between people. This translation of speech from one natural language into another, implemented by a computer system, is what is commonly called speech translation (Speech-to-Speech Translation). Since daily communication between people is usually carried out in natural spoken language, research on the automatic translation of natural spoken language has the wider application prospect, and international research on speech translation technology is at present generally conducted on the basis of natural spoken language; speech translation is therefore also often called automatic spoken language translation (Automatic Spoken Language Translation, abbreviated SLT).
Research on and application of automatic spoken language translation have two typical scenarios: first, daily face-to-face spoken communication between users of different languages; second, the conference scenario, in which a speaker addresses an audience. Because conferences involve field expertise, the large number of terms and industry-specific expressions they contain pose a significant challenge to machine translation.
To address this, the Chinese patent application CN202011016786.8 focuses on the text transcribed from continuous speech by a speech recognition engine for real-time translation: it first recognizes whether the current text to be processed mixes words of multiple languages; then, when it determines that the current text is mixed, it combines the time interval between sentences and the language of the previous sentence, i.e. the translation direction used before the current text was processed, to obtain a switching threshold; finally, it decides whether to switch the translation direction used before the current text was processed according to the switching threshold and the ratio of words of different languages in the current text.
Important high-end conferences (e.g., multi-national summit conferences and multi-user high-end conferences) often require an accurate record of the participants' real-time utterances. However, since such a conference usually needs to remain real-time and avoid long delays, existing systems can only store the speech records of all participants directly, which obviously increases the amount of data stored and transmitted; in a teleconference in particular, the transmission cost is high. If not all utterances are recorded, the translation results in different languages have to be checked manually, which reduces working efficiency and increases labor cost.
Therefore, how to generate an effective conference record for a multilingual, multi-user teleconference and how to automatically identify possible translation disputes are technical problems to be solved.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a multi-turn engine collaborative speech translation system and method based on reverse text consensus. The system comprises: a voice input subsystem for inputting the speech to be translated; a first speech translation subsystem for translating the speech to be translated and outputting a first speech translation result; and a second text translation subsystem, a consensus judging subsystem and an output subsystem. The second text translation subsystem includes a plurality of text translation engines. Each first speech translation result output by the first speech translation subsystem is taken as the input of the plurality of text translation engines of the second text translation subsystem, and the second text translation subsystem outputs a plurality of text translation results. The consensus judging subsystem performs a consensus judgment on the translation results based on the plurality of text translation results, and the output subsystem outputs a translation record file corresponding to the speech to be translated based on the translation result consensus judgment.
The invention also provides a voice translation method realized based on the system.
Specifically, in a first aspect of the present invention, there is provided a multi-turn engine collaborative speech translation system based on reverse text consensus, the system comprising:
a voice input subsystem: the voice input subsystem is used for inputting voice to be translated;
a first speech translation subsystem: the first voice translation subsystem is used for translating the voice to be translated and outputting a first voice translation result;
as an improvement of the invention, the system also comprises a second text translation subsystem, a consensus judging subsystem and an output subsystem;
as one of the key technical means of the improvement, the second text translation subsystem comprises a plurality of text translation engines;
taking each first voice translation result output by the first voice translation subsystem as input of a plurality of text translation engines of the second text translation subsystem, wherein the second text translation subsystem outputs a plurality of text translation results;
as an improvement of the invention, the consensus judging subsystem performs consensus judgment of the translation result based on the plurality of text translation results;
and the output subsystem outputs a translation record file corresponding to the voice to be translated based on the translation result consensus judgment.
As an improvement of the present invention, the first speech translation subsystem includes a plurality of first speech translation engines;
the first speech translation engine translates the speech to be translated into a first speech translation result of a target language;
and the text translation engine of the second text translation subsystem translates the first voice translation result into a text translation result corresponding to the voice language to be translated.
As one of the key technical means of the improvement, the consensus judging subsystem performs consensus judgment of the translation result based on the plurality of text translation results, and specifically includes:
and calculating the similarity between the text translation results to construct a first similarity matrix.
Judging whether the first similarity matrix is stable or not;
if not, outputting a translation record file corresponding to the to-be-translated voice.
In a second aspect of the invention, a multi-turn engine collaborative speech translation method based on reverse text consensus is provided, the method being implemented based on a first speech translation method and a second text translation method,
specifically, the method comprises the following steps:
S700: receiving a voice to be translated;
S701: translating the voice to be translated into a first voice translation result of the target language by adopting a first voice translation method;
S702: translating the first voice translation result into a plurality of second text translation results in the language of the voice to be translated by adopting a second text translation method;
S703: calculating the pairwise similarity between the plurality of second text translation results, and constructing a first similarity matrix;
S704: judging whether the first similarity matrix is stable or not; if it is not stable, generating a translation record file corresponding to the voice to be translated and storing the translation record file into a system log.
As a further improvement, the step S704 further includes:
S7041: judging whether the first similarity matrix is stable or not;
if it is not stable, the process proceeds to the next step S7042;
S7042: converting the voice to be translated into a third text to be translated by adopting a third text recognition method;
S7043: calculating a plurality of text similarities between the third text to be translated and the plurality of second text translation results by adopting a fourth similarity comparison method;
S7044: if each of the text similarities is lower than a preset threshold value, generating a translation record file corresponding to the voice to be translated and storing the translation record file into a system log.
In a third aspect of the present invention, there is provided a multi-turn engine collaborative speech translation method based on reverse text consensus, implemented using the system of the foregoing first aspect.
The above-described methods of the present invention may be implemented automatically in the form of computer program instructions. Accordingly, in a fourth aspect of the present invention, there is provided a computer readable storage medium having stored thereon computer executable program instructions for carrying out the method of the second or third aspect by a terminal device comprising a processor, a memory and a plurality of functional subsystems.
The technical solutions of the above aspects of the present invention appropriately solve the above technical problems. The technical scheme preserves the real-time performance of speech translation, makes full use of existing multiple translation engines for collaborative recognition and judgment, and quickly identifies the parts of a translation that may be disputed on the basis of reverse text consensus, so that a targeted conference record file is generated automatically without manual auditing.
Further advantages of the invention will be elaborated in the description of the embodiments in conjunction with the drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of a main body architecture of a reverse text consensus-based multi-turn engine collaborative speech translation system according to an embodiment of the present invention
FIG. 2 is a schematic diagram of a transmission flow of a portion of a data stream in the system of FIG. 1
FIG. 3 is a control flow diagram of data transmission in the system of FIG. 1
FIG. 4 is a main flow chart of a reverse text consensus-based multi-turn engine collaborative speech translation method according to an embodiment of the present invention
FIG. 5 is a further preferred embodiment of the method of FIG. 4
FIG. 6 is a main flow chart of a reverse text consensus-based multi-turn engine collaborative speech translation method implemented based on the system of FIG. 1
Detailed Description
The invention will be further described with reference to the drawings and detailed description.
Referring to fig. 1, a main body architecture diagram of a multi-turn engine collaborative speech translation system based on reverse text consensus according to an embodiment of the present invention is shown.
In fig. 1, the system comprises: the system comprises a voice input subsystem, a first voice translation subsystem, a second text translation subsystem, a consensus judgment subsystem, an output subsystem and a voice text recognition subsystem;
the voice input subsystem is respectively in communication connection with the voice text recognition subsystem and the first voice translation subsystem.
The voice input subsystem is used for inputting voice to be translated;
the first voice translation subsystem is used for translating the voice to be translated and outputting a first voice translation result.
And the voice text recognition subsystem is activated under a preset condition to convert the voice to be translated into the text to be translated.
As an example, assuming that the speech to be translated input by the current user is a Chinese speech sequence and the target translation language is English, the first speech translation subsystem is configured to translate the speech to be translated and output a first speech translation result, which includes: translating the Chinese voice sequence into an English voice sequence and outputting the English voice sequence as the first speech translation result;
the voice text recognition subsystem converts the Chinese voice sequence into a Chinese text to be translated.
As a more specific introduction, the second text translation subsystem includes a plurality of text translation engines;
and taking each first voice translation result output by the first voice translation subsystem as input of a plurality of text translation engines of the second text translation subsystem, and outputting a plurality of text translation results by the second text translation subsystem.
In various embodiments of the present invention, the second text translation subsystem is a translation engine group that translates in a direction opposite to that of the first speech translation subsystem.
Taking the case that the voice to be translated input by the current user is a Chinese voice sequence and the target language is English as an example, the first voice translation subsystem is used for translating the voice to be translated, and the output first voice translation result is an English voice sequence;
correspondingly, the second text translation subsystem comprises a plurality of English voice-Chinese text translation output engines, namely the text translation engines;
and each of the plurality of text translation engines of the second text translation subsystem translates the English voice sequence output by the first voice translation subsystem into a Chinese text sequence as a plurality of text translation results.
The consensus judging subsystem performs consensus judgment on the translation results based on the text translation results;
and the output subsystem judges and outputs a translation record file corresponding to the voice to be translated based on the translation result consensus.
More specific examples can be seen in fig. 2.
In fig. 2, the speech to be translated is the language A0;
the first speech translation subsystem includes a plurality of first speech translation engines;
the first speech translation engine translates the speech to be translated into a first speech translation result B of the target language B;
and the text translation engine of the second text translation subsystem translates the first voice translation result B into a text translation result corresponding to the voice language to be translated.
In fig. 2, three text translation engines of the second text translation subsystem output three text translation results A1-A3, respectively.
Referring to the foregoing example, three text translation results A1-A3 are in the same language as A0.
In fig. 2, the consensus judging subsystem performs consensus judgment of the translation result based on the plurality of text translation results, and specifically includes:
calculating the pairwise similarity between the text translation results, and constructing a first similarity matrix; and generating a translation record file based on the stability judging result of the first similarity matrix.
Referring to fig. 3 in combination with fig. 1, the output subsystem determines, based on the translation result consensus, to output a translation record file corresponding to the speech to be translated, and specifically includes:
judging whether the first similarity matrix is stable or not;
if not, outputting a translation record file corresponding to the to-be-translated voice.
As a more specific technical means, judging whether the first similarity matrix is stable or not; if it is not stable, converting the voice to be translated into the text to be translated.
Here, the voice text recognition subsystem satisfies its activation condition, that is, it converts the voice A to be translated into the text A0 to be translated;
then, calculating a plurality of second similarities between the text translation results and the text to be translated;
and if each second similarity is lower than a preset threshold value, outputting a translation record file corresponding to the voice to be translated.
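As a runnable illustration of this consensus check and its fallback: the patent does not specify the similarity measure or the stability criterion, so the sketch below assumes the ratio from Python's difflib as the pairwise similarity, treats the matrix as stable when every pairwise similarity reaches a chosen floor, and uses illustrative thresholds and function names throughout.

```python
# Hedged sketch of the consensus judgment; the similarity measure, the stability
# criterion and the thresholds are assumptions, not the patented algorithm.
from difflib import SequenceMatcher
from itertools import combinations
from typing import List

def similarity(a: str, b: str) -> float:
    """Pairwise text similarity in [0, 1] (illustrative measure)."""
    return SequenceMatcher(None, a, b).ratio()

def build_similarity_matrix(texts: List[str]) -> List[List[float]]:
    """First similarity matrix: pairwise similarities between the text translation results."""
    n = len(texts)
    matrix = [[1.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        matrix[i][j] = matrix[j][i] = similarity(texts[i], texts[j])
    return matrix

def is_stable(matrix: List[List[float]], floor: float = 0.8) -> bool:
    """Assumed stability test: every pair of engines broadly agrees."""
    n = len(matrix)
    return all(matrix[i][j] >= floor for i in range(n) for j in range(i + 1, n))

def dispute_detected(text_results: List[str], recognized_text: str,
                     threshold: float = 0.6) -> bool:
    """If the first matrix is unstable, compare each back-translated text with the
    recognized source text; flag a dispute (and hence a record file) only when
    every second similarity falls below the preset threshold."""
    if is_stable(build_similarity_matrix(text_results)):
        return False
    second_similarities = [similarity(recognized_text, t) for t in text_results]
    return all(s < threshold for s in second_similarities)
```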
In the above embodiment, the translation record file of the to-be-translated voice includes the input time of the to-be-translated voice, the input terminal ID, and the original file of the to-be-translated voice.
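For concreteness only, such a record entry might be represented by the following structure; the field names are assumptions, while the three recorded items come from the description above.

```python
# Illustrative record structure for a disputed utterance; field names are
# hypothetical, the recorded items follow the description above.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TranslationRecord:
    input_time: datetime   # time at which the voice to be translated was input
    terminal_id: str       # ID of the input terminal
    original_audio: bytes  # original file of the voice to be translated
```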
Referring next to fig. 4, a main flow chart of a reverse text consensus-based multi-turn engine collaborative speech translation method according to an embodiment of the present invention is shown.
The method shown in fig. 4 mainly includes a first speech translation method and a second text translation method, where the first speech translation method may be implemented based on the first speech translation subsystem, and the second text translation method may be implemented based on the second text translation subsystem.
The method illustrated in fig. 4 includes 5 main steps S700-S704, each of which is implemented as follows:
S700: receiving a voice to be translated;
S701: translating the voice to be translated into a first voice translation result of the target language by adopting a first voice translation method;
S702: translating the first voice translation result into a plurality of second text translation results in the language of the voice to be translated by adopting a second text translation method;
S703: calculating the pairwise similarity between the plurality of second text translation results, and constructing a first similarity matrix;
S704: judging whether the first similarity matrix is stable or not; if it is not stable, generating a translation record file corresponding to the voice to be translated and storing the translation record file into a system log.
More specifically, the method of fig. 4 further includes the preferred embodiment of fig. 5. In fig. 5, step S704 further includes:
S7041: judging whether the first similarity matrix is stable or not;
if it is not stable, the process proceeds to the next step S7042;
S7042: converting the voice to be translated into a third text to be translated by adopting a third text recognition method;
S7043: calculating a plurality of text similarities between the third text to be translated and the plurality of second text translation results by adopting a fourth similarity comparison method;
S7044: if each of the text similarities is lower than a preset threshold value, generating a translation record file corresponding to the voice to be translated and storing the translation record file into a system log.
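Gathering steps S700 to S7044 into one routine, a possible (non-authoritative) orchestration is sketched below under the same assumptions as the earlier sketches: hypothetical engine callables, an assumed similarity measure and stability floor, and Python's standard logging module standing in for the system log.

```python
# Assumed end-to-end flow for steps S700-S7044; the engines, the similarity
# measure and the "system log" are illustrative stand-ins, not the patented
# implementation.
import logging
from difflib import SequenceMatcher
from typing import Callable, List

logging.basicConfig(level=logging.INFO)
system_log = logging.getLogger("translation_record")  # stand-in for the system log

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()  # illustrative similarity measure

def run_translation_method(speech: bytes,
                           forward_engine: Callable[[bytes], bytes],
                           reverse_engines: List[Callable[[bytes], str]],
                           recognize: Callable[[bytes], str],
                           stable_floor: float = 0.8,
                           dispute_threshold: float = 0.6) -> bytes:
    speech_b = forward_engine(speech)                         # S701
    texts = [engine(speech_b) for engine in reverse_engines]  # S702

    # S703/S704: the first similarity matrix, reduced here to its stability test.
    n = len(texts)
    stable = all(similarity(texts[i], texts[j]) >= stable_floor
                 for i in range(n) for j in range(i + 1, n))

    if not stable:                                            # S7041
        recognized = recognize(speech)                        # S7042: third text
        seconds = [similarity(recognized, t) for t in texts]  # S7043
        if all(s < dispute_threshold for s in seconds):       # S7044
            system_log.info("Disputed utterance flagged for the record file: "
                            "%d bytes of audio, %d back-translated candidates",
                            len(speech), n)
    return speech_b  # the real-time first speech translation result
```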
Based on the system described in fig. 1, a multi-turn engine collaborative speech translation method based on reverse text consensus can also be implemented, and this method is shown in fig. 6, and generally includes a first speech translation method, a second text translation method, and a third consensus judgment method.
With respect to fig. 1, the first speech translation method is implemented based on the first speech translation subsystem in the translation system described in fig. 1; the second text translation method is realized based on the second text translation subsystem in the translation system shown in fig. 1; and the third consensus judgment method is realized based on the consensus judgment subsystem in the translation system shown in fig. 1.
As a further preferred aspect of the above technical solution, the first speech translation method, the second text translation method and the third consensus judgment method are implemented by asynchronous threads. The asynchronously implemented methods do not need to wait for one another, which further avoids data delays.
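One way such an asynchronous arrangement might look is sketched below; the thread layout, pool size and function names are assumptions intended only to illustrate that the first speech translation result can be returned immediately while the reverse translations and the consensus judgment continue in background threads.

```python
# Assumed asynchronous layout using the standard-library thread pool; not the
# patented implementation, only an illustration of the non-blocking behaviour.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

# Long-lived pool, sized so the reverse engines and the consensus task can run
# alongside the forward translation of the next utterance.
_pool = ThreadPoolExecutor(max_workers=8)

def translate_async(speech: bytes,
                    forward_engine: Callable[[bytes], bytes],
                    reverse_engines: List[Callable[[bytes], str]],
                    consensus_check: Callable[[List[str]], None]) -> bytes:
    """Return the target-language speech as soon as the first method finishes,
    while the second and third methods keep running in other threads."""
    speech_b = _pool.submit(forward_engine, speech).result()                       # first method
    back_futures = [_pool.submit(engine, speech_b) for engine in reverse_engines]  # second method

    def _judge() -> None:
        texts = [future.result() for future in back_futures]
        consensus_check(texts)                                                     # third method

    _pool.submit(_judge)  # the consensus judgment proceeds without blocking the caller
    return speech_b
```

Here consensus_check could be, for example, a routine like the dispute detection sketched earlier.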
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A reverse text consensus-based multi-turn engine collaborative speech translation system, the system comprising:
a voice input subsystem: the voice input subsystem is used for inputting voice to be translated;
a first speech translation subsystem: the first voice translation subsystem is used for translating the voice to be translated and outputting a first voice translation result;
the method is characterized in that:
the system also comprises a second text translation subsystem, a consensus judging subsystem and an output subsystem;
the second text translation subsystem includes a plurality of text translation engines;
taking each first voice translation result output by the first voice translation subsystem as input of a plurality of text translation engines of the second text translation subsystem, wherein the second text translation subsystem outputs a plurality of text translation results;
the consensus judging subsystem performs consensus judgment on the translation results based on the text translation results;
the output subsystem outputs a translation record file corresponding to the voice to be translated based on the translation result consensus judgment;
the consensus judging subsystem carries out consensus judgment on the translation results based on the text translation results, and specifically comprises the following steps:
calculating the pairwise similarity between the text translation results, and constructing a first similarity matrix;
the output subsystem judges and outputs a translation record file corresponding to the voice to be translated based on the translation result consensus, and specifically comprises the following steps:
judging whether the first similarity matrix is stable or not;
if not, outputting a translation record file corresponding to the to-be-translated voice.
2. The reverse text consensus-based multi-turn engine collaborative speech translation system according to claim 1 wherein: the first speech translation subsystem includes a plurality of first speech translation engines;
the first speech translation engine translates the speech to be translated into a first speech translation result of a target language;
and the text translation engine of the second text translation subsystem translates the first voice translation result into a text translation result corresponding to the voice language to be translated.
3. The reverse text consensus-based multi-turn engine collaborative speech translation system according to claim 1 wherein: the output subsystem outputs a translation record file corresponding to the voice to be translated based on the translation result consensus judgment, which specifically comprises the following steps:
judging whether the first similarity matrix is stable or not;
if it is not stable, converting the speech to be translated into a text to be translated;
calculating a plurality of second similarity between the text translation results and the text to be translated;
and if each second similarity is lower than a preset threshold value, outputting a translation record file corresponding to the voice to be translated.
4. The reverse text consensus-based multi-turn engine collaborative speech translation system according to claim 1 wherein: the translation record file of the to-be-translated voice comprises the input time of the to-be-translated voice, the ID of the input terminal and the original file of the to-be-translated voice.
5. A reverse text consensus-based multi-turn engine collaborative speech translation method, the method being realized based on a first voice translation method and a second text translation method, and characterized by comprising the following steps:
S700: receiving a voice to be translated;
S701: translating the voice to be translated into a first voice translation result of the target language by adopting a first voice translation method;
S702: translating the first voice translation result into a plurality of second text translation results in the language of the voice to be translated by adopting a second text translation method;
S703: calculating the pairwise similarity between the plurality of second text translation results, and constructing a first similarity matrix;
S704: judging whether the first similarity matrix is stable or not; if it is not stable, generating a translation record file corresponding to the voice to be translated and storing the translation record file into a system log;
S7041: judging whether the first similarity matrix is stable or not;
if it is not stable, the process proceeds to the next step S7042;
S7042: converting the voice to be translated into a third text to be translated by adopting a third text recognition method;
S7043: calculating a plurality of text similarities between the third text to be translated and the plurality of second text translation results by adopting a fourth similarity comparison method;
S7044: if each of the text similarities is lower than a preset threshold value, generating a translation record file corresponding to the voice to be translated and storing the translation record file into a system log.
6. A computer readable storage medium having stored thereon computer executable program instructions for performing the method of claim 5 by a terminal device comprising a processor, a memory and a plurality of functional subsystems.
CN202110054103.6A 2021-01-19 2021-01-19 Reverse text consensus-based multi-turn engine collaborative speech translation system and method Active CN112818707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110054103.6A CN112818707B (en) 2021-01-19 2021-01-19 Reverse text consensus-based multi-turn engine collaborative speech translation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110054103.6A CN112818707B (en) 2021-01-19 2021-01-19 Reverse text consensus-based multi-turn engine collaborative speech translation system and method

Publications (2)

Publication Number Publication Date
CN112818707A CN112818707A (en) 2021-05-18
CN112818707B true CN112818707B (en) 2024-02-27

Family

ID=75869804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110054103.6A Active CN112818707B (en) 2021-01-19 2021-01-19 Reverse text consensus-based multi-turn engine collaborative speech translation system and method

Country Status (1)

Country Link
CN (1) CN112818707B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030110023A1 (en) * 2001-12-07 2003-06-12 Srinivas Bangalore Systems and methods for translating languages
US20120330643A1 (en) * 2010-06-04 2012-12-27 John Frei System and method for translation
US8554558B2 (en) * 2010-07-12 2013-10-08 Nuance Communications, Inc. Visualizing automatic speech recognition and machine translation output
KR20140121580A (en) * 2013-04-08 2014-10-16 한국전자통신연구원 Apparatus and method for automatic translation and interpretation
US10176366B1 (en) * 2017-11-01 2019-01-08 Sorenson Ip Holdings Llc Video relay service, communication system, and related methods for performing artificial intelligence sign language translation services in a video relay service environment

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001117922A (en) * 1999-10-15 2001-04-27 Sony Corp Device and method for translation and recording medium
JP2002014957A (en) * 2001-06-04 2002-01-18 Hitachi Ltd Translation system and recording medium
WO2006083690A2 (en) * 2005-02-01 2006-08-10 Embedded Technologies, Llc Language engine coordination and switching
CN101266600A (en) * 2008-05-07 2008-09-17 陈光火 Multimedia multi- language interactive synchronous translation method
WO2010025460A1 (en) * 2008-08-29 2010-03-04 O3 Technologies, Llc System and method for speech-to-speech translation
JP2015026057A (en) * 2013-07-29 2015-02-05 韓國電子通信研究院Electronics and Telecommunications Research Institute Interactive character based foreign language learning device and method
KR20160081244A (en) * 2014-12-31 2016-07-08 한국전자통신연구원 Automatic interpretation system and method
JP2017138650A (en) * 2016-02-01 2017-08-10 株式会社リクルートライフスタイル Voice translation system, voice translation method and voice translation program
JP2018088088A (en) * 2016-11-28 2018-06-07 ヤマハ株式会社 Information processing system and terminal device
CN107451129A (en) * 2017-08-08 2017-12-08 传神语联网网络科技股份有限公司 The judgement of unconventional word or unconventional short sentence and interpretation method and its system
CN107885737A (en) * 2017-12-27 2018-04-06 传神语联网网络科技股份有限公司 A kind of human-computer interaction interpretation method and system
WO2019165748A1 (en) * 2018-02-28 2019-09-06 科大讯飞股份有限公司 Speech translation method and apparatus
WO2020001546A1 (en) * 2018-06-30 2020-01-02 华为技术有限公司 Method, device, and system for speech recognition
CN109522564A (en) * 2018-12-17 2019-03-26 北京百度网讯科技有限公司 Voice translation method and device
CN111680525A (en) * 2020-06-09 2020-09-18 语联网(武汉)信息技术有限公司 Human-machine co-translation method and system based on reverse difference recognition
CN111680527A (en) * 2020-06-09 2020-09-18 语联网(武汉)信息技术有限公司 Man-machine co-translation system and method based on exclusive machine translation engine training
CN111680524A (en) * 2020-06-09 2020-09-18 语联网(武汉)信息技术有限公司 Human-machine feedback translation method and system based on reverse matrix analysis
CN111680526A (en) * 2020-06-09 2020-09-18 语联网(武汉)信息技术有限公司 Human-computer interaction translation system and method based on reverse translation result comparison

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
English-Chinese Person Name Translation Based on Web Search; Liu Ying; Cao Xiang; Journal of Chinese Information Processing (Issue 02); full text *

Also Published As

Publication number Publication date
CN112818707A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
Zhong et al. Dialoglm: Pre-trained model for long dialogue understanding and summarization
CN110444196B (en) Data processing method, device and system based on simultaneous interpretation and storage medium
US20190164064A1 (en) Question and answer interaction method and device, and computer readable storage medium
US20170308526A1 (en) Compcuter Implemented machine translation apparatus and machine translation method
CN111477216A (en) Training method and system for pronunciation understanding model of conversation robot
CN110705317B (en) Translation method and related device
KR20200119410A (en) System and Method for Recognizing Emotions from Korean Dialogues based on Global and Local Contextual Information
CN111680524B (en) Human-machine feedback translation method and system based on inverse matrix analysis
US20190121860A1 (en) Conference And Call Center Speech To Text Machine Translation Engine
Liu Research on the development of computer intelligent proofreading system based on the perspective of English translation application
CN116797695A (en) Interaction method, system and storage medium of digital person and virtual whiteboard
CN111680523B (en) Man-machine collaborative translation system and method based on context semantic comparison
CN112818707B (en) Reverse text consensus-based multi-turn engine collaborative speech translation system and method
CN113505609A (en) One-key auxiliary translation method for multi-language conference and equipment with same
CN112818709B (en) Speech translation system and method for recording marks of multi-user speech conferences
Gu et al. Concept-based speech-to-speech translation using maximum entropy models for statistical natural concept generation
CN115455981A (en) Semantic understanding method, device, equipment and storage medium for multi-language sentences
JP2023034235A (en) Text summarization method and text summarization system
CN111652005B (en) Synchronous inter-translation system and method for Chinese and Urdu
KR102562692B1 (en) System and method for providing sentence punctuation
CN112395889A (en) Machine-synchronized translation
CN112818705B (en) Multilingual speech translation system and method based on group consensus
CN112818706B (en) Voice translation real-time dispute recording system and method based on reverse result stability
CN112818704B (en) Multilingual translation system and method based on inter-thread consensus feedback
CN112818708B (en) System and method for processing voice translation of multi-terminal multi-language video conference in real time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant