CN112818702B - Multi-user multi-language cooperative speech translation system and method - Google Patents


Info

Publication number: CN112818702B
Application number: CN202110032594.4A
Authority: CN (China)
Legal status: Active
Other versions: CN112818702A
Other languages: Chinese (zh)
Inventor: 何征宇
Original and current assignee: Transn Iol Technology Co ltd
Application filed by Transn Iol Technology Co ltd; priority to CN202110032594.4A; published as application CN112818702A; granted and published as CN112818702B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/55: Rule-based translation
    • G06F 40/56: Natural language generation
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation

Abstract

The invention provides a multi-user multi-language collaborative speech translation system and method. The system comprises: a user access module for accessing a plurality of user terminals to acquire voice input signals; a user grouping module for grouping the voice input signals of the plurality of user terminals; a grouping translation module; a translation consensus module; a translation difference recognition module; and a difference result broadcasting module. The difference result broadcasting module broadcasts voice input signals that meet a difference recording condition, together with their grouping translation results, to collaborative user terminals; a collaborative user terminal is a user terminal whose input voice signal is in the same language as the voice input signal meeting the difference recording condition. The invention also discloses a multi-user multi-language collaborative speech translation method implemented on the basis of the system. The technical scheme makes full use of multilingual translation engines to realize adaptive collaborative judgment of speech translation results.

Description

Multi-user multi-language cooperative speech translation system and method
Technical Field
The invention belongs to the technical field of intelligent voice translation, and particularly relates to a multi-user multi-language collaborative voice translation system and method.
Background
In AI speech translation, a conventional pipeline comprises speech recognition (converting an acoustic signal in the source language into source-language text), machine translation (converting source-language text into target-language text), and speech synthesis (converting target-language text into target-language speech).
Artificial intelligence, represented by computer information processing technology, can in certain environments and fields rival the processing power of the human brain to some extent. The advent and development of artificial-intelligence translation has brought a tremendous revolution to the translation industry. However, human language is highly complex and diverse, and, across different translation environments, factors such as the speaker's manner of expression, the specialization of the subject matter, on-site sound volume and noise interference all affect translation quality, especially in multi-language, multi-user environments.
Heretofore, for multilingual text translation, invention patent application CN201410499790.2, filed by Wuhan Transn Information Technology Co., Ltd., proposed a system and method for preprocessing multi-user multilingual mail translation, comprising a preprocessing server system and a client system. The preprocessing server system comprises a mail receiving module, a filtering processing module, a mail distribution module, a first storage area, a second storage area and a filtering-rule database. The mail receiving module receives external mails; the filtering processing module reads the external mails one by one, filters out mails that do not need translation and duplicate mails according to the filtering rules, and finds the translator account corresponding to each mail that needs translation; the distribution module forwards each mail to the client system on which the corresponding translator account is logged in. The preprocessing client system is used by translators to log in and receive the external mails forwarded by the mail distribution module. Through rule filtering, duplicate-mail detection and language filtering, that invention accurately distributes external mails to the corresponding translators, prevents duplicate and invalid mails from confusing translators, and greatly improves translator working efficiency.
For multilingual speech translation, Chinese patent application CN202011162634.9 provides a multilingual speech interaction method comprising the following steps: in response to acquired audio, feeding the audio into a mixed language model for recognition, wherein the mixed language model is trained on language-switching command words of a plurality of languages and stored locally; judging, based on the recognition result, whether a language-switching command word is present in the audio; if so, determining the target language from the command word; and setting an online default language model for that language and synchronizing it to a server, wherein the server comprises a plurality of single-language models. Using a mixed language model only for switching command words on the client side, with a plurality of single-language models on the server side, reduces the high cost of training a full mixed language model and improves the stability of voice interaction.
However, in a multi-person multi-language speech scenario, besides machine translation, manual translation is often required to verify and audit the machine-translated content, which reduces practical efficiency.
Disclosure of Invention
To solve the above technical problems, the invention provides a multi-user multi-language collaborative speech translation consensus method and system. The system comprises: a user access module for accessing a plurality of user terminals to acquire voice input signals; a user grouping module for grouping the voice input signals of the plurality of user terminals; a grouping translation module; a translation consensus module; a translation difference recognition module; and a difference result broadcasting module. The difference result broadcasting module broadcasts voice input signals that meet a difference recording condition, together with their grouping translation results, to collaborative user terminals; a collaborative user terminal is a user terminal whose input voice signal is in the same language as the voice input signal meeting the difference recording condition.
The invention also discloses a multi-user multi-language cooperative voice translation method realized based on the system.
Specifically, the multi-user multi-language cooperative voice translation system is realized as follows:
the system comprises a user access module, a user grouping module, a grouping translation module, a translation consensus module, a translation difference identification module and a difference result broadcasting module;
the user access module is used for accessing a plurality of user terminals and acquiring voice input signals of the plurality of user terminals;
the user grouping module is connected with the user access module and is used for grouping the voice input signals of the plurality of user terminals acquired by the user access module;
the grouping translation module comprises a plurality of first translation engines of different categories, acquires the grouped voice input signals output by the user grouping module, and distributes each grouped voice input signal to the first translation engine of the corresponding category;
the translation consensus module comprises a plurality of second translation engines of different categories, takes the grouping translation results output by the first translation engines of the grouping translation module as the input of the second translation engines, and obtains a translation consensus result based on the output of the second translation engines and the voice input signals;
the translation difference recognition module recognizes, based on the translation consensus result, the voice input signals meeting a difference recording condition and their grouping translation results, and sends them to the difference result broadcasting module;
the difference result broadcasting module broadcasts the voice input signals meeting the difference recording condition, together with their grouping translation results, to the collaborative user terminals;
a collaborative user terminal is a user terminal whose input voice signal is in the same language as a voice input signal meeting the difference recording condition.
In the invention, the translation directions of the plurality of first translation engines of different categories and of the plurality of second translation engines of different categories are symmetrically opposite.
The translation consensus module obtains the translation consensus result based on the output of the second translation engine and the voice input signal, specifically as follows:
the translation consensus module calculates a similarity value between the output result of the second translation engine and the voice input signal, and takes this similarity value as the translation consensus result.
The translation difference recognition module recognizes, based on the translation consensus result, the voice input signals meeting the difference recording condition and their grouping translation results, specifically as follows:
the translation difference recognition module judges whether the similarity value between the output result of the second translation engine and the voice input signal is lower than a preset standard threshold;
if so, the voice input signal and its grouping translation result meet the difference recording condition.
On the basis, the multi-user multi-language collaborative speech translation method based on the system comprises the following steps:
s800: receiving voice input signals sent by a plurality of different user terminals, wherein the voice input signals comprise position information of the user terminals;
s810: grouping the voice input signals based on the location information;
s820: based on the grouping, sending voice input signals of the corresponding grouping to first translation engines of the corresponding categories, and outputting corresponding first grouping translation results by the first translation engines of each category;
s830: taking the first grouping translation result as the input of a second translation engine of a corresponding category;
s840: judging whether the similarity value between the output translation result of the second translation engine and the corresponding voice input signal meets a preset recording condition or not;
if so, go to step S850:
s850: broadcasting the corresponding voice input signals and the grouping translation results to a collaborative user terminal;
the collaborative user terminal is a user terminal with the same input voice input signal language as the corresponding voice input signal.
The step S800 and the step S810 are realized through a first process; the step S820 is implemented by a second process;
the second process and the first process communicate through a unidirectional data pipe.
Step S830 further includes:
selecting a subset of the first grouping translation results as the input of the second translation engines of the corresponding categories.
The technical scheme of the invention fully utilizes the multilingual translation engine to realize self-adaptive collaborative voice translation result judgment.
Further advantages of the invention will be further elaborated in the description section of the embodiments in connection with the drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of the main structure of a multi-user multi-language collaborative speech translation consensus system according to an embodiment of the present invention;
FIG. 2 is a flow chart of the system of FIG. 1 obtaining a first grouping translation result;
FIG. 3 is a flow chart of the system of FIG. 1 performing translation consensus;
FIG. 4 is a schematic diagram showing that the translation directions of the translation engines in the system of FIG. 1 are symmetrically opposite;
FIG. 5 is a main flow chart of a multi-user multi-language collaborative speech translation consensus method implemented on the basis of the system of FIG. 1;
FIG. 6 shows a further preferred embodiment of the method of FIG. 5.
Detailed Description
The invention will be further described with reference to the drawings and detailed description.
Referring to fig. 1, a main body structure diagram of a multi-user multi-language collaborative speech translation consensus system according to an embodiment of the present invention is shown.
In fig. 1, the system includes a user access module, a user grouping module, a grouping translation module, a translation consensus module, a translation difference identification module, and a difference result broadcasting module.
The user access module is used for accessing a plurality of user terminals and acquiring voice input signals of the plurality of user terminals;
the user grouping module is connected with the user access module and is used for grouping the voice input signals of the plurality of user terminals acquired by the user access module;
the grouping translation module comprises a plurality of first translation engines of different categories, acquires the grouped voice input signals output by the user grouping module, and distributes each grouped voice input signal to the first translation engine of the corresponding category;
the translation consensus module comprises a plurality of second translation engines of different categories, takes the grouping translation results output by the first translation engines of the grouping translation module as the input of the second translation engines, and obtains a translation consensus result based on the output of the second translation engines and the voice input signals;
the translation difference recognition module recognizes, based on the translation consensus result, the voice input signals meeting a difference recording condition and their grouping translation results, and sends them to the difference result broadcasting module;
the difference result broadcasting module broadcasts the voice input signals meeting the difference recording condition, together with their grouping translation results, to the collaborative user terminals;
a collaborative user terminal is a user terminal whose input voice signal is in the same language as a voice input signal meeting the difference recording condition.
See fig. 2, based on fig. 1.
FIG. 2 is a flow chart of the system of FIG. 1 obtaining a first grouping translation result.
The user terminal comprises a voice receiving end, and the voice receiving end preprocesses voice signals input by a user and sends the voice signals to the user access module;
the preprocessing includes embedding geographic location information of a current user terminal into the voice signal input by the user.
In fig. 2, the geographic location information is determined from the network IP address used when the user terminal transmits the voice signal input by the user.
Based on this, the user grouping module groups the voice input signals of the user terminal based on the geographic location information contained in the voice input signals.
More specifically, the user grouping module determines a language category to be translated of the voice input signal based on geographic position information contained in the voice input signal;
the voice input signals with the same class of the language to be translated are divided into the same group.
At this time, the voice input data received at the voice receiving end already carries at least the location information, and the language category of the voice input data can be determined directly from a preset location-language correspondence database.
This way of judging and recognizing the language is particularly suitable for remote video/voice conferences in which each participant joins from his or her own locality through a remote voice system.
Unlike the prior art, which has to employ complicated speech recognition and similar techniques to determine the input language, this embodiment avoids the speech-translation recognition overhead and delay caused by such complicated recognition.
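As a minimal sketch of the location-based grouping just described (the network-to-language table, field names and helper functions below are illustrative assumptions, not part of the patented implementation), the lookup against a preset location-language correspondence database and the subsequent grouping might look as follows:

```python
import ipaddress

# Hypothetical stand-in for the preset location-language correspondence database:
# each terminal network maps to the language category spoken at that locality.
NETWORK_TO_LANGUAGE = {
    ipaddress.ip_network("203.0.113.0/24"): "zh",   # e.g. an office in China
    ipaddress.ip_network("198.51.100.0/24"): "en",  # e.g. an office in the UK
}

def language_for_ip(ip: str) -> str:
    """Map the terminal's network IP address to a language category."""
    addr = ipaddress.ip_address(ip)
    for network, language in NETWORK_TO_LANGUAGE.items():
        if addr in network:
            return language
    raise KeyError(f"no language mapping for {ip}")

def group_by_language(signals):
    """Group voice input signals (each carrying its embedded terminal IP)."""
    groups = {}
    for signal in signals:
        groups.setdefault(language_for_ip(signal["ip"]), []).append(signal)
    return groups
```

Because the lookup is a table access rather than a speech-recognition pass, the grouping step adds no recognition latency.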
Based on the groupings, the voice input signals of each grouping are sent to the first translation engine of the corresponding category, and each first translation engine outputs the corresponding first grouping translation result.
Reference is next made to fig. 3. FIG. 3 is a flow diagram of the system of FIG. 1 performing translation consensus.
In fig. 3, the first grouping translation result is taken as the input of the second translation engine of the corresponding category;
whether the similarity value between the translation result output by the second translation engine and the corresponding voice input signal meets the preset recording condition is then judged;
if so, the corresponding voice input signal and its grouping translation result are broadcast to the collaborative user terminals; a collaborative user terminal is a user terminal whose input voice signal is in the same language as the corresponding voice input signal.
The translation consensus module obtains a translation consensus result based on the output of the second translation engine and the voice input signal, and specifically comprises the following steps:
the translation consensus module calculates a similarity value between the output result of the second translation engine and the voice input signal, and takes this similarity value as the translation consensus result.
As an example, suppose the input voice signal is a Chinese speech sequence and the first translation engine is a Chinese-to-English speech translation engine; the first grouping translation result is then the English speech translation of the Chinese speech sequence.
The second translation engine is an English-to-Chinese text translation engine, and its output translation result is the Chinese text translation sequence corresponding to the English speech translation result.
On this basis, the similarity value between the translation result output by the second translation engine and the corresponding voice input signal is calculated: the input voice signal is first converted directly into Chinese text, and the similarity between that Chinese text and the Chinese text translation sequence corresponding to the English speech translation result is then computed.
The inventor notes that text translation is far more accurate than speech translation, so completing the consensus judgment and recognition through this means is one of the key technical measures of the reverse-translation collaboration of the invention.
Thus, in the above embodiment, the translation directions of the plurality of first translation engines and of the plurality of second translation engines are symmetrically opposite; for example, one direction is Chinese-to-English and the other English-to-Chinese.
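The consensus computation itself reduces to a string-similarity measure between the recognized source text and the back-translated text. A compact illustration follows, using `difflib` string similarity as a stand-in for whatever similarity measure an implementation would choose; the function name is an assumption:

```python
from difflib import SequenceMatcher

def consensus_similarity(source_text: str, back_translated_text: str) -> float:
    """Translation consensus result: a similarity value in [0, 1] between the
    text recognized from the original voice input and the text obtained by
    back-translating the first engine's output with the second engine."""
    return SequenceMatcher(None, source_text, back_translated_text).ratio()
```

An exact round trip yields 1.0; a garbled round trip yields a low value that will later fall below the preset standard threshold.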
FIG. 4 is a diagram illustrating the opposite translation directions of the translation engine in the system of FIG. 1.
In the foregoing embodiment, the translation consensus module obtains a translation consensus result based on the output of the second translation engine and the voice input signal, and specifically includes:
the translation consensus module calculates a similarity value between the output result of the second translation engine and the voice input signal, and takes this similarity value as the translation consensus result.
The translation difference recognition module recognizes a voice input signal meeting a difference recording condition and a grouping translation result thereof based on the translation consensus result, and specifically comprises the following steps:
the translation difference recognition module judges whether the similarity value of the output result of the second translation engine and the voice input signal is lower than a preset standard threshold value;
if yes, the voice input signal and the grouping translation result thereof meet the difference recording condition.
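The difference-recording test and the selection of broadcast targets can be sketched as follows; the 0.8 threshold value and the terminal record layout are illustrative assumptions, since the patent leaves the preset standard threshold unspecified:

```python
STANDARD_THRESHOLD = 0.8  # assumed preset standard threshold

def meets_difference_condition(similarity: float,
                               threshold: float = STANDARD_THRESHOLD) -> bool:
    """A result meets the difference recording condition when the consensus
    similarity falls below the preset standard threshold."""
    return similarity < threshold

def collaborative_terminals(terminals, language: str):
    """Collaborative user terminals are those whose input language matches
    the language of the flagged voice input signal."""
    return [t for t in terminals if t["language"] == language]
```

Flagged signals and their grouping translation results are then broadcast only to the matching-language terminals for collaborative review, rather than to every participant.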
Reference is further made to figs. 5-6 on the basis of the above embodiments.
Fig. 5 is a main flow chart of a multi-user multi-lingual collaborative speech translation consensus method implemented based on the system of fig. 1, and fig. 6 is a further preferred embodiment of the method of fig. 5.
In fig. 5, the method comprises the steps of:
s800: receiving voice input signals sent by a plurality of different user terminals, wherein the voice input signals comprise position information of the user terminals;
s810: grouping the voice input signals based on the location information;
s820: based on the grouping, sending voice input signals of the corresponding grouping to first translation engines of the corresponding categories, and outputting corresponding first grouping translation results by the first translation engines of each category;
s830: taking the first grouping translation result as the input of a second translation engine of a corresponding category; s840: judging whether the similarity value between the output translation result of the second translation engine and the corresponding voice input signal meets a preset recording condition or not;
if so, go to step S850:
s850: broadcasting the corresponding voice input signals and the grouping translation results to a collaborative user terminal;
the collaborative user terminal is a user terminal with the same input voice input signal language as the corresponding voice input signal.
In fig. 5, both the step S800 and the step S810 are implemented by a first process; the step S820 is implemented by a second process;
the second process and the first process communicate through a unidirectional data pipe.
Step S830 further includes:
selecting a subset of the first grouping translation results as the input of the second translation engines of the corresponding categories.
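Steps S800 to S850 can be strung together in one short sketch. The translation engines are stubbed out as plain callables and the 0.8 threshold is an assumed value, since the patent specifies neither the concrete engines nor the threshold:

```python
from difflib import SequenceMatcher

def run_pipeline(signals, first_engines, second_engines, threshold=0.8):
    """Sketch of steps S800-S850: group, translate, back-translate, compare,
    and collect the (language, input, translation) triples to broadcast."""
    groups = {}
    for s in signals:                                # S800/S810: group by language
        groups.setdefault(s["language"], []).append(s)

    broadcasts = []
    for language, members in groups.items():
        for s in members:
            first_result = first_engines[language](s["text"])      # S820
            back = second_engines[language](first_result)          # S830
            sim = SequenceMatcher(None, s["text"], back).ratio()   # S840
            if sim < threshold:          # difference recording condition met
                broadcasts.append((language, s["text"], first_result))  # S850
    return broadcasts
```

A faithful engine pair produces an empty broadcast list; a garbling pair flags its input for collaborative review.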
It is worth emphasizing that the invention is the first to adopt data pipeline technology in a multilingual collaborative translation system.
Data pipeline technology originated as a means of transferring data between different databases (data sources), for example for data backup and restoration; by adopting it, process blocking can be avoided without resorting to a third-party agent for data transmission. For example, the Chinese patent application with application number CN2020107749026 uses data pipeline technology to read the data to be backed up during backup, the data pipeline connecting different processes for data transmission.
The invention applies data pipeline technology for the first time to a collaborative translation system with different translation engines for multiple languages, so that data transmission forms unidirectional, stable multi-channels between the different processes, reducing data transmission delay.
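A sketch of the unidirectional data pipe between the two stages: `multiprocessing.Pipe(duplex=False)` is the kind of one-way primitive the two processes would share, shown here with threads so the example stays self-contained and the worker names and end-of-stream marker are assumptions:

```python
import threading
from multiprocessing import Pipe

def first_worker(send_conn, signals):
    """First stage (S800/S810): group voice inputs by language and stream
    each group one-way down the pipe, ending with a None marker."""
    groups = {}
    for s in signals:
        groups.setdefault(s["language"], []).append(s["text"])
    for item in groups.items():
        send_conn.send(item)
    send_conn.send(None)  # end-of-stream marker
    send_conn.close()

def second_worker(recv_conn, translate, results):
    """Second stage (S820): drain the pipe and apply the per-language
    grouping translation engine (a stub callable here)."""
    while True:
        item = recv_conn.recv()
        if item is None:
            break
        language, texts = item
        results.extend(translate(language, t) for t in texts)

def run(signals, translate):
    # duplex=False yields a one-way channel: left end receive-only,
    # right end send-only, so the data flow cannot be reversed.
    recv_conn, send_conn = Pipe(duplex=False)
    results = []
    producer = threading.Thread(target=first_worker, args=(send_conn, signals))
    consumer = threading.Thread(target=second_worker,
                                args=(recv_conn, translate, results))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
    return results
```

Because the receiving end cannot send and the sending end cannot receive, neither stage can block the other with back-traffic, which is the stability property the scheme relies on.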
Fig. 6 presents a schematic view of the unidirectional data pipe.
The technical scheme of the invention has at least the following beneficial effects:
(1) Real-time output of speech translation results is ensured by the voice location-embedding technique, which keeps operations such as grouping and translation-engine identification real-time;
(2) Data transmission delay is reduced by adopting a unidirectional data pipeline, further ensuring real-time output;
(3) Ambiguities that may exist in the machine translation engines are automatically identified and checked by the translation consensus module, improving the applicability of the technical scheme.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A multi-user multi-language cooperative voice translation system comprises a user access module, a user grouping module, a grouping translation module, a translation consensus module, a translation difference recognition module and a difference result broadcasting module;
the method is characterized in that:
the user access module is used for accessing a plurality of user terminals and acquiring voice input signals of the plurality of user terminals;
the user grouping module is connected with the user access module and is used for grouping the voice input signals of the plurality of user terminals acquired by the user access module;
the grouping translation module comprises a plurality of first translation engines of different categories, acquires the grouped voice input signals output by the user grouping module, and distributes each grouped voice input signal to the first translation engine of the corresponding category;
the translation consensus module comprises a plurality of second translation engines of different categories, takes a group translation result output by the first translation engine of the group translation module as an input of the second translation engine, and derives a translation consensus result based on the output of the second translation engine and the voice input signal;
the translation difference recognition module recognizes a voice input signal meeting a difference recording condition and a grouping translation result thereof based on the translation consensus result, and sends the voice input signal meeting the difference recording condition and the grouping translation result thereof to the difference result broadcasting module;
the difference result broadcasting module broadcasts the voice input signals meeting the difference recording conditions and the grouping translation results thereof to the collaborative user terminal;
the collaborative user terminal is a user terminal whose voice input signal is in the same language as the voice input signal meeting the difference recording condition;
the user terminal comprises a voice receiving end, and the voice receiving end preprocesses voice signals input by a user and sends the voice signals to the user access module;
the preprocessing comprises embedding geographic position information of a current user terminal into a voice signal input by the user;
the user grouping module groups the voice input signals of the plurality of user terminals acquired by the user access module, and specifically includes:
the user grouping module determines the language category to be translated of the voice input signal based on geographic position information contained in the voice input signal;
the voice input signals with the same class of the language to be translated are divided into the same group.
2. The multi-user multi-lingual collaborative speech translation system according to claim 1, wherein:
the translation directions of the first translation engines of the plurality of different classes and the second translation engines of the plurality of different classes are symmetrically opposite.
3. The multi-user multi-lingual collaborative speech translation system according to claim 1, wherein:
the translation consensus module obtains a translation consensus result based on the output of the second translation engine and the voice input signal, and specifically comprises the following steps:
and the translation consensus module calculates a similarity value of an output result of the second translation engine and a voice input signal, and takes the similarity value as the translation consensus result.
4. The multi-user multi-lingual collaborative speech translation system according to claim 1, wherein:
the translation difference recognition module recognizes a voice input signal meeting a difference recording condition and a grouping translation result thereof based on the translation consensus result, and specifically comprises the following steps:
the translation difference recognition module judges whether the similarity value of the output result of the second translation engine and the voice input signal is lower than a preset standard threshold value;
if yes, the voice input signal and the grouping translation result thereof meet the difference recording condition.
5. A multi-user multi-lingual collaborative speech translation method implemented based on the multi-user multi-lingual collaborative speech translation system of any one of claims 1-4, the method comprising the steps of:
S800: receiving voice input signals sent by a plurality of different user terminals, wherein each voice input signal contains the position information of its user terminal;
S810: grouping the voice input signals based on the position information;
S820: based on the grouping, sending the voice input signals of each group to the first translation engine of the corresponding category, each category of first translation engine outputting a corresponding first grouping translation result;
S830: taking the first grouping translation result as the input of the second translation engine of the corresponding category;
S840: judging whether the similarity value between the translation result output by the second translation engine and the corresponding voice input signal meets a preset recording condition;
if so, proceeding to step S850;
S850: broadcasting the corresponding voice input signal and its grouping translation result to collaborative user terminals;
a collaborative user terminal being a user terminal whose voice input signal is in the same language as the corresponding voice input signal.
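Steps S800 through S850 can be sketched end to end as below. The engine stubs, the dictionary shape of a signal, the threshold value, and the broadcast callback are all illustrative assumptions; a real system would plug in actual speech recognition and translation engines.

```python
from difflib import SequenceMatcher

RECORD_THRESHOLD = 0.75  # assumed preset recording condition (S840)

def run_pipeline(signals, forward_engines, backward_engines, broadcast):
    """signals: dicts with 'lang' (from S800 position info) and 'text'.
    forward_engines / backward_engines: per-language callables standing
    in for the first and second translation engines."""
    # S810: group signals by the language derived from their position info
    groups = {}
    for sig in signals:
        groups.setdefault(sig["lang"], []).append(sig)
    for lang, members in groups.items():
        fwd, bwd = forward_engines[lang], backward_engines[lang]
        for sig in members:
            first = fwd(sig["text"])   # S820: first grouping translation result
            back = bwd(first)          # S830: input to the second engine
            sim = SequenceMatcher(None, sig["text"], back).ratio()
            if sim < RECORD_THRESHOLD:  # S840: recording condition met
                # S850: broadcast to terminals sharing the signal's language
                broadcast(lang, sig, first)
```

A lossless round trip yields similarity 1.0 and triggers no broadcast; a lossy round trip drops below the threshold and pushes the disputed signal and its grouping translation result out to the collaborative terminals.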
6. The multi-user multi-lingual collaborative speech translation method according to claim 5, wherein:
the steps S800 and S810 are implemented by a first process; the step S820 is implemented by a second process;
the second process and the first process communicate through a unidirectional data pipe.
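Claim 6 separates reception and grouping (the first process) from translation dispatch (the second process), joined by a one-way pipe. The sketch below uses Python's `multiprocessing.Pipe(duplex=False)`, whose receive end can only read and whose send end can only write; the payload format is an assumption, and for brevity both ends are exercised in one process here rather than in two OS processes.

```python
import multiprocessing as mp

def second_process(conn):
    """Stands in for the second process: receives grouped signals over
    the one-way pipe; the receive end cannot send anything back."""
    group = conn.recv()
    conn.close()
    return group  # in a real system: dispatch to the first translation engines

if __name__ == "__main__":
    # duplex=False makes the pipe unidirectional: recv_end only reads,
    # send_end only writes, matching the claim's one-way data flow.
    recv_end, send_end = mp.Pipe(duplex=False)
    send_end.send({"zh": ["signal-1", "signal-2"]})  # first process's output
    send_end.close()
    print(second_process(recv_end))
```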
7. The multi-user multi-lingual collaborative speech translation method according to claim 5, wherein:
the step S830 further comprises:
selecting a partial set of grouping translation results from the first grouping translation results as the input of the second translation engine of the corresponding category.
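Claim 7 permits back-translating only part of the first grouping results, for example to bound the load on the second engine. One way to realize the selection is random sampling; the fixed ratio below is an illustrative assumption, as the claim does not specify how the subset is chosen.

```python
import random

def select_partial(first_results, ratio=0.5, seed=None):
    """Pick a subset of the first grouping translation results to feed
    into the second translation engine of the corresponding category.
    At least one result is always selected."""
    rng = random.Random(seed)
    k = max(1, int(len(first_results) * ratio))
    return rng.sample(first_results, k)
```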
CN202110032594.4A 2021-01-19 2021-01-19 Multi-user multi-language cooperative speech translation system and method Active CN112818702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110032594.4A CN112818702B (en) 2021-01-19 2021-01-19 Multi-user multi-language cooperative speech translation system and method


Publications (2)

Publication Number Publication Date
CN112818702A CN112818702A (en) 2021-05-18
CN112818702B true CN112818702B (en) 2024-02-27

Family

ID=75868980


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299276B (en) * 2021-05-25 2023-08-29 北京捷通华声科技股份有限公司 Multi-person multi-language identification and translation method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2017191959A (en) * 2016-04-11 2017-10-19 株式会社日立製作所 Multilanguage voice translation system for tv conference system
CN107306380A (en) * 2016-04-20 2017-10-31 中兴通讯股份有限公司 A kind of method and device of the object language of mobile terminal automatic identification voiced translation
CN108304391A (en) * 2018-01-25 2018-07-20 芜湖应天光电科技有限责任公司 A kind of adaptive translator based on GPS positioning
CN108710616A (en) * 2018-05-23 2018-10-26 科大讯飞股份有限公司 A kind of voice translation method and device
CN111753559A (en) * 2020-06-28 2020-10-09 语联网(武汉)信息技术有限公司 Large-scale translation corpus task processing system under multi-source input mode

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR20150093482A (en) * 2014-02-07 2015-08-18 한국전자통신연구원 System for Speaker Diarization based Multilateral Automatic Speech Translation System and its operating Method, and Apparatus supporting the same


Non-Patent Citations (1)

Title
Design and Implementation of Multilingual Real-Time Translation Software Based on Multimodal Input; Quan Chaochen; Deng Changming; Yuan Lingyun; Journal of Yunnan Normal University (Natural Sciences Edition), No. 4, pp. 44-48 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant