CN108538284A - Method and device for presenting simultaneous interpretation results, and simultaneous interpretation method and device - Google Patents
Method and device for presenting simultaneous interpretation results, and simultaneous interpretation method and device
- Publication number
- CN108538284A CN108538284A CN201710129317.9A CN201710129317A CN108538284A CN 108538284 A CN108538284 A CN 108538284A CN 201710129317 A CN201710129317 A CN 201710129317A CN 108538284 A CN108538284 A CN 108538284A
- Authority
- CN
- China
- Prior art keywords
- result
- voice
- recognition result
- character translation
- simultaneous interpretation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/005—Language recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
Abstract
The present application provides a method and device for presenting simultaneous interpretation results, and a simultaneous interpretation method and device. The method for presenting simultaneous interpretation results includes: determining the target language of the simultaneous interpretation; obtaining from a server the speech recognition result of the source-language speech, and/or, the text translation result in the target language, the text translation result being a translation of the speech recognition result; and presenting the speech recognition result and/or the text translation result. With the embodiments of the present application, a user can conveniently and clearly view the speech recognition result and/or the text translation result on a simultaneous interpretation page on an intelligent terminal, without each user needing to be equipped with a dedicated simultaneous interpretation terminal, thereby saving the cost of such terminals.
Description
Technical field
The present application relates to the field of simultaneous interpretation technology, and in particular to a method and device for presenting simultaneous interpretation results, and a simultaneous interpretation method and device.
Background technology
Simultaneous interpretation, abbreviation simultaneous interpretation (Simultaneous Interpretation), also known as simultaneous interpretation, synchronous interpretation,
It refers in the application scenarios such as meeting or speech, in the case where not interrupting speaker's speech, incessantly by speaker
Speech content translate to a kind of interpretative system of personnel participating in the meeting.
In the prior art, mostly it is to provide simultaneous interpretation service by distinctive simultaneous interpretation instrument, this is turned in unison
Translating apparatus preparation has an earphone special, and speech or lines are synchronized listening to interpreters by wearing earphone special and be translated into not by user
With voice provided by simultaneous interpretation instrument after languages, different language after translation.
Summary of the invention
The inventors found in the course of research that providing simultaneous interpretation through dedicated equipment is currently inconvenient for users, and that equipping every user with a specific interpretation device and special-purpose headset wastes the cost of such devices and headsets.
Based on this, the present application provides a method for presenting simultaneous interpretation results and a simultaneous interpretation method, in order to save the cost of configuring an interpretation device and headset for each user, and to make it convenient for users to view the simultaneous interpretation results, including the speech recognition result of the source-language speech and the text translation result in the target language.
The present application further provides a device for presenting simultaneous interpretation results and a simultaneous interpretation device, to ensure the implementation and application of the above methods in practice.
The present application discloses a method for presenting simultaneous interpretation results, the method being applied on an intelligent terminal and including:
determining the target language of the simultaneous interpretation;
obtaining from a server the speech recognition result of the source-language speech, and/or, the text translation result in the target language, the text translation result being a translation of the speech recognition result;
presenting the speech recognition result and/or the text translation result.
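The three client-side steps above can be pictured as a short sketch. The message shape (`asr`/`mt` fields) and all function names here are assumptions chosen for illustration; the application does not specify them:

```python
# Illustrative sketch of the client-side method: determine the target
# language, obtain recognition and/or translation results streamed from
# the server, and present whichever results are available.

def present_results(messages, target_language):
    """Collect what a simultaneous-interpretation page would present."""
    shown = []
    for msg in messages:                       # messages streamed from the server
        asr = msg.get("asr")                   # speech recognition result, if any
        mt = msg.get("mt")                     # text translation result, if any
        if asr is not None or mt is not None:  # present either result, or both
            shown.append({"lang": target_language, "asr": asr, "mt": mt})
    return shown

demo = [{"asr": "ni hao"}, {"asr": "ni hao", "mt": "hello"}, {}]
print(present_results(demo, "en"))
```

As in the claim, the terminal may receive only the recognition result, only the translation result, or both in one message.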
The present application also discloses a simultaneous interpretation method, the method being applied on a server and including:
in response to an intelligent terminal sending a selected language, determining the selected language as the target language of the simultaneous interpretation;
in response to the source-language speech being triggered, recognizing the source-language speech to obtain a speech recognition result, and/or translating the speech recognition result to obtain the text translation result in the target language;
sending the speech recognition result and/or the text translation result to the intelligent terminal for presentation.
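The server-side flow can likewise be sketched in a few lines. The `recognize()` and `translate()` stubs below stand in for real ASR and machine-translation engines, which the application does not specify:

```python
# Hedged sketch of the server-side method: recognize source-language
# speech, translate the recognition result into the target language,
# and send both results to the terminal for presentation.

def recognize(audio):
    # Placeholder ASR: assume the audio object already carries its transcript.
    return audio["transcript"]

def translate(text, target_language):
    # Placeholder MT: a tiny lookup table standing in for a translation engine.
    table = {("你好", "en"): "hello"}
    return table.get((text, target_language), text)

def interpret(audio, target_language, send):
    asr = recognize(audio)                    # speech recognition result
    mt = translate(asr, target_language)      # text translation result
    send({"asr": asr, "mt": mt})              # push both to the intelligent terminal

out = []
interpret({"transcript": "你好"}, "en", out.append)
print(out)  # [{'asr': '你好', 'mt': 'hello'}]
```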
The present application further discloses a device for presenting simultaneous interpretation results, the device being integrated on an intelligent terminal and including:
a determination unit, configured to determine the target language of the simultaneous interpretation;
an acquisition unit, configured to obtain from a server the speech recognition result of the source-language speech, and/or, the text translation result in the target language, the text translation result being a translation of the speech recognition result;
a presentation unit, configured to present the speech recognition result and/or the text translation result.
Wherein the determination unit is specifically configured to: in response to the user selecting a language on the simultaneous interpretation page provided by the intelligent terminal, determine the selected language as the target language of the simultaneous interpretation.
Wherein the presentation unit may include a presentation subunit, configured to present the speech recognition result and/or the text translation result on the simultaneous interpretation page according to a flag bit returned by the server, the flag bit identifying whether a sentence of the corresponding source-language speech has ended.
Wherein the presentation subunit can be implemented in several ways. In a first embodiment, it may include a flag-bit judgment module, a speech-frame display module, and a speech-frame generation module, wherein:
the flag-bit judgment module is configured to judge whether a flag bit sent by the server has been received;
the speech-frame display module is configured to, when the judgment result of the flag-bit judgment module is no, present in real time, in a first speech display frame of the simultaneous interpretation page, the current speech recognition result accumulated from the last received flag bit to the current moment;
the speech-frame generation module is configured to, when the judgment result of the flag-bit judgment module is yes (a flag bit sent by the server has been received), generate a second speech display frame on the simultaneous interpretation page, the second speech display frame being used to present the next speech recognition result.
Wherein the presentation subunit may further include a speech judgment module and a speech replacement module. The speech judgment module is configured to, in response to the server sending an updated speech recognition result, judge whether the updated speech recognition result is identical to the current speech recognition result; the speech replacement module is configured to, when the result of the speech judgment module is no, replace the current speech recognition result presented in the first speech display frame with the updated speech recognition result.
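The modules of this first embodiment amount to a small state machine over display frames. The sketch below is illustrative only (class and method names are assumptions): while no sentence-end flag bit has arrived, partial results render live into the current frame; a flag bit opens a new frame; an updated result that differs replaces the frame's content:

```python
# Illustrative state machine for the flag-bit-driven display frames.

class SpeechFrames:
    def __init__(self):
        self.frames = [""]                   # the first speech display frame

    def on_message(self, text=None, flag=False):
        if flag:                             # flag bit: a sentence has ended
            self.frames.append("")           # generate the next display frame
        elif text is not None and text != self.frames[-1]:
            self.frames[-1] = text           # live update / replacement in place

ui = SpeechFrames()
ui.on_message(text="how")
ui.on_message(text="how are")                # partial result replaces partial result
ui.on_message(flag=True)                     # sentence ended: open a new frame
ui.on_message(text="fine")
print(ui.frames)  # ['how are', 'fine']
```

The comparison in `on_message` plays the role of the speech judgment module: an identical update triggers no replacement.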
In a second embodiment, the presentation subunit may include a flag-bit judgment module, a text-box display module, and a text-box generation module, wherein:
the flag-bit judgment module is configured to judge whether a flag bit sent by the server has been received. The text-box display module is configured to, when the judgment result of the flag-bit judgment module is no, present in real time, in a first text display frame of the simultaneous interpretation page, the current text translation result accumulated from the last received flag bit to the current moment. The text-box generation module is configured to, when the judgment result of the flag-bit judgment module is yes, generate a second text display frame on the simultaneous interpretation page, the second text display frame being used to present the next text translation result.
Wherein the presentation subunit may further include a text judgment module and a text replacement module. The text judgment module is configured to, in response to the server sending an updated text translation result, judge whether the updated text translation result is identical to the current text translation result; the text replacement module is configured to, when the result of the text judgment module is no, replace the current text translation result presented in the first text display frame with the updated text translation result.
In a third embodiment, the presentation subunit may include:
a flag-bit judgment module, configured to judge whether a flag bit sent by the server has been received; a first trigger module, configured to, when the result of the flag-bit judgment module is no, simultaneously trigger the speech-frame display module and the text-box display module; and a second trigger module, configured to, when the result of the flag-bit judgment module is yes, simultaneously trigger the speech-frame generation module and the text-box generation module.
Wherein the presentation subunit may further include:
a third trigger module, configured to trigger the speech judgment module and the text judgment module; a fourth trigger module, configured to trigger the speech replacement module when the speech recognition result differs but the text translation result is identical; a fifth trigger module, configured to trigger the text replacement module when the text translation result differs but the speech recognition result is identical; and a sixth trigger module, configured to simultaneously trigger the speech replacement module and the text replacement module when both the text translation result and the speech recognition result differ.
Wherein the presentation device may further include:
a slide response unit, configured to, in response to a slide operation triggered on the simultaneous interpretation page, present in turn, according to the direction of the slide operation, the speech recognition result in each speech display frame and/or the text translation result in each text display frame.
Wherein the presentation device may further include:
a playback unit, configured to, in response to the text translation result being triggered, play the target-language speech corresponding to the text translation result.
The present application further discloses a simultaneous interpretation device, the device being integrated on a server and including:
a determination unit, configured to, in response to an intelligent terminal sending a selected language, determine the selected language as the target language of the simultaneous interpretation;
a speech recognition unit, configured to, in response to the source-language speech being triggered, recognize the source-language speech to obtain a speech recognition result;
a translation unit, configured to translate the speech recognition result to obtain the text translation result in the target language;
a transmission unit, configured to send the speech recognition result and/or the text translation result to the intelligent terminal for presentation.
The present application further discloses a device for presenting simultaneous interpretation results, characterized in that it includes a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for performing the following operations:
determining the target language of the simultaneous interpretation;
obtaining from a server the speech recognition result of the source-language speech, and/or, the text translation result in the target language, the text translation result being a translation of the speech recognition result;
presenting the speech recognition result and/or the text translation result.
Wherein determining the target language of the simultaneous interpretation may include:
in response to the user selecting a language on the simultaneous interpretation page provided by the intelligent terminal, determining the selected language as the target language of the simultaneous interpretation.
Wherein presenting the speech recognition result and/or the text translation result may include:
presenting the speech recognition result and/or the text translation result on the simultaneous interpretation page according to a flag bit returned by the server, the flag bit identifying whether a sentence of the corresponding source-language speech has ended.
Wherein presenting the speech recognition result on the simultaneous interpretation page according to the flag bit returned by the server includes:
judging whether a flag bit sent by the server has been received; if not, presenting in real time, in a first speech display frame of the simultaneous interpretation page, the speech recognition result accumulated from the last received flag bit to the current moment; if so, generating a second speech display frame on the simultaneous interpretation page, the second speech display frame being used to present the next speech recognition result.
Wherein, after the speech recognition result accumulated from the last received flag bit to the current moment has been presented in real time in the first speech display frame of the simultaneous interpretation page in the case where no flag bit sent by the server has been received, the device may further be configured so that the one or more programs executed by the one or more processors include instructions for performing the following operations:
in response to the server sending an updated speech recognition result, judging whether the updated speech recognition result is identical to the current speech recognition result; if not, replacing the current speech recognition result presented in the first speech display frame with the updated speech recognition result.
Wherein presenting the text translation result according to the flag bit returned by the server may include:
judging whether a flag bit sent by the server has been received; if not, presenting in real time, in a first text display frame of the simultaneous interpretation page, the current text translation result accumulated from the last received flag bit to the current moment; if so, generating a second text display frame on the simultaneous interpretation page, the second text display frame being used to present the next text translation result.
Wherein, after the current text translation result accumulated from the last received flag bit to the current moment has been presented in real time in the first text display frame of the simultaneous interpretation page in the case where no flag bit sent by the server has been received, the device may further be configured so that the one or more programs executed by the one or more processors include instructions for performing the following operations:
in response to the server sending an updated text translation result, judging whether the updated text translation result is identical to the current text translation result; if not, replacing the current text translation result presented in the first text display frame with the updated text translation result.
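The update path in this claim — compare the updated result with the one currently presented and redraw only on a difference — can be sketched as follows (class and attribute names are illustrative assumptions):

```python
# Illustrative diff-and-replace for updated text translation results:
# redraw the text display frame only when the update actually differs.

class TextBox:
    def __init__(self):
        self.current = ""    # text translation result currently presented
        self.redraws = 0     # how many times the frame was actually redrawn

    def on_update(self, updated):
        if updated != self.current:   # judge: identical to the current result?
            self.current = updated    # no  -> replace what is presented
            self.redraws += 1         # yes -> skip the redraw entirely

box = TextBox()
for update in ["he", "hello", "hello", "hello there"]:
    box.on_update(update)
print(box.current, box.redraws)  # hello there 3
```

Skipping identical updates avoids needless flicker as the server refines a partial translation.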
Wherein presenting the speech recognition result and the text translation result according to the flag bit returned by the server may include:
judging whether a flag bit sent by the server has been received; if not, presenting, in a first speech display frame of the simultaneous interpretation page, the current speech recognition result accumulated from the last received flag bit to the current moment, and presenting the current text translation result corresponding to the current speech recognition result in the first text display frame corresponding to the first speech display frame; if so, generating a second speech display frame and a second text display frame on the simultaneous interpretation page, the second speech display frame being used to present the next speech recognition result and the second text display frame being used to present the next text translation result.
Wherein, after the current speech recognition result accumulated from the last received flag bit to the current moment has been presented in the first speech display frame of the simultaneous interpretation page, and the current text translation result corresponding to the current speech recognition result has been presented in the first text display frame corresponding to the first speech display frame, in the case where no flag bit sent by the server has been received, the device may further be configured so that the one or more programs executed by the one or more processors include instructions for performing the following operations:
in response to the server sending an updated speech recognition result and an updated text translation result, judging whether the updated speech recognition result is identical to the current speech recognition result, and whether the updated text translation result is identical to the current text translation result; if the speech recognition result differs but the text translation result is identical, replacing the current speech recognition result presented in the first speech display frame with the updated speech recognition result; if the text translation result differs but the speech recognition result is identical, replacing the current text translation result presented in the first text display frame with the updated text translation result; and if both the text translation result and the speech recognition result differ, replacing the current speech recognition result presented in the first speech display frame with the updated speech recognition result, and replacing the current text translation result presented in the first text display frame with the updated text translation result.
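The three branches of this combined update reduce to two independent comparisons. This illustrative helper (all names are assumptions) returns which display frames would be refreshed:

```python
# Illustrative branch logic for the combined speech + text update:
# each result is compared independently, and only changed frames refresh.

def refresh_plan(cur_asr, cur_mt, new_asr, new_mt):
    refresh = []
    if new_asr != cur_asr:
        refresh.append("speech_frame")   # replace the first speech display frame
    if new_mt != cur_mt:
        refresh.append("text_frame")     # replace the first text display frame
    return refresh

print(refresh_plan("ni hao", "hello", "ni hao ma", "hello"))    # ['speech_frame']
print(refresh_plan("ni hao", "hello", "ni hao", "hi"))          # ['text_frame']
print(refresh_plan("ni hao", "hello", "ni hao ma", "hi"))       # ['speech_frame', 'text_frame']
```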
Wherein the device may further be configured so that the one or more programs executed by the one or more processors include instructions for performing the following operations:
in response to a slide operation triggered on the simultaneous interpretation page, presenting in turn, according to the direction of the slide operation, the speech recognition result in each speech display frame and/or the text translation result in each text display frame.
Wherein the device may further be configured so that the one or more programs executed by the one or more processors include instructions for performing the following operations:
in response to the text translation result being triggered, playing the target-language speech corresponding to the text translation result.
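The playback instruction can be sketched as a tap handler; `synthesize()` below is only a stand-in for a real text-to-speech engine, which the application does not name:

```python
# Illustrative playback path: when the user taps a text translation
# result, synthesize target-language speech for it and play it.

def synthesize(text, language):
    # Placeholder TTS: return a token naming the audio that would be produced.
    return f"<{language} audio for '{text}'>"

def on_translation_tapped(text, target_language, play):
    play(synthesize(text, target_language))   # play the target-language speech

played = []
on_translation_tapped("hello", "en", played.append)
print(played)  # ["<en audio for 'hello'>"]
```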
The present application further discloses a device for simultaneous interpretation, characterized in that it includes a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more central processing units, the one or more programs including instructions for performing the following operations:
in response to an intelligent terminal sending a selected language, determining the selected language as the target language of the simultaneous interpretation;
in response to the source-language speech being triggered, recognizing the source-language speech to obtain a speech recognition result, and/or translating the speech recognition result to obtain the text translation result in the target language;
sending the speech recognition result and/or the text translation result to the intelligent terminal for presentation.
In the embodiments of the present application, after connecting to a server, an intelligent terminal can present the speech recognition result and/or the text translation result of the simultaneous interpretation obtained from the server, where the speech recognition result and the text translation result can be presented either individually or simultaneously. This makes it convenient for users to clearly view the speech recognition result and/or the text translation result on the intelligent terminal, without each user needing to be equipped with a dedicated simultaneous interpretation terminal and headset, thereby saving the cost of such devices.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is an exemplary scenario architecture diagram of the present application in practical applications;
Fig. 2 is a flowchart of an embodiment of the method for presenting simultaneous interpretation results of the present application;
Fig. 3 is another flowchart of an embodiment of the method for presenting simultaneous interpretation results of the present application;
Fig. 4 is a schematic diagram of a simultaneous interpretation page on the intelligent terminal side of the present application that presents the speech recognition result and the text translation result simultaneously;
Fig. 5 is a flowchart of an embodiment of the simultaneous interpretation method of the present application;
Fig. 6 is a structural block diagram of an embodiment of the device for presenting simultaneous interpretation results of the present application;
Fig. 7 is a structural block diagram of an embodiment of the simultaneous interpretation device of the present application;
Fig. 8 is a block diagram of a device for presenting simultaneous interpretation results according to an exemplary embodiment of the present application;
Fig. 9 is a structural schematic diagram of the simultaneous interpretation device in the embodiments of the present application.
Detailed description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The application can be used in numerous general-purpose or special-purpose computing device environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor devices, and distributed computing environments including any of the above devices or equipment.
The application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. The application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in both local and remote computer storage media including storage devices.
Referring to Fig. 1, an exemplary scenario architecture diagram of the embodiments of the method for presenting simultaneous interpretation results and of the simultaneous interpretation method of the present application in practical applications is shown. A user can connect to the server 102 through the intelligent terminal 101, for example by downloading a simultaneous interpretation app from the server 102 and launching it to open the simultaneous interpretation page; alternatively, the simultaneous interpretation page can be obtained from the server 102 by scanning a QR code, and on the simultaneous interpretation page a language can be selected as the target language of the simultaneous interpretation. For example, if English is selected, the server 102 uses English as the target language when performing the simultaneous interpretation, and the speech to be recognized is in the source language; for instance, for speech produced by someone speaking Chinese, the source language is Chinese. The server 102 in Fig. 1 can perform speech recognition on the source-language speech to obtain a speech recognition result, and translate the speech recognition result to obtain the text translation result in the target language. The server 102 can send the speech recognition result or the text translation result individually to the intelligent terminal 101 for presentation, or send the speech recognition result and the text translation result to the intelligent terminal 101 simultaneously, making it convenient for the user to view the speech recognition result and/or the text translation result on the simultaneous interpretation page provided by the intelligent terminal 101.
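One way to picture the terminal/server exchange in this scenario is as a stream of JSON messages carrying the sentence-end flag bit. The field names (`type`, `target`, `asr`, `mt`, `flag`) are assumptions for illustration; the application does not define a wire format:

```python
# Illustrative wire format for the Fig. 1 exchange: the terminal sends its
# selected target language, then the server streams result messages, each
# carrying the flag bit that marks whether the sentence has ended.
import json

select_language = json.dumps({"type": "select_language", "target": "en"})

server_stream = [
    json.dumps({"asr": "你好", "mt": "hello", "flag": 0}),    # mid-sentence
    json.dumps({"asr": "你好。", "mt": "hello.", "flag": 1}),  # sentence ended
]

for raw in server_stream:
    msg = json.loads(raw)
    # flag == 1 tells the page to close the current display frames
    # and open new ones for the next sentence.
    print(msg["mt"], "END" if msg["flag"] else "...")
```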
Referring to Fig. 2, based on the application scenario shown in Fig. 1, a flowchart of an embodiment of the method for presenting simultaneous interpretation results of the present application is shown. This embodiment can be applied on the intelligent terminal 101 and may include the following steps:
Step 201: Determine the target language of the simultaneous interpretation.
In this embodiment, the target language of the simultaneous interpretation can first be determined according to the language selected by the user. In practice, after the user triggers simultaneous interpretation on the intelligent terminal through a start button or link, or after the simultaneous interpretation app downloaded by the user is opened, the intelligent terminal provides multiple language options for the user to choose from. For example, if the source language of the speaker's speech is Chinese, options such as Russian, English, and French are provided for each user to select; a user who wishes to have the speaker's source-language speech in a conference scenario translated into the text of a particular language can simply choose that language. For example, if English is selected, English is the target language.
Specifically, a language selection box for various languages can be presented through the simultaneous interpretation page; for example, Chinese, English, and French each correspond to an option box, and the intelligent terminal determines the language selected by the user as the target language of the simultaneous interpretation.
Step 202: Obtain from the server the speech recognition result of the source-language speech, and/or the text translation result in the target language, the text translation result being the translation of the speech recognition result.

In practical applications, after the user selects the target language, the target language can be sent to the server; the server then recognizes the source-language speech in real time to obtain the speech recognition result, and translates the speech recognition result according to the target language selected by the user to obtain the text translation result. The intelligent terminal may obtain only the speech recognition result from the server, only the text translation result, or both at the same time.
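As an illustration of step 202, the exchange between the intelligent terminal and the server might be sketched as follows. The message shape (a `type` field and a `text` field) is an assumption made for illustration only; the patent does not fix a wire format.

```python
# A minimal sketch of step 202, assuming the server delivers results as
# messages with hypothetical "type" and "text" fields; the intelligent
# terminal keeps only the kinds its presentation mode asks for.

def split_results(messages, want_recognition=True, want_translation=True):
    """Separate server messages into speech recognition results and
    text translation results, according to the chosen translation type."""
    recognition, translation = [], []
    for msg in messages:
        if msg["type"] == "recognition" and want_recognition:
            recognition.append(msg["text"])
        elif msg["type"] == "translation" and want_translation:
            translation.append(msg["text"])
    return recognition, translation
```

Passing `want_translation=False`, for example, corresponds to the mode in which only the speech recognition result is presented.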
Step 203: Present the speech recognition result and/or the text translation result.

The intelligent terminal can present the speech recognition result and/or the text translation result obtained from the server. Depending on what is to be shown, the simultaneous interpretation page of the simultaneous interpretation APP may present only the speech recognition result, only the text translation result, or the speech recognition result together with the corresponding text translation result. Of course, the simultaneous interpretation page need not be used at all; the speech recognition result and/or the text translation result can also be presented in, for example, a dialog box. Whatever presentation mode is used, it suffices that the user can see the speech recognition result and/or the text translation result on the intelligent terminal.

In practical applications, because the display screen of an intelligent terminal is limited in size, the number of speech recognition results and/or text translation results that one screen of the simultaneous interpretation page can show is limited. In the embodiment of the present application, speech recognition results or text translation results that have scrolled off the display screen can be viewed again by sliding. The user can trigger a sliding operation on the simultaneous interpretation page, for example sliding upward to view earlier speech recognition results and text translation results, so step 203 may be followed by:
In response to a sliding operation triggered on the simultaneous interpretation page, presenting in turn, according to the direction of the sliding operation, the speech recognition result of each speech display box and/or the text translation result of each text display box.

In this step, it can be detected whether a sliding operation is triggered on the simultaneous interpretation page, for example the user's finger sliding downward on the touch screen. The speech recognition results and/or text translation results that have disappeared from the simultaneous interpretation page can then be presented again, one after another, in the order given by the direction of the slide.
As it can be seen that in the present embodiment, intelligent terminal can will get the voice recognition result of simultaneous interpretation from server
And/or character translation result is showed, wherein can individually show voice recognition result and character translation result can also
Simultaneously show, can facilitate user on intelligent terminal it is clearer check voice recognition result and/or character translation as a result,
It does not need each user and is equipped with distinctive simultaneous interpretation terminal and earphone etc., also act as and save simultaneous interpretation instrument and earphone yet
Deng cost purpose.
Referring to Fig. 3, based on the application scenario shown in Fig. 1, a flowchart of another embodiment of a simultaneous interpretation result presentation method of the present application is shown. This embodiment can be applied to intelligent terminal 101 and may include the following steps:

Step 301: After the simultaneous interpretation application program is downloaded from the server and installed, the intelligent terminal obtains the simultaneous interpretation page in response to the simultaneous interpretation application program being opened; alternatively, the intelligent terminal obtains the simultaneous interpretation page from the server in response to the intelligent terminal scanning a two-dimensional code.
In this embodiment, the intelligent terminal can connect to the server in several ways. Taking an actual conference scenario as an example, a wireless network can be deployed at the conference venue; the intelligent terminal can download and install the simultaneous interpretation APP from the server over the wireless network, and the user opening the simultaneous interpretation APP then triggers the opening of the simultaneous interpretation page. Alternatively, the intelligent terminal can scan a two-dimensional code at the conference venue with its camera to obtain the simultaneous interpretation page from the server. Of course, the intelligent terminal can also obtain the simultaneous interpretation page in other ways, for example by sending a request to the server.
Step 302: The intelligent terminal determines the target language of the simultaneous interpretation.

After the simultaneous interpretation page is presented on the intelligent terminal, some preset language selection boxes can be provided on the page, for example one option box each for Chinese, English, French and so on. A user who wishes to have the source-language speech of the speaker at the conference translated into the text of a particular language selects the option box of that language, and the intelligent terminal determines the language selected by the user as the target language of the simultaneous interpretation. For example, if the user selects English on the simultaneous interpretation page, English is used as the target language of the simultaneous interpretation.

Further, a translation type option box can also be provided on the simultaneous interpretation page, the translation type indicating whether the intelligent terminal is to present only the speech recognition result of the source language, only the text translation result of the target language, or the speech recognition result and the text translation result at the same time.
Step 303: The intelligent terminal obtains the speech recognition result and/or the text translation result from the server.

In practical applications, to enable the intelligent terminal to determine the sentence segmentation of the speech recognition results received from the server, for example which speech recognition result belongs to one sentence and which belongs to the next, the server can, whenever a sentence of source-language speech ends, send not only the speech recognition result but also a flag bit identifying that the sentence of source-language speech corresponding to that speech recognition result has ended. Likewise, when sending text translation results, a flag bit can indicate that the text translation result corresponding to one sentence has been sent in full.
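The flag-bit framing described above can be sketched as follows; the stream layout (partial results interleaved with a "1" marker at each sentence end) is a hypothetical rendering of the mechanism, not a format the patent specifies.

```python
# A hypothetical rendering of the flag-bit framing: the server streams
# partial results and emits the flag "1" when a source-language sentence
# ends. On the terminal side, the latest partial received before each
# flag is the final result for that sentence.

FLAG = "1"

def group_by_flag(stream):
    """Return (finished_sentences, unfinished_partial)."""
    finished, current = [], None
    for item in stream:
        if item == FLAG:
            if current is not None:
                finished.append(current)
            current = None
        else:
            current = item  # a newer partial supersedes the older one
    return finished, current
```

Results received after a flag bit then belong to the next sentence, which is exactly the segmentation the intelligent terminal needs for its display boxes.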
Step 304: The intelligent terminal presents the speech recognition result and/or the text translation result.

On the simultaneous interpretation page, the intelligent terminal presents the received speech recognition result and/or text translation result according to the flag bits returned by the server. Specifically, this step has several implementations, depending on whether the intelligent terminal presents only the speech recognition result, only the text translation result, or both at the same time.
In the first implementation, the server sends only speech recognition results to the intelligent terminal. The process of presenting the speech recognition results on the simultaneous interpretation page according to the flag bits returned by the server may then include the following steps A1 to A5:

Step A1: Judge whether a flag bit sent by the server has been received; if not, proceed to step A2; if so, proceed to step A5.

In this embodiment, while receiving speech recognition results, the intelligent terminal can also judge in real time whether a flag bit sent by the server has been received. If a flag bit is received, the speech corresponding to the speech recognition results received between the previous flag bit and this one is one complete sentence, and the speech recognition results received after this flag bit correspond to the next sentence of source-language speech. If no flag bit has been received, the sentence corresponding to the speech recognition results received since the last flag bit has not yet ended.

The flag bit can be represented by a single binary character, for example the flag bit "1".
Step A2: Present, in real time, the current speech recognition result accumulated between the last received flag bit and the current moment in a first speech display box of the simultaneous interpretation page.

Since the sentence of source-language speech being processed has not yet ended, the current speech recognition result can be presented in real time in the same speech display box. For example, if the speech recognition result corresponding to the source-language speech is the Chinese text for "purchase apple" and no flag bit has been received, that Chinese text is presented in the first speech display box.
Step A3: In response to receiving an updated speech recognition result sent by the server, judge whether the updated speech recognition result and the current speech recognition result are identical; if not, proceed to step A4.

If, before the next flag bit arrives, an updated speech recognition result is received from the server, for example the Chinese text for "purchase apple user", the intelligent terminal can judge whether the updated speech recognition result and the current speech recognition result are identical; if they differ, the speech recognition result has changed.

Step A4: Replace the current speech recognition result presented in the first speech display box with the updated speech recognition result.

The intelligent terminal then replaces "purchase apple", presented in the first speech display box, with the Chinese text for "purchase apple user", so that "purchase apple user" is presented in the same first speech display box.
Step A5: Generate a second speech display box on the simultaneous interpretation page, the second speech display box being used to present the next speech recognition result.

If a flag bit has been received, the speech recognition result corresponding to one sentence of source-language speech has been presented in full, so the intelligent terminal can generate a second speech display box for presenting the next speech recognition result. In this way, the speech recognition result corresponding to one sentence is presented in a single speech display box, making it easier for users viewing the simultaneous interpretation page to understand the speaker's speech.
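Steps A1 to A5 amount to a small state machine over display boxes, which might be sketched as follows; the list of strings stands in for the page's actual speech display boxes, and the class name is invented for illustration.

```python
# A sketch of steps A1-A5: within one speech display box, a changed
# result replaces the one currently shown (steps A3/A4); a received
# flag bit opens a new box for the next sentence (step A5).

class RecognitionPresenter:
    def __init__(self):
        self.boxes = [""]  # boxes[-1] is the box being filled in real time

    def on_result(self, text):
        # Steps A2-A4: present, or replace, the current box's content
        if text != self.boxes[-1]:
            self.boxes[-1] = text

    def on_flag(self):
        # Step A5: the sentence is complete; generate a second box
        self.boxes.append("")
```

The same structure carries over to steps B1 to B5 and C1 to C7, with text display boxes alongside the speech display boxes.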
In the second implementation, the server sends only text translation results to the intelligent terminal. The process of presenting the text translation results according to the flag bits returned by the server may include the following steps B1 to B5:

Step B1: Judge whether a flag bit sent by the server has been received; if not, proceed to step B2; if so, proceed to step B5.

In this embodiment, while receiving text translation results, the intelligent terminal can also judge in real time whether a flag bit sent by the server has been received. If a flag bit is received, the source-language speech corresponding to the text translation results received between the previous flag bit and this one is one complete sentence, and the text translation results received after this flag bit correspond to the next sentence of source-language speech. If no flag bit has been received, the sentence corresponding to the text translation results received since the last flag bit has not yet ended.
Step B2: Present, in real time, the current text translation result accumulated between the last received flag bit and the current moment in a first text display box of the simultaneous interpretation page.

Since the sentence of source-language speech being processed has not yet ended, the current text translation result can be presented in real time in the same text display box. For example, if the speech recognition result corresponding to the source-language speech is "purchase apple", the text translation result in the target language English is "buy apple", and the English text "buy apple" is presented in the first text display box. Each text display box corresponds one-to-one with a speech display box, and a mutually corresponding text display box and speech display box correspond to the same source-language speech.
Step B3: In response to receiving an updated text translation result sent by the server, judge whether the updated text translation result and the current text translation result are identical; if not, proceed to step B4.

If, before the next flag bit arrives, an updated text translation result is received from the server, for example "Apple users" as the text translation result corresponding to the speech recognition result "purchase apple user", the intelligent terminal can judge whether the updated text translation result and the current text translation result are identical; here "Apple users" indeed differs from "buy apple".
Step B4: Replace the current text translation result presented in the first text display box with the updated text translation result.

When the updated text translation result differs from the current text translation result, the updated text translation result, for example "Apple users", directly replaces the current text translation result presented in the first text display box. As can be seen, this embodiment can modify text translation results in real time according to whether the server has sent a flag bit; moreover, when the same speech recognition result admits several text translations, the text translation result can also be revised, for example "apple" being translated at first as the fruit "apple" and later as the phone "Apple".
Step B5: Generate a second text display box on the simultaneous interpretation page, the second text display box being used to present the next text translation result.

If a flag bit has been received, the text translation result corresponding to one sentence of source-language speech has been presented in full, so the intelligent terminal can generate a second text display box for presenting the next text translation result. In this way, the text translation result corresponding to one sentence is presented in a single text display box, making it easier for users viewing the simultaneous interpretation page to understand the speaker's speech.
In the third implementation, the server sends both speech recognition results and text translation results to the intelligent terminal. The process of presenting the speech recognition results and text translation results according to the flag bits returned by the server may include the following steps C1 to C7:

Step C1: Judge whether a flag bit sent by the server has been received; if not, proceed to step C2; if so, proceed to step C7.

In this embodiment, while receiving speech recognition results and text translation results, it is still necessary to judge whether a flag bit sent by the server has been received.
Step C2: Present the current speech recognition result accumulated between the last received flag bit and the current moment in a first speech display box of the simultaneous interpretation page, and present the current text translation result corresponding to the current speech recognition result in the corresponding first text display box.

In this embodiment, a single flag bit suffices to identify whether the source-language speech corresponding to a speech recognition result or text translation result has completed a sentence. Because the text translation result is translated in real time from the speech recognition result, when the source-language speech corresponding to a speech recognition result completes a sentence, the corresponding text translation result also corresponds to one complete sentence; conversely, if the source-language speech corresponding to a text translation result completes a sentence, the corresponding speech recognition result also corresponds to one complete sentence.
Step C3: In response to receiving an updated speech recognition result and an updated text translation result sent by the server, judge whether the updated speech recognition result and the current speech recognition result are identical, and whether the updated text translation result and the current text translation result are identical.

In this step, it can be judged simultaneously whether the updated speech recognition result and the current speech recognition result are identical, and whether the updated text translation result and the current text translation result are identical.
Step C4: If the speech recognition results differ while the text translation results are identical, replace the current speech recognition result presented in the first speech display box with the updated speech recognition result.

If only the speech recognition result differs, only the content presented in the first speech display box is replaced with the updated speech recognition result.

Step C5: If the text translation results differ while the speech recognition results are identical, replace the current text translation result presented in the first text display box with the updated text translation result.

If only the text translation result differs, only the current text translation result presented in the first text display box is replaced with the updated text translation result.
Step C6: If both the text translation results and the speech recognition results differ, replace the current speech recognition result presented in the first speech display box with the updated speech recognition result, and replace the current text translation result presented in the first text display box with the updated text translation result.

That is, if both the text translation result and the speech recognition result differ, the updated speech recognition result replaces the current speech recognition result presented in the first speech display box, and the updated text translation result replaces the current text translation result presented in the first text display box.
Step C7: Generate a second speech display box and a second text display box on the simultaneous interpretation page, the second speech display box being used to present the next speech recognition result and the second text display box being used to present the next text translation result.

In this step, a second speech display box and a second text display box are generated on the simultaneous interpretation page, the second speech display box presenting the next speech recognition result and the second text display box presenting the next text translation result.
Referring to Fig. 4, a schematic diagram of a simultaneous interpretation page that presents speech recognition results and text translation results simultaneously on the intelligent terminal side in practical application is shown. The left side of Fig. 4 shows the speech recognition results, each display box there being a speech display box; the right side of Fig. 4 shows the text translation results, each display box there being a text display box. Of course, the content of Fig. 4 is only a specific example under a specific scenario and should not be construed as limiting the present application.
In practical applications, the user may also wish to hear the speech of the target-language text translation result, so step 304 may be followed by:

Step 305: In response to the text translation result being triggered, play the target-language speech corresponding to the text translation result.

The intelligent terminal can detect whether a click operation or the like occurs on any of the text translation results presented on the simultaneous interpretation page; if a click operation is detected, the target-language speech corresponding to the clicked text translation result can be played.
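Step 305 can be sketched as a simple click handler; `synthesize` and `play` below are placeholders for a text-to-speech engine and an audio output, neither of which the patent names.

```python
# A sketch of step 305: tapping a presented text translation result
# plays the corresponding target-language speech. synthesize() and
# play() are stand-ins for a real TTS engine and audio sink.

def on_translation_click(text, synthesize, play):
    audio = synthesize(text)  # target-language speech for this result
    play(audio)
    return audio
```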
As it can be seen that in the embodiment of the present application, intelligent terminal, can be in the simultaneous interpretation page after being connected to server
On, directly the voice recognition result that simultaneous interpretation is got from server and/or character translation result are showed, wherein
Voice recognition result and character translation result can individually be showed and can also be showed simultaneously, user can be facilitated at intelligent end
It is clearer on the simultaneous interpretation page on end to check voice recognition result and/or character translation as a result, also each use
Family is all equipped with distinctive simultaneous interpretation terminal, plays the purpose for the cost for saving simultaneous interpretation terminal.
Referring to Fig. 5, based on the application scenario shown in Fig. 1, a flowchart of an embodiment of a simultaneous interpretation result transmission method of the present application is shown. This embodiment can be applied to server 102 and may include the following steps:

Step 501: In response to the intelligent terminal sending a selected language, determine the selected language as the target language of the simultaneous interpretation.
After determining the target language, the intelligent terminal sends the selected language to the server, so that the server also uses the language selected by the user as the target language of the simultaneous interpretation. Further, if the user has selected a translation type on the simultaneous interpretation page, the translation type is also sent to the server, so that the server can subsequently determine whether to send only the speech recognition result to the intelligent terminal, only the text translation result, or the speech recognition result and the text translation result at the same time.
Step 502: In response to source-language speech being triggered, recognize the source-language speech to obtain a speech recognition result, and/or translate the speech recognition result to obtain a text translation result in the target language.

After both the intelligent terminal and the server have determined the target language, if a user triggers source-language speech in the conference scenario, for example by starting to talk or by playing back a recording, the server can perform speech recognition on the source-language speech in real time to obtain a speech recognition result. For example, when beginning a speech a user may first greet the audience in Chinese, and the server's speech recognition result is then the Chinese text of that greeting. For another example, when a name such as "Zhang San" is mentioned in the user's Chinese speech, the speech recognition result is the Chinese characters for "Zhang San".
In this embodiment, the speech recognition process may include several parts: preprocessing of the speech signal, feature extraction and pattern matching. Preprocessing includes processes such as pre-filtering, sampling and quantization, windowing, endpoint detection and pre-emphasis. The most important part, feature extraction, extracts characteristic parameters from the preprocessed speech signal; the extracted characteristic parameters should represent the speech features effectively and discriminate well, the parameters of different orders should be well independent of one another, and the characteristic parameters should be convenient to compute so as to ensure that speech recognition runs in real time. In the training stage, after certain processing of the extracted characteristic parameters, a model is established for the entry of each speech unit and saved as a template library. In the recognition stage, the speech characteristic parameters of the input signal are obtained through the same channel to generate a test template, which is matched against the reference templates, and the reference template with the highest matching score is taken as the speech recognition result.
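The matching stage described above can be illustrated with a toy nearest-template matcher; real systems would use dynamic time warping or an acoustic model, and the feature vectors in the test are invented for illustration.

```python
# A toy version of the pattern-matching stage: the test template's
# characteristic parameters are scored against every reference template,
# and the entry of the best-matching template becomes the recognition
# result. Squared Euclidean distance stands in for a real matching score.

def recognize(test_features, template_library):
    """template_library maps an entry (word) to its reference features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(template_library,
               key=lambda entry: dist(test_features, template_library[entry]))
```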
Of course, the particular speech recognition process does not affect the realization of the present application: whatever the speech recognition result, it can be presented on the intelligent terminal in the manner disclosed in this embodiment.

In addition, after the speech recognition result is obtained, the server translates the source-language speech recognition result to obtain the text translation result in the target language. For example, if the speech recognition result is the Chinese greeting to the audience and the target language is English, the translated text translation result is "hello everyone".
Specifically, the automatic translation can be realized with a translation method based on a deep neural network, or with a translation method based on statistics. Of course, those skilled in the art can also use other translation methods. For example, a dictionary comparison table between the source language and the target language can be established in advance, in which a large number of Chinese words, phrases or sentences are saved together with their corresponding English translation results; after speech recognition, each speech recognition result is matched in the dictionary comparison table, and the matched target-language translation result is taken as the text translation result.
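The dictionary comparison table alternative can be sketched as a plain look-up; the table entries below are invented examples, not data from the patent.

```python
# A minimal sketch of the dictionary comparison table: source-language
# words or sentences are stored against their target-language
# translations, and recognition results are matched against the table.
# The entries are illustrative only.

COMPARISON_TABLE = {
    "da jia hao": "hello everyone",
    "gou mai ping guo": "buy apple",
}

def translate(recognition_result, table=COMPARISON_TABLE):
    # Return the matched target-language translation result, or None
    # when the table contains no match for this recognition result.
    return table.get(recognition_result)
```

In practice such a table-driven fallback only covers stored phrases, which is why the neural or statistical methods above are the primary options.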
Step 503: Send the speech recognition result and/or the text translation result to the intelligent terminal for presentation.

In this step, the server can determine, according to the translation type reported by the intelligent terminal, whether to send the speech recognition result, the text translation result, or both to the intelligent terminal; alternatively, if the intelligent terminal has not sent a translation type, the server sends the speech recognition result and/or the text translation result to the intelligent terminal according to a default setting.
In this embodiment, the server first determines the target language according to the language sent by the intelligent terminal, and then sends the speech recognition result and/or text translation result of the simultaneous interpretation to the intelligent terminal for presentation, where the speech recognition result and the text translation result can be presented individually or together. This makes it convenient for the user to view the speech recognition result and/or the text translation result more clearly on the simultaneous interpretation page of the intelligent terminal, without each user having to be equipped with a dedicated simultaneous interpretation terminal, thereby saving the cost of simultaneous interpretation terminals.
The foregoing method embodiments are, for simplicity of description, each expressed as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application certain steps can be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
Corresponding to the method provided by the above embodiment of a simultaneous interpretation result presentation method of the present application, and referring to Fig. 6, the present application also provides an embodiment of a simultaneous interpretation result presentation apparatus. In this embodiment, the apparatus can be integrated on an intelligent terminal and may include:

a determination unit 601 for determining the target language of the simultaneous interpretation.

The determination unit 601 can specifically be used to determine, in response to the user selecting a language on the simultaneous interpretation page provided by the intelligent terminal, the selected language as the target language of the simultaneous interpretation.
an acquisition unit 602 for obtaining from the server, in response to source-language speech being triggered, the speech recognition result of the speech, and/or the text translation result in the target language, the text translation result being the translation of the speech recognition result.

a presentation unit 603 for presenting the speech recognition result and/or the text translation result.

The presentation unit 603 may include a presentation subunit for presenting the speech recognition result and/or the text translation result on the simultaneous interpretation page according to the flag bits returned by the server, a flag bit identifying whether a sentence of the corresponding source-language speech has ended.
Wherein, the subelement that shows may include in the first embodiment there are many embodiment:Flag bit
Judgment module, voice frame display module and voice frame generation module, wherein:
The flag bit judgment module, the flag bit for judging whether to receive server transmission.
The voice frame showing module, for showing, when the judging result of the flag bit judgment module is no, the current speech recognition result received between the last reception of the flag bit and the current moment in real time in a first voice showing frame of the simultaneous interpretation page.
The voice frame generation module, for generating, when the judging result of the flag bit judgment module is yes, i.e., when the flag bit sent by the server is received, a second voice showing frame on the simultaneous interpretation page, the second voice showing frame being used for showing the next speech recognition result.
Wherein, the showing subunit may also include a phonetic decision module and a voice replacement module. The phonetic decision module is used for judging, in response to the server sending an updated speech recognition result, whether the updated speech recognition result is identical to the current speech recognition result; the voice replacement module is used for replacing, when the result of the phonetic decision module is no, the current speech recognition result shown in the first voice showing frame with the updated speech recognition result.
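The frame logic above can be sketched in code. This is an illustrative reconstruction under assumptions, not the patent's implementation: the class name `DisplayFrames`, the method `on_server_message`, and the use of plain strings for showing frames are all invented for the sketch.

```python
class DisplayFrames:
    """Maintains per-sentence showing frames driven by the server's flag bit."""

    def __init__(self):
        self.frames = [""]  # the first voice showing frame starts empty

    def on_server_message(self, text, sentence_ended):
        # Refresh the current frame in real time with the (possibly updated)
        # recognition result accumulated since the last flag bit; replace the
        # shown text only when the update actually differs.
        if self.frames[-1] != text:
            self.frames[-1] = text
        if sentence_ended:
            # Flag bit received: the sentence is finished, so generate a
            # second showing frame for the next speech recognition result.
            self.frames.append("")
```

A typical stream of partial results would grow the current frame in place and open a new frame only at each sentence boundary.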
In the second implementation, the showing subunit may include: a flag bit judgment module, a textbox showing module and a textbox generation module, wherein:
The flag bit judgment module, for judging whether the flag bit sent by the server is received. The textbox showing module, for showing, when the judging result of the flag bit judgment module is no, the current character translation result received between the last reception of the flag bit and the current moment in real time in a first word showing frame of the simultaneous interpretation page. The textbox generation module, for generating, when the judging result of the flag bit judgment module is yes, a second word showing frame on the simultaneous interpretation page, the second word showing frame being used for showing the next character translation result.
Wherein, the showing subunit may also include a word judgment module and a word replacement module. The word judgment module is used for judging, in response to the server sending an updated character translation result, whether the updated character translation result is identical to the current character translation result; the word replacement module is used for replacing, in the case where the result of the word judgment module is no, the current character translation result shown in the first word showing frame with the updated character translation result.
In the third implementation, the showing subunit may include: a flag bit judgment module, for judging whether the flag bit sent by the server is received; a first trigger module, for triggering, in the case where the result of the flag bit judgment module is no, both the voice frame showing module and the textbox showing module; and a second trigger module, for triggering, in the case where the result of the flag bit judgment module is yes, both the voice frame generation module and the textbox generation module.
Wherein, the showing subunit may also include: a third trigger module, for triggering the phonetic decision module and the word judgment module; a fourth trigger module, for triggering the voice replacement module in the case where the speech recognition result differs and the character translation result is identical; a fifth trigger module, for triggering the word replacement module in the case where the character translation result differs and the speech recognition result is identical; and a sixth trigger module, for triggering both the voice replacement module and the word replacement module in the case where both the character translation result and the speech recognition result differ.
Wherein, the presentation apparatus may also include: a slide response unit, for showing, in response to a slide operation being triggered on the simultaneous interpretation page, the speech recognition result in each voice showing frame, and/or the character translation result in each word showing frame, in sequence according to the direction of the slide operation.
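The slide response unit can be sketched as a simple scroll over the list of showing frames. The function name, the integer `offset`, and the fixed `window` size are illustrative assumptions, not details given in the patent.

```python
def visible_frames(frames, offset, window=3):
    """Return the showing frames visible after sliding by `offset` positions,
    clamped so the view never scrolls past either end of the frame list."""
    start = max(0, min(len(frames) - window, offset))
    return frames[start:start + window]
```

Sliding forward increases the offset, sliding back decreases it; the clamp keeps the sequence of frames intact at both ends.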
Wherein, the presentation apparatus may also include: a broadcast unit, for playing, in response to the character translation result being triggered, the voice of the object language corresponding to the character translation result.
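A minimal sketch of the broadcast unit: tapping a character translation result plays object-language speech for it. The hooks `synthesize` and `play` are assumed text-to-speech and audio-output callbacks supplied by the terminal; neither is an API named in the patent.

```python
def on_translation_tapped(text, synthesize, play):
    """Play the object-language voice for a triggered character translation result."""
    audio = synthesize(text)  # convert the character translation result to speech
    play(audio)               # play the resulting object-language voice
    return audio
```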
As it can be seen that in the embodiment of the present application, intelligent terminal, can be in the simultaneous interpretation page after being connected to server
On, directly the voice recognition result that simultaneous interpretation is got from server and/or character translation result are showed, wherein
Voice recognition result and character translation result can individually be showed and can also be showed simultaneously, user can be facilitated at intelligent end
It is clearer on the simultaneous interpretation page on end to check voice recognition result and/or character translation as a result, also each use
Family is all equipped with distinctive simultaneous interpretation terminal, plays the purpose for the cost for saving simultaneous interpretation terminal.
Referring to Fig. 7, the present application also provides an embodiment of a simultaneous interpretation apparatus. In this embodiment, the apparatus may be integrated on a server, and the apparatus may include: a determination unit 701, a voice recognition unit 702, a translation unit 703 and a transmission unit 704. Wherein,
The determination unit 701, for determining, in response to the intelligent terminal sending a selected language, the selected language as the object language of the simultaneous interpretation.
The voice recognition unit 702, for recognizing, in response to triggering of the voice of the source language, the voice of the source language to obtain a speech recognition result.
The translation unit 703, for translating the speech recognition result to obtain the character translation result of the object language.
The transmission unit 704, for sending the speech recognition result and/or the character translation result to the intelligent terminal for showing.
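The server-side units above form a simple pipeline, which can be sketched as follows. The recognizer, translator and transport are placeholder callables standing in for units 702, 703 and 704; their names and signatures are assumptions for illustration, not the patent's actual interfaces.

```python
def interpret(audio_chunk, recognize, translate, send, target_language):
    """Recognize source-language speech, translate it, and push both results
    to the intelligent terminal for showing."""
    recognition = recognize(audio_chunk)                    # voice recognition unit 702
    translation = translate(recognition, target_language)   # translation unit 703
    send({"recognition": recognition, "translation": translation})  # transmission unit 704
    return recognition, translation
```

In a deployment, `send` would serialize the results over the server's connection to the terminal; here it is any callback accepting the combined payload.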
The simultaneous interpretation apparatus on the server in the embodiments of the present application can first determine the object language according to the language sent by the intelligent terminal, and then send the speech recognition result and/or the character translation result of the simultaneous interpretation to the intelligent terminal for showing, where the speech recognition result and the character translation result can be shown either separately or simultaneously. This allows the user to view the speech recognition result and/or the character translation result more clearly on the simultaneous interpretation page of the intelligent terminal, without requiring each user to be equipped with a dedicated simultaneous interpretation terminal, thereby saving the cost of simultaneous interpretation terminals.
Fig. 8 is a block diagram of an apparatus 800 for showing simultaneous interpretation results according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to Fig. 8, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communication, camera operations and recording operations. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the apparatus 800. Examples of such data include instructions for any application or method operated on the apparatus 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk.
The power component 806 provides power to the various components of the apparatus 800. The power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing and distributing power for the apparatus 800.
The multimedia component 808 includes a screen providing an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 800 is in an operation mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signal may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor component 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components (for example, the display and the keypad of the apparatus 800), and the sensor component 814 may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of contact between the user and the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In exemplary embodiments, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the above methods.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, executable by the processor 820 of the apparatus 800 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium, wherein when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a method for showing a simultaneous interpretation result, the method including: determining the object language of the simultaneous interpretation; obtaining from the server the speech recognition result of the voice of the source language, and/or the character translation result of the object language, the character translation result being the translation result of the speech recognition result; and showing the speech recognition result and/or the character translation result.
Wherein, determining the object language of the simultaneous interpretation may include: in response to the user selecting a language on the simultaneous interpretation page provided by the intelligent terminal, determining the selected language as the object language of the simultaneous interpretation.
Wherein, showing the speech recognition result and/or the character translation result may include: showing the speech recognition result and/or the character translation result on the simultaneous interpretation page according to the flag bit returned by the server, the flag bit being used to identify whether a sentence of the voice of the corresponding source language has ended.
Wherein, showing the speech recognition result on the simultaneous interpretation page according to the flag bit returned by the server may include: judging whether the flag bit sent by the server is received; if not, showing the speech recognition result received between the last reception of the flag bit and the current moment in real time in the first voice showing frame of the simultaneous interpretation page; if yes, generating a second voice showing frame on the simultaneous interpretation page, the second voice showing frame being used for showing the next speech recognition result.
Wherein, in the case where the flag bit sent by the server is not received, after the speech recognition result received between the last reception of the flag bit and the current moment is shown in real time in the first voice showing frame of the simultaneous interpretation page, the apparatus 800 may also be configured such that the one or more processors execute the one or more programs including instructions for: judging, in response to the server sending an updated speech recognition result, whether the updated speech recognition result is identical to the current speech recognition result; if not, replacing the current speech recognition result shown in the first voice showing frame with the updated speech recognition result.
Wherein, showing the character translation result according to the flag bit returned by the server may include: judging whether the flag bit sent by the server is received; if not, showing the current character translation result received between the last reception of the flag bit and the current moment in real time in the first word showing frame of the simultaneous interpretation page; if yes, generating a second word showing frame on the simultaneous interpretation page, the second word showing frame being used for showing the next character translation result.
Wherein, in the case where the flag bit sent by the server is not received, after the current character translation result received between the last reception of the flag bit and the current moment is shown in real time in the first word showing frame of the simultaneous interpretation page, the apparatus 800 may also be configured such that the one or more processors execute the one or more programs including instructions for: judging, in response to the server sending an updated character translation result, whether the updated character translation result is identical to the current character translation result; if not, replacing the current character translation result shown in the first word showing frame with the updated character translation result.
Wherein, showing the speech recognition result and the character translation result according to the flag bit returned by the server may include: judging whether the flag bit sent by the server is received; if not, showing the current speech recognition result received between the last reception of the flag bit and the current moment in the first voice showing frame of the simultaneous interpretation page, and showing the current character translation result corresponding to the current speech recognition result in the first word showing frame corresponding to the first voice showing frame; if yes, generating on the simultaneous interpretation page a second voice showing frame and a second word showing frame, the second voice showing frame being used for showing the next speech recognition result, and the second word showing frame being used for showing the next character translation result.
Wherein, in the case where the flag bit sent by the server is not received, after the current speech recognition result received between the last reception of the flag bit and the current moment is shown in the first voice showing frame of the simultaneous interpretation page, and the current character translation result corresponding to the current speech recognition result is shown in the first word showing frame corresponding to the first voice showing frame, the apparatus 800 may also be configured such that the one or more processors execute the one or more programs including instructions for: judging, in response to the server sending an updated speech recognition result and an updated character translation result, whether the updated speech recognition result is identical to the current speech recognition result, and whether the updated character translation result is identical to the current character translation result; if the speech recognition result differs and the character translation result is identical, replacing the current speech recognition result shown in the first voice showing frame with the updated speech recognition result; if the character translation result differs and the speech recognition result is identical, replacing the current character translation result shown in the first word showing frame with the updated character translation result; if both the character translation result and the speech recognition result differ, replacing the current speech recognition result shown in the first voice showing frame with the updated speech recognition result, and replacing the current character translation result shown in the first word showing frame with the updated character translation result.
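The three replacement branches above reduce to replacing only what changed. The sketch below is an illustrative reconstruction: frame contents are plain strings rather than UI widgets, and the function name is invented.

```python
def apply_update(voice_frame, word_frame, new_voice, new_text):
    """Replace only the frame content(s) that actually differ from the update,
    covering all three branches (speech differs, text differs, both differ)."""
    if new_voice != voice_frame:
        voice_frame = new_voice  # the updated speech recognition result differs
    if new_text != word_frame:
        word_frame = new_text    # the updated character translation result differs
    return voice_frame, word_frame
```

When neither value differs, both frames are returned unchanged, which matches the case where no replacement module is triggered.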
Wherein, the apparatus 800 may also be configured such that the one or more processors execute the one or more programs including instructions for: showing, in response to a slide operation being triggered on the simultaneous interpretation page, the speech recognition result in each voice showing frame, and/or the character translation result in each word showing frame, in sequence according to the direction of the slide operation.
Wherein, the apparatus 800 may also be configured such that the one or more processors execute the one or more programs including instructions for: playing, in response to the character translation result being triggered, the voice of the object language corresponding to the character translation result.
Fig. 9 is a schematic structural diagram of an apparatus for simultaneous interpretation in an embodiment of the present invention. The apparatus 1900 for simultaneous interpretation may be a server, which may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 1922 (for example, one or more processors), a memory 1932, and one or more storage media 1930 (such as one or more mass storage devices) storing application programs 1942 or data 1944. The memory 1932 and the storage media 1930 may provide transient or persistent storage. The programs stored in the storage media 1930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Further, the central processing unit 1922 may be configured to communicate with the storage media 1930 and to execute, on the apparatus 1900 for simultaneous interpretation, the series of instruction operations in the storage media 1930.
The series of instruction operations are, for example: in response to the intelligent terminal sending a selected language, determining the selected language as the object language of the simultaneous interpretation; in response to triggering of the voice of the source language, recognizing the voice of the source language to obtain a speech recognition result, and/or translating the speech recognition result to obtain the character translation result of the object language; and sending the speech recognition result and/or the character translation result to the intelligent terminal for showing.
The apparatus 1900 for simultaneous interpretation may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server(TM), Mac OS X(TM), Unix(TM), Linux(TM), FreeBSD(TM), and the like.
Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses or adaptations of the present invention that follow the general principles of the present invention and include common knowledge or conventional technical means in the art not disclosed in the present disclosure. The description and the embodiments are to be regarded as exemplary only, with the true scope and spirit of the present invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. The above are merely preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (16)
1. A method for showing a simultaneous interpretation result, characterized in that the method is applied on an intelligent terminal and includes:
determining the object language of the simultaneous interpretation;
obtaining from a server the speech recognition result of the voice of the source language, and/or the character translation result of the object language, the character translation result being the translation result of the speech recognition result; and
showing the speech recognition result and/or the character translation result.
2. The method according to claim 1, characterized in that determining the object language of the simultaneous interpretation includes:
in response to the user selecting a language on the simultaneous interpretation page provided by the intelligent terminal, determining the selected language as the object language of the simultaneous interpretation.
3. The method according to claim 1, characterized in that showing the speech recognition result and/or the character translation result includes:
showing the speech recognition result and/or the character translation result on the simultaneous interpretation page according to the flag bit returned by the server, the flag bit being used to identify whether a sentence of the voice of the corresponding source language has ended.
4. The method according to claim 3, characterized in that showing the speech recognition result on the simultaneous interpretation page according to the flag bit returned by the server includes:
judging whether the flag bit sent by the server is received; if not, showing the speech recognition result received between the last reception of the flag bit and the current moment in real time in the first voice showing frame of the simultaneous interpretation page;
if yes, generating a second voice showing frame on the simultaneous interpretation page, the second voice showing frame being used for showing the next speech recognition result.
5. The method according to claim 4, characterized in that, in the case where the flag bit sent by the server is not received, the method further includes:
judging, in response to the server sending an updated speech recognition result, whether the updated speech recognition result is identical to the current speech recognition result; if not, replacing the current speech recognition result shown in the first voice showing frame with the updated speech recognition result.
6. The method according to claim 3, characterized in that showing the character translation result according to the flag bit returned by the server includes:
judging whether the flag bit sent by the server is received; if not, showing the current character translation result received between the last reception of the flag bit and the current moment in real time in the first word showing frame of the simultaneous interpretation page;
if yes, generating a second word showing frame on the simultaneous interpretation page, the second word showing frame being used for showing the next character translation result.
7. The method according to claim 6, characterized in that, in the case where the flag bit sent by the server is not received, the method further includes:
judging, in response to the server sending an updated character translation result, whether the updated character translation result is identical to the current character translation result; if not, replacing the current character translation result shown in the first word showing frame with the updated character translation result.
8. The method according to claim 3, characterized in that showing the speech recognition result and the character translation result according to the flag bit returned by the server includes:
judging whether the flag bit sent by the server is received; if not, showing the current speech recognition result received between the last reception of the flag bit and the current moment in the first voice showing frame of the simultaneous interpretation page, and showing the current character translation result corresponding to the current speech recognition result in the first word showing frame corresponding to the first voice showing frame;
if yes, generating on the simultaneous interpretation page a second voice showing frame and a second word showing frame, the second voice showing frame being used for showing the next speech recognition result, and the second word showing frame being used for showing the next character translation result.
9. The method according to claim 8, characterized in that, in the case where the flag bit sent by the server is not received, the method further includes:
judging, in response to the server sending an updated speech recognition result and an updated character translation result, whether the updated speech recognition result is identical to the current speech recognition result, and whether the updated character translation result is identical to the current character translation result;
if the speech recognition result differs and the character translation result is identical, replacing the current speech recognition result shown in the first voice showing frame with the updated speech recognition result;
if the character translation result differs and the speech recognition result is identical, replacing the current character translation result shown in the first word showing frame with the updated character translation result;
if both the character translation result and the speech recognition result differ, replacing the current speech recognition result shown in the first voice showing frame with the updated speech recognition result, and replacing the current character translation result shown in the first word showing frame with the updated character translation result.
10. The method according to any one of claims 2 to 9, characterized by further including:
showing, in response to a slide operation being triggered on the simultaneous interpretation page, the speech recognition result in each voice showing frame, and/or the character translation result in each word showing frame, in sequence according to the direction of the slide operation.
11. The method according to any one of claims 2 to 9, characterized by further including:
playing, in response to the character translation result being triggered, the voice of the object language corresponding to the character translation result.
12. A simultaneous interpreting method, applied on a server, the method comprising:
in response to an intelligent terminal sending a selected language, determining the selected language as the target language of simultaneous interpretation;
in response to a voice of a source language being triggered, recognizing the voice of the source language to obtain a voice recognition result, and/or translating the voice recognition result to obtain a character translation result in the target language;
sending the voice recognition result and/or the character translation result to the intelligent terminal for showing.
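The server-side flow of claim 12 can be sketched as a short pipeline: fix the target language from the terminal's selection, recognize the source-language voice, translate the recognition result, and return both. All function names are placeholders introduced for illustration; a real system would call actual ASR and MT engines rather than these stubs.

```python
# Hypothetical sketch of the claimed server-side pipeline. The recognize()
# and translate() stubs stand in for real speech-recognition and
# machine-translation services; they are not APIs named in the patent.

def recognize(audio: bytes) -> str:
    """Placeholder speech recognizer (a real server would call an ASR engine)."""
    return "hello everyone"

def translate(text: str, target_language: str) -> str:
    """Placeholder translator (a real server would call an MT engine)."""
    lexicon = {("hello everyone", "zh"): "大家好"}
    return lexicon.get((text, target_language), text)

def handle_interpretation(audio: bytes, selected_language: str) -> dict:
    target_language = selected_language                     # determine target language
    recognition = recognize(audio)                          # speech recognition
    translation = translate(recognition, target_language)   # translate recognition result
    # Payload sent back to the intelligent terminal for showing.
    return {"recognition": recognition, "translation": translation}
```

The "and/or" in the claim means a deployment may return only the recognition result, only the translation, or both; the sketch returns both for simplicity.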
13. A showing device for simultaneous interpretation results, the showing device being integrated on an intelligent terminal and comprising:
a determination unit, configured to determine the target language of simultaneous interpretation;
an acquiring unit, configured to obtain from a server the voice recognition result of a voice of a source language, and/or the character translation result in the target language, the character translation result being a translation of the voice recognition result;
a showing unit, configured to show the voice recognition result and/or the character translation result.
14. A simultaneous interpretation device, integrated on a server, the device comprising:
a determination unit, configured to determine, in response to an intelligent terminal sending a selected language, the selected language as the target language of simultaneous interpretation;
a voice recognition unit, configured to recognize, in response to a voice of a source language being triggered, the voice of the source language to obtain a voice recognition result;
a translation unit, configured to translate the voice recognition result to obtain the character translation result in the target language;
a transmission unit, configured to send the voice recognition result and/or the character translation result to the intelligent terminal for showing.
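The four units of claim 14 map naturally onto separate components. The following sketch uses the claim's unit names but invents the method signatures and bodies, which are illustrative stand-ins only.

```python
# Hypothetical decomposition of claim 14's server-side device into four units.
# Unit names follow the claim; everything else is an assumption made here.

class DeterminationUnit:
    def determine_target(self, selected_language: str) -> str:
        return selected_language  # the terminal's selection becomes the target language

class VoiceRecognitionUnit:
    def recognize(self, audio: bytes) -> str:
        return "good morning"  # placeholder recognition result

class TranslationUnit:
    def translate(self, text: str, target_language: str) -> str:
        return f"[{target_language}] {text}"  # placeholder translation

class TransmissionUnit:
    def send(self, recognition: str, translation: str) -> dict:
        # In the claim this transmits to the intelligent terminal; here we
        # simply return the payload that would be sent.
        return {"recognition": recognition, "translation": translation}
```

Splitting the device this way mirrors the claim structure: each unit has a single responsibility, so recognition-only or translation-only configurations (the claim's "and/or") can omit a unit without touching the others.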
15. A device for showing simultaneous interpretation results, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
determining the target language of simultaneous interpretation;
obtaining from a server the voice recognition result of a voice of a source language, and/or the character translation result in the target language, the character translation result being a translation of the voice recognition result;
showing the voice recognition result and/or the character translation result.
16. A device for simultaneous interpretation, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more central processors, the one or more programs including instructions for:
in response to an intelligent terminal sending a selected language, determining the selected language as the target language of simultaneous interpretation;
in response to a voice of a source language being triggered, recognizing the voice of the source language to obtain a voice recognition result, and/or translating the voice recognition result to obtain the character translation result in the target language;
sending the voice recognition result and/or the character translation result to the intelligent terminal for showing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710129317.9A CN108538284A (en) | 2017-03-06 | 2017-03-06 | Simultaneous interpretation result shows method and device, simultaneous interpreting method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710129317.9A CN108538284A (en) | 2017-03-06 | 2017-03-06 | Simultaneous interpretation result shows method and device, simultaneous interpreting method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108538284A true CN108538284A (en) | 2018-09-14 |
Family
ID=63489757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710129317.9A Pending CN108538284A (en) | 2017-03-06 | 2017-03-06 | Simultaneous interpretation result shows method and device, simultaneous interpreting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108538284A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299737A (en) * | 2018-09-19 | 2019-02-01 | 语联网(武汉)信息技术有限公司 | Choosing method, device and the electronic equipment of interpreter's gene |
CN109637541A (en) * | 2018-12-29 | 2019-04-16 | 联想(北京)有限公司 | The method and electronic equipment of voice conversion text |
CN109686363A (en) * | 2019-02-26 | 2019-04-26 | 深圳市合言信息科技有限公司 | A kind of on-the-spot meeting artificial intelligence simultaneous interpretation equipment |
CN110969028A (en) * | 2018-09-28 | 2020-04-07 | 百度(美国)有限责任公司 | System and method for synchronous translation |
CN111178086A (en) * | 2019-12-19 | 2020-05-19 | 北京搜狗科技发展有限公司 | Data processing method, apparatus and medium |
CN111399950A (en) * | 2018-12-28 | 2020-07-10 | 北京搜狗科技发展有限公司 | Voice input interface management method and device and voice input equipment |
CN111711853A (en) * | 2020-06-09 | 2020-09-25 | 北京字节跳动网络技术有限公司 | Information processing method, system, device, electronic equipment and storage medium |
CN113628626A (en) * | 2020-05-09 | 2021-11-09 | 阿里巴巴集团控股有限公司 | Speech recognition method, device and system and translation method and system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1991975A (en) * | 2005-12-26 | 2007-07-04 | 佳能株式会社 | Voice information processing apparatus and voice information processing method |
CN101116304A (en) * | 2005-02-04 | 2008-01-30 | 法国电信公司 | Method of transmitting end-of-speech marks in a speech recognition system |
US20080077392A1 (en) * | 2006-09-26 | 2008-03-27 | Kabushiki Kaisha Toshiba | Method, apparatus, system, and computer program product for machine translation |
CN101211335A (en) * | 2006-12-27 | 2008-07-02 | 乐金电子(中国)研究开发中心有限公司 | Mobile communication terminal with translation function, translation system and translation method |
CN102467908A (en) * | 2010-11-17 | 2012-05-23 | 英业达股份有限公司 | Multilingual voice control system and method thereof |
CN103226947A (en) * | 2013-03-27 | 2013-07-31 | 广东欧珀移动通信有限公司 | Mobile terminal-based audio processing method and device |
CN103299361A (en) * | 2010-08-05 | 2013-09-11 | 谷歌公司 | Translating languages |
CN103514153A (en) * | 2012-06-29 | 2014-01-15 | 株式会社东芝 | Speech translation apparatus, method and program |
US20150134320A1 (en) * | 2013-11-14 | 2015-05-14 | At&T Intellectual Property I, L.P. | System and method for translating real-time speech using segmentation based on conjunction locations |
- 2017-03-06: Application CN201710129317.9A filed in China; published as CN108538284A; status: active, Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101116304A (en) * | 2005-02-04 | 2008-01-30 | 法国电信公司 | Method of transmitting end-of-speech marks in a speech recognition system |
CN1991975A (en) * | 2005-12-26 | 2007-07-04 | 佳能株式会社 | Voice information processing apparatus and voice information processing method |
US20080077392A1 (en) * | 2006-09-26 | 2008-03-27 | Kabushiki Kaisha Toshiba | Method, apparatus, system, and computer program product for machine translation |
CN101211335A (en) * | 2006-12-27 | 2008-07-02 | 乐金电子(中国)研究开发中心有限公司 | Mobile communication terminal with translation function, translation system and translation method |
CN103299361A (en) * | 2010-08-05 | 2013-09-11 | 谷歌公司 | Translating languages |
CN102467908A (en) * | 2010-11-17 | 2012-05-23 | 英业达股份有限公司 | Multilingual voice control system and method thereof |
CN103514153A (en) * | 2012-06-29 | 2014-01-15 | 株式会社东芝 | Speech translation apparatus, method and program |
CN103226947A (en) * | 2013-03-27 | 2013-07-31 | 广东欧珀移动通信有限公司 | Mobile terminal-based audio processing method and device |
US20150134320A1 (en) * | 2013-11-14 | 2015-05-14 | At&T Intellectual Property I, L.P. | System and method for translating real-time speech using segmentation based on conjunction locations |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299737B (en) * | 2018-09-19 | 2021-10-26 | 语联网(武汉)信息技术有限公司 | Translator gene selection method and device and electronic equipment |
CN109299737A (en) * | 2018-09-19 | 2019-02-01 | 语联网(武汉)信息技术有限公司 | Choosing method, device and the electronic equipment of interpreter's gene |
CN110969028B (en) * | 2018-09-28 | 2023-09-26 | 百度(美国)有限责任公司 | System and method for synchronous translation |
CN110969028A (en) * | 2018-09-28 | 2020-04-07 | 百度(美国)有限责任公司 | System and method for synchronous translation |
CN111399950A (en) * | 2018-12-28 | 2020-07-10 | 北京搜狗科技发展有限公司 | Voice input interface management method and device and voice input equipment |
CN109637541A (en) * | 2018-12-29 | 2019-04-16 | 联想(北京)有限公司 | The method and electronic equipment of voice conversion text |
CN109637541B (en) * | 2018-12-29 | 2021-08-17 | 联想(北京)有限公司 | Method and electronic equipment for converting words by voice |
CN109686363A (en) * | 2019-02-26 | 2019-04-26 | 深圳市合言信息科技有限公司 | A kind of on-the-spot meeting artificial intelligence simultaneous interpretation equipment |
CN111178086A (en) * | 2019-12-19 | 2020-05-19 | 北京搜狗科技发展有限公司 | Data processing method, apparatus and medium |
CN111178086B (en) * | 2019-12-19 | 2024-05-17 | 北京搜狗科技发展有限公司 | Data processing method, device and medium |
CN113628626A (en) * | 2020-05-09 | 2021-11-09 | 阿里巴巴集团控股有限公司 | Speech recognition method, device and system and translation method and system |
CN111711853B (en) * | 2020-06-09 | 2022-02-01 | 北京字节跳动网络技术有限公司 | Information processing method, system, device, electronic equipment and storage medium |
US11900945B2 (en) | 2020-06-09 | 2024-02-13 | Beijing Bytedance Network Technology Co., Ltd. | Information processing method, system, apparatus, electronic device and storage medium |
CN111711853A (en) * | 2020-06-09 | 2020-09-25 | 北京字节跳动网络技术有限公司 | Information processing method, system, device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108538284A (en) | Simultaneous interpretation result shows method and device, simultaneous interpreting method and device | |
CN110634483B (en) | Man-machine interaction method and device, electronic equipment and storage medium | |
JP6618223B2 (en) | Audio processing method and apparatus | |
CN106202150B (en) | Information display method and device | |
KR20160014465A (en) | electronic device for speech recognition and method thereof | |
CN105224601B (en) | A kind of method and apparatus of extracting time information | |
CN107992485A (en) | A kind of simultaneous interpretation method and device | |
CN104394265A (en) | Automatic session method and device based on mobile intelligent terminal | |
CN108073572A (en) | Information processing method and its device, simultaneous interpretation system | |
CN108345667A (en) | A kind of searching method and relevant apparatus | |
CN108509412A (en) | A kind of data processing method, device, electronic equipment and storage medium | |
WO2021208531A1 (en) | Speech processing method and apparatus, and electronic device | |
CN107870904A (en) | A kind of interpretation method, device and the device for translation | |
CN109977426A (en) | A kind of training method of translation model, device and machine readable media | |
WO2020240838A1 (en) | Conversation control program, conversation control method, and information processing device | |
CN105550235A (en) | Information acquisition method and information acquisition apparatuses | |
CN108255940A (en) | A kind of cross-language search method and apparatus, a kind of device for cross-language search | |
WO2019101099A1 (en) | Video program identification method and device, terminal, system, and storage medium | |
WO2023000891A1 (en) | Data processing method and apparatus, and computer device and storage medium | |
CN1937002A (en) | Intelligent man-machine dialogue system and realizing method | |
US11354520B2 (en) | Data processing method and apparatus providing translation based on acoustic model, and storage medium | |
CN109388699A (en) | Input method, device, equipment and storage medium | |
CN112133295B (en) | Speech recognition method, device and storage medium | |
CN113220590A (en) | Automatic testing method, device, equipment and medium for voice interaction application | |
CN105302335B (en) | Vocabulary recommends method and apparatus and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||