CN111200684A - Language translation terminal and method - Google Patents
Language translation terminal and method
- Publication number
- CN111200684A (application CN201811284657.XA)
- Authority
- CN
- China
- Prior art keywords
- information
- voice
- translation
- character
- voice information
- Prior art date
Classifications
- H04M1/72433
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
Description
Technical Field
The invention relates to the technical field of communication and translation, in particular to a language translation terminal and a language translation method.
Background
Translation converts content in one language into another language. At present, translation is generally performed manually, but manual translation is severely limited and expensive.
Because existing translation relies on manual work, it is costly and difficult to popularize.
Disclosure of Invention
Embodiments of the invention provide a language translation terminal and a language translation method, which realize machine translation, reduce cost and improve user experience.
In a first aspect, an embodiment of the present invention provides a language translation method, the method comprising the following steps:
the smart phone receives first voice information;
the smart phone converts the first voice information into first text information, sends the first text information to a translator to be translated into second text information, and converts the second text information into second voice information;
the smart phone plays the second voice information through a sound-emitting device.
Optionally, the converting the first voice information into the first text information specifically includes:
inputting the first voice information into a natural language recognition algorithm for text conversion to obtain the first text information.
Optionally, the sending the first text information to the translator for translation into the second text information specifically includes:
calling a translation website and translating the first text information, as the input of the translation website, into second text information of a set language type.
Optionally, the method further includes:
the smart phone collects reply information for the second voice information and performs relevance identification on the reply information and the second voice information; if the reply information is determined to be relevant to the second voice information, the translation website is not replaced, and if the reply information is determined not to be relevant to the second voice information, the translation website is replaced.
In a second aspect, a smart phone is provided, the smart phone comprising: a processor, a communication unit and a sound-emitting device,
the communication unit is used for receiving first voice information;
the processor is used for converting the first voice information into first text information, sending the first text information to the translator for translation into second text information, and converting the second text information into second voice information; and for controlling the sound-emitting device to play the second voice information.
Optionally, the processor is specifically configured to input the first voice information into a natural language recognition algorithm to perform text conversion to obtain the first text information.
Optionally, the processor is specifically configured to invoke a translation website, and translate the first text information into second text information of a set language type as an input of the translation website.
Optionally, the processor is further configured to collect reply information for the second voice information and perform relevance recognition on the reply information and the second voice information; if it is determined that the reply information is relevant to the second voice information, the translation website is not replaced, and if it is determined that the reply information is not relevant to the second voice information, the translation website is replaced.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
It can be seen that, when the first voice information is obtained, it is converted into first text information, the first text information is then translated by the translation website to obtain second text information, and the second text information is converted (for example, by software such as Siri or Baidu voice) into second voice information and played. Voice translation is thus realized easily, manual translation is not needed, and the cost is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a terminal.
FIG. 2 is a flow diagram of a method of language translation.
Fig. 3 is a schematic structural diagram of a smart phone according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal, as shown in fig. 1, the terminal includes: processor 101, display screen 105, communication module 102, memory 103 and speaker 104.
The processor 101 may specifically include: a multi-core processor.
Optionally, the processor 101 may further integrate a neural network processing chip. The neural network processing chip can carry a memory for data storage.
Referring to fig. 2, fig. 2 provides a language translation method. The method is executed by the terminal shown in fig. 1, which may specifically be a mobile phone, and includes the following steps:
step S201, the smart phone receives first voice information;
the terminal can be an intelligent device such as a mobile phone, a tablet computer, a PDA and the like, and certainly can also be a non-intelligent device such as an interphone and the like.
Optionally, the first voice information may be received through a communication connection or collected through an audio collector; this application does not limit the specific manner of receiving the first voice information.
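For illustration only, a minimal sketch of the audio-collector case is given below; the speech_recognition package and the use of the default microphone are assumptions of the illustration, not requirements of the embodiment.

```python
# Illustrative sketch only: collecting the first voice information with an
# audio collector (microphone). The speech_recognition package is an assumed
# stand-in and is not specified by this embodiment.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:               # audio collector
    recognizer.adjust_for_ambient_noise(source)
    first_voice = recognizer.listen(source)   # first voice information
```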
Step S202, the smart phone converts the first voice information into first text information, sends the first text information to the translator for translation into second text information, and converts the second text information into second voice information, where the second text information is text information of a different language type from the first text information, for example Chinese versus English, or Chinese versus French, and so on.
The first voice information may be converted into the first text information in various ways. For example, in an alternative embodiment, the first voice information is input into a natural language recognition algorithm for text conversion to obtain the first text information, where the natural language recognition algorithm specifically includes: Baidu speech, Apple Siri, Google speech, and so on.
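For illustration only, the following minimal sketch shows this optional text-conversion step; the speech_recognition package, the hosted recognize_google recognizer, the zh-CN language code and the file name first_voice.wav are assumptions of the illustration, standing in for the recognition algorithms named above.

```python
# Illustrative sketch only: converting the first voice information into the
# first text information with a hosted recognizer (an assumed stand-in for
# Baidu speech, Apple Siri, Google speech, and so on).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("first_voice.wav") as source:   # assumed recording of the first voice information
    first_voice = recognizer.record(source)

first_text = recognizer.recognize_google(first_voice, language="zh-CN")
print(first_text)                                 # first text information
```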
The sending the first text information to the translator for translation into the second text information may specifically include:
calling a translation website and translating the first text information, as the input of the translation website, into second text information of a set language type. The set language type may be a language type bound to the smart phone, such as Chinese, English, and the like.
The translation websites include, for example, Baidu Translate, Google Translate, and so on.
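For illustration only, a minimal sketch of calling a translation website over HTTP is given below; the endpoint URL, request fields, response field and the translate() helper are hypothetical placeholders rather than the API of any particular website named above.

```python
# Illustrative sketch only: "calling a translation website" with the first text
# information as input and the second text information of the set language
# type as output. All names below are placeholders, not a real site's API.
import requests

def translate(first_text: str, target_lang: str = "en") -> str:
    resp = requests.post(
        "https://translation.example.com/api/translate",  # placeholder endpoint
        json={"q": first_text, "target": target_lang},     # set language type
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["translatedText"]                   # second text information

second_text = translate("你好，很高兴认识你", target_lang="en")
```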
Step S203, the smart phone plays the second voice information through the sound-emitting device.
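For illustration only, a minimal sketch of converting the second text information into second voice information and playing it through the sound-emitting device is given below; the pyttsx3 package is an assumed offline stand-in for the speech-synthesis software mentioned in this description (e.g. Siri, Baidu voice).

```python
# Illustrative sketch only: second text information -> second voice information,
# played through the default speaker (the sound-emitting device).
import pyttsx3

second_text = "Hello, nice to meet you"   # second text information from the translation step
engine = pyttsx3.init()
engine.say(second_text)                   # second voice information
engine.runAndWait()                       # played through the sound-emitting device
```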
According to the technical scheme, when the first voice information is obtained, it is converted into first text information, the first text information is then translated by the translation website to obtain second text information, and the second text information is converted (for example, by software such as Siri or Baidu voice) into second voice information and played. Voice translation is thus realized easily, manual translation is not needed, and the cost is reduced.
Optionally, the method may further include:
the smart phone collects reply information for the second voice information and performs relevance identification on the reply information and the second voice information; if the reply information is determined to be relevant to the second voice information, the translation website is not replaced, and if the reply information is determined not to be relevant to the second voice information, the translation website is replaced.
The relevance identification can be performed with a neural network model; that is, the reply information and the second voice information are input into the neural network model, a forward operation is performed to obtain a forward operation result, and whether relevance exists is determined according to the forward operation result.
Determining whether relevance exists according to the forward operation result may specifically include:
if the forward operation result is a result matrix, calculating the average value of the non-zero elements of the result matrix and taking this average value as a threshold value, then determining the X positions whose elements are larger than the threshold value; if the number X of such positions exceeds a set proportion (for example, one half of the non-zero elements), the information is determined to be relevant, and otherwise the information is determined not to be relevant.
The principle of this scheme is that, first, the threshold value is variable, so the relevance decision is not distorted merely because the absolute values in the result matrix are small; and second, zero elements are excluded when computing the threshold value, which prevents the threshold value from being too small and too many positions from exceeding it, which would affect the accuracy of the relevance result.
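For illustration only, a minimal sketch of this relevance decision is given below, assuming the forward operation has already produced a result matrix; the 0.5 proportion and the use of the non-zero element count as the base of the proportion are assumptions mirroring the example in the text.

```python
# Illustrative sketch only: relevance decision on the result matrix of the
# forward operation, using the mean of the non-zero elements as a variable
# threshold.
import numpy as np

def is_relevant(result: np.ndarray, proportion: float = 0.5) -> bool:
    nonzero = result[result != 0]
    if nonzero.size == 0:
        return False                               # nothing to compare against
    threshold = nonzero.mean()                     # variable threshold: mean of the non-zero elements
    x = int(np.count_nonzero(result > threshold))  # number of positions above the threshold
    return x > proportion * nonzero.size

# Example: 3 of the 4 non-zero elements exceed their mean (0.65), so the reply
# information is judged relevant and the translation website is not replaced.
relevant = is_relevant(np.array([[0.0, 0.2, 0.9],
                                 [0.8, 0.0, 0.7]]))
```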
Referring to fig. 3, fig. 3 provides a smart phone including: a processor 401, a communication unit 402 and a sound emitting device 403 (speaker or earpiece),
the communication unit is used for receiving first voice information;
the processor is used for converting the first voice information into first text information, sending the first text information to the translator for translation into second text information, and converting the second text information into second voice information; and for controlling the sound-emitting device to play the second voice information.
Optionally, the processor is specifically configured to input the first voice information into a natural language recognition algorithm to perform text conversion to obtain the first text information.
Optionally, the processor is specifically configured to invoke a translation website, and translate the first text information into second text information of a set language type as an input of the translation website.
Optionally, the processor is further configured to collect reply information for the second voice information and perform relevance recognition on the reply information and the second voice information; if it is determined that the reply information is relevant to the second voice information, the translation website is not replaced, and if it is determined that the reply information is not relevant to the second voice information, the translation website is replaced.
An embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the language translation methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the language translation methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811284657.XA CN111200684A (en) | 2018-10-31 | 2018-10-31 | Language translation terminal and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811284657.XA CN111200684A (en) | 2018-10-31 | 2018-10-31 | Language translation terminal and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111200684A (en) | 2020-05-26 |
Family
ID=70747388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811284657.XA CN111200684A (en) | Language translation terminal and method | 2018-10-31 | 2018-10-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111200684A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106202301A * | 2016-07-01 | 2016-12-07 | 武汉泰迪智慧科技有限公司 | Intelligent response system based on deep learning |
CN107465816A * | 2017-07-25 | 2017-12-12 | 广西定能电子科技有限公司 | Call terminal and method for instant voice translation during a call |
CN107734160A * | 2017-09-30 | 2018-02-23 | 合肥学院 | Language mutual-aid method based on a smart phone |
- 2018-10-31 CN CN201811284657.XA patent/CN111200684A/en active Search and Examination
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |