CN110765786A - Translation system, earphone translation method and translation equipment - Google Patents

Translation system, earphone translation method and translation equipment

Info

Publication number
CN110765786A
CN110765786A
Authority
CN
China
Prior art keywords
translation
earphone
voice data
source
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910967363.5A
Other languages
Chinese (zh)
Other versions
CN110765786B (en)
Inventor
崔首领
Current Assignee
Shenzhen Jingting Auto Brokerage Co ltd
Original Assignee
Shenzhen Scene Intelligent Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Scene Intelligent Co Ltd
Priority to CN201910967363.5A
Publication of CN110765786A
Application granted
Publication of CN110765786B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/448: Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4482: Procedural

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The embodiment of the invention relates to a translation system, an earphone translation method and translation equipment. In the translation system, the earphone first acquires source voice data, the translation device translates the received source voice data into target voice data, and the earphone then receives the target voice data sent by the translation device and plays it on the speaker device. Intelligent translation is thereby realized: the translation result is obtained quickly and efficiently, no manual translation is needed, and a large amount of human resources is saved.

Description

Translation system, earphone translation method and translation equipment
[ technical field ]
The invention relates to the technical field of translation electronic equipment, in particular to a translation system, an earphone translation method and translation equipment.
[ background of the invention ]
Today, with the rapid development of science and technology, communication between people has become increasingly open. In travel, entertainment and academic exchange, information is ever more globalized and the languages involved ever more diverse, so language barriers are common. How to conveniently and quickly provide translated content to two parties who speak different languages, and thereby enable barrier-free communication between them, is a problem worth considering.
In the process of implementing the invention, the inventor found that the related art has at least the following problems: existing translation methods consume a large amount of human resources to obtain translation results manually; the results are difficult to obtain, and the process wastes time and labor. How to obtain translation results quickly and efficiently is therefore a problem to be solved urgently.
[ summary of the invention ]
In order to solve the technical problem, embodiments of the present invention provide a translation system, an earphone translation method, and a translation device for quickly and efficiently obtaining a translation result.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solution: a translation system. The translation system includes: at least two earphones, a speaker device and a translation device, the earphones being connected to the speaker device and the translation device;
the headset includes at least one first processor and a first memory storing instructions executable by the at least one first processor to enable the at least one first processor to perform:
obtaining source voice data and sending the source voice data to the translation equipment so that the translation equipment translates the source voice data into target voice data;
and receiving the target voice data sent by the translation equipment, and playing the target voice data on the speaker equipment.
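The headset-side behavior claimed above can be sketched as a minimal Python model. All class and method names here are illustrative (the patent specifies behavior, not an API), and the translation device and speaker device are mocked:

```python
class Headset:
    """Illustrative model of the claimed headset logic: forward source
    speech to the translation device, then play the returned target
    speech on the speaker device."""

    def __init__(self, translation_device, speaker_device):
        self.translation_device = translation_device
        self.speaker_device = speaker_device

    def handle_utterance(self, source_voice_data):
        # Obtain source voice data and send it to the translation device,
        # which translates it into target voice data.
        target_voice_data = self.translation_device.translate(source_voice_data)
        # Receive the target voice data and play it on the speaker device.
        self.speaker_device.play(target_voice_data)
        return target_voice_data


class MockTranslationDevice:
    def translate(self, voice):
        return "[target] " + voice


class MockSpeakerDevice:
    def __init__(self):
        self.played = []

    def play(self, voice):
        self.played.append(voice)
```

For example, `Headset(MockTranslationDevice(), MockSpeakerDevice()).handle_utterance("ni hao")` sends the utterance through the mock translator and queues the result on the mock speaker.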
Optionally, the speaker device includes a data transmission pogo pin, and the speaker device is connected to the earphone through the data transmission pogo pin;
the playing of the target voice data on the speaker device includes:
transmitting the target voice data to the speaker device through the data transmission pogo pin, so that the target voice data is played on the speaker device.
Optionally, the speaker device includes a trigger pogo pin connected to the earphone, and the speaker device is further provided with a key;
the speaker device comprises at least one second processor and a second memory storing instructions executable by the at least one second processor to enable the at least one second processor to perform:
when a pressing signal generated by the key is acquired, sending the pressing signal to the earphone through the trigger pogo pin, so that the earphone starts to acquire the source voice data upon receiving the pressing signal; wherein, when the key senses a pressing event, the key converts the pressing event into the pressing signal.
Optionally, the press event comprises a touch operation or a press operation.
Optionally, the speaker device further includes a charging pogo pin connected to the earphone, the charging pogo pin being used to charge the earphone.
Optionally, the earphone is provided with a translation starting key;
the obtaining of source speech data and sending of the source speech data to the translation device includes:
when a starting instruction generated by the translation starting key is acquired, sending the starting instruction to the translation device, so that the translation device generates a receiving instruction from the starting instruction and sends the receiving instruction to the earphone; wherein, when the translation starting key senses a starting event, the key converts the starting event into the starting instruction;
and, when the receiving instruction is obtained, starting to obtain the source voice data and sending the source voice data to the translation device.
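The start-key handshake described above (start instruction out, receiving instruction back, only then begin capturing) can be sketched as follows. This is a minimal illustration with invented names, not the patent's implementation:

```python
class TranslationDevice:
    """Stand-in for the translation-device side of the handshake."""

    def __init__(self):
        self.received = []

    def on_start_instruction(self):
        # Generate a receiving instruction from the starting instruction.
        return "RECEIVE"

    def accept(self, source_voice_data):
        self.received.append(source_voice_data)


class StartKeyHeadset:
    """Headset with a translation starting key, per the optional claim."""

    def __init__(self, device):
        self.device = device
        self.acquiring = False

    def press_start_key(self):
        # The starting event becomes a starting instruction sent to the
        # translation device; acquisition begins only once the receiving
        # instruction comes back.
        if self.device.on_start_instruction() == "RECEIVE":
            self.acquiring = True

    def on_voice(self, voice):
        # Source voice data is forwarded only after the handshake.
        if self.acquiring:
            self.device.accept(voice)
```

The key point the sketch captures is that voice arriving before the handshake completes is ignored, which is how the claim keeps the earphone from capturing audio prematurely.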
In order to solve the above technical problems, embodiments of the present invention further provide the following technical solution: an earphone translation method. The earphone translation method comprises the following steps: receiving source speech data;
translating the source speech data into target speech data;
and sending the target voice data to the earphone.
Optionally, the receiving source speech data comprises:
when a starting instruction is obtained, generating a receiving instruction;
and sending the receiving instruction to the earphone so as to enable the earphone to start sending the source speech data.
Optionally, the translating of the source speech data into target speech data comprises:
performing text recognition on the source speech data to generate source text information;
translating the source text information into target text information;
and converting the target text information into target speech data.
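The three-stage pipeline above (recognition, text translation, synthesis) can be written as one function with each engine injected as a callable, since the patent names no concrete recognition, translation, or synthesis engines; the stage functions below are toy stand-ins:

```python
def translate_speech(source_voice_data, recognize, translate_text, synthesize):
    """Three claimed stages: speech recognition, then text translation,
    then speech synthesis. Each stage is an injected callable."""
    source_text = recognize(source_voice_data)   # speech -> source text
    target_text = translate_text(source_text)    # source text -> target text
    return synthesize(target_text)               # target text -> target speech


# Toy stages reusing the document's own example sentence:
result = translate_speech(
    "audio:where you come from",
    recognize=lambda a: a[len("audio:"):],                      # strip fake audio wrapper
    translate_text={"where you come from": "where are you from"}.get,
    synthesize=lambda t: "audio:" + t,                          # fake TTS
)
```

With these stand-ins, `result` is `"audio:where are you from"`, mirroring the worked example later in the description.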
In order to solve the above technical problems, embodiments of the present invention further provide the following technical solutions: a translation apparatus. The translation apparatus includes: at least one third processor and a third memory, the third memory storing instructions executable by the at least one third processor, the instructions being executable by the at least one third processor to enable the at least one third processor to perform the method as described above.
Compared with the prior art, in the translation system provided by the embodiment of the invention, the earphone obtains the source speech data, the translation device translates the received source speech data into target speech data, and the earphone then receives the target speech data sent by the translation device and plays it on the speaker device. Intelligent translation is thereby realized: the translation result is obtained quickly and efficiently, no manual translation is needed, and a large amount of human resources is saved.
[ description of the drawings ]
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements and in which the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a schematic structural diagram of a translation system according to an embodiment of the present invention;
FIG. 2 is a schematic view of a flowchart interaction of a translation system according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a headset translation method provided in an embodiment of the present invention, where the method is applied to a translation device;
FIG. 4 is a schematic flow chart of S10 in FIG. 3;
FIG. 5 is a schematic flow chart of S20 in FIG. 3;
fig. 6 is a block diagram of a headset translating device according to an embodiment of the present invention;
fig. 7 is a block diagram of a translation apparatus according to an embodiment of the present invention.
[ detailed description ]
In order to facilitate an understanding of the invention, the invention is described in more detail below with reference to the accompanying drawings and specific examples. It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present. As used in this specification, the terms "upper," "lower," "inner," "outer," "bottom," and the like are used in the orientation or positional relationship indicated in the drawings for convenience in describing the invention and simplicity in description, and do not indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are not to be considered limiting of the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Furthermore, the technical features mentioned in the different embodiments of the invention described below can be combined with each other as long as they do not conflict with each other.
Fig. 1 is a schematic structural diagram of a translation system according to an embodiment of the present invention. As shown in fig. 1, the translation system includes: at least two earphones 10, a speaker device 20 and a translation device 30, wherein each earphone 10 can be connected to the speaker device 20 and the translation device 30 wirelessly, by wire, or via a detachable connection.
The headset 10 comprises at least one first processor 11 and a first memory 13 communicatively coupled to the at least one first processor 11, as exemplified by the connection via a bus in fig. 1.
The first processor 11 may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or other electronic units.
The first memory 13 may store a program for operating the processor, and may temporarily store input/output data. In the present embodiment, the first memory 13 may store source speech data acquired by the headphone 10 and target speech data translated from the source speech data by the translation apparatus 30.
The first memory 13 may include at least one of the following types of storage media: flash memory type memory, hard disk type memory, micro multimedia card type memory, card type memory (e.g., SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), magnetic memory, magnetic disk, and optical disk.
The first memory 13 stores instructions executable by the at least one first processor 11 to enable the at least one first processor 11 to perform:
source speech data are acquired and sent to the translation device 30 so that the translation device 30 translates the source speech data into target speech data.
The speaker device 20 may be any electronic device capable of converting an electrical signal into an acoustic signal, such as a sound box, a stereo speaker, etc.
The translation device 30 may be any suitable type of electronic device capable of translating received source speech data into target speech data, such as a smart phone, a tablet computer or a personal computer. The mobile terminal contains application software ("apps") for translating between languages.
For easy understanding, fig. 2 is a schematic diagram of the process interaction of the translation system to implement intelligent translation. The following describes the intelligent translation implemented by the translation system in detail with reference to fig. 2:
1. The earphone acquires source voice data and sends it to the translation device, so that the translation device translates the source voice data into target voice data. For example, when a user A and a user B who speak different languages need to communicate, user A and user B hold the earphone 10A and the earphone 10B respectively. When user A speaks, the voice reaches the earphone 10A; the first processor 11 of the earphone 10A receives user A's voice data and treats it as the source voice data, and the earphone 10A then stores the obtained source voice data in the first memory 13 and sends it to the translation device 30.
2. The translation device receives the source voice data, translates it into target voice data, and then sends the target voice data to the earphone. The translation device 30 is communicatively connected to the earphone 10A and the earphone 10B through a wireless network, so the translation device 30 can receive source speech data sent by the earphones 10 at any time. The wireless network may be any wireless communication network that establishes a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network in any signal frequency band, or a combination thereof.
In this embodiment, the translation device 30 includes a mobile terminal and a cloud server, which are communicatively connected through a wireless network. The mobile terminal has a built-in translation application; the translation application receives the source speech data and sends it to the cloud server, and the cloud server translates the source speech data into target speech data, which, it will be understood, user B can recognize and understand.
The cloud server sends the translated target voice data back to the translation application of the mobile terminal, and the mobile terminal sends the target voice data to the earphone 10B.
3. The earphone receives the target voice data sent by the translation device and sends it to the speaker device. For example, the earphone 10A is connected to the speaker device 20A and the earphone 10B to the speaker device 20B; the earphone 10B receives the target voice data transmitted by the mobile terminal and sends it to the corresponding speaker device 20B.
4. The speaker device receives the target voice data and plays it. For example, the speaker device 20B receives the target voice data and plays it through a loudspeaker provided on the speaker device 20B.
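Steps 1 to 4 above can be condensed into a single routing sketch: the sender's speech is translated once and played on every other user's speaker device. Names and data shapes here are illustrative only:

```python
def relay(sender, source_voice, translate, speakers):
    """One pass through steps 1-4: translate the sender's speech and
    deliver it to every speaker device except the sender's own.

    speakers maps a user id to that user's playback queue (a list
    standing in for the speaker device)."""
    target_voice = translate(source_voice)        # step 2: translation device
    for user, playlist in speakers.items():
        if user != sender:                        # steps 3-4: deliver and play
            playlist.append(target_voice)
    return target_voice
```

For instance, with users A and B, `relay("A", "ni hao", translate, speakers)` leaves A's queue empty and puts the translated utterance in B's queue, matching the 10A-to-20B flow described above.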
Therefore, in this embodiment, the earphone first obtains source speech data, the translation device translates it into target speech data, and the earphone then receives the target speech data sent by the translation device and plays it on the speaker device. Intelligent translation is thereby realized: the translation result is obtained quickly and efficiently, no manual translation is needed, and a large amount of human resources is saved.
To facilitate transmission of the target voice data to the speaker device 20, in this embodiment the speaker device 20 is provided with a data transmission pogo pin, and the earphone 10 is correspondingly provided with a data transmission terminal to which the pogo pin connects. When the target voice data needs to be transmitted to the speaker device 20, the data transmission pogo pin of the speaker device 20 is aligned with and connected to the data transmission terminal on the earphone 10, so that the target voice data is transmitted through the pogo pin and played on the speaker device 20.
To enable the earphone 10 to obtain the source voice data promptly and effectively while reducing the power consumption of the earphone 10, in this embodiment the speaker device 20 includes a trigger pogo pin, a key, at least one second processor, and a second memory communicatively connected to the at least one second processor.
The second processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or other electronic units.
The second memory may store a program for operating the processor, and may temporarily store input/output data. In this embodiment, the second memory may store the target voice data.
The second memory may include at least one of the following types of storage media: flash memory type memory, hard disk type memory, micro multimedia card type memory, card type memory (e.g., SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), magnetic memory, magnetic disk, and optical disk.
The second memory stores instructions executable by the at least one second processor to enable the at least one second processor to perform:
when a pressing signal generated by the key is acquired, sending the pressing signal to the earphone 10 through the trigger pogo pin, so that the earphone 10 starts to acquire the source voice data upon receiving the pressing signal; wherein, when the key senses a pressing event, the key converts the pressing event into the pressing signal.
The trigger pogo pin connects to a trigger terminal provided on the corresponding earphone 10. When a pressing signal needs to be sent to the earphone 10, the trigger pogo pin of the speaker device 20 is aligned with the trigger terminal on the earphone 10, so that the earphone 10 starts to acquire the source voice data upon receiving the pressing signal.
In this embodiment, when a key senses a pressing event, the pressing event is converted into a pressing signal, where the pressing event may be a touch operation or a pressing operation, and when the pressing event is a touch operation, a touch module is disposed on the corresponding key, where the touch module includes a touch sensor and a display screen, and the touch sensor is disposed below the display screen. The touch sensor may be configured to convert a pressure applied to a specific portion of the display screen or a change in capacitance generated at the specific portion of the display screen into an electrical input signal. The touch sensor may be configured to detect not only a position and an area of a touch but also a pressure of the touch.
The second processor acquires the pressing signal generated by the key and sends it to the earphone 10 through the trigger pogo pin, so that the earphone 10 starts to acquire the source voice data upon receiving the pressing signal.
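The key-to-earphone path just described can be sketched as two tiny event handlers: a sensed touch or press becomes a pressing signal that is forwarded over the trigger-pin connection, and the earphone starts acquiring voice only when that signal arrives. All names are illustrative:

```python
class SpeakerDeviceKey:
    """Sketch of the key on the speaker device: it converts a sensed
    touch/press event into a pressing signal and forwards it to the
    earphone over the trigger-pin connection."""

    def __init__(self, earphone):
        self.earphone = earphone

    def on_event(self, event):
        # Only touch or press events produce a pressing signal.
        if event in ("touch", "press"):
            self.earphone.on_pressing_signal("PRESS")


class AcquiringEarphone:
    """Sketch of the earphone side: it idles until the pressing signal
    arrives, then begins acquiring source voice data."""

    def __init__(self):
        self.acquiring = False

    def on_pressing_signal(self, signal):
        if signal == "PRESS":
            self.acquiring = True  # start acquiring source voice data
```

This also illustrates the power-saving rationale: the earphone's capture path stays idle until a deliberate key interaction, rather than listening continuously.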
To avoid a call being interrupted by insufficient battery power when the earphone 10 is used for a long time, in this embodiment the speaker device 20 further includes a charging pogo pin, which connects to a charging terminal provided on the corresponding earphone 10. When the earphone 10 needs to be charged, the charging pogo pin of the speaker device 20 is aligned with the charging terminal on the earphone 10, so that the earphone 10 can be charged in time.
To facilitate timely and efficient transmission of the source speech data to the translation device 30, the earphone 10 is provided with a translation starting key; when the translation starting key senses a starting event, the key converts the starting event into a starting instruction. When the starting event is a touch operation, a touch module is disposed on the corresponding translation starting key; the touch module includes a touch sensor and a display screen, with the touch sensor disposed below the display screen. The touch sensor may be configured to convert a pressure applied to a specific portion of the display screen, or a change in capacitance generated at that portion, into an electrical input signal, and to detect not only the position and area of a touch but also its pressure.
When the starting instruction generated by the translation starting key is acquired, the starting instruction is sent to the translation application of the mobile terminal, so that the translation application generates a receiving instruction from the starting instruction and sends it to the earphone 10; when the receiving instruction is obtained, the earphone 10 starts to obtain the source speech data and sends it to the translation device 30.
Fig. 3 shows an embodiment of an earphone translation method according to an embodiment of the present invention. As shown in fig. 3, the earphone translation method may be performed by a translation device and includes the following steps:
s10: source speech data is received.
Specifically, the translation device 30 includes a mobile terminal and a cloud server, and the mobile terminal and the cloud server are in communication connection through a wireless network.
The mobile terminal may be any of various types of electronic devices, such as a smart phone, a tablet computer or a personal computer. The mobile terminal contains a translation application ("app") for translating between languages; the translation application may be operated to receive the voice data.
The cloud server is a hardware device or a hardware component for providing remote translation service. The cloud server is used for translating the source speech data into corresponding target speech data.
The translation application program arranged in the mobile terminal receives the source speech data and sends the source speech data to the cloud server, so that the cloud server translates the source speech data into target speech data.
S20: translating the source speech data into target speech data.
Specifically, different users may come from different countries and communicate in different languages; for example, user A is from China and speaks Chinese, while user B is from the United States and speaks English. When user A speaks to user B, user A's speech data is the source speech data, and the translation device 30 needs to translate it into speech data that user B can recognize and understand, that is, the target speech data. For example, when user A speaks Chinese source speech data containing "where you come from" to user B, the translation device 30 translates it into English target speech data containing "where are you from".
S30: and sending the target voice data to the earphone.
Specifically, the translated target voice data is transmitted to the corresponding earphone 10, so that users speaking different languages each hear voice data they can understand and recognize.
In order to receive the source speech data efficiently and in time, in some embodiments, referring to fig. 4, S10 further includes the following steps:
s11: and when the starting instruction is acquired, generating a receiving instruction.
Specifically, the starting instruction is sent by the earphone 10; when the translation device 30 receives the starting instruction, this indicates that the earphone 10 has acquired the user's source speech data and needs it translated into target speech data. The translation application of the mobile terminal then generates a receiving instruction, which is used to instruct the earphone 10 to send the source voice data to the translation application.
S12: the reception instruction is sent to the headset 10 so that the headset 10 starts sending the source speech data.
Specifically, after the translation application generates the receiving instruction, the receiving instruction is sent to the headset 10 through the wireless communication network, so that the headset 10 starts sending the source speech data.
The wireless network may be any wireless communication network that establishes a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network in any signal frequency band, or a combination thereof.
In order to translate the source speech data into the target speech data in a timely manner, in some embodiments, referring to fig. 5, S20 further includes the following steps:
S21: performing text recognition on the source speech data to generate source text information;
Specifically, the translation application of the mobile terminal sends the received source voice data to the cloud server, and the cloud server performs text recognition on it to generate source text information. For example, if the source speech data is speech containing "where you come from", the cloud server recognizes the text in that speech and, after recognition is complete, extracts the text information "where you come from"; this text information is the source text information.
S22: translating the source text information into target text information.
Specifically, after the cloud server extracts the source text information, it translates the source text information into the corresponding target text information; for example, the Chinese source text information "where you come from" is translated into the English target text information "where are you from".
S23: converting the target text information into target speech data.
Specifically, the cloud server converts the translated target text information into the corresponding target voice data through a third-party service, then sends the generated target voice data to the translation application of the mobile terminal, and the translation application in turn sends it to the earphone 10.
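The mobile-terminal/cloud split in S21 to S23 can be sketched as two small classes. The recognition, translation, and synthesis engines are stand-ins injected as callables (the patent only says an unnamed third-party service handles synthesis), and all names are illustrative:

```python
class CloudServer:
    """S21-S23 on the cloud side, with each engine injected."""

    def __init__(self, recognize, translate_text, synthesize):
        self.recognize = recognize
        self.translate_text = translate_text
        self.synthesize = synthesize

    def handle(self, source_voice_data):
        source_text = self.recognize(source_voice_data)  # S21: text recognition
        target_text = self.translate_text(source_text)   # S22: translation
        return self.synthesize(target_text)              # S23: synthesis


class TranslationApp:
    """The mobile terminal's translation application only forwards data
    between the earphone and the cloud server."""

    def __init__(self, cloud):
        self.cloud = cloud

    def on_source_voice(self, voice):
        # The result would then be sent on to the listener's earphone.
        return self.cloud.handle(voice)
```

Wiring the toy engines to the document's worked example ("where you come from" to "where are you from") shows the round trip: the app forwards mic input to the cloud, and the cloud returns synthesized target speech.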
It should be noted that the above steps do not necessarily have to be executed in a particular order; as those skilled in the art will understand from the description of the embodiments of the present application, in different embodiments the steps may be executed in different orders, in parallel, interchanged, and so on.
As another aspect of the embodiment of the present application, the embodiment of the present application provides a headset translating device 60, and the headset translating device 60 is applied to the translating equipment 30. Referring to fig. 6, the earphone translation apparatus 60 includes: a voice data receiving module 61, a translation module 62, and a transmission module 63.
The voice data receiving module 61 is used for receiving source voice data. Specifically, the translation device 30 includes a mobile terminal and a cloud server, which are communicatively connected through a wireless network. The translation application program installed in the mobile terminal receives the source voice data and sends it to the cloud server, so that the cloud server translates the source voice data into target voice data.
The translation module 62 is used to translate the source speech data into target speech data.
The transmission module 63 is configured to send the target voice data to the earphone 10. Specifically, the translated target voice data is sent to the corresponding earphone 10, so that users speaking different languages can all hear voice data they can understand and recognize.
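The cooperation of the three modules of fig. 6 can be illustrated with a short sketch: the receiving module hands source voice data to the translation module, and the transmission module forwards the result to the earphone. All class and method names below are hypothetical illustrations, and the translation function is a stand-in for the cloud-server translation described above.

```python
# Sketch of how modules 61-63 might be composed (names are illustrative).

class HeadsetTranslationDevice:
    def __init__(self, translate_fn, earphone_sink):
        self.translate_fn = translate_fn    # plays the role of translation module 62
        self.earphone_sink = earphone_sink  # destination used by transmission module 63

    def receive_source_voice(self, source_voice_data: bytes) -> None:
        """Module 61: receive source voice data, then invoke 62 and 63."""
        target_voice_data = self.translate_fn(source_voice_data)
        self.send_to_earphone(target_voice_data)

    def send_to_earphone(self, target_voice_data: bytes) -> None:
        """Module 63: deliver the target voice data to the earphone."""
        self.earphone_sink.append(target_voice_data)
```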
Therefore, in this embodiment, the earphone 10 first acquires source voice data, the translation device 30 translates the received source voice data into target voice data, and the earphone 10 then receives the target voice data sent by the translation device 30 and plays it on the speaker device 20. Intelligent translation is thereby realized: the translation result is obtained quickly and efficiently without a human interpreter, saving considerable human resources.
In some embodiments, the voice data receiving module 61 includes an instruction generating unit and a sending unit.
The instruction generating unit is used for generating a receiving instruction when a start instruction is acquired. Specifically, the start instruction is sent by the earphone 10; when the translation device 30 receives it, this indicates that the earphone 10 has acquired the user's source voice data and needs it translated into target voice data. The translation application of the mobile terminal then generates a receiving instruction, which instructs the earphone 10 to send the source voice data to the translation application.
The sending unit is configured to send the receiving instruction to the earphone 10, so that the earphone 10 starts sending the source voice data. Specifically, after the translation application generates the receiving instruction, it sends the instruction to the earphone 10 through the wireless communication network, whereupon the earphone 10 starts sending the source voice data.
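The handshake just described (a start instruction from the earphone, a receiving instruction back from the translation application, and only then the earphone streaming source voice data) can be sketched as follows. The message constants are illustrative placeholders, not values defined in this application.

```python
# Sketch of the start/receive handshake (message names are hypothetical).
from typing import Optional

START_INSTRUCTION = "START"
RECEIVE_INSTRUCTION = "RECEIVE"

def translation_app_reply(message: str) -> Optional[str]:
    """Instruction generating unit: on a start instruction,
    generate and return a receiving instruction."""
    if message == START_INSTRUCTION:
        return RECEIVE_INSTRUCTION
    return None  # ignore anything else

def earphone_should_send(reply: Optional[str]) -> bool:
    """The earphone starts sending source voice data only after
    it receives the receiving instruction."""
    return reply == RECEIVE_INSTRUCTION
```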
The wireless network may be any wireless communication network that establishes a data transmission channel between two nodes, for example a Bluetooth network, a WiFi network, a wireless cellular network operating in a different frequency band, or a combination thereof.
In some embodiments, the translation module 62 includes a text recognition unit, a translation unit, and a conversion unit.
The text recognition unit is used for performing text recognition on the source voice data to generate source text information. Specifically, the translation application program of the mobile terminal sends the received source voice data to the cloud server, and the cloud server performs text recognition on the source voice data to generate source text information.
The translation unit is used for translating the source text information into target text information. Specifically, after the cloud server extracts the source text information, it translates the source text information into the corresponding target text information.
The conversion unit is used for converting the target text information into target voice data. Specifically, the cloud server converts the translated target text information into the corresponding target voice data through a third-party service and sends the generated target voice data to the translation application program of the mobile terminal, which in turn sends the received target voice data to the earphone 10.
Fig. 7 is a block diagram of the translation device 30 according to an embodiment of the present invention. The translation device 30 may be used to implement the functions of all or some of the functional modules in the main control chip. As shown in fig. 7, the translation device 30 may include: a third processor 110, a third memory 120, and a third communication module 130.
The third processor 110, the third memory 120 and the third communication module 130 are connected to each other by a bus.
The third processor 110 may be of any type and may have one or more processing cores. It can perform single-threaded or multi-threaded operations, and is used to parse instructions and perform operations such as fetching data, executing logical operations, and issuing the results of those operations.
The third memory 120, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the earphone translation method in the embodiment of the present invention (for example, the voice data receiving module 61, the translation module 62, and the transmission module 63 shown in fig. 6). The third processor 110 runs the non-transitory software programs, instructions, and modules stored in the third memory 120 to execute the various functional applications and data processing of the earphone translation device 60, that is, to implement the earphone translation method in any of the above method embodiments.
The third memory 120 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the earphone translation device 60, and the like. Further, the third memory 120 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the third memory 120 optionally includes memory located remotely from the third processor 110; these remote memories may be connected to the translation device 30 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The third memory 120 stores instructions executable by the at least one third processor 110; the at least one third processor 110 is configured to execute those instructions to implement the earphone translation method in any of the above method embodiments, for example, to execute the method steps 10, 20, 30 described above and to implement the functions of modules 61-63 in fig. 6.
The third communication module 130 is a functional module for establishing a communication connection and providing a physical channel. It may be any type of wireless or wired communication module, including but not limited to a WiFi module or a Bluetooth module.
Further, an embodiment of the present invention also provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more third processors 110 (for example, one third processor 110 in fig. 7), cause the one or more third processors 110 to execute the earphone translation method in any of the above method embodiments, for example, to execute the method steps 10, 20, 30 described above and to implement the functions of modules 61-63 in fig. 6.
The apparatus embodiments described above are merely illustrative. The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, or certainly by hardware alone. Those skilled in the art will also understand that all or part of the processes of the above method embodiments may be implemented by related hardware driven by a computer program in a computer program product; the computer program is stored in a non-transitory computer-readable storage medium and comprises program instructions that, when executed by a related apparatus, cause that apparatus to perform the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The above product can execute the earphone translation method provided by the embodiments of the present invention, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this embodiment, reference may be made to the earphone translation method provided by the embodiments of the present invention.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the invention, technical features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist that are not described in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart in essence from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A translation system, comprising at least two earphones, each earphone being connected to a speaker device and a translation device;
the headset includes at least one first processor and a first memory storing instructions executable by the at least one first processor to enable the at least one first processor to perform:
obtaining source voice data and sending the source voice data to the translation equipment so that the translation equipment translates the source voice data into target voice data;
and receiving the target voice data sent by the translation equipment, and playing the target voice data on the speaker equipment.
2. The system of claim 1, wherein said speaker device includes a data transmission pogo pin, said speaker device being connected to said earphone via said data transmission pogo pin;
the playing the target voice data on the speaker device includes:
transmitting the target voice data to the speaker device through the data transmission pogo pin, so that the target voice data is played on the speaker device.
3. The system of claim 1 or 2, wherein said speaker device comprises a trigger pogo pin, said trigger pogo pin being connected to said earphone; the speaker device is also provided with a key;
the loudspeaker device comprises at least one second processor and a second memory storing instructions executable by the at least one second processor, the instructions being executable by the at least one second processor to enable the at least one second processor to perform:
when a pressing signal generated by the key is acquired, sending the pressing signal to the earphone through the trigger pogo pin, so that the earphone starts to acquire the source voice data upon receiving the pressing signal; and when the key senses a pressing event, converting the pressing event into the pressing signal.
4. The system of claim 3,
the pressing event includes a touch operation or a pressing operation.
5. The system of claim 1, wherein the speaker device further comprises a charging pin, the charging pin is connected to the earphone, and the charging pin is used for charging the earphone.
6. The system of claim 1, wherein the earphone is provided with a translation start key;
the obtaining source speech data and sending the source speech data to the translation device includes:
when a start instruction generated by the translation start key is acquired, sending the start instruction to the translation device, so that the translation device generates a receiving instruction according to the start instruction and sends the receiving instruction to the earphone; and when the translation start key senses a start event, converting the start event into the start instruction;
and when the receiving instruction is obtained, starting to obtain the source voice data, and sending the source voice data to the translation equipment.
7. A headset translation method is applied to translation equipment and is characterized by comprising the following steps:
receiving source speech data;
translating the source speech data into target speech data;
and sending the target voice data to the earphone.
8. The method of claim 7, wherein receiving source speech data comprises:
when a starting instruction is obtained, generating a receiving instruction;
and sending the receiving instruction to the earphone so as to enable the earphone to start sending the source speech data.
9. The method of claim 8, wherein said translating said source speech data into target speech data comprises:
performing character recognition on the source speech data to generate source character information;
translating the source text information into target text information;
and converting the target character information into target voice data.
10. A translation apparatus, characterized in that the translation apparatus comprises: at least one third processor and a third memory, the third memory storing instructions executable by the at least one third processor to enable the at least one third processor to perform the method of any one of claims 7-9.
CN201910967363.5A 2019-10-12 2019-10-12 Translation system, earphone translation method and translation device Active CN110765786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910967363.5A CN110765786B (en) 2019-10-12 2019-10-12 Translation system, earphone translation method and translation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910967363.5A CN110765786B (en) 2019-10-12 2019-10-12 Translation system, earphone translation method and translation device

Publications (2)

Publication Number Publication Date
CN110765786A true CN110765786A (en) 2020-02-07
CN110765786B CN110765786B (en) 2023-11-03

Family

ID=69331634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910967363.5A Active CN110765786B (en) 2019-10-12 2019-10-12 Translation system, earphone translation method and translation device

Country Status (1)

Country Link
CN (1) CN110765786B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696554A (en) * 2020-06-05 2020-09-22 北京搜狗科技发展有限公司 Translation method and device, earphone and earphone storage device
CN111783481A (en) * 2020-06-30 2020-10-16 歌尔科技有限公司 Earphone control method, translation method, earphone and cloud server
CN112261633A (en) * 2020-10-12 2021-01-22 合肥星空物联信息科技有限公司 Audio recording and converting method for intelligent earphone

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412813A (en) * 2016-11-30 2017-02-15 深圳市高为通信技术有限公司 Real-time communication translation method with bluetooth headsets
CN107885732A (en) * 2017-11-08 2018-04-06 深圳市沃特沃德股份有限公司 Voice translation method, system and device
CN107885731A (en) * 2017-11-06 2018-04-06 深圳市沃特沃德股份有限公司 Voice translation method and device
CN109218883A (en) * 2018-08-27 2019-01-15 深圳市声临科技有限公司 A kind of interpretation method, translation system, TWS earphone and terminal
CN109376363A (en) * 2018-09-04 2019-02-22 出门问问信息科技有限公司 A kind of real-time voice interpretation method and device based on earphone


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696554A (en) * 2020-06-05 2020-09-22 北京搜狗科技发展有限公司 Translation method and device, earphone and earphone storage device
CN111783481A (en) * 2020-06-30 2020-10-16 歌尔科技有限公司 Earphone control method, translation method, earphone and cloud server
CN111783481B (en) * 2020-06-30 2024-08-02 歌尔科技有限公司 Earphone control method, translation method, earphone and cloud server
CN112261633A (en) * 2020-10-12 2021-01-22 合肥星空物联信息科技有限公司 Audio recording and converting method for intelligent earphone
CN112261633B (en) * 2020-10-12 2023-02-21 合肥星空物联信息科技有限公司 Audio recording and converting method for intelligent earphone

Also Published As

Publication number Publication date
CN110765786B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN107277754B (en) Bluetooth connection method and Bluetooth peripheral equipment
CN105677335B (en) Improve the method and device that mobile terminal first powers on speed
CN110765786B (en) Translation system, earphone translation method and translation device
CN105528229B (en) Improve the method and device that mobile terminal first powers on speed
TWI497408B (en) Voice interaction system, mobile terminal apparatus and method of voice communication
CN108320745A (en) Control the method and device of display
EP3051782A1 (en) Method and system for sending contact information in call process
WO2017028651A1 (en) Method, apparatus and system for performing configuration setting between devices
CN110992955A (en) Voice operation method, device, equipment and storage medium of intelligent equipment
JP2021150946A (en) Wireless earphone device and method for using the same
CN111356117A (en) Voice interaction method and Bluetooth device
CN203325186U (en) Household voice box device for controlling household appliances
CN106507184A (en) Media file shares terminal, receiving terminal, transmission method and electronic equipment
CN110931000A (en) Method and device for speech recognition
CN107360332A (en) Talking state display methods, device, mobile terminal and storage medium
CN106095132B (en) Playback equipment keypress function setting method and device
CN104484151A (en) Voice control system, equipment and method
CN114996168A (en) Multi-device cooperative test method, test device and readable storage medium
US8934886B2 (en) Mobile apparatus and method of voice communication
CN103235687A (en) Method and device for setting on state of sensor, and mobile equipment
US20200202861A1 (en) Electronic device controlling system, voice output device, and methods therefor
JP7242248B2 (en) ELECTRONIC DEVICE, CONTROL METHOD AND PROGRAM THEREOF
CN111147530B (en) System, switching method, intelligent terminal and storage medium of multi-voice platform
CN115905092A (en) Communication system, communication method, communication device, and storage medium
CN111556406B (en) Audio processing method, audio processing device and earphone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200506

Address after: 518000 Guangdong city of Shenzhen province Nanshan District Guangdong streets Science Park Branch Road No. 2 Building 5 floor 508B Zhengxin

Applicant after: Shenzhen Jingting auto brokerage Co.,Ltd.

Address before: 518101 Fourth Floor of Building A, Huafeng International Robot Industrial Park, Nanchang Community Avenue, Xixiang Street, Baoan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN SITUATIONAL INTELLIGENCE Co.,Ltd.

GR01 Patent grant
GR01 Patent grant