CN111696554A - Translation method and device, earphone and earphone storage device

Info

Publication number
CN111696554A
Authority
CN
China
Prior art keywords
earphone
voice data
user
target
headset
Prior art date
Legal status
Granted
Application number
CN202010508213.0A
Other languages
Chinese (zh)
Other versions
CN111696554B (en)
Inventor
王颖
李健涛
张丹
刘宝
张硕
杨天府
梁宵
荣河江
李鹏翀
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202010508213.0A priority Critical patent/CN111696554B/en
Publication of CN111696554A publication Critical patent/CN111696554A/en
Priority to PCT/CN2021/087836 priority patent/WO2021244159A1/en
Application granted granted Critical
Publication of CN111696554B publication Critical patent/CN111696554B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1091 Details not provided for in groups H04R1/1008 - H04R1/1083
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10 Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups

Abstract

The embodiment of the invention provides a translation method, a translation device, an earphone and an earphone storage device, wherein the method comprises the following steps: the earphone obtains source voice data and sends the source voice data to an earphone storage device connected with the earphone; the earphone storage device translates the source voice data to generate first target voice data. A user can therefore perform translation with only the earphone device, without needing a dedicated translation device.

Description

Translation method and device, earphone and earphone storage device
Technical Field
The invention relates to the technical field of data processing, in particular to a translation method and device, an earphone and an earphone storage device.
Background
With the advancement of globalization, business and personal exchanges between countries have become increasingly frequent, for example in international trade, international conferences and international travel.
Because different countries and regions use different languages, language has become one of the main obstacles to such business and personal exchanges. Translation devices, such as handheld translators and translation pens, have emerged to address this language barrier. In other words, the prior art requires a dedicated translation device to perform translation.
Disclosure of Invention
The embodiment of the invention provides a translation method for performing translation based on an earphone device.
Correspondingly, the embodiment of the invention also provides a translation device, an earphone and an earphone storage device to ensure the implementation and application of the method.
In order to solve the above problem, an embodiment of the present invention discloses a translation method applied to an earphone, wherein the earphone is connected with an earphone storage device, and the method includes: the earphone acquires source voice data; the earphone sends the source voice data to the earphone storage device, and the earphone storage device translates the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the earphone acquires source speech data, and comprises: the earphone receives voice data sent by the terminal equipment as source voice data; the voice data sent by the terminal equipment is the voice data of a second communication user received by the terminal equipment in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the method further comprises the following steps: the earphone receives first target voice data sent by the earphone containing device and plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the earphone acquires source speech data, and comprises: in the process that the first communication user communicates with at least one second communication user through the terminal equipment, the earphone collects voice data of the first communication user and uses the voice data as source voice data; the method further comprises the following steps: the earphone receives first target voice data sent by the earphone containing device; and the earphone sends the first target voice data to the terminal equipment so that the terminal equipment sends the first target voice data to the terminal equipment of the second communication user.
Optionally, the headset comprises a first earphone and a second earphone, which are respectively connected with the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the step of the earphone acquiring source voice data comprises: the first earphone collects voice data of the first user as source voice data; the step of the earphone sending the source voice data to the earphone storage device comprises: the first earphone sends the source voice data to the earphone storage device, and the earphone storage device translates the source voice data to generate first target voice data and sends the first target voice data to the second earphone; the method further comprises: the second earphone receives the first target voice data sent by the earphone storage device and plays the first target voice data.
Optionally, there is at least one headset, used by at least one first user, and at least one earphone storage device, used by at least one second user; the step of the earphone acquiring source voice data comprises: the earphone collects voice data of the first user as source voice data; wherein the first target voice data is played by the earphone storage device.
Optionally, the method further comprises: the earphone receives second target voice data sent by the earphone containing device, and the second target voice data is generated by translating the voice data of a second user collected by the earphone containing device; and the earphone plays the second target voice data.
Optionally, the headset includes two earphones, and the method further includes: the headset receives the first target voice data sent by the earphone storage device; the headset controls the channel allocation used when playing voice data according to which earphones are in use, wherein the voice data comprises the source voice data and/or the first target voice data.
Optionally, controlling the channel allocation according to which earphones are in use includes: when both earphones are in use, the two earphones play the source voice data and the first target voice data, respectively.
Optionally, the method further comprises: receiving a switching instruction of a user and swapping the types of voice data played in the two earphones; or receiving a volume adjusting instruction of a user and adjusting the volume of the earphone to which the volume adjusting operation applies; or receiving a category selection instruction of a user, whereupon both earphones play either the first target voice data or the source voice data.
Optionally, controlling the channel allocation according to which earphones are in use includes: when only one of the earphones is in use, that earphone plays a mix of the source voice data and the first target voice data.
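As a minimal sketch of the channel-allocation behaviour described in the optional embodiments above (assuming a simple two-earbud controller; the class, function and field names are illustrative and not specified by the disclosure), in Python:

    from dataclasses import dataclass

    @dataclass
    class VoiceData:
        samples: bytes
        label: str  # "source" or "translated"

    def mix(a: VoiceData, b: VoiceData) -> VoiceData:
        # Placeholder mixing step; a real implementation would sum overlapping PCM frames.
        return VoiceData(samples=a.samples + b.samples, label="mixed")

    def allocate_channels(source: VoiceData, translated: VoiceData,
                          left_in_use: bool, right_in_use: bool) -> dict:
        # Both earbuds in use: one plays the source speech, the other the translation.
        if left_in_use and right_in_use:
            return {"left": source, "right": translated}
        # Only one earbud in use: that earbud plays a mix of the two.
        if left_in_use:
            return {"left": mix(source, translated)}
        if right_in_use:
            return {"right": mix(source, translated)}
        return {}

    # Example: only the right earbud is worn, so it receives the mixed audio.
    plan = allocate_channels(VoiceData(b"...", "source"), VoiceData(b"...", "translated"),
                             left_in_use=False, right_in_use=True)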
The embodiment of the invention also discloses a translation method applied to an earphone storage device, wherein the earphone storage device is connected with an earphone, and the method includes: the earphone storage device receives source voice data sent by the earphone; the earphone storage device translates the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the method further comprises the following steps: the earphone storage device sends the first target voice data to the earphone, and the earphone plays the first target voice data; or, the earphone accommodating device plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the first communication user collected by the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the method further comprises the following steps: and sending the first target voice data to the earphone so that the earphone forwards the first target voice data to the terminal equipment, and sending the first target voice data to the terminal equipment of the second communication user by the terminal equipment.
Optionally, the headset comprises: the second earphone and the first earphone are respectively connected with the earphone containing device; the first earpiece is used by a first user and the second earpiece is used by at least one second user; the source speech data is speech data of a first user collected by the first headset; the method further comprises the following steps: and the earphone accommodating device sends the first target voice data to a second earphone, and the second earphone plays the first target voice data.
Optionally, there is at least one headset, used by at least one first user, and at least one earphone storage device, used by at least one second user; the source voice data is voice data of the first user collected by the headset; the method further comprises: the earphone storage device plays the first target voice data.
Optionally, the method further comprises: the earphone receiving device collects voice data of a second user; translating the voice data of the second user to generate second target voice data; and sending the second target voice data to the earphone, and playing the second target voice data by the earphone.
Optionally, the method further comprises: and sending the first target voice data to the earphone, and controlling the sound channel distribution of the earphone when the earphone plays the voice data according to the using condition of the earphone by the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the translating the source speech data to generate first target speech data includes: and performing simultaneous interpretation on the source speech data to generate first target speech data.
Optionally, the earphone storage device is further connected to a server, and translating the source voice data to generate first target voice data includes: generating a translation request according to the source voice data and sending the translation request to the server, so that the server translates the source voice data according to the translation request, generates first target voice data and returns it; and receiving the first target voice data returned by the server.
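The server-assisted variant just described can be pictured with the following minimal sketch; the endpoint URL, request fields and the use of the third-party requests library are illustrative assumptions and are not specified by the disclosure.

    import base64
    import requests  # third-party HTTP client, used here only for illustration

    TRANSLATION_SERVER = "https://translation.example.com/translate"  # hypothetical endpoint

    def translate_via_server(source_voice: bytes, source_lang: str, target_lang: str) -> bytes:
        # Build a translation request from the source voice data.
        request_body = {
            "audio": base64.b64encode(source_voice).decode("ascii"),
            "source_language": source_lang,
            "target_language": target_lang,
        }
        # Send it to the server; the server translates and returns the first target voice data.
        response = requests.post(TRANSLATION_SERVER, json=request_body, timeout=10)
        response.raise_for_status()
        return base64.b64decode(response.json()["translated_audio"])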
The embodiment of the invention also discloses a translation device, which is applied to earphones, wherein the earphones are connected with the earphone storage device, and the translation device comprises: the acquisition module is used for acquiring source speech data; the first sending module is used for sending the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the acquisition module includes: the voice data receiving submodule is used for receiving the voice data sent by the terminal equipment as source voice data; the voice data sent by the terminal equipment is the voice data of a second communication user received by the terminal equipment in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the device further comprises: and the first playing module is used for receiving the first target voice data sent by the earphone accommodating device and playing the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the acquisition module includes: the first voice data acquisition submodule is used for acquiring the voice data of the first communication user as source voice data in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the device further comprises: the first receiving module is used for receiving first target voice data sent by the earphone accommodating device; and the second sending module is used for sending the first target voice data to the terminal equipment so that the terminal equipment sends the first target voice data to the terminal equipment of the second communication user.
Optionally, the headset comprises: the second earphone and the first earphone are respectively connected with the earphone containing device; the first earpiece is used by a first user and the second earpiece is used by at least one second user; the acquisition module includes: the second voice data acquisition submodule is used for acquiring the voice data of the first user as source voice data; the first sending module is used for calling the first earphone to send the source speech data to the earphone accommodating device, and the earphone accommodating device translates the source speech data to generate first target speech data and send the first target speech data to the second earphone; the device further comprises: and the second playing module is used for calling the second earphone to receive the first target voice data sent by the earphone containing device and playing the first target voice data.
Optionally, there is at least one headset, used by at least one first user, and at least one earphone storage device, used by at least one second user; the acquisition module includes: a third voice data acquisition submodule, configured to acquire the voice data of the first user as source voice data; wherein the first target voice data is played by the earphone storage device.
Optionally, the apparatus further comprises: the second receiving module is used for receiving second target voice data sent by the earphone accommodating device, and the second target voice data is generated by translating the voice data of a second user collected by the earphone accommodating device; and the third playing module is used for playing the second target voice data.
Optionally, the earphone includes two earphones, and the apparatus further includes: the third receiving module is used for receiving the first target voice data sent by the earphone accommodating device; and the distribution module is used for controlling the sound channel distribution of the earphone when the earphone plays voice data according to the use condition of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the allocation module includes: and the first channel allocation submodule is used for playing the source speech data and the first target speech data by the two earphones respectively when the two earphones are used.
Optionally, the apparatus further comprises: a switching module, configured to receive a switching instruction of a user and swap the types of voice data played in the two earphones; or an adjusting module, configured to receive a volume adjusting instruction of a user and adjust the volume of the earphone to which the volume adjusting operation applies; or a selection module, configured to receive a category selection instruction of a user, whereupon both earphones play either the first target voice data or the source voice data.
Optionally, the allocation module includes: and the second distribution submodule is used for playing the mixed sound of the source speech data and the first target speech data by the used earphone when one earphone is used.
The embodiment of the invention also discloses a translation device applied to an earphone storage device, wherein the earphone storage device is connected with an earphone, and the device comprises: a fourth receiving module, configured to receive the source voice data sent by the earphone; and a first translation module, configured to translate the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the device further comprises: the third sending module is used for sending the first target voice data to the earphone, and the earphone plays the first target voice data; and the fourth playing module is used for playing the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the first communication user collected by the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the device further comprises: and the fourth sending module is used for sending the first target voice data to the earphone so that the earphone forwards the first target voice data to the terminal equipment, and the terminal equipment sends the first target voice data to the terminal equipment of the second communication user.
Optionally, the headset comprises: the second earphone and the first earphone are respectively connected with the earphone containing device; the first earpiece is used by a first user and the second earpiece is used by at least one second user; the source speech data is speech data of a first user collected by the first headset; the device further comprises: and the fifth sending module is used for sending the first target voice data to a second earphone, and the second earphone plays the first target voice data.
Optionally, there is at least one headset, used by at least one first user, and at least one earphone storage device, used by at least one second user; the source voice data is voice data of the first user collected by the headset; the device further comprises: a fifth playing module, configured to play the first target voice data.
Optionally, the apparatus further comprises: the acquisition module is used for acquiring voice data of a second user; the second translation module is used for translating the voice data of the second user to generate second target voice data; and the sixth sending module is used for sending the second target voice data to the earphone, and the earphone plays the second target voice data.
Optionally, the apparatus further comprises: and the seventh sending module is used for sending the first target voice data to the earphone, and the earphone controls the sound channel distribution of the earphone when the voice data is played according to the use condition of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the first translation module is configured to perform simultaneous interpretation on the source speech data to generate first target speech data.
Optionally, the earphone storage device is further connected with a server, and the first translation module is configured to generate a translation request according to the source speech data and send the translation request to the server, so that the server translates the source speech data according to the translation request, generates first target speech data, and returns the first target speech data; and receiving first target voice data returned by the server.
The embodiment of the invention also discloses a readable storage medium, and when the instructions in the storage medium are executed by the processor of the earphone, the earphone can execute the translation method according to any one of the embodiments of the invention.
The embodiment of the invention also discloses a readable storage medium, and when instructions in the storage medium are executed by a processor of the earphone accommodating device, the earphone accommodating device can execute the translation method in any one of the embodiments of the invention.
Also disclosed in an embodiment of the present invention is a headset comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for: obtaining source voice data; and sending the source voice data to the earphone storage device, the earphone storage device translating the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the earphone acquires source speech data, and comprises: the earphone receives voice data sent by the terminal equipment as source voice data; the voice data sent by the terminal equipment is the voice data of a second communication user received by the terminal equipment in the process that the first communication user communicates with at least one second communication user through the terminal equipment; further comprising instructions to perform the following operations: the earphone receives first target voice data sent by the earphone containing device and plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the earphone acquires source speech data, and comprises: in the process that the first communication user communicates with at least one second communication user through the terminal equipment, the earphone collects voice data of the first communication user and uses the voice data as source voice data; further comprising instructions to perform the following operations: the earphone receives first target voice data sent by the earphone containing device; and the earphone sends the first target voice data to the terminal equipment so that the terminal equipment sends the first target voice data to the terminal equipment of the second communication user.
Optionally, the headset comprises a first earphone and a second earphone, which are respectively connected with the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the step of the earphone acquiring source voice data comprises: the first earphone collects voice data of the first user as source voice data; the step of the earphone sending the source voice data to the earphone storage device comprises: the first earphone sends the source voice data to the earphone storage device, and the earphone storage device translates the source voice data to generate first target voice data and sends the first target voice data to the second earphone; further comprising instructions for: the second earphone receives the first target voice data sent by the earphone storage device and plays the first target voice data.
Optionally, there is at least one headset, used by at least one first user, and at least one earphone storage device, used by at least one second user; the step of the earphone acquiring source voice data comprises: the earphone collects voice data of the first user as source voice data; wherein the first target voice data is played by the earphone storage device.
Optionally, the method further includes performing the following operations: the earphone receives second target voice data sent by the earphone containing device, and the second target voice data is generated by translating the voice data of a second user collected by the earphone containing device; and the earphone plays the second target voice data.
Optionally, the headset includes two earphones, and further includes instructions for performing the following operations: the earphone receives first target voice data sent by the earphone containing device; the earphone controls the sound channel distribution of the earphone when playing voice data according to the using condition of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the controlling, by the headset according to a usage of the headset, channel allocation of the headset while playing voice data includes: when both earphones are used, the two earphones play the source speech data and the first target speech data, respectively.
Optionally, further comprising instructions for: receiving a switching instruction of a user and swapping the types of voice data played in the two earphones; or receiving a volume adjusting instruction of a user and adjusting the volume of the earphone to which the volume adjusting operation applies; or receiving a category selection instruction of a user, whereupon both earphones play either the first target voice data or the source voice data.
Optionally, the controlling, by the headset according to a usage of the headset, channel allocation of the headset while playing voice data includes: when one of the earphones is used, the used earphone plays a mix of the source speech data and the first target speech data.
The embodiment of the invention also discloses an earphone accommodating device, which comprises a memory and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs are configured to be executed by one or more processors and comprise instructions for: receiving source voice data sent by the earphone; and translating the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; further comprising instructions to perform the following operations: the earphone storage device sends the first target voice data to the earphone, and the earphone plays the first target voice data; or, the earphone accommodating device plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the first communication user collected by the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; further comprising instructions to perform the following operations: and sending the first target voice data to the earphone so that the earphone forwards the first target voice data to the terminal equipment, and sending the first target voice data to the terminal equipment of the second communication user by the terminal equipment.
Optionally, the headset comprises: the second earphone and the first earphone are respectively connected with the earphone containing device; the first earpiece is used by a first user and the second earpiece is used by at least one second user; the source speech data is speech data of a first user collected by the first headset; further comprising instructions to perform the following operations: and the earphone accommodating device sends the first target voice data to a second earphone, and the second earphone plays the first target voice data.
Optionally, there is at least one headset, used by at least one first user, and at least one earphone storage device, used by at least one second user; the source voice data is voice data of the first user collected by the headset; further comprising instructions for: the earphone storage device plays the first target voice data.
Optionally, the method further includes performing the following operations: the earphone receiving device collects voice data of a second user; translating the voice data of the second user to generate second target voice data; and sending the second target voice data to the earphone, and playing the second target voice data by the earphone.
Optionally, the method further includes performing the following operations: and sending the first target voice data to the earphone, and controlling the sound channel distribution of the earphone when the earphone plays the voice data according to the using condition of the earphone by the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the translating the source speech data to generate first target speech data includes: and performing simultaneous interpretation on the source speech data to generate first target speech data.
Optionally, the earphone receiving device is further connected to a server, and the translating the source speech data to generate target speech data includes: generating a translation request according to the source voice data, and sending the translation request to a server so that the server translates the source voice data according to the translation request, generates first target voice data and returns the first target voice data; and receiving first target voice data returned by the server.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the earphone can acquire source voice data and send it to the earphone storage device connected with the earphone, and the earphone storage device translates the source voice data to generate first target voice data; the user can therefore perform translation with only the earphone device, without needing a dedicated translation device.
Drawings
Fig. 1 is a flowchart illustrating steps of an embodiment of a method for interpreting a headphone side according to the present invention;
fig. 2 is a flowchart illustrating steps of an embodiment of a side translation method for an earphone storage device according to the present invention;
FIG. 3a is a schematic diagram of an embodiment of a communication scenario of the present invention;
FIG. 3b is a flowchart illustrating the steps of an alternative embodiment of a translation method of the present invention;
FIG. 4 is a flowchart of the steps of yet another alternate embodiment of a translation method of the present invention;
FIG. 5 is a flowchart of the steps of yet another alternate embodiment of a translation method of the present invention;
FIG. 6a is a diagram of a one-to-one translation scenario embodiment of the present invention;
FIG. 6b is a flowchart illustrating the steps of an alternative embodiment of a translation method of the present invention;
FIG. 7a is a schematic diagram of yet another one-to-one translation scenario embodiment of the present invention;
FIG. 7b is a flowchart illustrating the steps of an alternative embodiment of a translation method of the present invention;
FIG. 8 is a flowchart of the steps of yet another alternate embodiment of a translation method of the present invention;
FIG. 9 is a flowchart of the steps of yet another alternate embodiment of a translation method of the present invention;
fig. 10 is a block diagram illustrating an embodiment of a headphone-side translation apparatus according to the present invention;
fig. 11 is a block diagram illustrating an alternative embodiment of the headphone-side translation apparatus according to the present invention;
fig. 12 is a block diagram illustrating an embodiment of an earphone-storage-device-side translation apparatus according to the present invention;
fig. 13 is a block diagram of an alternative embodiment of the earphone-storage-device-side translation apparatus according to the present invention;
FIG. 14 illustrates a block diagram of an electronic device for translation, according to an example embodiment.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
With the continuous development of computer technology and communication technology, work and daily life have become increasingly electronic, information-based and networked. The terminal device is one of the important carriers of this trend; for example, terminal devices are used for office work and for communication, and are therefore very widely used. In many scenarios a user connects an earphone to the terminal device: in a call scenario, the earphone lets the user hear the other party clearly while keeping both hands free; when watching videos, listening to music, live streaming or making video/voice calls, the earphone provides high-quality audio, protects privacy and reduces disturbance to others. This makes the earphone one of the most common external devices that users already own. Therefore, the embodiment of the invention adds a translation function to the earphone device, so that translation can be performed based on the earphone device without the user needing a dedicated translation device.
The earphone device comprises an earphone and an earphone storage device, and the earphone storage device is connected with the earphone.
The following describes a translation method on the headphone side.
Referring to fig. 1, a flowchart illustrating the steps of an embodiment of a headphone-side translation method according to the present invention is shown, which may specifically include the following steps:
and 102, the earphone acquires source voice data.
And 104, the earphone sends the source voice data to the earphone accommodating device, and the earphone accommodating device translates the source voice data to generate first target voice data.
In this embodiment of the present invention, the source speech data may be collected by an earphone, or may be collected by other devices connected to the earphone, such as a terminal device. The earphone can be provided with a sound collection module such as a microphone array, so that the earphone can collect voice data. Wherein the source speech data may refer to speech data that is not translated.
In the embodiment of the invention, a translation function can be added to the earphone storage device of the earphone equipment, so that after the earphone acquires the source voice data, it can send the source voice data to the earphone storage device; the earphone storage device then translates the source voice data to generate first target voice data. The translation process of the earphone storage device is described later.
After the earphone storage device generates the first target voice data, it can play the first target voice data itself or send the first target voice data to an earphone for playing; alternatively, after the first target voice data is sent to the earphone, the earphone can forward it to the corresponding terminal device. The specific handling can be determined according to the application scenario, and the embodiment of the invention is not limited in this respect.
In summary, in the embodiment of the present invention, an earphone may obtain source voice data and send it to the earphone storage device connected to the earphone, and the earphone storage device translates the source voice data to generate first target voice data; the user can therefore perform translation with only the earphone device, without needing a dedicated translation device.
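A minimal sketch of the headphone-side flow of fig. 1 (steps 102 and 104) is shown below; the mic and case interfaces are hypothetical placeholders for the sound-collection module and the link to the earphone storage device, and are not specified by the disclosure.

    class Headset:
        """Sketch of the headphone side: acquire source voice data, hand it to the case."""

        def __init__(self, mic, storage_device_link):
            self.mic = mic                   # sound-collection module, e.g. a microphone array
            self.case = storage_device_link  # link to the earphone storage device

        def acquire_source_voice(self) -> bytes:
            # Step 102: source voice data may come from the headset's own microphone
            # or be pushed by a connected terminal device; only the mic path is shown.
            return self.mic.record()

        def run_translation_round(self) -> None:
            source = self.acquire_source_voice()
            self.case.send(source)            # step 104: hand off to the storage device
            translated = self.case.receive()  # first target voice data, if returned
            if translated is not None:
                self.play(translated)

        def play(self, voice_data: bytes) -> None:
            ...  # drive the speaker; channel allocation is described elsewhere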
The following describes a method of translation on the headphone housing apparatus side.
Referring to fig. 2, a flowchart illustrating the steps of an embodiment of an earphone-storage-device-side translation method according to the present invention is shown, which may specifically include the following steps:
Step 202, the earphone storage device receives the source voice data sent by the earphone.
Step 204, the earphone storage device translates the source voice data to generate first target voice data.
In the embodiment of the invention, after receiving the source speech data sent by the earphone connected with the earphone receiving device, the earphone receiving device can perform speech recognition on the source speech data and determine the corresponding speech recognition text; and then translating the voice recognition text into a translation text of a target language, and converting the translation text of the target language into corresponding first target voice data.
The target language may be a language used by a user using the earphone, or a language used by another user performing voice communication with the user using the earphone, and may be specifically determined according to an application scenario, which is not limited in this embodiment of the present invention.
After the first target voice data is generated, the earphone storage device can play the first target voice data itself or send it to an earphone; the specific handling can be determined according to the application scenario, and the embodiment of the invention is not limited in this respect.
In summary, in the embodiment of the present invention, after receiving the source voice data sent by an earphone, the earphone storage device may translate the source voice data to generate first target voice data; the user can therefore perform translation with only the earphone device, without needing a dedicated translation device.
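A minimal sketch of the storage-device-side pipeline described above (speech recognition, text translation, then conversion back to speech); the three stage callables are placeholders for whatever recognition, translation and synthesis engines the device or a connected server actually provides.

    from typing import Callable

    def translate_source_voice(source_voice: bytes,
                               recognize: Callable[[bytes], str],
                               translate_text: Callable[[str, str], str],
                               synthesize: Callable[[str], bytes],
                               target_language: str) -> bytes:
        # Speech recognition: source voice data -> speech recognition text.
        recognized_text = recognize(source_voice)
        # Text translation: speech recognition text -> translation text in the target language.
        translated_text = translate_text(recognized_text, target_language)
        # Speech synthesis: translation text -> first target voice data.
        return synthesize(translated_text)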
The following describes a translation method of an embodiment of the present invention in combination with a headset and a headset storage device.
In the embodiment of the invention, the earphone can be connected with the terminal equipment while being connected with the earphone accommodating device. Correspondingly, an application scenario of the embodiment of the present invention may be: referring to fig. 3a, a communication scenario between users in different languages is shown in fig. 3 a. The communication may be a voice call performed by dialing, or a voice/video call performed by instant messaging software, which is not limited in this embodiment of the present invention.
The users participating in the communication may include at least two users, each user participating in the communication uses the earphone device, and earphones of the earphone devices used by the users participating in the communication are respectively connected with the terminal devices used by the users. Any user using the earphone device can be called a first communication user, and other users participating in communication can be called second communication users; the second communication user comprises at least one terminal device, and one terminal device is correspondingly connected with the earphones in one set of earphone devices.
In the embodiment of the present invention, the earphone may be a wired earphone or a wireless earphone, which is not limited in the embodiment of the present invention. The connection mode of the earphone, the earphone containing device and the terminal equipment can be determined according to requirements, if the earphone is a wireless earphone, the earphone and the earphone containing device can be connected through Bluetooth, and the earphone and the terminal equipment can be connected through Bluetooth; the embodiments of the present invention are not limited in this regard.
In an alternative embodiment of the invention, the headset may be a TWS (True Wireless Stereo) headset.
The following describes a translation method according to an embodiment of the present invention, taking an earphone device used by a first communication user as an example.
Referring to FIG. 3b, a flowchart illustrating the steps of an alternative embodiment of a translation method of the present invention is shown.
Step 302, the earphone receives voice data sent by the terminal device as source voice data; the voice data sent by the terminal device is the voice data of a second communication user received by the terminal device while the first communication user communicates with at least one second communication user through the terminal device.
In the embodiment of the present invention, in a scenario where the first communication user communicates with at least one second communication user through the terminal device of the first communication user, while any second communication user is speaking (for convenience, the second communication user who is speaking is referred to below as the target second communication user), the terminal device of the target second communication user may collect the voice data of the target second communication user. Alternatively, the voice data of the target second communication user may be collected by the earphone corresponding to the target second communication user and sent to that user's terminal device. The terminal device of the target second communication user then sends the voice data of the target second communication user to the terminal device of the first communication user.
Correspondingly, the terminal device of the first communication user receives the voice data sent by the terminal device of the target second communication user, and then sends the received voice data of the target second communication user to the earphone used by the first communication user. After the earphone used by the first communication user receives the voice data sent by the terminal device of the first communication user, it can use the received voice data as source voice data.
Step 304, the earphone sends the source voice data to the earphone storage device.
In one example of the present invention, the headset used by the first communication user may determine whether the language used by the first communication user is the same as the language used by the target second communication user based on the source speech data. The earphone used by the first communication user can determine the language used by the target second communication user according to the source speech data (namely the speech data of the target second communication user); the language used by the target second communication user may also be determined according to the language of the target second communication user set by the first communication user, which is not limited in the embodiment of the present invention. The earphone used by the first communication user can determine the language used by the first communication user by acquiring the language of the first communication user set by the first communication user; the language used by the first communication user may also be determined according to the system language of the terminal device corresponding to the first communication user, which is not limited in the embodiment of the present invention.
The first communication user may set the language of the first communication user and the language of the target second communication user in the earphone storage device, or may set the language of the first communication user and the language of the second communication user in an application program corresponding to the earphone device of the terminal device, which is not limited in this embodiment of the present invention. If the language used by the first communication user is the same as the language used by the target second communication user, the source speech data does not need to be translated, and the source speech data can be directly played. And if the language used by the first communication user is different from the language used by the target second communication user, sending the source speech data to the corresponding earphone containing device, and translating the source speech data by the earphone containing device.
In an example of the present invention, the headset of the first communication user may also directly send the source audio data to the headset storing device without determining whether the language used by the first communication user is the same as the language used by the target second communication user; the earphone receiving device judges whether the language used by the first communication user is the same as the language used by the target second communication user.
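The language check discussed above, whether performed by the earphone or by the earphone storage device, can be sketched as follows; the language-identification callable and the parameter names are assumptions, while the fallback to the terminal's system language follows the behaviour described in the text.

    from typing import Callable, Optional

    def needs_translation(source_voice: bytes,
                          user_language_setting: Optional[str],
                          system_language: str,
                          peer_language_setting: Optional[str],
                          identify_language: Callable[[bytes], str]) -> bool:
        # Language of the first communication user: explicit setting if present,
        # otherwise the system language of the corresponding terminal device.
        own_language = user_language_setting or system_language
        # Language of the target second communication user: explicit setting if present,
        # otherwise identified from the source voice data itself.
        peer_language = peer_language_setting or identify_language(source_voice)
        # Translation is needed only when the two languages differ.
        return own_language != peer_language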
Step 306, the earphone storage device translates the source voice data to generate first target voice data.
Step 308, the headset receiving device sends the first target voice data to the headset.
In an example of the present invention, when the headset has determined that the language used by the first communication user is different from the language used by the target second communication user, the earphone storage device may perform speech recognition on the acquired source voice data to determine the corresponding speech recognition text, then translate the speech recognition text into a translation text in the language used by the first communication user, convert the translation text into the corresponding first target voice data, and return the first target voice data to the earphone of the first communication user. When the first communication user has set his or her language in the application program corresponding to the earphone device on the terminal device, the earphone can send that language setting to the earphone storage device. When the user has not set a language, the earphone can send the system language of the terminal device corresponding to the first communication user to the earphone storage device.
In another example of the present invention, when the headset does not determine whether the language used by the first communication user is the same as the language used by the target second communication user; after receiving the source audio data, the earphone storage device can judge whether the language used by the first communication user is the same as the language used by the target second communication user according to the source audio data. When the language used by the first communication user is the same as the language used by the target second communication user, the source speech data can be returned to the headset of the first communication user, and the headset of the first communication user can directly play the source speech data. When it is determined that the language used by the first communication user is different from the language used by the target second communication user, the earphone receiving device may execute step 306 to generate the first target voice data.
Step 310, the earphone plays the first target voice data.
The headset of the first communication user may then play the first target voice data. For each user participating in the communication, that user's earphone device can translate the voice data of the other participants, generate voice data in the language used by that user and play it; cross-language communication between users speaking different languages is thus achieved.
Of course, in an optional embodiment of the present invention, after the earphone storage device translates the source voice data to generate the first target voice data, the earphone storage device may also play the first target voice data itself; the first target voice data then does not need to be returned to the earphone, which improves playback efficiency and also allows the translated voice data to be played out loud.
In summary, in the embodiment of the present invention, while the first communication user communicates with at least one second communication user through the terminal device, the headset of the first communication user may use the voice data of the second communication user sent by the terminal device of the first communication user as source voice data and send it to the earphone storage device; the earphone storage device translates the source voice data, generates first target voice data and returns it to the headset of the first communication user, which then plays the first target voice data, enabling cross-language communication between users speaking different languages. In addition, compared with the prior art, in which the call has to be switched to an external speaker so that a dedicated translation device can capture and translate the audio, the embodiment of the invention can complete the translation during a call without playing the audio out loud; this not only protects privacy but can also improve translation accuracy.
Referring to the communication scenario of fig. 3a, another translation method according to an embodiment of the present invention is as follows.
The following describes the translation method according to the embodiment of the present invention by taking the earphone used by the first communication user as an example.
Referring to FIG. 4, a flowchart illustrating the steps of yet another alternate embodiment of the translation method of the present invention is shown.
Step 402, the earphone collects voice data of the first communication user as source voice data.
In the embodiment of the invention, under the scene that the first communication user communicates with at least one second communication user through the terminal equipment of the first communication user, the earphone of the first communication user can collect the voice data of the first communication user in the speaking process of the first communication user; and using the collected voice data of the first communication user as source voice data.
Step 404, the headset sends the source audio data to the headset storage device.
In an example of the present invention, when the language used by the first communication user is the same as the language used by each second communication user, the headset may directly transmit the source audio data to the terminal device of the first communication user; and sending the source voice data to the terminal equipment of each second communication user by the terminal equipment of the first communication user, sending the source voice data to the earphones used by each second communication user by the terminal equipment of each second communication user, and playing the source voice data by the earphones used by each second communication user. The headset may perform step 404 when the language of the first communication user is different from the language used by the at least one second communication user. The manner of determining whether the language used by the first communication user is the same as the language used by each second communication user is similar to the above, and the description thereof is omitted here.
In another example of the present invention, the headset of the first communication user may directly send the source audio data to the headset storing device without determining whether the language used by the first communication user is the same as the language used by each second communication user; the earphone receiving device judges whether the language used by the first communication user is the same as the language used by the second communication user.
Step 406, the earphone storage device translates the source voice data to generate first target voice data.
Step 408, the earphone storage device returns the first target voice data to the earphone.
In an example of the present invention, when the earphone of the first communication user determines that the language used by the first communication user is different from the language used by at least one second communication user, the earphone storage device may perform speech recognition on the source voice data to determine a corresponding speech recognition text; it then translates the speech recognition text into a translation text in the language used by the second communication user, converts the translation text into corresponding first target voice data, and returns the first target voice data to the earphone.
In another example of the present invention, when the earphone of the first communication user does not itself determine whether the language used by the first communication user is the same as the language used by each second communication user, the earphone storage device may, after receiving the source voice data, determine from it whether the two languages are the same. When the language used by the first communication user is the same as the language used by each second communication user, the source voice data can be returned to the earphone of the first communication user, and the earphone of the first communication user can directly send the source voice data to the terminal device of the first communication user. When the language used by the first communication user is different from the language used by at least one second communication user, the earphone storage device may perform step 406 to generate the first target voice data and return it to the earphone.
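For illustration only, the following Python sketch models the routing decision described above, i.e. whether the source voice data can be passed through unchanged or must be handed to the earphone storage device for translation; the function names (detect_language, route_source_voice) and the trivial language detector are hypothetical placeholders, not part of the disclosed implementation.

def detect_language(voice_data: bytes) -> str:
    """Placeholder for language identification on raw voice data."""
    return "zh"  # always reports Mandarin, purely for the example

def route_source_voice(source_voice: bytes,
                       first_user_lang: str,
                       second_user_langs: list[str]) -> str:
    """Return "passthrough" when every second communication user shares the
    first user's language, otherwise "translate" (i.e. perform step 406)."""
    if all(lang == first_user_lang for lang in second_user_langs):
        return "passthrough"   # forward the source voice data unchanged
    return "translate"         # hand the data to the earphone storage device

if __name__ == "__main__":
    src = b"\x00\x01"  # stand-in for captured voice frames
    print(route_source_voice(src, detect_language(src), ["zh", "en"]))
    # prints "translate": at least one second user speaks another language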
When a plurality of users participate in the communication and a plurality of second communication users use languages different from the language used by the first communication user, the earphone storage device may translate the source voice data separately to generate first target voice data corresponding to the language used by each such second communication user, and then send each piece of first target voice data to the earphone of the first communication user.
Step 410, the earphone sends the first target voice data to the terminal device, so that the terminal device sends the first target voice data to the terminal device of the second communication user.
After the earphone used by the first communication user obtains the first target voice data, the earphone can send the first target voice data to the terminal device of the first communication user. And then, the terminal equipment of the first communication user sends the first target voice data to the terminal equipment of the second communication user. And then the terminal equipment of the second communication user can send the first target voice data to the earphone of the second communication user, and the earphone of the second communication user plays the first target voice data, thereby realizing barrier-free communication in the communication process.
When the earphone storage device of the first communication user translates the source voice data into a plurality of pieces of first target voice data, the terminal device of the first communication user may send the pieces of first target voice data to the terminal device of each second communication user in turn; the terminal device of each second communication user then sends each piece of first target voice data in turn to the earphone used by the corresponding second communication user, and that earphone plays the pieces of first target voice data in sequence. Each second communication user participating in the communication can thus pick out the first target voice data he or she understands from the pieces of first target voice data played in sequence. Of course, the earphone used by each second communication user may instead select, from the received pieces of first target voice data, the first target voice data matching the language used by its second communication user and play only that.
Certainly, when the earphone storage device of the first communication user translates the source voice data into a plurality of pieces of first target voice data, the terminal device of the first communication user may also send each piece of first target voice data only to the terminal device of the corresponding second communication user; the terminal device of that second communication user then sends the received first target voice data to the earphone used by the corresponding second communication user, and that earphone plays the received first target voice data.
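The fan-out to multiple target languages described above, together with the matching-language selection made at each second communication user's earphone, could be organized as in the following sketch; TargetVoice, translate_stub and the other names are assumptions made for illustration only.

from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class TargetVoice:
    language: str   # language the clip was translated into
    audio: bytes    # synthesized first target voice data

def translate_stub(source_voice: bytes, target_lang: str) -> bytes:
    """Placeholder for recognize -> translate -> synthesize."""
    return source_voice + target_lang.encode()

def fan_out(source_voice: bytes, source_lang: str,
            listener_langs: Set[str]) -> List[TargetVoice]:
    """Produce one first-target-voice clip per language that differs
    from the speaker's language."""
    return [TargetVoice(lang, translate_stub(source_voice, lang))
            for lang in listener_langs if lang != source_lang]

def pick_for_listener(clips: List[TargetVoice], listener_lang: str) -> Optional[bytes]:
    """Each second user's earphone keeps only the clip in its own language."""
    for clip in clips:
        if clip.language == listener_lang:
            return clip.audio
    return None  # same language as the speaker: play the source voice data instead

if __name__ == "__main__":
    clips = fan_out(b"\x00", "zh", {"en", "ja", "zh"})
    print(sorted(c.language for c in clips))           # ['en', 'ja']
    print(pick_for_listener(clips, "ja") is not None)  # True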
In addition, when one or more of the plurality of second communication users use the same language as the first communication user, the earphone of the first communication user may also send the source voice data to the terminal device of the first communication user, which then sends the source voice data to the terminal device of each second communication user whose language is the same as that of the first communication user.
In summary, in the embodiment of the present invention, the earphone may collect the voice data of the first communication user as source voice data and send it to the earphone storage device; the earphone storage device translates the source voice data to generate first target voice data and returns it to the earphone, and the earphone sends the first target voice data to the terminal device; the terminal device of the first communication user sends the first target voice data to the terminal device of the second communication user, which sends it to the earphone of the second communication user, and the earphone of the second communication user plays the first target voice data, thereby realizing cross-language communication between users who speak different languages during a call. In addition, compared with the prior art, in which the speakerphone has to be turned on during the call so that a dedicated translation device can perform the translation, the embodiment of the present invention can complete the translation without turning on the speakerphone during the call; this not only protects privacy but can also improve translation accuracy.
In one embodiment of the present invention, only some of the users participating in the communication may use the earphone device; in that case, cross-language communication between users who speak different languages can be realized by combining steps 302-310 above with steps 402-410 above. Any user who uses the earphone device may be referred to as a first communication user, and the other users participating in the communication may be referred to as second communication users.
Referring to FIG. 5, a flowchart illustrating the steps of yet another alternate embodiment of the translation method of the present invention is shown.
Step 502, the earphone receives voice data sent by the terminal equipment as source voice data; and the source voice data is the voice data of the second communication user received by the terminal equipment in the process that the first communication user communicates with at least one second communication user through the terminal equipment.
Step 504, the earphone sends the source voice data to the earphone storage device.
Step 506, the earphone storage device translates the source voice data to generate first target voice data.
Step 508, the earphone storage device sends the first target voice data to the earphone.
Step 510, the earphone plays the first target voice data.
Step 512, the earphone collects the voice data of the first communication user as source voice data.
Step 514, the earphone sends the source voice data to the earphone storage device.
Step 516, the earphone storage device translates the source voice data to generate first target voice data.
Step 518, the earphone storage device returns the first target voice data to the earphone.
Step 520, the earphone sends the first target voice data to the terminal device, so that the terminal device sends the first target voice data to the terminal device of the second communication user.
Steps 502-510 are similar to steps 302-310, and steps 512-520 are similar to steps 402-410; they are not repeated here.
In addition, the embodiment of the present invention does not limit whether steps 502-510 or steps 512-520 are performed first.
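The following sketch merely illustrates that the inbound leg (steps 502-510) and the outbound leg (steps 512-520) are independent and may be executed in either order or concurrently; the thread-based arrangement and all names are illustrative assumptions, not the disclosed design.

import threading

def translate(voice: bytes, tag: str) -> bytes:
    """Stand-in for the earphone storage device's translation step."""
    return voice + tag.encode()

def inbound_leg(remote_voice: bytes) -> bytes:
    # steps 502-510: remote party's voice -> translate -> play locally
    return translate(remote_voice, "->local")

def outbound_leg(local_voice: bytes) -> bytes:
    # steps 512-520: local user's voice -> translate -> send to remote terminal
    return translate(local_voice, "->remote")

if __name__ == "__main__":
    results = {}
    t1 = threading.Thread(target=lambda: results.update(rx=inbound_leg(b"\x01")))
    t2 = threading.Thread(target=lambda: results.update(tx=outbound_leg(b"\x02")))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(sorted(results))  # ['rx', 'tx'] -- the order of execution does not matter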
In summary, in the embodiment of the present invention, while the first communication user communicates with the second communication user through the terminal device, the earphone may take the voice data of the second communication user sent by the terminal device of the first communication user as source voice data and send it to the earphone storage device; the earphone storage device translates the source voice data to generate first target voice data and returns it to the earphone for playing. The earphone may also collect the voice data of the first communication user as source voice data and send it to the earphone storage device; the earphone storage device translates the source voice data to generate first target voice data and returns it to the earphone, which sends the first target voice data to the terminal device of the first communication user, and that terminal device sends it to the terminal device of the second communication user. The first target voice data may then be played by the terminal device of at least one second communication user; or the terminal device of the second communication user may send the first target voice data to the earphone of the second communication user, which plays it; or the earphone of each second communication user may forward the first target voice data to its corresponding earphone storage device, which plays it. The embodiment of the present invention is not limited in this regard. Thus, even if some of the users participating in the communication do not use the earphone device, cross-language communication between users who speak different languages can still be realized during the communication.
Yet another scenario of the embodiment of the present invention is multi-person translation (which includes one-to-one translation); for simplicity, FIG. 6a shows only the one-to-one translation scenario. There may be a plurality of earphones: one of them may be called a first earphone and the others second earphones, where there is one first earphone and at least one second earphone. That is, the earphones may include a first earphone and at least one second earphone, each of which is connected to the earphone storage device. The first earphone is used by a first user, the second earphones are used by at least one second user, and each second user may use one second earphone; translation during face-to-face voice communication can then be realized through the first earphone and the second earphones.
When there is only one second earphone, the first earphone and the second earphone may belong to the same pair of earphones or to different pairs of earphones.
The first earphone and the second earphone may be connected to the earphone storage device in various manners. For example, a touch operation may be performed in the application program corresponding to the earphone device on the terminal device to connect the first earphone and the second earphone to the earphone storage device; the touch operation may also be performed on the earphone storage device itself; or a voice connection instruction may be issued to establish the connections. The embodiment of the present invention is not limited in this regard.
Referring to FIG. 6b, a flowchart illustrating the steps of yet another alternate embodiment of the translation method of the present invention is shown.
Step 602, the first earphone collects voice data of the first user as source voice data.
Step 604, the first earphone sends the source voice data to the earphone storage device.
Step 606, the earphone storage device translates the source voice data to generate first target voice data.
Step 608, the earphone storage device sends the first target voice data to a second earphone.
Step 610, the second earphone plays the first target voice data.
In the embodiment of the present invention, in a scenario of face-to-face voice communication between a first user and at least one second user, the first earphone may collect the voice data of the first user while the first user is speaking and use that voice data as source voice data.
The first earphone may then send the source voice data to the earphone storage device. After receiving the source voice data, the earphone storage device can perform speech recognition on it and determine a corresponding speech recognition text; it then translates the speech recognition text into a translation text in the language of the second user and converts the translation text into corresponding first target voice data. The first target voice data is then sent to the second earphone used by the second user, and the second earphone plays it.
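As a hedged illustration of the recognize-translate-synthesize flow just described, the sketch below chains three placeholder stages; none of the stage functions corresponds to a real speech-recognition or translation API.

def speech_to_text(voice: bytes, lang: str) -> str:
    """Placeholder automatic speech recognition."""
    return f"<text:{lang}:{len(voice)} bytes>"

def translate_text(text: str, target_lang: str) -> str:
    """Placeholder machine translation of the recognition text."""
    return f"<{target_lang}>{text}"

def text_to_speech(text: str) -> bytes:
    """Placeholder speech synthesis producing first target voice data."""
    return text.encode("utf-8")

def storage_device_translate(source_voice: bytes,
                             source_lang: str,
                             target_lang: str) -> bytes:
    recognized = speech_to_text(source_voice, source_lang)
    translated = translate_text(recognized, target_lang)
    return text_to_speech(translated)

if __name__ == "__main__":
    first_target_voice = storage_device_translate(b"\x00\x01\x02", "zh", "en")
    print(first_target_voice)  # bytes to be sent to the second earphone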
When there are a plurality of second users, the earphone storage device may translate the source voice data into first target voice data matching the language used by each second user. The earphone storage device may then send the pieces of first target voice data in turn to the second earphone of each second user, and each second earphone plays them in sequence; each second user participating in the conversation can thus pick out the first target voice data he or she understands from the pieces of first target voice data played in sequence. Of course, the second earphone used by each second user may instead select, from the received pieces of first target voice data, the first target voice data matching the language used by its second user and play only that.
Alternatively, when there are a plurality of second users, the earphone storage device may translate the source voice data into first target voice data matching the languages used by the plurality of second users and send each piece of first target voice data only to the second earphone of the corresponding second user; each second earphone then plays the first target voice data it receives.
In addition, when one of the second users uses the same language as the first user, the earphone storage device may also send the source voice data to the second earphone of that second user, which may play the source voice data directly.
Step 612, the second earphone collects the voice data of the second user as source voice data.
Step 614, the second earphone sends the source voice data to the earphone storage device.
Step 616, the earphone storage device translates the source voice data to generate first target voice data.
Step 618, the earphone storage device sends the first target voice data to the first earphone.
Step 620, the first earphone plays the first target voice data.
Correspondingly, in a scenario of face-to-face voice communication between the first user and at least one second user, while any second user is speaking (for convenience of the following description, the speaking second user is referred to as the target second user, and the second earphone corresponding to the target second user is referred to as the target second earphone), the target second earphone may collect the voice data of the target second user and use it as source voice data.
The target second earphone may then send the source voice data to the earphone storage device. After receiving the source voice data, the earphone storage device can perform speech recognition on it and determine a corresponding speech recognition text; it then translates the speech recognition text into a translation text in the language of the first user and converts the translation text into corresponding first target voice data. The first target voice data is then sent to the first earphone used by the first user, and the first earphone plays it.
When the second users comprise a plurality of users, the earphone containing device can also translate the voice recognition text into translation texts of languages corresponding to other second users, and convert the translation texts into corresponding first target voice data; and then the first target voice data is sent to other second earphones used by other corresponding second users, and the other second earphones play the corresponding first target voice data.
Of course, when the earphone accommodating apparatus generates a plurality of first target voice data, the first target voice data may be sequentially transmitted to other second earphones of each other second user and the first earphone of the first user; and sequentially playing the first target voice data by other second earphones of other second users and the first earphone of the first user. Each of the other second users and the first user who are in face-to-face communication with each other can acquire the understandable first target speech data from the plurality of first target speech data that are sequentially played. Of course, other second earphones used by each other second user may select the first target voice data matching the language used by the corresponding second user from the received plurality of first target voice data to play. And selecting the first target voice data matched with the language used by the first user from the received plurality of first target voice data by the first earphone of the first user for playing.
In addition, when the earphone storage device translates the source speech data into a plurality of first target speech data, the earphone storage device can also transmit each first target speech data to the corresponding second earphone/first earphone respectively; the second/first earpiece may then receive and play the first target speech data.
In addition, when another second user uses the same language as the target second user, the earphone storage device may also send the source voice data to the second earphone of that other second user, which may play the source voice data directly.
The embodiment of the present invention does not limit whether steps 602-610 or steps 612-620 are executed first.
In summary, in the embodiment of the present invention, in a scenario of face-to-face voice communication between a first user and at least one second user, the first earphone collects the voice data of the first user while the first user is speaking and sends it as source voice data to the earphone storage device; the earphone storage device translates the source voice data to generate first target voice data and sends it to the second earphone, which plays it. Correspondingly, while the second user is speaking, the second earphone may collect the voice data of the second user and send it as source voice data to the earphone storage device; the earphone storage device translates the source voice data to generate first target voice data and sends it to the first earphone, which plays it. The embodiment of the present invention can therefore realize multi-person translation based on the earphones, and the parties no longer need to keep passing a translation device back and forth to read the translation result intended for the other party; this improves both translation efficiency and user experience.
In one embodiment of the present invention, the earphone is used by a first user, the earphone storage device is used by a second user, and translation is realized through the earphone and the earphone storage device; FIG. 7a shows only the one-to-one translation scenario. There may be at least one earphone, used by at least one first user, and at least one earphone storage device, used by at least one second user; each first user may use one earphone and each second user may use one earphone storage device.
Referring to FIG. 7b, a flowchart illustrating the steps of yet another translation method embodiment of the present invention is shown.
Step 702, the earphone collects voice data of the first user as source voice data.
Step 704, the earphone sends the source voice data to the earphone storage device.
Step 706, the earphone storage device translates the source voice data to generate first target voice data.
Step 708, the earphone storage device plays the first target voice data.
In the embodiment of the present invention, in a scenario where a plurality of users (including at least one first user and at least one second user) communicate face to face, while any first user is speaking (for convenience of the following description, the speaking first user is referred to as the target first user, and the earphone used by the target first user is referred to as the target earphone), the target earphone may collect the voice data of the target first user and use it as source voice data. The target earphone may then send the collected source voice data to each earphone storage device.
Each earphone storage device can perform speech recognition on the source voice data and determine a corresponding speech recognition text; it then translates the speech recognition text into a translation text in the language used by its second user, converts the translation text into corresponding first target voice data, and plays the first target voice data.
When one of the second users uses the same language as the target first user, the earphone storage device of that second user may directly play the source voice data.
When the first users comprise a plurality of users, any one earphone accommodating device can also translate source voice data into a plurality of first target voice data corresponding to the languages used by other first users; for convenience of the following description, the headset storing device may be referred to as a target headset storing device. At this time, the target earphone accommodating device may sequentially transmit the first target voice data to other earphones used by each other first user; and sequentially playing the first target voice data by other earphones of other first users. Each of the other first users who are in face-to-face communication can acquire the understandable first target speech data from the plurality of first target speech data that are sequentially played. Of course, other earphones used by each other first user may select the first target voice data matching the language used by the corresponding first user from the received plurality of first target voice data to play.
Certainly, when the first user includes a plurality of users, the target earphone storage device translates the source speech data into a plurality of first target speech data corresponding to the language used by each of the other first users, and then the target earphone storage device can also send each of the first target speech data to the other earphones corresponding to the other first users; the earphones of the other first users can receive and play the first target voice data.
When another first user uses the same language as the target first user, the target earphone storage device may also send the source voice data to the earphone of that other first user, which may play the source voice data directly.
In one example of the present invention, the earphone storage device may be provided with a display screen. While playing the first target voice data, the earphone storage device can synchronously display the translated text of the source voice data, i.e. the text corresponding to the first target voice data, on the display screen; this makes it easier for the second user to understand the first target voice data and further improves the user experience.
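A minimal sketch of playing the first target voice data while showing the corresponding translated text on the display screen is given below, assuming hypothetical play_audio and show_text device calls; it is not the disclosed display logic.

import threading

def play_audio(audio: bytes) -> None:
    print(f"[speaker] playing {len(audio)} bytes")

def show_text(text: str) -> None:
    print(f"[display] {text}")

def play_with_caption(first_target_voice: bytes, translated_text: str) -> None:
    # Start playback and display together so the text stays in sync with the audio.
    audio_thread = threading.Thread(target=play_audio, args=(first_target_voice,))
    audio_thread.start()
    show_text(translated_text)
    audio_thread.join()

if __name__ == "__main__":
    play_with_caption(b"\x00" * 16, "Hello, nice to meet you.")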
In addition, the earphone storage device also has a storage function and can hold the earphones. When the earphones are wireless earphones, the earphone storage device can also charge them.
Step 710, the headset receiving device collects voice data of a second user.
Step 712, the earphone receiving device translates the voice data of the second user to generate second target voice data.
Step 714, the earphone receiving device sends the second target voice data to the earphone.
Step 716, the earphone receives the second target voice data sent by the earphone storage device.
Step 718, the earphone plays the second target voice data.
In a scene of face-to-face voice communication between a first user and at least one second user, in a process of speaking by any one of the second users (for convenience of a subsequent description of an embodiment of the present invention, the speaking second user may be referred to as a target second user, and an earphone storage device corresponding to the target second user may be referred to as a target earphone storage device), the target earphone storage device may collect voice data of the target second user, and use the voice data of the target second user as source voice data.
The target earphone storage device can then perform speech recognition on the collected source voice data and determine a corresponding speech recognition text; it then translates the speech recognition text into a translation text in the language of the first user and converts the translation text into corresponding second target voice data. The second target voice data is then sent to the earphone used by the first user, and the earphone plays it; the first user using the earphone can thus hear the translation result of the second user's source voice data.
When the first users include a plurality of users, the target earphone storage device may translate the source speech data into second target speech data corresponding to a language used by each first user. Then, the second target voice data can be sequentially sent to the earphones used by each first user; and sequentially playing the second target voice data by the earphones of the first users. Each first user who communicates face to face can acquire the understandable second target voice data from the plurality of second target voice data played in sequence. Of course, the earphone used by each first user may select the second target voice data matched with the language used by the corresponding first user from the received plurality of second target voice data to play.
Certainly, when the first user includes a plurality of users, the target earphone storage device translates the source speech data into a plurality of second target speech data corresponding to the language used by each first user, and then the target earphone storage device can also send each second target speech data to the corresponding earphone of the first user; the earphone of the first user can receive and play the second target voice data.
When a user in the same language as the target second user exists in the plurality of first users, the target earphone storage device of the target second user may further transmit the source audio data to the earphone of the first user in the same language as the target second user. The headset of the first user, in the same language as the target second user, may play the source speech data directly.
When the second users comprise a plurality of users, the target earphone storage device can also translate the source voice data into second target voice data corresponding to the languages used by other second users; and then, sending each second target voice data to the corresponding earphone accommodating devices of other second users, and playing the corresponding target voice data by the earphone accommodating devices of the other second users.
When the target earphone storage device generates second target voice data corresponding to the languages used by a plurality of other second users, the second target voice data can be sequentially sent to other earphone storage devices of the other second users; the target voice data can be played in sequence by other earphone receiving devices of other second users. Further, each of the other second users who are communicating with each other can acquire target voice data that can be understood from the plurality of target voice data that are sequentially played. Of course, the earphone storage device used by each of the other second users may select and play second target voice data matching the language used by the corresponding second user from among the plurality of received second target voice data.
Of course, when a plurality of second users are included, the target earphone storage device may also directly send the source voice data to the earphone storage devices of the other second users; each of those earphone storage devices then translates the source voice data into second target voice data corresponding to the language used by its own second user and plays the second target voice data.
In addition, when there is a user in the same language as the target second user among the other second users, the target earphone storage device of the target second user may further transmit the source speech data to the earphone storage devices of the other second users in the same language as the target second user. The earphone storage device of the other second user with the same language as the target second user can directly play the source voice data.
The embodiment of the present invention does not limit whether to perform the steps 702 to 708 first or to perform the steps 710 to 718 first.
In summary, in the embodiment of the present invention, in a scenario of face-to-face voice communication between a first user and at least one second user, the earphone of the first user collects the voice data of the first user while the first user is speaking and sends it as source voice data to the earphone storage device; the earphone storage device translates the source voice data to generate first target voice data and plays it, so the second user who uses the earphone storage device can hear the translation result of the first user's source voice data. Correspondingly, while a second user is speaking, the earphone storage device may collect and translate the voice data of the second user to generate second target voice data, then send the second target voice data to the earphone used by the first user, which plays it; the first user who uses the earphone can thus hear the translation result of the second user's source voice data. The embodiment of the present invention can therefore realize translation quickly based on the earphone and the earphone storage device connected to it, and the communicating parties no longer need to keep passing a translation device back and forth to read the translation result intended for the other party, which improves both translation efficiency and user experience. In addition, compared with one-to-one translation performed by sharing the two earphones of a single pair, the embodiment of the present invention also avoids hygiene concerns during communication, further improving the user experience.
In the embodiment of the invention, the earphone can carry out simultaneous interpretation so as to translate the source voice data in real time and improve the user experience.
Referring to FIG. 8, a flowchart illustrating the steps of yet another translation method embodiment of the present invention is shown.
Step 802, the earphone acquires source speech data.
And step 804, the earphone sends the source voice data to the earphone accommodating device.
Step 806, the headset storage device performs simultaneous interpretation on the source speech data to generate first target speech data.
In the embodiment of the invention, in both the communication scene and the multi-user translation scene, after the earphone acquires source voice data, the source voice data can be sent to the earphone accommodating device for simultaneous interpretation, and first target voice data is generated; so as to improve the translation efficiency and communication fluency.
Of course, in other scenarios, the earphone device may also perform simultaneous interpretation. For example, in scenarios such as an online conference, an online interview, or watching foreign-media speeches, news or videos, the earphone sends the source voice data received from the terminal device to the earphone storage device; the earphone storage device can simultaneously interpret that source voice data, generate first target voice data, and return the first target voice data to the earphone for playing. For another example, when the user attends an international conference or forum, the earphone storage device itself may collect source voice data, perform simultaneous interpretation on it, generate the first target voice data, and return it to the earphone for playing.
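One plausible way to realize simultaneous interpretation is to translate short audio segments as they arrive rather than whole utterances, as in the following sketch; translate_segment is a placeholder for the per-segment recognize/translate/synthesize step and is not a disclosed interface.

from typing import Iterable, Iterator

def translate_segment(segment: bytes, target_lang: str) -> bytes:
    """Placeholder per-segment recognize/translate/synthesize step."""
    return segment + target_lang.encode()

def simultaneous_interpret(segments: Iterable[bytes],
                           target_lang: str) -> Iterator[bytes]:
    """Yield first target voice data segment by segment, so playback on the
    earphone can start while the speaker is still talking."""
    for segment in segments:
        yield translate_segment(segment, target_lang)

if __name__ == "__main__":
    incoming = (bytes([i]) * 4 for i in range(3))   # stand-in audio frames
    for out in simultaneous_interpret(incoming, "en"):
        print(len(out))  # each translated segment is emitted without waiting for the rest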
In summary, in the embodiment of the present invention, after the earphone acquires source voice data, it may send the source voice data to the earphone storage device, which simultaneously interprets it to generate first target voice data; the source voice data can thus be translated in real time, improving translation efficiency. Because the source voice data acquired by the earphone is either sent by the terminal device or collected by the earphone itself, its noise level is low, which can improve the accuracy of simultaneous interpretation and further improve the user experience. In addition, during simultaneous interpretation the earphone is connected to the earphone storage device or the terminal device; compared with the prior art, the embodiment of the present invention can therefore realize simultaneous interpretation over a longer distance.
In an optional embodiment of the present invention, the earphone storage device may further have a networking function, and the earphone storage device may translate the source speech data through the server to obtain the first target speech data.
Referring to FIG. 9, a flowchart illustrating the steps of one translation method embodiment of the present invention is shown.
Step 902, the earphone acquires source speech data.
And 904, sending the source voice data to the earphone accommodating device by the earphone.
Step 906, the earphone accommodating device generates a translation request according to the source speech data, and sends the translation request to a server, so that the server translates the source speech data according to the translation request, generates first target speech data and returns the first target speech data.
Step 908, the earphone receiving device receives the first target voice data returned by the server.
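The round trip between the earphone storage device and the server could be serialized as in the sketch below; the JSON field names, the base64 encoding and the locally faked response are assumptions for illustration, and no real service endpoint or API is implied.

import base64
import json

def build_translation_request(source_voice: bytes,
                              source_lang: str,
                              target_lang: str) -> str:
    """Package the source voice data into a translation request payload."""
    return json.dumps({
        "source_lang": source_lang,
        "target_lang": target_lang,
        "audio": base64.b64encode(source_voice).decode("ascii"),
    })

def parse_translation_response(payload: str) -> bytes:
    """Extract the first target voice data returned by the server."""
    return base64.b64decode(json.loads(payload)["translated_audio"])

if __name__ == "__main__":
    request = build_translation_request(b"\x00\x01", "zh", "en")
    # A real deployment would send `request` to the server here; for the
    # sketch we fake the round trip locally.
    fake_response = json.dumps(
        {"translated_audio": base64.b64encode(b"\x10\x11").decode("ascii")})
    print(parse_translation_response(fake_response))  # b'\x10\x11'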
The earphones may include two earphones forming a pair. Different users have different habits in using the earphones: for example, some users are accustomed to using only one earphone of the pair, while others are accustomed to using both earphones at the same time. The same user may also use the earphones differently in different scenarios; for example, when running, the user may be accustomed to wearing both earphones, while during a voice call the user may be accustomed to wearing only one. Correspondingly, one way for the earphones to play voice data may be as follows: the earphones control the channel allocation used when playing voice data according to how the earphones are being used, where the voice data includes the source voice data and/or the first target voice data. The channels of the earphones can thus be allocated reasonably according to how the user is using them, improving the user experience.
In an example of the present invention, one way for the earphones to control channel allocation when playing voice data according to how they are being used may be: when both earphones are in use, the two earphones play the source voice data and the first target voice data respectively, and the source voice data and the first target voice data in the two earphones may be played synchronously. For example, the source voice data may be played in the earphone worn in the left ear and the first target voice data in the earphone worn in the right ear; alternatively, the source voice data may be played in the earphone worn in the right ear and the first target voice data in the earphone worn in the left ear. The embodiment of the present invention is not limited in this regard.
Different users prefer different ears for their native language: for example, some users are accustomed to listening to their native language with the right ear and the foreign language with the left ear, while others prefer the opposite. To better meet such personalized needs, the earphones of the embodiment of the present invention allow the user to switch which type of voice data is played in each of the two earphones. The user may perform a switching operation on the earphones or the earphone storage device, or in the application program corresponding to the earphone device on the terminal device; correspondingly, the earphones receive the user's switching instruction and swap the types of voice data played in the two earphones. For example, if the source voice data is currently played in the earphone worn in the right ear and the first target voice data in the earphone worn in the left ear, then, after the switching instruction is received, the source voice data may be played in the earphone worn in the left ear and the first target voice data in the earphone worn in the right ear. The switching operation on the earphones or the earphone storage device may be a touch operation, a voice instruction issued by the user, a head movement of the user while wearing the earphones, or a movement of the user relative to the earphone storage device; the embodiment of the present invention is not limited in this regard.
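The channel assignment and the switching instruction described above might be represented as in the following sketch, where ChannelPlan and the default left/right assignment are purely illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ChannelPlan:
    left: str   # "source" or "target"
    right: str  # "source" or "target"

def default_plan() -> ChannelPlan:
    # e.g. source voice data in the left earphone, translation in the right one
    return ChannelPlan(left="source", right="target")

def apply_switch(plan: ChannelPlan) -> ChannelPlan:
    """Handle the user's switching instruction by swapping the two sides."""
    return ChannelPlan(left=plan.right, right=plan.left)

if __name__ == "__main__":
    plan = default_plan()
    print(plan)                 # ChannelPlan(left='source', right='target')
    print(apply_switch(plan))   # ChannelPlan(left='target', right='source')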
In an embodiment of the present invention, the user may also perform a volume adjustment operation on the earphones or the earphone storage device, or in the terminal device, to adjust the playing volume of the earphones. Correspondingly, the earphones can receive the user's volume adjustment instruction and adjust the volume of the earphone to which the instruction corresponds, so that the volume of each earphone can be adjusted separately. The volume adjustment operation on the earphones or the earphone storage device may be a touch operation, a voice instruction issued by the user, a head movement of the user while wearing the earphones, or a movement of the user relative to the earphone storage device; the embodiment of the present invention is not limited in this regard.
In an embodiment of the present invention, the earphones may also receive a volume adjustment instruction of the user and adjust the volume of both earphones at the same time.
In addition, some users may be unaccustomed to hearing voice data in different languages in the two earphones. Correspondingly, the earphones may also provide a type selection function: the user may perform a type selection operation on the earphones or the earphone storage device, or in the application program corresponding to the earphones on the terminal device, so that voice data in the same language is played in both earphones. After the user performs the type selection operation, the earphones receive the user's type selection instruction, and both earphones play either the first target voice data or the source voice data. The type selection operation on the earphones or the earphone storage device may be a touch operation, a voice instruction issued by the user, a head movement of the user while wearing the earphones, or a movement of the user relative to the earphone storage device; the embodiment of the present invention is not limited in this regard.
In another example of the present invention, another way of controlling channel allocation when playing voice data according to how the earphones are being used may be: when only one of the earphones is in use, the earphone in use plays a mix of the source voice data and the first target voice data. The user can subsequently adjust the volume of the source voice data and of the first target voice data within the mix, as well as the overall volume of the mix, to meet personalized needs and improve the user experience.
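A minimal sketch of the single-earphone mixing case, assuming integer PCM samples and hypothetical per-stream gain values, is given below; it is not the disclosed audio pipeline.

def mix(source: list[int], target: list[int],
        source_gain: float = 0.3, target_gain: float = 1.0) -> list[int]:
    """Mix two sample streams with independently adjustable per-stream gain."""
    length = min(len(source), len(target))
    return [int(source[i] * source_gain + target[i] * target_gain)
            for i in range(length)]

if __name__ == "__main__":
    source_samples = [100, -100, 50]   # stand-in PCM samples of the source voice
    target_samples = [40, 40, -40]     # stand-in PCM samples of the translation
    print(mix(source_samples, target_samples))  # [70, 10, -25]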
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
The embodiment of the invention also provides a translation device which is applied to the earphone.
Referring to fig. 10, a block diagram of a structure of an embodiment of the earphone side translation apparatus of the present invention is shown, which may specifically include the following modules:
an obtaining module 1002, configured to obtain source speech data;
a first sending module 1004, configured to send the source speech data to the earphone storage device, where the earphone storage device translates the source speech data to generate first target speech data.
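Purely as an illustration, the obtaining module 1002 and the first sending module 1004 could be composed in software as in the following sketch; the class names and the StorageDeviceLink stub are assumptions, not the disclosed implementation.

class StorageDeviceLink:
    """Stand-in for the wireless link to the earphone storage device."""
    def send(self, voice: bytes) -> bytes:
        return voice[::-1]  # pretend the storage device returns a translation

class ObtainingModule:
    def obtain(self) -> bytes:
        return b"\x00\x01\x02"  # stand-in for captured or received voice data

class FirstSendingModule:
    def __init__(self, link: StorageDeviceLink) -> None:
        self.link = link
    def send(self, source_voice: bytes) -> bytes:
        return self.link.send(source_voice)  # returns first target voice data

if __name__ == "__main__":
    earphone_side = FirstSendingModule(StorageDeviceLink())
    print(earphone_side.send(ObtainingModule().obtain()))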
Referring to fig. 11, a block diagram of an alternative embodiment of the headphone side interpreting apparatus according to the present invention is shown.
In an optional embodiment of the present invention, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device;
the obtaining module 1002 includes:
a voice data receiving submodule 10022, configured to receive voice data sent by the terminal device, where the voice data is used as source voice data; the voice data sent by the terminal equipment is the voice data of a second communication user received by the terminal equipment in the process that the first communication user communicates with at least one second communication user through the terminal equipment;
the device further comprises:
the first playing module 1006 is configured to receive the first target voice data sent by the earphone storage device and play the first target voice data.
In an optional embodiment of the present invention, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device;
the obtaining module 1002 includes:
a first voice data collection submodule 10024, configured to collect voice data of the first communication user as source voice data in a process that the first communication user communicates with at least one second communication user through the terminal device;
the device further comprises:
a first receiving module 1008, configured to receive first target voice data sent by the earphone receiving apparatus;
a second sending module 1010, configured to send the first target voice data to the terminal device, so that the terminal device sends the first target voice data to the terminal device of the second communication user.
In an alternative embodiment of the present invention, the earphones comprise a first earphone and at least one second earphone, each of which is connected to the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user;
the obtaining module 1002 includes:
a second voice data collection submodule 10026, configured to collect voice data of the first user, where the voice data is used as source voice data;
the first sending module 1004 is configured to call the first earphone to send the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data and send the first target speech data to the second earphone;
the device further comprises:
the second playing module 1012 is configured to invoke the second earphone to receive the first target voice data sent by the earphone storage device, and play the first target voice data.
In an alternative embodiment of the present invention, there is at least one earphone, used by at least one first user, and at least one earphone storage device, used by at least one second user;
the obtaining module 1002 includes:
a third voice data collection submodule 10028, configured to collect voice data of the first user, where the voice data is used as source voice data; wherein the first target voice data is played by the earphone receiving device.
In an optional embodiment of the present invention, the apparatus further comprises:
a second receiving module 1014, configured to receive second target voice data sent by the earphone receiving apparatus, where the second target voice data is generated by translating the voice data of the second user collected by the earphone receiving apparatus;
a third playing module 1016, configured to play the second target voice data.
In an optional embodiment of the present invention, the earphone includes two earphones, and the apparatus further includes:
a third receiving module 1018, configured to receive the first target voice data sent by the earphone receiving apparatus;
an allocating module 1020, configured to control, according to a usage situation of the headset, a channel allocation of the headset when playing voice data, where the voice data includes source voice data and/or first target voice data.
In an alternative embodiment of the present invention, the allocating module 1020 includes:
a first channel assignment sub-module 10202 for playing the source speech data and the first target speech data respectively by the two earphones when both earphones are used.
In an optional embodiment of the present invention, the apparatus further comprises:
a switching module 1022, configured to receive a switching instruction of a user, and switch the types of the voice data played in the two earphones;
the adjusting module 1024 is configured to receive a volume adjustment instruction of a user, and adjust the volume of the earphone corresponding to the volume adjustment operation; or
A selecting module 1026, configured to receive a category selecting instruction of the user, where both the two earphones play the first target speech data or both play the source speech data.
In an alternative embodiment of the present invention, the allocating module 1020 includes:
a second distributing submodule 10204, configured to, when one of the earphones is used, play the mixed sound of the source speech data and the first target speech data by the used earphone.
In summary, in the embodiment of the present invention, the earphone may acquire source voice data and send it to the earphone storage device connected to the earphone; the earphone storage device translates the source voice data to generate first target voice data. The user can thus realize translation using only the earphone device, without needing a dedicated translation device.
The embodiment of the invention also provides a translation device which is applied to the earphone storage device.
Referring to fig. 12, a block diagram of a structure of an embodiment of a side translation device of an earphone storage device of the present invention is shown, which may specifically include the following modules:
a fourth receiving module 1202, configured to receive source speech data sent by the headset;
the first translation module 1204 is configured to translate the source speech data to generate first target speech data.
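A companion sketch for the storage-device side, pairing a receiving module with a translation module, is given below; again the class names and the toy translation are illustrative assumptions only.

class FourthReceivingModule:
    def receive(self, from_earphone: bytes) -> bytes:
        return from_earphone  # source voice data arriving from the earphone

class FirstTranslationModule:
    def translate(self, source_voice: bytes) -> bytes:
        # Placeholder for recognize -> translate -> synthesize.
        return b"T:" + source_voice

if __name__ == "__main__":
    received = FourthReceivingModule().receive(b"\x01\x02")
    print(FirstTranslationModule().translate(received))  # first target voice data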
Referring to fig. 13, there is shown a block diagram of an alternative embodiment of the earphone accommodation device side translation device of the present invention.
In an optional embodiment of the present invention, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment;
the device further comprises:
a third sending module 1206, configured to send the first target voice data to the headset, where the headset plays the first target voice data;
a fourth playing module 1208, configured to play the first target voice data.
In an optional embodiment of the present invention, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the first communication user collected by the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment;
the device further comprises:
a fourth sending module 1210, configured to send the first target voice data to the earphone, so that the earphone forwards the first target voice data to the terminal device, and the terminal device sends the first target voice data to the terminal device of the second communication user.
In an alternative embodiment of the present invention, the earphones comprise a first earphone and at least one second earphone, each of which is connected to the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the source voice data is the voice data of the first user collected by the first earphone;
the device further comprises:
a fifth sending module 1212, configured to send the first target voice data to a second headset, where the second headset plays the first target voice data.
In an alternative embodiment of the present invention, there is at least one earphone, used by at least one first user, and at least one earphone storage device, used by at least one second user; the source voice data is the voice data of the first user collected by the earphone;
the device further comprises:
a fifth playing module 1214, configured to play the first target voice data.
In an optional embodiment of the present invention, the apparatus further comprises:
an acquisition module 1216 for acquiring voice data of the second user;
the second translation module 1218, configured to translate the voice data of the second user to generate second target voice data;
a sixth sending module 1220, configured to send the second target speech data to the earphone, and the earphone plays the second target speech data.
In an optional embodiment of the present invention, the apparatus further comprises:
a seventh sending module 1222, configured to send the first target speech data to the headset, where the headset controls channel allocation when playing speech data according to usage of the headset, where the speech data includes source speech data and/or first target speech data.
In an optional embodiment of the present invention, the first translation module 1204 is configured to perform simultaneous interpretation on the source speech data to generate first target speech data.
In an optional embodiment of the invention, the earphone receiving device is further connected with a server,
the first translation module 1204 is configured to generate a translation request according to the source speech data, and send the translation request to a server, so that the server translates the source speech data according to the translation request, generates first target speech data, and returns the first target speech data; and receiving first target voice data returned by the server.
In summary, in the embodiment of the present invention, after the earphone storage device obtains the source voice data sent by the earphone, it may translate the source voice data to generate first target voice data. The user can thus realize translation using only the earphone device, without needing a dedicated translation device.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Fig. 14 is a block diagram illustrating an architecture of an electronic device 1400 for translation, according to an example embodiment. For example, the electronic device 1400 may be an earphone, an earphone storage device, or the like.
Referring to fig. 14, electronic device 1400 may include one or more of the following components: a processing component 1402, a memory 1404, a power component 1406, a multimedia component 1408, an audio component 1410, an input/output (I/O) interface 1412, a sensor component 1414, and a communication component 1416.
The processing component 1402 generally controls overall operation of the electronic device 1400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1402 may include one or more processors 1420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1402 can include one or more modules that facilitate interaction between processing component 1402 and other components. For example, the processing component 1402 can include a multimedia module to facilitate interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store various types of data to support operation at the device 1400. Examples of such data include instructions for any application or method operating on the electronic device 1400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1404 may be implemented by any type of volatile or non-volatile storage device or combination of devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 1406 provide power to the various components of electronic device 1400. Power components 1406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 1400.
The multimedia component 1408 includes a screen that provides an output interface between the electronic device 1400 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1408 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1400 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1410 is configured to output and/or input audio signals. For example, the audio component 1410 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1400 is in operating modes, such as a call mode, a record mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1404 or transmitted via the communication component 1416. In some embodiments, audio component 1410 further includes a speaker for outputting audio signals.
I/O interface 1412 provides an interface between processing component 1402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1414 includes one or more sensors for providing various aspects of status assessment for the electronic device 1400. For example, the sensor component 1414 can detect an open/closed state of the device 1400, a relative positioning of components, such as a display and keypad of the electronic device 1400, a change in position of the electronic device 1400 or a component of the electronic device 1400, the presence or absence of user contact with the electronic device 1400, an orientation or acceleration/deceleration of the electronic device 1400, and a change in temperature of the electronic device 1400. The sensor assembly 1414 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1414 may also include a photosensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1416 is configured to facilitate wired or wireless communication between the electronic device 1400 and other devices. The electronic device 1400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1416 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 1404 that includes instructions executable by the processor 1420 of the electronic device 1400 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A headset comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for: obtaining source speech data; and sending the source voice data to the earphone accommodating device, the earphone accommodating device translating the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the acquiring, by the earphone, of source speech data comprises: the earphone receives voice data sent by the terminal device as source voice data; the voice data sent by the terminal device is the voice data of a second communication user received by the terminal device in the process that the first communication user communicates with at least one second communication user through the terminal device; further comprising instructions to perform the following operations: the earphone receives first target voice data sent by the earphone containing device and plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the acquiring, by the earphone, of source speech data comprises: in the process that the first communication user communicates with at least one second communication user through the terminal device, the earphone collects voice data of the first communication user as source voice data; further comprising instructions to perform the following operations: the earphone receives first target voice data sent by the earphone containing device; and the earphone sends the first target voice data to the terminal device so that the terminal device sends the first target voice data to the terminal device of the second communication user.
Optionally, the headset comprises a first earphone and a second earphone, each connected with the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the acquiring, by the earphone, of source speech data comprises: the first earphone collects voice data of the first user as source voice data; the sending, by the earphone, of the source speech data to the earphone storage device comprises: the first earphone sends the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data and sends the first target speech data to the second earphone; further comprising instructions to perform the following operations: the second earphone receives the first target voice data sent by the earphone storage device and plays the first target voice data.
Optionally, the headset comprises at least one earphone used by at least one first user, and there is at least one earphone receiving device used by at least one second user; the acquiring, by the earphone, of source speech data comprises: the earphone collects voice data of the first user as source voice data; wherein the first target voice data is played by the earphone receiving device.
Optionally, the method further includes performing the following operations: the earphone receives second target voice data sent by the earphone containing device, and the second target voice data is generated by translating the voice data of a second user collected by the earphone containing device; and the earphone plays the second target voice data.
Optionally, the headset includes two earphones, and further includes instructions for performing the following operations: the earphone receives first target voice data sent by the earphone containing device; the earphone controls the sound channel distribution of the earphone when playing voice data according to the using condition of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the controlling, by the headset according to a usage of the headset, channel allocation of the headset while playing voice data includes: when both earphones are used, the two earphones play the source speech data and the first target speech data, respectively.
Optionally, the method further includes performing the following operations: receiving a switching instruction of a user, and switching the types of voice data played in the two earphones; or receiving a volume adjusting instruction of a user, and adjusting the volume of the earphone corresponding to the volume adjusting instruction; or receiving a category selection instruction of a user, so that both earphones play the first target voice data or the source voice data.
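These three instructions amount to a small command dispatcher on the headset side; the sketch below is a hedged illustration whose command names and state fields are invented for this example.

```python
# Hypothetical dispatcher: swap which earbud plays which stream, adjust one
# earbud's volume, or force both earbuds onto the same category of audio.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PlaybackState:
    left: str = "source"                    # "source" or "translated"
    right: str = "translated"
    volume: Dict[str, float] = field(default_factory=lambda: {"left": 0.8, "right": 0.8})

def handle_instruction(state: PlaybackState, command: str, **kwargs) -> PlaybackState:
    if command == "switch":                 # switching instruction: swap categories
        state.left, state.right = state.right, state.left
    elif command == "volume":               # volume instruction: adjust the addressed earbud
        state.volume[kwargs["ear"]] = kwargs["level"]
    elif command == "category":             # category selection: both earbuds, same stream
        state.left = state.right = kwargs["category"]
    return state

state = handle_instruction(PlaybackState(), "switch")
print(state.left, state.right)              # translated source
```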
Optionally, the controlling, by the headset according to a usage of the headset, channel allocation of the headset while playing voice data includes: when one of the earphones is used, the used earphone plays a mix of the source speech data and the first target speech data.
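This allocation rule reduces to: two worn earbuds receive one stream each, while a single worn earbud receives a mix of both streams. The sketch below uses plain float samples and a naive average as the mix; it is only an illustration of the rule, not the device's actual audio pipeline.

```python
# Channel allocation sketch: both earbuds in use -> source on one channel and
# translation on the other; one earbud in use -> a naive mono mix of the two.

from typing import List, Tuple

def allocate_channels(source: List[float], translated: List[float],
                      left_in_use: bool, right_in_use: bool
                      ) -> Tuple[List[float], List[float]]:
    if left_in_use and right_in_use:
        return source, translated                       # one stream per earbud
    mix = [(s + t) / 2.0 for s, t in zip(source, translated)]
    return (mix, []) if left_in_use else ([], mix)      # mix goes to the worn earbud

left, right = allocate_channels([1.0, 0.0], [0.0, 1.0],
                                left_in_use=True, right_in_use=False)
print(left, right)   # [0.5, 0.5] []
```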
A headset housing device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured for execution by one or more processors, the one or more programs including instructions for: receiving source voice data sent by the earphone; and translating the source voice data to generate first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; further comprising instructions to perform the following operations: the earphone storage device sends the first target voice data to the earphone, and the earphone plays the first target voice data; or, the earphone accommodating device plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the first communication user collected by the earphone in the process that the first communication user communicates with at least one second communication user through the terminal device; further comprising instructions to perform the following operations: sending the first target voice data to the earphone so that the earphone forwards the first target voice data to the terminal device, and the terminal device sends the first target voice data to the terminal device of the second communication user.
Optionally, the headset comprises a first earphone and a second earphone, each connected with the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the source speech data is speech data of the first user collected by the first earphone; further comprising instructions to perform the following operations: the earphone storage device sends the first target voice data to the second earphone, and the second earphone plays the first target voice data.
Optionally, the headset comprises at least one earphone used by at least one first user, and there is at least one earphone storage device used by at least one second user; the source speech data is speech data of the first user collected by the earphone; further comprising instructions to perform the following operations: the earphone storage device plays the first target voice data.
Optionally, the method further includes performing the following operations: the earphone receiving device collects voice data of a second user; translating the voice data of the second user to generate second target voice data; and sending the second target voice data to the earphone, and playing the second target voice data by the earphone.
Optionally, the method further includes performing the following operations: sending the first target voice data to the earphone, the earphone controlling the sound channel distribution of the earphone when playing the voice data according to the usage of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the translating the source speech data to generate first target speech data includes: and performing simultaneous interpretation on the source speech data to generate first target speech data.
Optionally, the earphone receiving device is further connected to a server, and the translating the source speech data to generate target speech data includes: generating a translation request according to the source voice data, and sending the translation request to a server so that the server translates the source voice data according to the translation request, generates first target voice data and returns the first target voice data; and receiving first target voice data returned by the server.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a headset, enable the headset to perform a translation method, the headset being connected with an earphone storage device, the method comprising the following steps: the earphone acquires source speech data; the earphone sends the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the acquiring, by the earphone, of source speech data comprises: the earphone receives voice data sent by the terminal device as source voice data; the voice data sent by the terminal device is the voice data of a second communication user received by the terminal device in the process that the first communication user communicates with at least one second communication user through the terminal device; the method further comprises the following steps: the earphone receives first target voice data sent by the earphone containing device and plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the acquiring, by the earphone, of source speech data comprises: in the process that the first communication user communicates with at least one second communication user through the terminal device, the earphone collects voice data of the first communication user as source voice data; the method further comprises the following steps: the earphone receives first target voice data sent by the earphone containing device; and the earphone sends the first target voice data to the terminal device so that the terminal device sends the first target voice data to the terminal device of the second communication user.
Optionally, the headset comprises a first earphone and a second earphone, each connected with the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the acquiring, by the earphone, of source speech data comprises: the first earphone collects voice data of the first user as source voice data; the sending, by the earphone, of the source speech data to the earphone storage device comprises: the first earphone sends the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data and sends the first target speech data to the second earphone; the method further comprises the following steps: the second earphone receives the first target voice data sent by the earphone storage device and plays the first target voice data.
Optionally, the headset comprises at least one earphone used by at least one first user, and there is at least one earphone receiving device used by at least one second user; the acquiring, by the earphone, of source speech data comprises: the earphone collects voice data of the first user as source voice data; wherein the first target voice data is played by the earphone receiving device.
Optionally, the method further comprises: the earphone receives second target voice data sent by the earphone containing device, and the second target voice data is generated by translating the voice data of a second user collected by the earphone containing device; and the earphone plays the second target voice data.
Optionally, the headset includes two earphones, and the method further includes: the earphone receives first target voice data sent by the earphone containing device; the earphone controls the sound channel distribution of the earphone when playing voice data according to the using condition of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the controlling, by the headset according to a usage of the headset, channel allocation of the headset while playing voice data includes: when both earphones are used, the two earphones play the source speech data and the first target speech data, respectively.
Optionally, the method further comprises: receiving a switching instruction of a user, and switching the types of voice data played in the two earphones; or receiving a volume adjusting instruction of a user, and adjusting the volume of the earphone corresponding to the volume adjusting instruction; or receiving a category selection instruction of a user, so that both earphones play the first target voice data or the source voice data.
Optionally, the controlling, by the headset according to a usage of the headset, channel allocation of the headset while playing voice data includes: when one of the earphones is used, the used earphone plays a mix of the source speech data and the first target speech data.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a headset receiving device, enable the headset receiving device to perform a translation method, the method comprising: the earphone receiving device receives source voice data sent by the earphone; the earphone storage device translates the source speech data to generate first target speech data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the method further comprises the following steps: the earphone storage device sends the first target voice data to the earphone, and the earphone plays the first target voice data; or, the earphone accommodating device plays the first target voice data.
Optionally, the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the first communication user collected by the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment; the method further comprises the following steps: and sending the first target voice data to the earphone so that the earphone forwards the first target voice data to the terminal equipment, and sending the first target voice data to the terminal equipment of the second communication user by the terminal equipment.
Optionally, the headset comprises a first earphone and a second earphone, each connected with the earphone storage device; the first earphone is used by a first user and the second earphone is used by at least one second user; the source speech data is speech data of the first user collected by the first earphone; the method further comprises the following steps: the earphone storage device sends the first target voice data to the second earphone, and the second earphone plays the first target voice data.
Optionally, the headset comprises at least one earphone used by at least one first user, and there is at least one earphone storage device used by at least one second user; the source speech data is speech data of the first user collected by the earphone; the method further comprises the following steps: the earphone storage device plays the first target voice data.
Optionally, the method further comprises: the earphone receiving device collects voice data of a second user; translating the voice data of the second user to generate second target voice data; and sending the second target voice data to the earphone, and playing the second target voice data by the earphone.
Optionally, the method further comprises: sending the first target voice data to the earphone, the earphone controlling the sound channel distribution of the earphone when playing the voice data according to the usage of the earphone, wherein the voice data comprises source voice data and/or first target voice data.
Optionally, the translating the source speech data to generate first target speech data includes: and performing simultaneous interpretation on the source speech data to generate first target speech data.
Optionally, the earphone receiving device is further connected to a server, and the translating the source speech data to generate target speech data includes: generating a translation request according to the source voice data, and sending the translation request to a server so that the server translates the source voice data according to the translation request, generates first target voice data and returns the first target voice data; and receiving first target voice data returned by the server.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The translation method, translation device, earphone, and earphone storage device provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above examples is only intended to help understand the method and core idea of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation to the present invention.

Claims (10)

1. A translation method, applied to an earphone, wherein the earphone is connected with an earphone receiving device, and the method comprises the following steps:
the earphone acquires source speech data;
the earphone sends the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data.
2. The method of claim 1, wherein the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device;
the earphone acquires source speech data, and comprises:
the earphone receives voice data sent by the terminal equipment as source voice data; the voice data sent by the terminal equipment is the voice data of a second communication user received by the terminal equipment in the process that the first communication user communicates with at least one second communication user through the terminal equipment;
the method further comprises the following steps:
the earphone receives first target voice data sent by the earphone containing device and plays the first target voice data.
3. A translation method is applied to a headset receiving device, wherein the headset receiving device is connected with a headset, and the method comprises the following steps:
the earphone receiving device receives source voice data sent by the earphone;
the earphone storage device translates the source speech data to generate first target speech data.
4. The method of claim 3, wherein the headset is further connected to a terminal device, and the headset is used by a first communication user corresponding to the terminal device; the source voice data is the voice data of the second communication user received by the terminal equipment and sent to the earphone in the process that the first communication user communicates with at least one second communication user through the terminal equipment;
the method further comprises the following steps:
the earphone storage device sends the first target voice data to the earphone, and the earphone plays the first target voice data; or, alternatively,
the earphone storage device plays the first target voice data.
5. A translation device for use in a headset, the headset being connectable to a headset receiving device, the device comprising:
the acquisition module is used for acquiring source speech data;
the first sending module is used for sending the source speech data to the earphone storage device, and the earphone storage device translates the source speech data to generate first target speech data.
6. A translation device, applied to a headset receiving device, wherein the headset receiving device is connected with a headset, and the translation device comprises:
the fourth receiving module is used for receiving the source speech data sent by the earphone;
and the first translation module is used for translating the source voice data to generate first target voice data.
7. An earphone comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
obtaining source speech data;
and sending the source voice data to the earphone accommodating device, and translating the source voice data by the earphone accommodating device to generate first target voice data.
8. An earphone receiving device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
receiving source voice data sent by the earphone;
and translating the source voice data to generate first target voice data.
9. A readable storage medium, wherein instructions in the storage medium, when executed by a processor of a headset, enable the headset to perform the translation method of any one of claims 1-2.
10. A readable storage medium, wherein instructions in the storage medium, when executed by a processor of a headset housing device, enable the headset housing device to perform the translation method of any one of claims 3-4.
CN202010508213.0A 2020-06-05 2020-06-05 Translation method and device, earphone and earphone storage device Active CN111696554B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010508213.0A CN111696554B (en) 2020-06-05 2020-06-05 Translation method and device, earphone and earphone storage device
PCT/CN2021/087836 WO2021244159A1 (en) 2020-06-05 2021-04-16 Translation method and apparatus, earphone, and earphone storage apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010508213.0A CN111696554B (en) 2020-06-05 2020-06-05 Translation method and device, earphone and earphone storage device

Publications (2)

Publication Number Publication Date
CN111696554A true CN111696554A (en) 2020-09-22
CN111696554B CN111696554B (en) 2022-04-26

Family

ID=72479612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508213.0A Active CN111696554B (en) 2020-06-05 2020-06-05 Translation method and device, earphone and earphone storage device

Country Status (2)

Country Link
CN (1) CN111696554B (en)
WO (1) WO2021244159A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112331179A (en) * 2020-11-11 2021-02-05 北京搜狗科技发展有限公司 Data processing method and earphone accommodating device
CN112506331A (en) * 2020-12-11 2021-03-16 北京搜狗科技发展有限公司 Data processing method and earphone accommodating device
WO2021244159A1 (en) * 2020-06-05 2021-12-09 北京搜狗智能科技有限公司 Translation method and apparatus, earphone, and earphone storage apparatus

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107333200A (en) * 2017-07-24 2017-11-07 歌尔科技有限公司 One kind translation earphone storage box, wireless translation earphone and wireless translation system
JP6364629B2 (en) * 2016-07-08 2018-08-01 パナソニックIpマネジメント株式会社 Translation apparatus and translation method
CN108509428A (en) * 2018-02-26 2018-09-07 深圳市百泰实业股份有限公司 Earphone interpretation method and system
CN108810708A (en) * 2018-07-25 2018-11-13 中译语通科技(青岛)有限公司 A kind of translation system of the TWS bluetooth headsets of subsidiary storage box
CN109614628A (en) * 2018-11-16 2019-04-12 广州市讯飞樽鸿信息技术有限公司 A kind of interpretation method and translation system based on Intelligent hardware
CN110602675A (en) * 2019-08-22 2019-12-20 歌尔股份有限公司 Earphone pair translation method and device, earphone pair and translation system
CN110765786A (en) * 2019-10-12 2020-02-07 深圳情景智能有限公司 Translation system, earphone translation method and translation equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3735646B1 (en) * 2018-01-03 2021-11-24 Google LLC Using auxiliary device case for translation
CN208353534U (en) * 2018-02-26 2019-01-08 深圳市百泰实业股份有限公司 A kind of translation earphone system
CN109275057A (en) * 2018-08-31 2019-01-25 歌尔科技有限公司 A kind of translation earphone speech output method, system and translation earphone and storage medium
CN209517430U (en) * 2019-01-21 2019-10-18 科大讯飞股份有限公司 Translator pickup expanding unit, system of interpreting and microphone array pedestal
TWM590893U (en) * 2019-10-29 2020-02-21 鋒霖科技股份有限公司 Earphone storage box device
CN210225714U (en) * 2019-11-05 2020-03-31 葛东峰 Translation earphone receiver
CN111696554B (en) * 2020-06-05 2022-04-26 北京搜狗科技发展有限公司 Translation method and device, earphone and earphone storage device

Also Published As

Publication number Publication date
WO2021244159A1 (en) 2021-12-09
CN111696554B (en) 2022-04-26

Similar Documents

Publication Publication Date Title
CN111696554B (en) Translation method and device, earphone and earphone storage device
JP6121621B2 (en) Voice call method, apparatus, program, and recording medium
CN106454644B (en) Audio playing method and device
WO2017181551A1 (en) Video processing method and device
CN104602112A (en) Configuration method and device
CN109614470B (en) Method and device for processing answer information, terminal and readable storage medium
CN111739538B (en) Translation method and device, earphone and server
CN111583952A (en) Audio processing method and device, electronic equipment and storage medium
WO2021244135A1 (en) Translation method and apparatus, and headset
US20220210501A1 (en) Method and apparatus for playing data
CN114513571A (en) Device connection method and device, electronic device and readable storage medium
CN110913276B (en) Data processing method, device, server, terminal and storage medium
CN116320514A (en) Live broadcast method, system, electronic equipment and medium for audio and video conference
WO2023216119A1 (en) Audio signal encoding method and apparatus, electronic device and storage medium
CN105307007A (en) Program sharing method, apparatus and system
CN111694539B (en) Method, device and medium for switching between earphone and loudspeaker
WO2018058331A1 (en) Method and apparatus for controlling volume
CN110213531B (en) Monitoring video processing method and device
CN114416015A (en) Audio adjusting method and device, electronic equipment and readable storage medium
CN107340990B (en) Playing method and device
CN112039756A (en) Method, device, electronic equipment and medium for establishing real-time communication
CN112511686A (en) Recording method and earphone equipment
CN105376513A (en) Information transmission method and device
CN112738341B (en) Call data processing method and earphone device
CN113286218B (en) Translation method and device and earphone equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant