CN114245255A - TWS earphone and real-time interpretation method, terminal and storage medium thereof - Google Patents

TWS earphone and real-time interpretation method, terminal and storage medium thereof

Info

Publication number
CN114245255A
CN114245255A
Authority
CN
China
Prior art keywords
earphone
translated
module
terminal
voice signal
Prior art date
Legal status
Pending
Application number
CN202111315662.4A
Other languages
Chinese (zh)
Inventor
郭世文
杨卉
何桂晓
曹磊
童维静
吴海全
Current Assignee
Zhaoqing Deqing Guanxu Electronics Co ltd
Original Assignee
Zhaoqing Deqing Guanxu Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Zhaoqing Deqing Guanxu Electronics Co ltd filed Critical Zhaoqing Deqing Guanxu Electronics Co ltd
Priority to CN202111315662.4A
Publication of CN114245255A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B5/00Near-field transmission systems, e.g. inductive loop type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1058Manufacture or assembly
    • H04R1/1066Constructional aspects of the interconnection between earpiece and earpiece support
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication

Abstract

An embodiment of the invention discloses a TWS earphone, a real-time interpretation method thereof, a terminal and a storage medium. A key module triggers a first earphone of the TWS earphone to enter a listening mode while a second earphone enters an auxiliary listening mode. The first earphone then acts as a sound-pickup device: placed at the sound source, it picks up the speech signal to be translated and sends it to the terminal. The terminal translates the speech signal to obtain a translated speech signal and sends it to the second earphone, which acts as a playback device and plays the translated speech signal. The two earphones can thus perform listening and auxiliary listening independently: one can be placed at the sound source to pick up sound while the other plays back at a distance, so simultaneous interpretation can be heard clearly without the wearer sitting near the sound source, and translation quality is improved.

Description

TWS earphone and real-time interpretation method, terminal and storage medium thereof
Technical Field
The invention relates to the field of translation earphones, in particular to a TWS earphone, a real-time translation method thereof, a terminal and a storage medium.
Background
As globalization advances, international communication scenarios are becoming more and more common. In particular, when a cross-border video conference is held, the participants come from different countries and speak in different languages at the conference, so the other participants need real-time multilingual translation.
At present, languages are usually translated in real time by translation earphones available on the market: the earphone picks up the conference sound, sends it to a mobile phone for translation, and the translated sound is sent back to the earphone for playback. In an actual conference, existing translation earphones require the participants to sit in front of the conference screen, i.e. close to the sound source, in order to translate through the earphones. In a large conference room, however, there are often many participants: for those sitting far from the conference screen the loudspeaker volume is too low, the speech is often unclear, and the earphone picks up sound poorly; for those sitting close to the screen the volume is too loud and noisy. Moreover, the audio equipment in conference rooms is usually of poor quality, or the volume is either too high or too low, so the error rate of the translation result is high and the translation quality is unsatisfactory.
Disclosure of Invention
The embodiments of the invention provide a TWS earphone, a real-time translation method thereof, a terminal and a storage medium, and aim to solve the problem of the poor translation performance of existing translation earphones.
In a first aspect, an embodiment of the present invention provides a TWS headset comprising a first earphone and a second earphone, the first earphone and/or the second earphone comprising a microphone module, a Bluetooth module, a loudspeaker module and a key module. The microphone module is used for picking up a speech signal to be translated; the Bluetooth module is connected with the microphone module and used for sending the speech signal to be translated to a terminal and receiving the translated speech signal from the terminal; the loudspeaker module is connected with the Bluetooth module and used for playing the translated speech signal; the key module is connected with the Bluetooth module and used for triggering entry into a listening mode, in which the microphone module is turned on and the loudspeaker module is turned off, and for triggering entry into an auxiliary listening mode, in which the microphone module is turned off and the loudspeaker module is turned on. When one of the first earphone and the second earphone enters the listening mode, the other enters the auxiliary listening mode.
In a second aspect, an embodiment of the present invention further provides a real-time translation method for a TWS headset, applied to a terminal, the TWS headset being the TWS headset of the first aspect. The method comprises: receiving a speech signal to be translated picked up by a first earphone; translating the speech signal to be translated to obtain a translated speech signal; and sending the translated speech signal to a second earphone so that the second earphone plays the translated speech signal.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and a processor, where the memory stores a computer program, and the processor implements the method according to the second aspect when executing the computer program.
In a fourth aspect, the present invention also provides a computer-readable storage medium, in which a computer program is stored, the computer program including program instructions, which, when executed by a processor, cause the processor to perform the method according to the second aspect.
The embodiments of the invention provide a TWS earphone, a real-time interpretation method thereof, a terminal and a storage medium. The method comprises: receiving a speech signal to be translated picked up by a first earphone; translating the speech signal to be translated to obtain a translated speech signal; and sending the translated speech signal to a second earphone so that the second earphone plays the translated speech signal. In the embodiments of the invention, the key module triggers the first earphone to enter the listening mode while the second earphone enters the auxiliary listening mode. The first earphone then acts as a sound-pickup device: placed at the sound source, it picks up the speech signal to be translated and sends it to the terminal. The terminal translates the speech signal to obtain a translated speech signal and sends it to the second earphone, which acts as a playback device and plays the translated speech signal. The two earphones can thus perform listening and auxiliary listening independently: one can be placed at the sound source to pick up sound while the other plays back at a distance, so simultaneous interpretation can be heard clearly without the wearer sitting near the sound source, and translation quality is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a first earphone of a TWS headset provided by an embodiment of the present invention;
FIG. 2 is a schematic circuit diagram of a TWS headset according to an embodiment of the present invention;
FIG. 3 is another schematic circuit diagram of a TWS headset according to an embodiment of the present invention;
FIG. 4 is a schematic view of an application scenario of a TWS headset according to an embodiment of the present invention;
FIG. 5 is a schematic view of another application scenario of the TWS headset according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a real-time translation method according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a real-time translation method according to another embodiment of the present invention;
FIG. 8 is a flowchart illustrating a real-time translation method according to another embodiment of the present invention;
FIG. 9 is a flowchart illustrating a real-time translation method according to yet another embodiment of the present invention;
FIG. 10 is a flowchart illustrating a real-time translation method according to yet another embodiment of the present invention;
FIG. 11 is a schematic block diagram of a terminal provided in an embodiment of the present invention;
10. first earphone; 11. Bluetooth module; 12. microphone module; 13. loudspeaker module; 14. key module; 15. antenna module; 16. MCU; 17. LED module; 20. second earphone; 30. smartphone.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, an embodiment of the present invention provides a TWS headset comprising a first earphone 10 and a second earphone 20, the first earphone 10 and/or the second earphone 20 comprising a microphone module 12, a Bluetooth module 11, a loudspeaker module 13 and a key module 14. The microphone module 12 is used for picking up a speech signal to be translated. The Bluetooth module 11 is connected with the microphone module 12 and is used for sending the speech signal to be translated to a terminal and receiving the translated speech signal from the terminal. The loudspeaker module 13 is connected with the Bluetooth module 11 and is used for playing the translated speech signal. The key module 14 is connected with the Bluetooth module 11 and is used for triggering entry into the listening mode, in which the microphone module 12 is turned on and the loudspeaker module 13 is turned off, and for triggering entry into the auxiliary listening mode, in which the microphone module 12 is turned off and the loudspeaker module 13 is turned on. When one of the first earphone 10 and the second earphone 20 enters the listening mode, the other enters the auxiliary listening mode.
With this embodiment, the key module 14 triggers the first earphone 10 to enter the listening mode while the second earphone 20 enters the auxiliary listening mode. The first earphone 10 then acts as a sound-pickup device: placed at the sound source, it picks up the speech signal to be translated and sends it to the terminal. The terminal translates the speech signal to obtain a translated speech signal and sends it to the second earphone 20, which acts as a playback device and plays the translated speech signal. The two earphones can thus perform listening and auxiliary listening independently: one can be placed at the sound source to pick up sound while the other plays back at a distance, so simultaneous interpretation can be heard clearly without the wearer sitting near the sound source, and translation quality is improved.
Specifically, in the listening mode the microphone module 12 is turned on and the loudspeaker module 13 is turned off, so the earphone only performs sound pickup, i.e. it only collects the speech signal to be translated. In the auxiliary listening mode the microphone module 12 is turned off and the loudspeaker module 13 is turned on, so the earphone only acts as a speaker, i.e. it only plays the translated speech signal. In other words, the two earphones work as independent units: while one picks up sound the other plays back, and vice versa; they never pick up and play sound at the same time, which avoids audio crosstalk.
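A minimal Python sketch of the complementary mode logic just described follows; the Earphone class, the Mode enumeration and the on_key_trigger function are illustrative assumptions rather than the actual firmware interface of the TWS headset.

```python
from enum import Enum

class Mode(Enum):
    LISTENING = "listening"            # microphone on, loudspeaker off
    AUX_LISTENING = "aux_listening"    # microphone off, loudspeaker on

class Earphone:
    def __init__(self, name: str):
        self.name = name
        self.mic_on = False
        self.speaker_on = False

    def set_mode(self, mode: Mode) -> None:
        # In the listening mode the earphone only picks up sound;
        # in the auxiliary listening mode it only plays sound.
        self.mic_on = (mode == Mode.LISTENING)
        self.speaker_on = (mode == Mode.AUX_LISTENING)

def on_key_trigger(triggered: Earphone, other: Earphone) -> None:
    """When the key module puts one earphone into the listening mode,
    the other earphone is switched to the auxiliary listening mode."""
    triggered.set_mode(Mode.LISTENING)
    other.set_mode(Mode.AUX_LISTENING)

first, second = Earphone("first"), Earphone("second")
on_key_trigger(first, second)
assert first.mic_on and not first.speaker_on      # pickup only
assert second.speaker_on and not second.mic_on    # playback only
```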
In an actual conference, the participants can place the first earphone 10 near the audio equipment in the conference room and adjust the equipment volume so that the first earphone 10 picks up the sound clearly. A participant wearing the second earphone 20 can then sit near the front of the room or near the back and still hear the simultaneous interpretation clearly. Participants therefore no longer need to sit close to the audio equipment; they can sit anywhere in the conference room, which increases the usable distance from the sound source and removes the limitation that participants can only sit near it.
In the present embodiment, referring to fig. 2 and 3, the first earphone 10 is the left-ear earphone and the second earphone 20 is the right-ear earphone. It is understood that the first earphone 10 may instead be the right-ear earphone and the second earphone 20 the left-ear earphone; this is not limited here. The first earphone 10 and/or the second earphone 20 further comprises an MCU 16, an antenna module 15, an LED module 17 and a battery; the MCU 16 is connected with the Bluetooth module 11, the LED module 17 and the battery are connected with the MCU 16, and the antenna module 15 is connected with the Bluetooth module 11. The TWS headset further comprises a charging case for charging the first earphone 10 and the second earphone 20.
It should be noted that the rule for triggering the key module 14 is a long press of five seconds; it is understood that other trigger rules, for example three short presses, may also be used, and this is not limited here.
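As a simple illustration of such a trigger rule, the sketch below classifies a sequence of key presses; the five-second threshold comes from the text above, while the event representation and function name are assumptions made only for this example.

```python
LONG_PRESS_SECONDS = 5.0   # long-press threshold from the embodiment above

def is_mode_trigger(press_durations_s: list[float]) -> bool:
    """Return True when the presses match the trigger rule
    (a single press held for at least five seconds)."""
    return len(press_durations_s) == 1 and press_durations_s[0] >= LONG_PRESS_SECONDS

assert is_mode_trigger([5.2]) is True             # long press triggers the mode pairing
assert is_mode_trigger([0.3, 0.3, 0.3]) is False  # three short presses would be a different rule
```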
In an embodiment, the first earphone 10 and/or the second earphone 20 further comprises a positioning module (not shown in the figures) connected with the Bluetooth module 11 and used for measuring the distance to the terminal. Specifically, the positioning module is any one of a GPS module, an LBS module and a UWB module. Because the earphone and the terminal exchange data over Bluetooth, whose transmission distance is limited and whose signal is affected once that limit is exceeded, this embodiment uses the positioning module to measure the distance between the earphone and the terminal in real time. The earphone can thus be kept within the transmit/receive range for data transmission at all times; when the distance exceeds that range, the terminal or the earphone raises an alarm to remind the user to bring the terminal closer to the earphone, which safeguards the simultaneous interpretation.
Referring to fig. 4 and 5, it should be noted that this embodiment can use the positioning module to achieve the farthest controllable transmission of the speech signal. For example, the first earphone 10 is placed at the sound source, and the terminal (smartphone 30) is placed at the farthest position within the transmission range of the first earphone 10, for example d meters away. The user wearing the second earphone 20 then stands at the farthest position within the transmission range of the terminal, again d meters away. Because the TWS headset of this embodiment picks up the speech signal to be translated with the first earphone 10 and sends it to the terminal, and the terminal then sends the translated speech signal to the second earphone 20, each earphone only exchanges data with the terminal and no data transmission is needed between the two earphones. When the first earphone 10, the terminal and the second earphone 20 are arranged in sequence on a straight line, with the terminal acting as a relay between them, the two earphones achieve the farthest transmission within the transmit/receive range, i.e. a farthest transmission distance of 2d.
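The relay geometry above reduces to simple arithmetic, as in the short sketch below; the 10-meter link range used in the example call is an assumed placeholder, since the actual Bluetooth range depends on the radio and the environment.

```python
def max_end_to_end_distance(d_meters: float) -> float:
    # The first-earphone-to-terminal link covers d meters and the
    # terminal-to-second-earphone link covers another d meters, so with the
    # three devices on one straight line the end-to-end span is 2 * d.
    return 2 * d_meters

print(max_end_to_end_distance(10.0))  # an assumed 10 m link range gives a 20 m span
```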
Referring to fig. 6, the present invention further provides a real-time translation method for a TWS headset, applied to a terminal, the TWS headset being the TWS headset described in the foregoing embodiments. The steps of the method are shown in the flowchart and comprise steps S110-S130.
S110, receiving the speech signal to be translated picked up by the first earphone 10.
S120, translating the speech signal to be translated to obtain a translated speech signal.
S130, sending the translated speech signal to the second earphone 20 so that the second earphone 20 plays the translated speech signal.
In one embodiment, the speech signal to be translated is speech in a foreign language such as English, French or Japanese, and the translated speech signal is speech in Chinese; of course, any other foreign-language speech signal is also possible. After receiving the speech signal to be translated, the terminal calls translation software, which converts the speech signal to be translated into text to be translated, translates the text, and converts the translated text into the translated speech signal. The terminal then plays the translated speech signal to the user through the second earphone 20.
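A minimal sketch of this terminal-side flow (steps S110-S130) is given below. The three helper functions are stand-ins for whatever speech-recognition, translation and text-to-speech engines the terminal actually calls; the patent only states that translation software is invoked, so every name here is an assumption.

```python
def speech_to_text(audio: bytes, language: str) -> str:
    return "placeholder transcript"       # stand-in for a real speech-recognition engine

def translate_text(text: str, src: str, dst: str) -> str:
    return f"[{dst}] {text}"              # stand-in for a real translation engine

def text_to_speech(text: str, language: str) -> bytes:
    return text.encode("utf-8")           # stand-in for a real text-to-speech engine

def translate_round_trip(audio_from_first_earphone: bytes) -> bytes:
    """S110: audio received from the first earphone; S120: recognise and
    translate it; the returned audio is what S130 sends to the second earphone."""
    text = speech_to_text(audio_from_first_earphone, language="en")
    translated = translate_text(text, src="en", dst="zh")
    return text_to_speech(translated, language="zh")

translated_audio = translate_round_trip(b"...raw audio from the first earphone...")
```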
In an embodiment, as shown in fig. 7, after the step S120, the method further includes: S1201-S1202.
S1201, converting the translated speech signal into text information.
S1202, displaying the text information in real time in the form of subtitles.
In an embodiment, the real-time translation method can also be applied to watching a foreign-language film. When a foreign-language film played on a television has no subtitles, the user can place the first earphone 10 beside the television and wear the second earphone 20. The first earphone 10 picks up the speech to be translated from the television, the terminal (smartphone 30) translates it into a translated speech signal and converts that signal into text information, the translated speech is played through the second earphone 20, and the text information is projected by the terminal onto the television screen and displayed in the picture as subtitles, so the foreign-language film can be watched even though it has no subtitles. Specifically, the translated speech signal is converted into text information, and the text information is displayed chronologically as subtitles.
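The sketch below shows one way to keep such subtitles in chronological order (steps S1201-S1202); the SubtitleEntry structure and the use of print as the display step are illustrative assumptions, since the patent does not fix a subtitle format or a projection API.

```python
import time
from dataclasses import dataclass

@dataclass
class SubtitleEntry:
    timestamp_s: float   # seconds since the viewing session started
    text: str

subtitles: list[SubtitleEntry] = []
session_start = time.monotonic()

def show_as_subtitle(translated_text: str) -> SubtitleEntry:
    entry = SubtitleEntry(time.monotonic() - session_start, translated_text)
    subtitles.append(entry)   # entries accumulate in chronological order
    print(f"[{entry.timestamp_s:6.1f}s] {entry.text}")   # stand-in for casting to the TV screen
    return entry

show_as_subtitle("First translated line")
show_as_subtitle("Second translated line")
```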
In an embodiment, as shown in fig. 8, after the step S120, the method further includes: S1203-S1204.
S1203, converting the translated voice signal into text information.
S1204, inputting the text information into a preset template file to generate a conference summary.
In one embodiment, in some important meeting scenarios the content of the meeting must often be recorded to form a meeting summary, and the summary is stored so that it can be consulted at any time. In this embodiment a preset template file is provided: the translated speech signal is converted into text information, the text information is entered into the preset template file, and the meeting summary is generated directly, without manual note-taking, which is efficient and convenient. The preset template file contains a preset layout, and the text information is filled into the template file according to preset input logic; for example, the text information is filled into the body of the document, a timestamp is generated automatically and filled into the document date, and keywords are extracted and used as the document title.
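A small sketch of this template-filling step (S1203-S1204) follows; the template string and the naive first-words title are assumptions used only for illustration, since the patent requires a preset layout and input logic but does not specify them.

```python
from datetime import date

SUMMARY_TEMPLATE = "{title}\nDate: {day}\n\n{body}\n"   # assumed preset layout

def make_meeting_summary(translated_text: str) -> str:
    # Crude "keyword" title for illustration: the first few words of the text.
    title = " ".join(translated_text.split()[:6]) or "Meeting summary"
    return SUMMARY_TEMPLATE.format(
        title=title,
        day=date.today().isoformat(),   # timestamp generated automatically
        body=translated_text,           # translated text fills the document body
    )

print(make_meeting_summary("Quarterly budget review and staffing plan for next year"))
```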
In an embodiment, as shown in fig. 9, the real-time interpreting method of the TWS headset further includes the steps of: S1401-S1403.
S1401, receiving a ranging Bluetooth signal sent by the first earphone 10.
S1402, judging whether the strength of the ranging Bluetooth signal is lower than a preset strength threshold.
S1403, if the strength of the ranging Bluetooth signal is lower than the preset strength threshold, generating an alarm signal.
In one embodiment, to ensure effective transmission of the speech signal to be translated, the distance between the first earphone 10 and the terminal needs to be maintained. This embodiment works by measuring signal strength: the first earphone 10 sends a ranging Bluetooth signal to the terminal for testing signal strength, and after receiving it the terminal compares the received signal strength with a preset strength threshold, which is set in advance. When the strength of the ranging Bluetooth signal is lower than the preset strength threshold, the first earphone 10 is too far from the terminal, so an alarm signal is generated to remind the user to bring the terminal closer to the first earphone 10, ensuring effective transmission of the speech signal to be translated.
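The following sketch illustrates the strength check of steps S1401-S1403; the -80 dBm threshold is an assumed example value, as a real product would calibrate the preset strength threshold for its own radio.

```python
RSSI_THRESHOLD_DBM = -80.0   # assumed preset strength threshold (more negative means weaker)

def check_ranging_signal(rssi_dbm: float) -> bool:
    """Return True when an alarm should be raised because the
    first earphone is too far from the terminal."""
    too_far = rssi_dbm < RSSI_THRESHOLD_DBM
    if too_far:
        print("Alarm: bring the terminal closer to the first earphone")
    return too_far

check_ranging_signal(-72.0)   # strong enough, no alarm
check_ranging_signal(-90.0)   # below the threshold, an alarm is generated
```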
In one embodiment, as shown in fig. 10, the real-time interpreting method of the TWS headset further includes the steps of: S1501-S1503.
S1501, receiving position data sent by the first earphone 10.
S1502, judging, according to the position data, whether the first earphone 10 is within a preset range.
S1503, if the first earphone 10 is not within the preset range, generating an alarm signal.
In one embodiment, to ensure effective transmission of the speech signal to be translated, the distance between the first earphone 10 and the terminal needs to be maintained. In this embodiment the first earphone 10 obtains its current position data through the positioning module and sends the position data to the terminal. Taking its own current position as the centre, the terminal judges whether the first earphone 10 lies within a preset range, for example within 10 meters. When the first earphone 10 is not within the preset range, it is too far from the terminal, so an alarm signal is generated to remind the user to bring the terminal closer to the first earphone 10, ensuring effective transmission of the speech signal to be translated.
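A brief sketch of the position check in steps S1501-S1503 follows, assuming the positioning module reports planar (x, y) coordinates in meters; the 10-meter radius is the example value given above, and the coordinate format is an assumption made for illustration.

```python
import math

PRESET_RANGE_M = 10.0   # example preset range from the text above

def earphone_in_range(terminal_xy: tuple[float, float],
                      earphone_xy: tuple[float, float]) -> bool:
    """Return False and raise an alarm when the reported earphone
    position lies outside the preset range around the terminal."""
    distance = math.dist(terminal_xy, earphone_xy)
    if distance > PRESET_RANGE_M:
        print("Alarm: bring the terminal closer to the first earphone")
        return False
    return True

earphone_in_range((0.0, 0.0), (3.0, 4.0))   # 5 m away, within range
earphone_in_range((0.0, 0.0), (9.0, 8.0))   # about 12 m away, an alarm is generated
```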
In other embodiments, the distance between the second earphone 20 and the terminal likewise needs to be maintained to ensure effective transmission of the translated speech signal. The current position data of the second earphone 20 can be obtained through the positioning module and sent to the terminal; the terminal judges whether the second earphone 20 is within the preset range and, if not, generates an alarm signal to remind the user to bring the second earphone 20 closer to the terminal. Alternatively, the second earphone 20 can send a ranging Bluetooth signal to the terminal; the terminal compares the received signal strength with the preset strength threshold, and when the strength of the ranging Bluetooth signal is lower than the preset strength threshold, the second earphone 20 is too far from the terminal, so an alarm signal is generated to remind the user to bring the second earphone 20 closer to the terminal. In this way the translated speech signal is transmitted effectively between the second earphone 20 and the terminal, and the user can hear the speech clearly.
Referring to fig. 11, fig. 11 is a schematic block diagram of a terminal according to an embodiment of the present application. The terminal can be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant and a wearable device.
Referring to fig. 11, the terminal 500 includes a processor 502, a memory, and a network interface 505 connected by a system bus 501, wherein the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform the real-time interpretation method of the TWS headset.
The processor 502 is configured to provide computing and control capabilities to support the operation of the overall terminal 500.
The internal memory 504 provides an environment for the execution of the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be enabled to execute a real-time translation method for a TWS headset.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 11 is a block diagram of only the portion of the configuration relevant to the present application and does not limit the terminal 500 to which the present application is applied; a particular terminal 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps: receiving a speech signal to be translated picked up from a first earphone; translating the voice signal to be translated to obtain a translated voice signal; sending the translated voice signal to a second earphone to enable the second earphone to play the translated voice signal.
In an embodiment, after the step of translating the speech signal to be translated to obtain the translated speech signal, the processor 502 further implements the following steps: converting the translated speech signal into text information; and displaying the text information in real time in a subtitle mode.
In an embodiment, after the step of translating the speech signal to be translated to obtain the translated speech signal, the processor 502 further implements the following steps: converting the translated speech signal into text information; and inputting the text information into a preset template file to generate a conference summary.
In one embodiment, processor 502 further implements the steps of: receiving a ranging Bluetooth signal sent by the first earphone; determining the distance between the first earphone and the terminal according to the strength of the ranging Bluetooth signal; judging whether the distance between the first earphone and the terminal exceeds a preset distance threshold; and if the distance between the first earphone and the terminal exceeds the preset distance threshold, generating an alarm signal.
In one embodiment, processor 502 further implements the steps of: receiving position data transmitted from the first headset; judging whether the first earphone is in a preset range according to the position data; and if the first earphone is not in the preset range, generating an alarm signal.
It should be understood that in the embodiments of the present application, the processor 502 may be a central processing unit (CPU), and the processor 502 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program comprises program instructions. The program instructions, when executed by the processor, cause the processor to perform the steps of: receiving a speech signal to be translated picked up from a first earphone; translating the voice signal to be translated to obtain a translated voice signal; sending the translated voice signal to a second earphone to enable the second earphone to play the translated voice signal.
In an embodiment, after the step of translating the speech signal to be translated to obtain a translated speech signal is implemented by the processor by executing the program instructions, the following steps are further implemented: converting the translated speech signal into text information; and displaying the text information in real time in a subtitle mode.
In an embodiment, after the step of translating the speech signal to be translated to obtain a translated speech signal is implemented by the processor by executing the program instructions, the following steps are further implemented: converting the translated speech signal into text information; and inputting the text information into a preset template file to generate a conference summary.
In one embodiment, the processor, in executing the program instructions, further performs the steps of: receiving a ranging Bluetooth signal sent by the first earphone; determining the distance between the first earphone and the terminal according to the strength of the ranging Bluetooth signal; judging whether the distance between the first earphone and the terminal exceeds a preset distance threshold; and if the distance between the first earphone and the terminal exceeds the preset distance threshold, generating an alarm signal.
In one embodiment, the processor, in executing the program instructions, further performs the steps of: receiving position data transmitted from the first headset; judging whether the first earphone is in a preset range according to the position data; and if the first earphone is not in the preset range, generating an alarm signal.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store the computer program.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general functional terms. Whether such functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementation decisions should not be considered to go beyond the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a terminal (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A TWS headset comprising a first headset and a second headset, the first headset and/or the second headset comprising:
the microphone module is used for picking up a voice signal to be translated;
the Bluetooth module is connected with the microphone module and used for sending the voice signal to be translated to a terminal and receiving the translated voice signal from the terminal;
the loudspeaker module is connected with the Bluetooth module and used for playing the translated voice signal;
the key module is connected with the Bluetooth module and is used for triggering entry into a listening mode, in which the microphone module is turned on and the loudspeaker module is turned off, and for triggering entry into an auxiliary listening mode, in which the microphone module is turned off and the loudspeaker module is turned on;
wherein when one of the first earphone and the second earphone enters a listening mode, the other earphone enters an auxiliary listening mode.
2. A TWS headset according to claim 1, characterised in that the first headset and/or the second headset further comprises a positioning module connected with the Bluetooth module for measuring a distance to a terminal.
3. The TWS headset of claim 2, wherein the positioning module is any one of a GPS module, an LBS module and a UWB module.
4. A real-time translation method for a TWS headset, applied to a terminal, wherein the TWS headset is the TWS headset of any one of claims 1-3, the method comprising:
receiving a speech signal to be translated picked up from a first earphone;
translating the voice signal to be translated to obtain a translated voice signal;
sending the translated voice signal to a second earphone to enable the second earphone to play the translated voice signal.
5. The real-time interpretation method according to claim 4, wherein after translating the speech signal to be translated to obtain a translated speech signal, the method further comprises:
converting the translated speech signal into text information;
and displaying the text information in real time in a subtitle mode.
6. The real-time interpretation method according to claim 4, wherein after translating the speech signal to be translated to obtain a translated speech signal, the method further comprises:
converting the translated speech signal into text information;
and inputting the text information into a preset template file to generate a conference summary.
7. The real-time interpretation method according to any of the claims 4-6, wherein the method further comprises:
receiving a ranging Bluetooth signal sent by the first earphone;
judging whether the intensity of the ranging Bluetooth signal is lower than a preset intensity threshold value or not;
and if the intensity of the ranging Bluetooth signal is lower than a preset intensity threshold value, generating an alarm signal.
8. The real-time interpretation method according to any of the claims 4-6, wherein the method further comprises:
receiving position data sent by the first earphone;
judging whether the first earphone is in a preset range according to the position data;
and if the first earphone is not in the preset range, generating an alarm signal.
9. A terminal, characterized in that the terminal comprises a memory storing a computer program and a processor, wherein the processor implements the method according to any one of claims 4-8 when executing the computer program.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 4-8.
Application CN202111315662.4A, filed 2021-11-08 (priority date 2021-11-08): TWS earphone and real-time interpretation method, terminal and storage medium thereof. Status: Pending. Publication: CN114245255A.

Priority Applications (1)

Application Number: CN202111315662.4A · Priority Date: 2021-11-08 · Filing Date: 2021-11-08 · Title: TWS earphone and real-time interpretation method, terminal and storage medium thereof

Applications Claiming Priority (1)

Application Number: CN202111315662.4A · Priority Date: 2021-11-08 · Filing Date: 2021-11-08 · Title: TWS earphone and real-time interpretation method, terminal and storage medium thereof

Publications (1)

Publication Number: CN114245255A · Publication Date: 2022-03-25

Family

ID=80748774

Family Applications (1)

Application Number: CN202111315662.4A · Title: TWS earphone and real-time interpretation method, terminal and storage medium thereof · Priority Date: 2021-11-08 · Filing Date: 2021-11-08 · Status: Pending · Publication: CN114245255A

Country Status (1)

Country: CN · Publication: CN114245255A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination