CN109067965B - Translation method, translation device, wearable device and storage medium


Info

Publication number
CN109067965B
CN109067965B (application number CN201810619139.2A)
Authority
CN
China
Prior art keywords
wearable device
voice
audio
application
translation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810619139.2A
Other languages
Chinese (zh)
Other versions
CN109067965A (en)
Inventor
张海平 (Zhang Haiping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810619139.2A
Publication of CN109067965A
Application granted
Publication of CN109067965B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/40: Processing or translation of natural language
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a translation method, a translation apparatus, a wearable apparatus, and a storage medium. The translation method is applied to a wearable apparatus that comprises a first wearable device and a second wearable device, both of which establish a wireless connection with an electronic apparatus. The first wearable device receives first audio sent by the electronic apparatus and generated by a first application running on it, translates the first audio into first voice, and plays the first voice. The second wearable device receives second audio sent by the electronic apparatus and generated by a second application running on it, translates the second audio into second voice, and plays the second voice. With the embodiments of the application, the two wearable devices can separately translate audio generated by different applications on the electronic apparatus.

Description

Translation method, translation device, wearable device and storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a translation method, a translation apparatus, a wearable apparatus, and a storage medium.
Background
As wireless technology has matured, scenarios in which wearable devices connect to electronic apparatuses such as mobile phones over wireless links have become increasingly common. Through wearable devices, users can listen to music, make calls, and perform various other functions.
Disclosure of Invention
The embodiments of the present application provide a translation method, a translation apparatus, a wearable apparatus, and a storage medium, which enable two wearable devices to separately translate audio generated by different applications on an electronic apparatus.
In a first aspect, an embodiment of the present application provides a wearable apparatus, where the wearable apparatus includes a first wearable device and a second wearable device, and both the first wearable device and the second wearable device establish a wireless connection with an electronic apparatus;
the first wearable device is used for receiving first audio which is sent by the electronic device and generated by a first application running on the electronic device, translating the first audio into first voice and playing the first voice;
the second wearable device is used for receiving second audio sent by the electronic device and generated by a second application running on the electronic device, translating the second audio into second voice and playing the second voice.
In a second aspect, an embodiment of the present application provides a translation method, where the method includes:
the method comprises the steps that a first wearable device receives first audio which is sent by an electronic device and generated by a first application running on the electronic device, translates the first audio into first voice, and plays the first voice;
the second wearable device receives second audio which is sent by the electronic apparatus and generated by a second application running on the electronic apparatus, translates the second audio into second voice, and plays the second voice; the first wearable device and the second wearable device both establish a wireless connection with the electronic apparatus.
In a third aspect, an embodiment of the present application provides a translation device, which is applied to a wearable device, and includes a first receiving unit, a first translation unit, a first playing unit, a second receiving unit, a second translation unit, and a second playing unit, where:
the first receiving unit is used for receiving first audio which is sent by an electronic device and generated by a first application running on the electronic device;
the first translation unit is used for translating the first audio into first voice;
the first playing unit is used for playing the first voice;
the second receiving unit is used for receiving second audio which is sent by the electronic device and generated by a second application running on the electronic device;
the second translation unit is used for translating the second audio into second voice;
the second playing unit is used for playing the second voice.
In a fourth aspect, embodiments of the present application provide a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a wearable device to perform some or all of the steps as described in any of the methods of the second aspect of the embodiments of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a wearable device to perform some or all of the steps as described in any of the methods of the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
In an embodiment of the application, a wearable apparatus includes a first wearable device and a second wearable device, and both establish a wireless connection with an electronic apparatus. The first wearable device receives first audio sent by the electronic apparatus and generated by a first application running on it, translates the first audio into first voice, and plays the first voice; the second wearable device receives second audio sent by the electronic apparatus and generated by a second application running on it, translates the second audio into second voice, and plays the second voice. In this way, the two wearable devices can each receive and translate audio from the electronic apparatus, so that audio generated by different applications on the electronic apparatus is translated separately by the two devices.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1a is a schematic diagram of a network architecture disclosed in an embodiment of the present application;
fig. 1b is a schematic structural diagram of a wearable device disclosed in the embodiments of the present application;
fig. 2 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a first wearable device disclosed in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a second wearable device disclosed in the embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of a translation method disclosed in an embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of another translation method disclosed in embodiments of the present application;
fig. 7 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a translation apparatus disclosed in an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes embodiments of the present application in detail.
Referring to fig. 1a, fig. 1a is a schematic diagram of a network architecture according to an embodiment of the present disclosure. In the network architecture shown in fig. 1a, a first wearable device 100, a second wearable device 200, and an electronic apparatus 300 may be included, wherein the first wearable device 100 may establish a communication connection with the electronic apparatus 300 through a wireless network (e.g., bluetooth, infrared, or WiFi), and the second wearable device 200 may also establish a communication connection with the electronic apparatus 300 through the wireless network. Both the first wearable device 100 and the second wearable device 200 may include a speaker, a processing module (e.g., a processor and memory), and a communication module (e.g., a bluetooth module). In the network architecture shown in fig. 1a, the first wearable device 100 and the second wearable device 200 both have a speech translation function, and speech data transmission can be realized between the first wearable device 100 and the electronic apparatus 300, and speech data transmission can also be realized between the second wearable device 200 and the electronic apparatus 300. The first wearable device 100 and the second wearable device 200 can respectively receive and translate audio from the electronic device 300, so that the two wearable devices can respectively translate audio generated by different applications in the electronic device.
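The routing described above (each wearable device receiving and translating audio generated by a different application on the electronic apparatus) can be sketched as follows. This is a minimal illustration only; all names (`AudioPacket`, `WearableDevice`, `route`) and the representation of audio as recognized text are assumptions, not part of the disclosed implementation.

```python
# Minimal sketch of per-application audio routing to two wearable devices.
# All names are illustrative; audio payloads are modeled as recognized text.
from dataclasses import dataclass


@dataclass
class AudioPacket:
    app_id: str  # application on the electronic apparatus that generated the audio
    text: str    # audio payload, represented here as recognized text


class WearableDevice:
    def __init__(self, name, translate):
        self.name = name
        self.translate = translate  # this device's translation function
        self.played = []            # stands in for the speaker output

    def receive(self, packet):
        # Translate the incoming audio, then "play" the resulting voice.
        self.played.append(self.translate(packet.text))


def route(packet, bindings):
    """Deliver a packet to the device bound to its source application."""
    bindings[packet.app_id].receive(packet)


# Example: first device translates to Chinese, second device to English.
first = WearableDevice("first", lambda t: f"zh({t})")
second = WearableDevice("second", lambda t: f"en({t})")
bindings = {"app1": first, "app2": second}

route(AudioPacket("app1", "hello"), bindings)
route(AudioPacket("app2", "konnichiwa"), bindings)
```

Because each device holds its own translation function, the two devices can translate concurrently and into different languages.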
The wearable device may be a portable listening device (e.g., a wireless headset), a smart bracelet, a smart earring, a smart headband, a smart helmet, and so forth. For convenience of explanation, the wearable device in the following embodiments is described by taking a wireless headset as an example.
The wireless earphone can be an ear-hook earphone, an in-ear earphone, or a headphone; the embodiments of the application are not limited in this regard.
The wireless headset may be housed in a headset case, which may include: two receiving cavities (a first receiving cavity and a second receiving cavity) sized and shaped to receive a pair of wireless headsets (a first wireless headset and a second wireless headset); one or more earphone housing magnetic components disposed within the case for magnetically attracting and respectively magnetically securing a pair of wireless earphones into the two receiving cavities. The earphone box may further include an earphone cover. Wherein the first receiving cavity is sized and shaped to receive a first wireless headset and the second receiving cavity is sized and shaped to receive a second wireless headset.
The wireless headset may include a headset housing, a rechargeable battery (e.g., a lithium battery) disposed within the headset housing, a plurality of metal contacts for connecting the battery to a charging device, and a speaker assembly. The metal contacts are disposed on an exterior surface of the headset housing. The speaker assembly includes a directional sound port and a driver unit; the driver unit includes a magnet, a voice coil, and a diaphragm, and emits sound through the directional sound port.
In one possible implementation, the wireless headset may further include a touch area, which may be located on an outer surface of the headset housing, and at least one touch sensor is disposed in the touch area for detecting a touch operation, and the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor may detect a change in self-capacitance to recognize a touch operation.
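The self-capacitance detection described above can be sketched as a simple threshold test. The baseline and threshold values are illustrative assumptions; a real capacitive touch controller would calibrate these continuously.

```python
# Sketch of recognizing a touch from a self-capacitance reading:
# a finger on the touch area raises the measured capacitance above
# an untouched baseline. Values and units here are illustrative only.
def is_touched(capacitance, baseline, threshold=5.0):
    """Report a touch when the capacitance rises by more than
    `threshold` over the untouched baseline reading."""
    return (capacitance - baseline) > threshold
```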
In one possible implementation, the wireless headset may further include an acceleration sensor and a triaxial gyroscope, the acceleration sensor and the triaxial gyroscope may be disposed within the headset housing, and the acceleration sensor and the triaxial gyroscope are used to identify a picking up action and a taking down action of the wireless headset.
In a possible implementation manner, the wireless headset may further include at least one air pressure sensor, and the air pressure sensor may be disposed on a surface of the headset housing and configured to detect air pressure in the ear after the wireless headset is worn. The wearing tightness of the wireless earphone can be detected through the air pressure sensor. When it is detected that the wireless headset is worn loosely, the wireless headset may send a prompt message to an electronic device (e.g., a mobile phone) connected to the wireless headset to prompt a user that the wireless headset is at risk of falling.
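The wear-tightness check can be sketched as a comparison of in-ear pressure against ambient pressure. The threshold and the prompt string are illustrative assumptions, not values from the patent.

```python
# Sketch of a wear-tightness check based on in-ear air pressure.
# A well-sealed ear canal holds a pressure offset from ambient;
# too small an offset suggests the earbud is worn loosely.
# The 50 Pa threshold is an illustrative assumption.
def check_fit(pressure_pa, ambient_pa, min_delta=50.0):
    if abs(pressure_pa - ambient_pa) < min_delta:
        # Would be forwarded to the connected phone as a prompt message.
        return "prompt: headset at risk of falling"
    return "fit ok"
```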
Referring to fig. 1b, fig. 1b is a schematic structural diagram of a wearable device disclosed in the embodiment of the present application, the wearable device 100 includes a storage and processing circuit 710, and a communication circuit 720 and an audio component 740 connected to the storage and processing circuit 710, wherein in some specific wearable devices, a display component 730 or a touch component may be further disposed.
The wearable device 100 may include control circuitry, which may include storage and processing circuitry 710. The storage and processing circuit 710 may be a memory, such as a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. The processing circuitry in the storage and processing circuitry 710 may be used to control the operation of the wearable device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 710 may be used to run software in the wearable device 100, such as Voice Over Internet Protocol (VOIP) phone call applications, simultaneous interpretation functions, media playing applications, operating system functions, and the like. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in wearable device 100, to name a few.
Wearable device 100 may also include input-output circuitry 750. The input-output circuitry 750 allows the wearable device 100 to input and output data, i.e., to receive data from external devices and to output data from the wearable device 100 to external devices. Input-output circuit 750 may further include a sensor 770. The sensors 770 may include ambient light sensors, proximity sensors based on light and capacitance, touch sensors (e.g., optical and/or capacitive touch sensors, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), acceleration sensors, and other sensors.
Input-output circuitry 750 may also include a touch sensor array (i.e., display 730 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The wearable device 100 may also include an audio component 740. The audio component 740 may be used to provide audio input and output functionality for the wearable device 100. The audio components 740 in the wearable device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sounds.
The communication circuit 720 may be used to provide the wearable device 100 with the ability to communicate with external devices. The communications circuitry 720 may include analog and digital input-output interface circuitry, and wireless communications circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 720 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 720 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, communications circuitry 720 may include a near field communications antenna and a near field communications transceiver. The communications circuitry 720 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so forth.
The wearable device 100 may further include a battery, power management circuitry, and other input-output units 760. Input-output unit 760 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes or other status indicators, and the like.
A user may input commands through the input-output circuitry 750 to control the operation of the wearable device 100, and may receive status information and other outputs from the wearable device 100 through the same circuitry.
Based on the network architecture of fig. 1a, a wearable device is disclosed. Referring to fig. 2, fig. 2 is a schematic structural diagram of a wearable apparatus disclosed in the embodiment of the present application, the wearable apparatus 10 may include a first wearable device 100 and a second wearable device 200, and both the first wearable device 100 and the second wearable device 200 establish a wireless connection with an electronic apparatus 300;
the first wearable device 100 is used for receiving first audio which is sent by the electronic device 300 and generated by a first application running on the electronic device 300, translating the first audio into first voice and playing the first voice;
the second wearable device 200 is configured to receive second audio generated by a second application running on the electronic apparatus 300 and transmitted by the electronic apparatus 300, translate the second audio into a second voice, and play the second voice.
In this embodiment, the first wearable device 100 may translate the first audio into a first voice; the second wearable device 200 may translate the second audio into a second voice. Wherein the language type of the first audio is different from the language type of the first speech; the second audio is of a different language type than the second speech. For example, the language type of the first audio is english, and the language type of the first speech is chinese; the language type of the second audio is Japanese, and the language type of the second voice is English. The language type of the first audio and the language type of the second audio can be the same or different; the language type of the first speech and the language type of the second speech may be the same or different.
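The per-device language configuration described above can be sketched as a (source, target) language pair held by each device; the language codes and the validity check are illustrative assumptions.

```python
# Sketch of per-device translation configuration: each wearable device
# holds its own (source, target) language pair, and the pairs may differ
# between the two devices. Language codes are illustrative.
first_pair = ("en", "zh")   # first audio: English -> first voice: Chinese
second_pair = ("ja", "en")  # second audio: Japanese -> second voice: English


def valid_pair(pair):
    # A translation pair is meaningful only when source and target differ,
    # matching the text's requirement that the two language types differ.
    src, dst = pair
    return src != dst
```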
For example, the first wearable device 100 and the second wearable device 200 may together constitute one headset, e.g., as its left and right earpieces.
In the embodiment of the present application, a speech translation key may be provided on the first wearable device 100 and the second wearable device 200. The first wearable device 100 will be described as an example.
When the user presses the speech translation key provided on the first wearable device 100, the first wearable device 100 is triggered to enter the speech translation mode; pressing the key again exits the mode. The speech translation key may also carry a language selection function: pressing the key up or down opens or exits the speech translation mode, while pressing it left or right switches the language type to be translated. When the key is pressed left or right, an alert sound for the selected language type may be output from the speaker of the first wearable device 100. In this embodiment, a single key thus implements both the speech translation switch and the language selection function, which reduces the number of keys on the first wearable device and lowers material costs.
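The behavior of the multi-direction speech translation key can be sketched as a small state machine. The event names ("up", "down", "left", "right") and the language list are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the single multi-direction key: up/down presses toggle the
# speech translation mode, left/right presses cycle the language to
# translate into. Event names and the language list are illustrative.
class TranslationKey:
    LANGS = ["zh", "en", "ja"]

    def __init__(self):
        self.mode_on = False
        self.lang_idx = 0

    def press(self, direction):
        if direction in ("up", "down"):
            self.mode_on = not self.mode_on  # open or exit translation mode
        elif direction == "right":
            self.lang_idx = (self.lang_idx + 1) % len(self.LANGS)
        elif direction == "left":
            self.lang_idx = (self.lang_idx - 1) % len(self.LANGS)
        return self.LANGS[self.lang_idx]  # language an alert tone would announce


k = TranslationKey()
k.press("up")                         # enter translation mode
lang_after_right = k.press("right")   # cycle forward through languages
lang_after_left = k.press("left")     # cycle back
k.press("down")                       # exit translation mode
```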
Optionally, the first wearable device 100 is further configured to receive a voice translation instruction input by the user, and enter a voice translation mode;
the first wearable device 100 is further configured to receive a voice selection instruction to be translated selected by the user, and select a language type corresponding to the first voice as the language type of the voice to be translated.
In this embodiment, the surface of the first wearable device 100 may be provided with a touch area for detecting a touch operation of the user. For example, a pressure sensor may be disposed in a preset area of the surface of the first wearable device 100, and the first wearable device 100 may generate a corresponding control instruction according to the force and duration of the user's press in the touch area, to control whether to enter the speech translation mode and to select the language type to be translated. As another example, the first wearable device 100 may detect the number of taps by the user in the touch area within a unit time (e.g., one or two seconds), and generate the corresponding control instruction from a mapping between tap counts and control instructions. For instance, after a single tap, the first wearable device 100 outputs an alert tone through the speaker to prompt the user that the speech translation mode has been entered. Since no physical key is needed, this saves space on the first wearable device and improves space utilization.
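The tap-count control described above can be sketched as counting taps within a unit-time window and looking up a control instruction. The window length and the tap-to-instruction mapping are illustrative assumptions.

```python
# Sketch of deriving a control instruction from the number of taps
# detected within a unit-time window. The window length and the
# tap-count-to-instruction mapping are illustrative assumptions.
def count_taps(timestamps, window=1.0):
    """Count taps whose time falls within `window` seconds of the first tap."""
    if not timestamps:
        return 0
    start = timestamps[0]
    return sum(1 for t in timestamps if t - start <= window)


# Hypothetical mapping from tap count to a control instruction.
TAP_COMMANDS = {1: "enter_translation_mode", 2: "exit_translation_mode"}


def instruction_for(timestamps):
    return TAP_COMMANDS.get(count_taps(timestamps))
```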
Optionally, the second wearable device 200 is further configured to receive a voice translation instruction input by the user, and enter a voice translation mode;
the second wearable device 200 is further configured to receive a voice selection instruction to be translated selected by the user, and select a language type corresponding to the second voice as the language type of the voice to be translated.
For a specific embodiment of the second wearable device 200 selecting the language type of the speech to be translated, reference may be made to the description of the first wearable device 100 selecting the language type of the speech to be translated; it is not described in detail here again.
Optionally, the surface of the first wearable device 100 may further be provided with a fingerprint detection area. When the user presses the fingerprint detection area, the fingerprint sensor of the first wearable device 100 starts to work, collects the fingerprint input by the user, and verifies it; when the input fingerprint is detected to match a pre-stored fingerprint template, the verification is determined to have passed, and the user is allowed to perform touch operations on the first wearable device. Fingerprint verification thus prevents an unauthorized user from controlling the first wearable device and improves its security.
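The fingerprint gate can be sketched as a match of the collected features against the stored template. The toy similarity measure and the 0.9 threshold are illustrative stand-ins for a real fingerprint matcher.

```python
# Sketch of the fingerprint gate: touch control is permitted only after
# the collected fingerprint features match a pre-stored template.
# The element-wise similarity and the 0.9 threshold are illustrative
# stand-ins for a real minutiae-based matcher.
def fingerprint_match(features, template, threshold=0.9):
    # Toy similarity: fraction of feature values that agree exactly.
    same = sum(1 for a, b in zip(features, template) if a == b)
    return same / len(template) >= threshold


def allow_touch(features, template):
    """Gate touch operations on a successful fingerprint match."""
    return fingerprint_match(features, template)
```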
Optionally, please refer to fig. 3, where fig. 3 is a schematic structural diagram of a first wearable device disclosed in the embodiment of the present application. As shown in fig. 3, the first wearable device 100 may include a first communication module 11, a first speaker 12 and a first controller 13, the first communication module 11 and the first speaker 12 are connected to the first controller 13, wherein:
the first communication module 11 is configured to receive a first audio generated by a first application running on the electronic device 300 and transmitted by the electronic device 300.
A first controller 13 for translating the first audio into a first voice.
And a first speaker 12 for playing the first voice.
Optionally, the first wearable device 100 may further include at least one first microphone, and the first microphone may capture voice input by the user.
In this embodiment, the first communication module 11 may include a first bluetooth module, and the first wearable device 100 may establish a communication connection with the electronic apparatus 300 through the first bluetooth module, and may receive audio transmitted by the electronic apparatus 300 and transmit voice input by a user to the electronic apparatus 300.
The first controller 13 may include a processor and a memory, the processor being a control center of the first wearable device 100, connecting various parts of the entire wearable device using various interfaces and lines, performing various functions of the wearable device and processing data by running or executing software programs and/or modules stored in the memory, and calling data stored in the memory, thereby monitoring the wearable device as a whole. Optionally, the processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory can be used for storing software programs and modules, and the processor executes various functional applications and data processing of the wearable device by running the software programs and modules stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to use of the wearable device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
Optionally, the first wearable device 100 may further be provided with voiceprint verification, and enter the speech translation mode only after the voiceprint verification is passed. After a microphone of the first wearable device 100 collects a first voice input by a user, the first controller 13 performs voiceprint verification on the first voice: it extracts a first voiceprint feature from the first voice, matches the first voiceprint feature against a pre-stored voiceprint feature template, and determines that the verification is passed when the first voiceprint feature matches the pre-stored voiceprint feature template, after which the first controller 13 translates the first voice into a second voice and performs the subsequent operations. By performing voiceprint verification, the first wearable device can prevent an unfamiliar user from controlling it, improving the security of the first wearable device. Similarly, the second wearable device 200 may also be provided with voiceprint verification.
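The matching verification of the extracted voiceprint feature against the pre-stored template can be sketched as follows. This is an illustrative Python sketch only: the use of fixed-length feature vectors, cosine similarity, and the 0.85 threshold are assumptions for illustration, not part of the disclosed embodiment, which does not specify a matching algorithm.

```python
import math

MATCH_THRESHOLD = 0.85  # assumed similarity threshold, not from the disclosure


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def voiceprint_verify(first_feature, stored_template):
    """Return True when the extracted voiceprint feature matches the
    pre-stored voiceprint feature template closely enough to pass."""
    return cosine_similarity(first_feature, stored_template) >= MATCH_THRESHOLD


# The owner's extracted feature is close to the stored template;
# a stranger's feature vector is not, so verification fails.
owner = [0.9, 0.1, 0.4]
template = [0.88, 0.12, 0.41]
stranger = [0.1, 0.9, 0.2]
```

In this sketch, only a passing verification would allow the controller to proceed with translating the collected voice.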
Optionally, please refer to fig. 4, where fig. 4 is a schematic structural diagram of a second wearable device disclosed in the embodiment of the present application. As shown in fig. 4, the second wearable device 200 may include a second communication module 21, a second speaker 22 and a second controller 23, the second communication module 21 and the second speaker 22 are connected to the second controller 23, wherein:
the second communication module 21 is configured to receive second audio, which is sent by the electronic device 300 and generated by a second application running on the electronic device 300.
A second controller 23 for translating the second audio into a second voice.
And a second speaker 22 for playing the second voice.
Optionally, the second wearable device 200 may further include at least one second microphone, and the second microphone may collect voice input by the user.
In this embodiment, the second communication module 21 may include a second bluetooth module, and the second wearable device 200 may establish a communication connection with the electronic apparatus 300 through the second bluetooth module, and may receive the audio sent by the electronic apparatus 300 and send the voice input by the user to the electronic apparatus 300.
The second controller 23, like the first controller 13, may also include a processor and a memory. The functions and principles of the second controller 23 can be referred to the description of the first controller 13, and will not be described herein.
In the embodiment of the present application, the electronic device 300 may simultaneously run the first application and the second application, each of which may generate audio during operation, and the two applications may generate audio at the same time. The first application and the second application may be different APPs, or may be split-screen instances of the same APP.
Optionally, the first application may be bound with the first wearable device 100, and the second application may be bound with the second wearable device 200. After the electronic device 300 establishes communication connection with both the first wearable apparatus 100 and the second wearable apparatus 200, the electronic device 300 transmits the audio generated by the first application to the first wearable apparatus 100 and transmits the audio generated by the second application to the second wearable apparatus 200.
Alternatively, the user may bind different applications with different wearable devices. After the electronic apparatus 300 establishes communication connection with both the first wearable device 100 and the second wearable device 200, the user may select to bind the first application with the first wearable device 100 and select to bind the second application with the second wearable device 200 in the electronic apparatus 300. At this time, the first wearable device 100 can only receive the audio generated by the first application, and cannot receive the audio generated by the second application; the second wearable device 200 can only receive audio generated by the second application and cannot receive audio generated by the first application.
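The binding and routing behavior described above can be sketched as a small routing table on the electronic device: audio from each application is forwarded only to the wearable device it is bound to. This is an illustrative sketch; the class and method names are hypothetical and the disclosure does not specify a data structure.

```python
class AudioRouter:
    """Routes audio generated by each application to the wearable
    device the user bound it to; audio from an unbound application
    is not forwarded to any device."""

    def __init__(self):
        self.bindings = {}  # application name -> bound wearable device

    def bind(self, app, device):
        self.bindings[app] = device

    def route(self, app, audio_chunk):
        device = self.bindings.get(app)
        if device is None:
            return None  # no bound device: the audio is dropped
        return (device, audio_chunk)


# The user binds the first application to the first wearable device
# and the second application to the second wearable device.
router = AudioRouter()
router.bind("first_app", "first_wearable")
router.bind("second_app", "second_wearable")
```

Under this sketch, the first wearable device never receives audio generated by the second application, and vice versa, matching the exclusivity described above.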
Optionally, the first application and the second application are both video playing applications, and may be displayed on the electronic device 300 in a split-screen manner, for example in two different areas of a display screen of the electronic device 300. The embodiment of the application is thus suitable for a scenario in which two users watch two different videos using the same electronic device.
In the embodiment of the application, the two wearable devices can receive the audio from the electronic device and translate the audio respectively, and the two wearable devices can translate the audio generated by different applications in the electronic device respectively.
Optionally, the first wearable device 100 translates the first audio into a first voice, specifically:
the first wearable device 100 sends a translation request to a translation server, wherein the translation request carries a first audio and a first voice identifier, and the translation request is used for the translation server to translate the first audio into a first voice corresponding to the first voice identifier;
the first wearable device 100 receives the first voice returned by the translation server.
In this embodiment, the first wearable device 100 may have a networking function: it may connect to a cellular network and access a translation server through a base station, where the translation server implements the speech translation function. Specifically, the first wearable device 100 may send a translation request to the translation server, where the translation request carries the first audio and a first voice identifier, and the first voice identifier may be generated according to the language type selected by the first user on the first wearable device 100. The translation server translates the first audio into a first voice corresponding to the first voice identifier and sends the translated first voice to the first wearable device 100.
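The translation request carrying the first audio and the first voice identifier can be sketched as follows. The JSON payload shape, field names, and the mapping from the user-selected language type to a voice identifier are all assumptions for illustration; the disclosure does not specify a wire format.

```python
import base64
import json

# Hypothetical mapping from the language type the user selected on the
# wearable device to the voice identifier carried in the request.
LANGUAGE_TO_VOICE_ID = {
    "English": "voice-en",
    "Chinese": "voice-zh",
}


def build_translation_request(audio_bytes, selected_language):
    """Build a translation request carrying the first audio and the
    first voice identifier (payload fields are illustrative)."""
    return json.dumps({
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
        "voice_id": LANGUAGE_TO_VOICE_ID[selected_language],
    })
```

The translation server would decode the audio, translate it into the voice corresponding to `voice_id`, and return the synthesized speech to the wearable device.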
The translation server translates the first audio into a first voice corresponding to the first voice identifier, which may specifically be:
the translation server starts a voice recognition function, converts the first audio into a first text, translates the first text into a second text corresponding to the first voice identifier, and generates a first voice according to the second text.
In the embodiment of the application, the wearable device can connect to a cellular network directly, without a third-party device (such as a mobile phone) acting as a relay for voice translation. Voice translation can therefore be performed anytime and anywhere, and performed quickly, improving the real-time performance of voice translation and thereby realizing real-time voice translation.
Optionally, the second wearable device 200 translates the second audio into a second voice, specifically:
the second wearable device 200 sends a translation request to a translation server, where the translation request carries a second audio and a second voice identifier, and the translation request is used for the translation server to translate the second audio into a second voice corresponding to the second voice identifier;
the second wearable device 200 receives the second speech returned by the translation server.
For a specific implementation manner of the second wearable device 200 to translate the second audio into the second voice, reference may be made to the specific implementation of the first wearable device 100 to translate the first audio into the first voice, which is not described herein again.
Optionally, the first application and the second application are video playing applications running on the electronic device 300 at the same time.
Wherein the first application and the second application can be displayed in two different areas of the display screen of the electronic device 300. The first application and the second application may be different APPs, or may be split-screen applications of the same APP.
Optionally, the first wearable device 100 is further configured to receive a first video playing parameter adjusting instruction input by the user, where the first video playing parameter adjusting instruction is used to adjust a video playing parameter of a first application running on the electronic apparatus 300;
the second wearable device 200 is further configured to receive a second video playing parameter adjusting instruction input by the user, where the second video playing parameter adjusting instruction is used to adjust a video playing parameter of a second application running on the electronic apparatus 300.
In this embodiment, the user may input the video playing parameter adjusting instruction through a key or button provided on the first wearable device 100, or through touch input in a touch area on the surface of the first wearable device 100. The video playing parameter adjusting instruction may be used to adjust a video playing parameter of the first application running on the electronic device 300. The video playing parameters of the first application include video playing volume, video brightness, video fast forward and fast backward, and the like. The video playing parameter adjusting instruction may be any one of a video playing volume adjusting instruction, a video brightness adjusting instruction, and a video fast forward or fast backward adjusting instruction.
For example, the surface of the first wearable device 100 may be provided with a touch area for detecting a user touch operation. A pressure sensor may be disposed in a preset area of the surface of the first wearable device 100, and the first wearable device 100 may generate a corresponding video playing parameter adjusting instruction according to the pressing position and the pressing force of the user in the touch area. For example, the first wearable device 100 may detect the pressing force in a first sub-region of the touch area to increase the video playing volume, where the greater the pressing force in the first sub-region, the larger the volume increase; the first wearable device 100 may detect the pressing force in a second sub-region of the touch area to decrease the video playing volume, where the greater the pressing force in the second sub-region, the larger the volume decrease. The first sub-region and the second sub-region do not overlap. Since no physical key is needed, this embodiment can save space on the first wearable device and improve space utilization.
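The mapping from pressing force to volume change can be sketched as follows: pressing the first sub-region raises the volume, pressing the second sub-region lowers it, and a harder press produces a larger change. The pressure normalization, the per-press maximum step, and the 0–100 volume scale are assumptions for illustration; the disclosure only requires that a greater pressing force yield a larger adjustment.

```python
MAX_STEP = 10  # assumed maximum volume change for a full-force press


def volume_delta(sub_region, pressure):
    """Map a press in the touch area to a volume change: the first
    sub-region increases volume, the second decreases it, and a
    harder press (pressure normalized to 0..1) changes it more."""
    step = round(MAX_STEP * max(0.0, min(1.0, pressure)))
    if sub_region == "first":
        return step
    if sub_region == "second":
        return -step
    return 0  # press outside both sub-regions: no adjustment


def apply_delta(volume, delta):
    """Apply a volume change, clamped to an assumed 0..100 scale."""
    return max(0, min(100, volume + delta))
```

Note that because the two sub-regions do not overlap, a single press maps unambiguously to either an increase or a decrease.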
In the embodiment of the application, the first wearable device and the second wearable device can respectively adjust video playing parameters of a first application and a second application on the electronic device, so that two users can watch two different videos simultaneously by using the same electronic device, and the two wearable devices can respectively translate audio generated by different applications in the electronic device.
Please refer to fig. 5, fig. 5 is a flowchart illustrating a translation method according to an embodiment of the present disclosure. As shown in fig. 5, the translation method includes the following steps.
501, a first wearable device receives a first audio generated by a first application running on an electronic device and sent by the electronic device, translates the first audio into a first voice, and plays the first voice.
502, the second wearable device receives second audio generated by a second application running on the electronic device and sent by the electronic device, translates the second audio into a second voice, and plays the second voice; the first wearable device and the second wearable device both establish wireless connection with the electronic device.
Step 501 and step 502 may be executed simultaneously, or step 501 may be executed after step 502, or step 501 may be executed before step 502, which is not limited in this embodiment of the present application.
Step 501 may translate and play all audio generated by a first application running on the electronic device one by one; step 502 may translate and play all audio generated by a second application running on the electronic device one by one.
The electronic device may periodically (e.g., every two seconds) transmit audio generated by the first application to the first wearable device; similarly, the electronic device may periodically (e.g., every two seconds) transmit audio generated by the second application to the second wearable device.
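The periodic transmission described above can be sketched as splitting the application's audio stream into fixed-period chunks before sending each chunk to the bound wearable device. The function name and sample-based representation are illustrative; the disclosure specifies only the periodic (e.g., two-second) transmission.

```python
def chunk_audio(samples, sample_rate, period_s=2.0):
    """Split a stream of audio samples into fixed-period chunks
    (e.g., one chunk every two seconds) so each chunk can be sent
    periodically to the bound wearable device for translation."""
    chunk_len = int(sample_rate * period_s)
    return [samples[i:i + chunk_len]
            for i in range(0, len(samples), chunk_len)]
```

Each chunk would then be carried in a translation request as described in the embodiments above, and the final chunk may be shorter than the period.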
Optionally, in step 501, translating, by the first wearable device, the first audio into the first voice may include: step (11) and step (12).
(11) The first wearable device sends a translation request to a translation server, wherein the translation request carries a first audio and a first voice identifier, and the translation request is used for the translation server to translate the first audio into a first voice corresponding to the first voice identifier;
(12) the first wearable device receives the first voice returned by the translation server.
Optionally, in step 502, the translating, by the second wearable device, the second audio into the second voice may include: step (21) and step (22).
(21) The second wearable device sends a translation request to a translation server, wherein the translation request carries a second audio and a second voice identifier, and the translation request is used for the translation server to translate the second audio into a second voice corresponding to the second voice identifier;
(22) the second wearable device receives a second voice returned by the translation server.
Optionally, the first application and the second application are video playing applications running on the electronic device at the same time.
Optionally, fig. 5 may further include the following steps:
the method comprises the steps that a first wearable device receives a first video playing parameter adjusting instruction input by a user, wherein the first video playing parameter adjusting instruction is used for adjusting video playing parameters of a first application running on an electronic device;
the second wearable device receives a second video playing parameter adjusting instruction input by the user, and the second video playing parameter adjusting instruction is used for adjusting video playing parameters of a second application running on the electronic device.
The specific implementation of the method shown in fig. 5 can refer to the embodiments of the apparatuses shown in fig. 1 to fig. 4, and is not described herein again.
In the embodiment of the application, the two wearable devices can receive the audio from the electronic device and translate the audio respectively, and the two wearable devices can translate the audio generated by different applications in the electronic device respectively.
Referring to fig. 6, fig. 6 is a schematic flowchart of another translation method disclosed in the embodiment of the present application. Fig. 6 is further optimized based on fig. 5, and as shown in fig. 6, the translation method includes the following steps.
601, the first wearable device receives a voice translation instruction input by a user and enters a voice translation mode.
602, the first wearable device receives a voice selection instruction to be translated selected by a user, and selects a language type corresponding to the first voice as a language type of the voice to be translated.
603, the first wearable device receives a first audio generated by a first application running on the electronic device and sent by the electronic device, translates the first audio into a first voice, and plays the first voice.
604, the second wearable device receives a second audio sent by the electronic device and generated by a second application running on the electronic device, translates the second audio into a second voice, and plays the second voice; the first wearable device and the second wearable device both establish wireless connection with the electronic device.
Optionally, before performing step 604, the following steps may also be performed:
the second wearable device receives a voice translation instruction input by a user and enters a voice translation mode;
the second wearable device receives a voice selection instruction to be translated selected by the user, and selects the language type corresponding to the second voice as the language type of the voice to be translated.
Step 603 and step 604 in the embodiment of the present application may refer to step 501 and step 502 shown in fig. 5, which are not described herein again.
The specific implementation of the method shown in fig. 6 can refer to the embodiments of the apparatuses shown in fig. 1 to fig. 4, and is not described herein again.
In the embodiment of the application, the first wearable device and the second wearable device can respectively adjust video playing parameters of a first application and a second application on the electronic device, so that two users can watch two different videos simultaneously by using the same electronic device, and the two wearable devices can respectively translate audio generated by different applications in the electronic device.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application, and as shown in the drawing, the wearable device 700 includes a processor 701, a memory 702, a communication interface 703, and one or more programs, where the one or more programs are stored in the memory 702 and configured to be executed by the processor 701, and the programs include instructions for performing the following steps:
the method comprises the steps of receiving a first audio frequency generated by a first application running on the electronic device and sent by the electronic device, translating the first audio frequency into a first voice, and playing the first voice.
Optionally, in terms of translating the first audio into the first voice, the program is specifically configured to execute the following steps:
sending a translation request to a translation server, wherein the translation request carries a first audio and a first voice identifier, and the translation request is used for the translation server to translate the first audio into a first voice corresponding to the first voice identifier;
and receiving the first voice returned by the translation server.
Optionally, the first application and the second application are video playing applications running on the electronic device at the same time.
Optionally, the program includes instructions for further performing the following steps:
receiving a first video playing parameter adjusting instruction input by a user, wherein the first video playing parameter adjusting instruction is used for adjusting video playing parameters of a first application running on the electronic device.
Optionally, the program includes instructions for further performing the following steps:
receiving a voice translation instruction input by a user, and entering a voice translation mode;
and receiving a voice selection instruction to be translated selected by a user, and selecting the language type corresponding to the first voice as the language type of the voice to be translated.
The specific implementation of the apparatus shown in fig. 7 can refer to the apparatus embodiments shown in fig. 1 to 4, and is not described herein again.
By implementing the wearable device shown in fig. 7, the wearable device can receive audio from the electronic device and translate it.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a translation apparatus disclosed in an embodiment of the present application, applied to a wearable apparatus, the translation apparatus 800 includes a first receiving unit 801, a first translation unit 802, a first playing unit 803, a second receiving unit 804, a second translation unit 805, and a second playing unit 806, where:
the first receiving unit 801 is configured to receive a first audio generated by a first application running on the electronic device and transmitted by the electronic device.
A first translating unit 802 for translating the first audio into a first voice.
A first playing unit 803 is used for playing the first voice.
The second receiving unit 804 is configured to receive second audio, which is transmitted by the electronic apparatus and generated by a second application running on the electronic apparatus.
A second translation unit 805 for translating the second audio into a second voice.
A second playing unit 806, configured to play the second voice.
The first translation unit 802 and the second translation unit 805 may be a processor or controller (e.g., a Central Processing Unit (CPU)), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The first receiving unit 801 and the second receiving unit 804 may be wireless communication modules (e.g., Bluetooth modules), and the first playing unit 803 and the second playing unit 806 may be speakers.
The specific implementation of the apparatus shown in fig. 8 can refer to the apparatus embodiments shown in fig. 1 to 4, and is not described herein again.
By implementing the translation apparatus shown in fig. 8, two wearable devices can each receive and translate audio from the electronic device, and the two wearable devices can respectively translate audio generated by different applications on the electronic device.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to perform part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes a wearable device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a wearable device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program codes, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by relevant hardware instructed by a program, which may be stored in a computer-readable memory; the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific implementation and application scope, and in view of the above, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. A wearable apparatus, comprising a first wearable device and a second wearable device, both of which establish a wireless connection with an electronic apparatus; the first wearable device and the second wearable device form a headset;
the first wearable device is used for receiving first audio which is sent by the electronic device and generated by a first application running on the electronic device, translating the first audio into first voice and playing the first voice;
the second wearable device is used for receiving second audio which is sent by the electronic device and generated by a second application running on the electronic device, translating the second audio into second voice and playing the second voice;
the first application is bound with the first wearable device, and the second application is bound with the second wearable device;
the first application and the second application are video playing applications running on the electronic device at the same time, and the first application and the second application are displayed in two different areas of a display screen of the electronic device.
2. The wearable apparatus according to claim 1, wherein the first wearable device translates the first audio into a first voice, in particular:
the first wearable device sends a translation request to a translation server, wherein the translation request carries the first audio and a first voice identifier, and the translation request is used for the translation server to translate the first audio into first voice corresponding to the first voice identifier;
the first wearable device receives the first voice returned by the translation server.
3. The wearable device of claim 1,
the first wearable device is further configured to receive a first video playing parameter adjusting instruction input by a user, where the first video playing parameter adjusting instruction is used to adjust a video playing parameter of the first application running on the electronic device;
the second wearable device is further configured to receive a second video playing parameter adjusting instruction input by a user, where the second video playing parameter adjusting instruction is used to adjust a video playing parameter of the second application running on the electronic apparatus.
4. The wearable device according to any of claims 1-3,
the first wearable device is further used for receiving a voice translation instruction input by a user and entering a voice translation mode;
the first wearable device is further configured to receive a voice selection instruction to be translated selected by the user, and select a language type corresponding to the first voice as the language type of the voice to be translated.
5. A method of translation, the method comprising:
the method comprises the steps that a first wearable device receives first audio which is sent by an electronic device and generated by a first application running on the electronic device, translates the first audio into first voice, and plays the first voice;
the second wearable device receives second audio which is sent by the electronic device and generated by a second application running on the electronic device, translates the second audio into second voice and plays the second voice; the first wearable device and the second wearable device both establish wireless connection with the electronic device; the first wearable device and the second wearable device form a headset;
the first application is bound with the first wearable device, and the second application is bound with the second wearable device;
the first application and the second application are video playing applications running on the electronic device at the same time, and the first application and the second application are displayed in two different areas of a display screen of the electronic device.
6. The method of claim 5, wherein the first wearable device translating the first audio into the first voice comprises:
the first wearable device sends a translation request to a translation server, wherein the translation request carries the first audio and a first voice identifier, and the translation request is used for the translation server to translate the first audio into first voice corresponding to the first voice identifier;
the first wearable device receives the first voice returned by the translation server.
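Claim 6's request/response exchange can be sketched as below. The dictionary wire format and the mock server are hypothetical; the claim only requires that the request carry the audio and a voice identifier, and that the server return voice in the identified language.

```python
def build_translation_request(audio: bytes, voice_id: str) -> dict:
    """Package the audio and the target-voice identifier, per claim 6."""
    return {"audio": audio, "voice_id": voice_id}


def mock_translation_server(request: dict) -> dict:
    # A real server would run speech recognition, machine translation, and
    # speech synthesis in the language named by voice_id; here we just tag
    # the payload so the round trip is visible.
    translated = b"translated:" + request["audio"]
    return {"voice_id": request["voice_id"], "voice": translated}


req = build_translation_request(b"first-audio", "en-US")
resp = mock_translation_server(req)
```

Offloading translation to a server keeps the wearable device itself thin: it only packages audio, waits, and plays the returned voice.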
7. The method of claim 5, further comprising:
the first wearable device receives a first video playing parameter adjusting instruction input by a user, wherein the first video playing parameter adjusting instruction is used for adjusting video playing parameters of the first application running on the electronic device;
the second wearable device receives a second video playing parameter adjusting instruction input by a user, wherein the second video playing parameter adjusting instruction is used for adjusting video playing parameters of the second application running on the electronic device.
8. The method of any of claims 5-7, wherein, before the first wearable device receives the first audio sent by the electronic device and generated by the first application running on the electronic device, the method further comprises:
the first wearable device receives a voice translation instruction input by a user and enters a voice translation mode;
the first wearable device receives a user-selected instruction specifying the voice to be translated, and selects the language type corresponding to the first voice as the language type of the voice to be translated.
9. A translation device, applied to a wearable device, comprising a first receiving unit, a first translation unit, a first playing unit, a second receiving unit, a second translation unit and a second playing unit, wherein:
the first receiving unit is used for receiving first audio which is sent by an electronic device and generated by a first application running on the electronic device;
the first translation unit is used for translating the first audio into first voice;
the first playing unit is used for playing the first voice;
the second receiving unit is used for receiving second audio which is sent by the electronic device and generated by a second application running on the electronic device;
the second translation unit is used for translating the second audio into second voice;
the second playing unit is used for playing the second voice;
the first application is bound with a first wearable device, and the second application is bound with a second wearable device; the first wearable device and the second wearable device form a headset;
the first application and the second application are video playing applications running on the electronic device at the same time, and the first application and the second application are displayed in two different areas of a display screen of the electronic device.
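Structurally, the device of claim 9 is six units forming two parallel receive-translate-play channels. A minimal sketch, with all names illustrative rather than taken from the patent:

```python
class TranslationDevice:
    """Claim 9's six units as methods; two independent channels."""

    def __init__(self, first_voice_tag, second_voice_tag):
        self.first_voice_tag = first_voice_tag
        self.second_voice_tag = second_voice_tag
        self.first_played = []
        self.second_played = []

    # --- first channel ---
    def first_receiving_unit(self, audio):
        return audio  # would accept audio over the wireless link

    def first_translation_unit(self, audio):
        return f"{audio}->{self.first_voice_tag}"  # stand-in translation

    def first_playing_unit(self, voice):
        self.first_played.append(voice)

    # --- second channel ---
    def second_receiving_unit(self, audio):
        return audio

    def second_translation_unit(self, audio):
        return f"{audio}->{self.second_voice_tag}"

    def second_playing_unit(self, voice):
        self.second_played.append(voice)


dev = TranslationDevice("voice1", "voice2")
dev.first_playing_unit(dev.first_translation_unit(dev.first_receiving_unit("audioA")))
dev.second_playing_unit(dev.second_translation_unit(dev.second_receiving_unit("audioB")))
```

Because the channels share no state, either earbud can receive, translate, and play its stream without blocking the other.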
10. A wearable device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 5-8.
11. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a wearable device to perform the method according to any of claims 5-8.
CN201810619139.2A 2018-06-15 2018-06-15 Translation method, translation device, wearable device and storage medium Expired - Fee Related CN109067965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810619139.2A CN109067965B (en) 2018-06-15 2018-06-15 Translation method, translation device, wearable device and storage medium


Publications (2)

Publication Number Publication Date
CN109067965A CN109067965A (en) 2018-12-21
CN109067965B true CN109067965B (en) 2020-12-22

Family

ID=64821026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810619139.2A Expired - Fee Related CN109067965B (en) 2018-06-15 2018-06-15 Translation method, translation device, wearable device and storage medium

Country Status (1)

Country Link
CN (1) CN109067965B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058836B (en) * 2019-03-18 2020-11-06 维沃移动通信有限公司 Audio signal output method and terminal equipment
CN110213442A (en) * 2019-05-29 2019-09-06 努比亚技术有限公司 Speech playing method, terminal and computer readable storage medium
CN111476040A (en) * 2020-03-27 2020-07-31 深圳光启超材料技术有限公司 Language output method, head-mounted device, storage medium, and electronic device
CN111739538B (en) * 2020-06-05 2022-04-26 北京搜狗科技发展有限公司 Translation method and device, earphone and server
CN111741394A (en) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 Data processing method and device and readable medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105023593A (en) * 2014-04-18 2015-11-04 联想移动通信科技有限公司 Terminal audio playing method, terminal audio playing device and terminal
CN106375563A (en) * 2016-08-30 2017-02-01 珠海市魅族科技有限公司 Method and device for controlling audio output of terminal
CN106454605A (en) * 2016-11-30 2017-02-22 南京小脚印网络科技有限公司 Intelligent translation earphone system
CN206452520U (en) * 2017-02-28 2017-08-29 广州市东声电子科技有限公司 A kind of new simultaneous interpretation earphone

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN106131753A (en) * 2016-06-28 2016-11-16 乐视控股(北京)有限公司 Audio output control method, device and terminal
CN106209130B (en) * 2016-07-26 2019-01-04 维沃移动通信有限公司 A kind of wireless headset and the method using wireless headset output audio data
CN106817490A (en) * 2017-01-12 2017-06-09 努比亚技术有限公司 A kind of terminal and sound playing method



Similar Documents

Publication Publication Date Title
CN108710615B (en) Translation method and related equipment
CN109067965B (en) Translation method, translation device, wearable device and storage medium
US10466961B2 (en) Method for processing audio signal and related products
CN109040887B (en) Master-slave earphone switching control method and related product
CN109068206B (en) Master-slave earphone switching control method and related product
EP3598435B1 (en) Method for processing information and electronic device
CN108668009B (en) Input operation control method, device, terminal, earphone and readable storage medium
CN108810693B (en) Wearable device and device control device and method thereof
CN108886653B (en) Earphone sound channel control method, related equipment and system
CN108391205B (en) Left and right channel switching method and device, readable storage medium and terminal
CN108966067B (en) Play control method and related product
CN108540900B (en) Volume adjusting method and related product
US10630826B2 (en) Information processing device
CN109040446B (en) Call processing method and related product
WO2022033176A1 (en) Audio play control method and apparatus, and electronic device and storage medium
CN109150221B (en) Master-slave switching method for wearable equipment and related product
CN108595003A (en) Function control method and relevant device
CN108897516B (en) Wearable device volume adjustment method and related product
CN106445457A (en) Headphone sound channel switching method and device
WO2021238844A1 (en) Audio output method and electronic device
CN108600887B (en) Touch control method based on wireless earphone and related product
CN108763913 (en) Data processing method, device, terminal, earphone and readable storage medium
CN108668018B (en) Mobile terminal, volume control method and related product
CN108923810A (en) Interpretation method and relevant device
CN115695620A (en) Intelligent glasses and control method and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201222