CN108710615B - Translation method and related equipment

Translation method and related equipment

Info

Publication number
CN108710615B
Authority
CN
China
Prior art keywords
voice
wearable device
user
translation
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810414740.8A
Other languages
Chinese (zh)
Other versions
CN108710615A (en)
Inventor
张海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810414740.8A
Publication of CN108710615A
Application granted
Publication of CN108710615B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/005 Language recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a translation method and related equipment. The translation method is applied to a wearable device comprising a microphone, a loudspeaker and a controller, wherein the microphone is used for collecting a first voice input by a user; the controller is used for translating the first voice into a second voice and sending the second voice to a second wearable device, the second wearable device being used for playing the second voice; and the loudspeaker is used for playing the second voice. By adopting the embodiments of the application, real-time translation of voice can be realized.

Description

Translation method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a translation method and a related device.
Background
With the maturing of wireless technology, scenarios in which wearable devices connect to electronic devices such as mobile phones through wireless technology are becoming more and more common. Through a wearable device, people can realize various functions such as listening to music and making calls.
Disclosure of Invention
The embodiment of the application provides a translation method and related equipment, which can realize real-time translation of voice.
In a first aspect, an embodiment of the present application provides a wearable device, including a microphone, a speaker, and a controller, wherein:
the microphone is used for collecting first voice input by a user;
the controller is used for translating the first voice into a second voice and sending the second voice to a second wearable device, and the second wearable device is used for playing the second voice;
the loudspeaker is used for playing the second voice.
In a second aspect, an embodiment of the present application provides a translation method based on a wearable device, where the method includes:
the method comprises the steps that a first wearable device collects first voice input by a user;
the first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, and the second wearable device is used for playing the second voice;
the first wearable device plays the second voice.
In a third aspect, an embodiment of the present application provides a translation apparatus based on a wearable device, which is applied to the wearable device, and the translation apparatus includes a collection unit, a translation unit, a sending unit, and a playing unit, where:
the acquisition unit is used for acquiring a first voice input by a user;
the translation unit is used for translating the first voice into a second voice;
the sending unit is configured to send the second voice to a second wearable device, and the second wearable device is configured to play the second voice;
the playing unit is used for playing the second voice.
In a fourth aspect, embodiments of the present application provide a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a wearable device to perform some or all of the steps as described in any of the methods of the second aspect of the embodiments of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a wearable device to perform some or all of the steps as described in any of the methods of the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
In an embodiment of the application, the wearable device comprises a microphone, a loudspeaker and a controller, wherein the microphone is used for collecting a first voice input by a user; the controller is used for translating the first voice into a second voice and sending the second voice to a second wearable device, the second wearable device being used for playing the second voice; and the loudspeaker is used for playing the second voice. In the embodiments of the application, speech translation is carried out directly between the two wearable devices without a third-party device, which improves the timeliness of speech translation and thereby realizes real-time translation of speech.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1a is a schematic diagram of a network architecture disclosed in an embodiment of the present application;
fig. 1b is a schematic structural diagram of a wearable device disclosed in the embodiments of the present application;
fig. 2 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
fig. 3 is a schematic flowchart of a wearable device-based translation method disclosed in an embodiment of the present application;
fig. 4 is a schematic flowchart of another wearable device-based translation method disclosed in the embodiments of the present application;
fig. 5 is a schematic flowchart of another wearable device-based translation method disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application;
fig. 7 is a schematic structural diagram of a wearable device-based translation apparatus disclosed in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes embodiments of the present application in detail.
Referring to fig. 1a, fig. 1a is a schematic diagram of a network architecture according to an embodiment of the present disclosure. The network architecture shown in fig. 1a may include a first wearable device 100 and a second wearable device 200, where the first wearable device 100 may be communicatively connected to the second wearable device 200 via a wireless network (e.g., Bluetooth, infrared, or WiFi). Both the first wearable device 100 and the second wearable device 200 may include a microphone, a speaker, a processing module (e.g., a processor and memory), and a communication module (e.g., a Bluetooth module). In the network architecture shown in fig. 1a, the first wearable device 100 and the second wearable device 200 both have a speech translation function, and speech data can be transmitted between them. Because speech translation can be carried out between the two wearable devices without the help of third-party equipment, the timeliness of speech translation is improved and real-time translation of speech is realized.
The wearable device may be a portable listening device (e.g., a wireless headset), a smart bracelet, a smart earring, a smart headband, a smart helmet, and so forth. For convenience of explanation, the wearable device in the following embodiments is described by taking a wireless headset as an example.
The wireless earphone can be an ear-hanging earphone, an earplug earphone or a headphone, and the embodiment of the application is not limited.
The wireless headset may be housed in a headset case, which may include: two receiving cavities (a first receiving cavity and a second receiving cavity) sized and shaped to receive a pair of wireless headsets (a first wireless headset and a second wireless headset); one or more earphone housing magnetic components disposed within the case for magnetically attracting and respectively magnetically securing a pair of wireless earphones into the two receiving cavities. The earphone box may further include an earphone cover. Wherein the first receiving cavity is sized and shaped to receive a first wireless headset and the second receiving cavity is sized and shaped to receive a second wireless headset.
The wireless headset may include a headset housing, a rechargeable battery (e.g., a lithium battery) disposed within the headset housing, a plurality of metal contacts disposed on an exterior surface of the headset housing for connecting the battery to a charging device, and a speaker assembly including a directional sound port and a driver unit, where the driver unit includes a magnet, a voice coil, and a diaphragm and emits sound through the directional sound port.
In one possible implementation, the wireless headset may further include a touch area, which may be located on an outer surface of the headset housing, and at least one touch sensor is disposed in the touch area for detecting a touch operation, and the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor may detect a change in self-capacitance to recognize a touch operation.
In one possible implementation, the wireless headset may further include an acceleration sensor and a triaxial gyroscope, which may be disposed within the headset housing and are used to identify when the wireless headset is picked up or taken off.
In a possible implementation manner, the wireless headset may further include at least one air pressure sensor, and the air pressure sensor may be disposed on a surface of the headset housing and configured to detect air pressure in the ear after the wireless headset is worn. The wearing tightness of the wireless earphone can be detected through the air pressure sensor. When it is detected that the wireless headset is worn loosely, the wireless headset may send a prompt message to an electronic device (e.g., a mobile phone) connected to the wireless headset to prompt a user that the wireless headset is at risk of falling.
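As a rough sketch of such a fit check, the following Kotlin fragment compares an in-ear pressure reading against a threshold and issues the prompt described above. The sensor interface, the threshold value, and the prompt text are all assumptions; the disclosure states only that looseness is detected via in-ear air pressure.

```kotlin
// Hypothetical device interfaces standing in for the headset's sensor and
// the prompt channel to the connected electronic device (e.g. a phone).
interface PressureSensor { fun readKpa(): Double }
interface Notifier { fun prompt(message: String) }

class FitMonitor(
    private val sensor: PressureSensor,
    private val notifier: Notifier,
    // Assumed threshold: a sealed ear canal holds slightly elevated pressure,
    // so a reading near ambient (~101.3 kPa) suggests a loose fit.
    private val minInEarKpa: Double = 101.6
) {
    fun checkFit() {
        if (sensor.readKpa() < minInEarKpa) {
            notifier.prompt("The wireless headset is worn loosely and is at risk of falling.")
        }
    }
}
```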
Referring to fig. 1b, fig. 1b is a schematic structural diagram of a wearable device disclosed in the embodiment of the present application, the wearable device 100 includes a storage and processing circuit 710, and a communication circuit 720 and an audio component 740 connected to the storage and processing circuit 710, wherein in some specific wearable devices, a display component 730 or a touch component may be further disposed.
The wearable device 100 may include control circuitry, which may include storage and processing circuitry 710. The storage and processing circuit 710 may be a memory, such as a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. The processing circuitry in the storage and processing circuitry 710 may be used to control the operation of the wearable device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 710 may be used to run software in the wearable device 100, such as Voice Over Internet Protocol (VOIP) phone call applications, simultaneous interpretation functions, media playing applications, operating system functions, and the like. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in wearable device 100, to name a few.
Wearable device 100 may also include input-output circuitry 750. The input-output circuitry 750 may be used to enable the wearable device 100 to input and output data, i.e., to allow the wearable device 100 to receive data from external devices and to output data to external devices. Input-output circuit 750 may further include a sensor 770. The sensors 770 may include ambient light sensors, proximity sensors based on light and capacitance, touch sensors (e.g., optical touch sensors and/or capacitive touch sensors, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), acceleration sensors, and other sensors.
Input-output circuitry 750 may also include a touch sensor array (i.e., display 730 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The wearable device 100 may also include an audio component 740. The audio component 740 may be used to provide audio input and output functionality for the wearable device 100. The audio components 740 in the wearable device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sounds.
The communication circuit 720 may be used to provide the wearable device 100 with the ability to communicate with external devices. The communications circuitry 720 may include analog and digital input-output interface circuitry, and wireless communications circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 720 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 720 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, communications circuitry 720 may include a near field communications antenna and a near field communications transceiver. The communications circuitry 720 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so forth.
The wearable device 100 may further include a battery, power management circuitry, and other input-output units 760. Input-output unit 760 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes or other status indicators, and the like.
A user may input commands through the input-output circuitry 750 to control operation of the wearable device 100, and may use output data of the input-output circuitry 750 to enable receipt of status information and other outputs from the wearable device 100.
Based on the network architecture of fig. 1a, a wearable device is disclosed. Referring to fig. 2, fig. 2 is a schematic structural diagram of a wearable device disclosed in the embodiment of the present application, the wearable device 100 includes a microphone 11, a speaker 12, and a controller 13, the microphone 11 and the speaker 12 are connected to the controller 13, where:
and the microphone 11 is used for collecting a first voice input by a user.
And the controller 13 is configured to translate the first voice into a second voice, and send the second voice to a second wearable device, where the second wearable device is configured to play the second voice.
And a loudspeaker 12 for playing the second voice.
The wearable device 100 in the embodiment of the present application may correspond to the first wearable device 100 in fig. 1a, and the second wearable device may correspond to the second wearable device 200 in fig. 1a.
In this embodiment, the controller 13 may include a processor and a memory. The processor is the control center of the wearable device: it connects the parts of the whole device through various interfaces and lines, and it executes the device's functions and processes data by running or executing the software programs and/or modules stored in the memory and calling the data stored in the memory, thereby monitoring the wearable device as a whole. Optionally, the processor may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor.
The memory can be used for storing software programs and modules, and the processor executes the various functional applications and data processing of the wearable device by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function, and the like, and the data storage area may store data created according to use of the wearable device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
In the embodiment of the present application, at least one microphone 11 may be included in the wearable device 100, and the microphone 11 may collect voice uttered by the user. The embodiment of the application is suitable for a scene that two people with different voices have voice calls through two wearable devices. For example, a first user wears a first wearable device and a second user wears a second wearable device, the first user can speak a first language, the second user can speak a second language, the first user cannot understand the second language, and the second user cannot understand the first language. The first wearable device and the second wearable device both comprise a microphone, a loudspeaker and a wireless communication module (such as a Bluetooth module), and both have a voice acquisition function and a voice playing function.
When the first user conveys voice information to the second user, a microphone of the first wearable device collects the first voice (voice corresponding to the first language) input by the first user, the first wearable device translates the first voice into the second voice (voice corresponding to the second language) and sends the second voice to the second wearable device, the second wearable device plays the second voice, and the first wearable device also plays the second voice. The first voice is the voice input by the first user, and the second voice is the voice translated by the first wearable device.
The playing of the second voice by the second wearable device and the playing of the second voice by the speaker 12 may be performed simultaneously. This lets the user wearing the first wearable device (the first user) know whether the first voice he or she uttered has been translated and playback has completed. The first user can continue to input voice after the first wearable device finishes playing the second voice, or wait for the second user's voice translated into the first language.
When the second user conveys voice information to the first user, a microphone of the second wearable device collects second voice (voice corresponding to the second language) input by the second user, the second voice is translated into first voice (voice corresponding to the first language), the first voice is sent to the first wearable device, the first wearable device plays the first voice, and the second wearable device also plays the first voice. The second voice is voice input by the second user, and the first voice is voice translated by the second wearable device.
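As a rough illustration of this call flow, the following Kotlin sketch shows the controller-side handling on either device: collect, translate, forward to the peer, and play locally. The Translator, AudioLink, and AudioOut interfaces are hypothetical stand-ins for the controller's internals, not part of the disclosed device.

```kotlin
// Hypothetical interfaces: translation backend, wireless link to the peer
// device (e.g. Bluetooth), and the local loudspeaker.
interface Translator { fun translate(audio: ByteArray, targetLang: String): ByteArray }
interface AudioLink { fun send(audio: ByteArray) }
interface AudioOut { fun play(audio: ByteArray) }

class TranslationController(
    private val translator: Translator,
    private val peer: AudioLink,
    private val speaker: AudioOut
) {
    // Invoked once the microphone has finished collecting the first voice.
    fun onVoiceCollected(firstVoice: ByteArray, targetLang: String) {
        val secondVoice = translator.translate(firstVoice, targetLang)
        peer.send(secondVoice)      // the second wearable device plays it
        speaker.play(secondVoice)   // local playback tells the speaker translation is done
    }
}
```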
In the embodiment of the present application, the microphone 11 and the speaker 12 may be kept on at all times or turned on in response to a user operation. For example, a speech translation key may be provided on the first wearable device: when the user presses the speech translation key, the microphone 11 and the speaker 12 are turned on, and when the user presses the speech translation key again, the microphone 11 is turned off. Further, the speech translation key may also provide language selection: pressing the key up or down turns the microphone 11 on or off, while pressing it left or right switches among the language types available for translation. When the key is pressed left or right, the speaker of the first wearable device may output a prompt tone announcing the selected translation language. In this way a single key implements both the speech translation switch and the selection of the translation language, which reduces the number of keys on the first wearable device and lowers the material cost.
Alternatively, the surface of the first wearable device may be provided with a touch area for detecting touch operations of the user. For example, a pressure sensor may be disposed in a preset area of the surface of the first wearable device, and the first wearable device may generate corresponding control instructions according to the pressing duration and the pressing force of the user in the touch area, to control whether to turn the microphone 11 on or off and to select the type of language to be translated. For another example, the first wearable device may detect the number of taps by the user in the touch area within a unit time (e.g., 1 second or 2 seconds) and generate the corresponding control instruction according to the correspondence between tap counts and control instructions. For example, after one tap, the first wearable device outputs a prompt tone through the speaker to indicate that it has entered the speech translation mode. This embodiment needs no physical key, which saves space on the first wearable device and improves space utilization.
Optionally, the surface of the first wearable device may further be provided with a fingerprint detection area, when the user presses the fingerprint detection area, the fingerprint sensor of the first wearable device starts working, collects the fingerprint input by the user, performs verification, and when the fingerprint input by the user is detected to be matched with a pre-stored fingerprint template, determines that the verification is passed, and allows the user to perform touch operation on the first wearable device. According to the embodiment of the application, fingerprint safety verification can be performed, an unfamiliar user is prevented from controlling the first wearable device, and the safety of the first wearable device is improved.
Optionally, the first wearable device may further apply voiceprint verification and only translate voice that passes it. After the microphone 11 collects the first voice input by the user, the controller 13 performs voiceprint verification on the first voice: it extracts a first voiceprint feature from the first voice and matches it against a pre-stored voiceprint feature template. When the first voiceprint feature matches the pre-stored template, the verification is deemed passed, and the controller 13 translates the first voice into the second voice and performs the subsequent operations. Voiceprint verification prevents a strange user from controlling the first wearable device and thus improves the security of the first wearable device.
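A minimal sketch of such a voiceprint gate, assuming an embedding-style feature extractor and a cosine-similarity match against the enrolled template; the extractor interface and the 0.8 threshold are illustrative assumptions, not details fixed by the disclosure.

```kotlin
import kotlin.math.sqrt

// Hypothetical extractor mapping raw audio to a fixed-length voiceprint feature.
interface VoiceprintExtractor { fun extract(audio: ByteArray): FloatArray }

class VoiceprintVerifier(
    private val extractor: VoiceprintExtractor,
    private val enrolledTemplate: FloatArray,
    private val threshold: Float = 0.8f  // assumed similarity cutoff
) {
    // Returns true only when the first voice matches the pre-stored template,
    // gating translation on the wearer's identity as described above.
    fun verify(firstVoice: ByteArray): Boolean =
        cosine(extractor.extract(firstVoice), enrolledTemplate) >= threshold

    private fun cosine(a: FloatArray, b: FloatArray): Float {
        var dot = 0f; var na = 0f; var nb = 0f
        for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
        return dot / (sqrt(na) * sqrt(nb))
    }
}
```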
In the embodiment of the application, speech translation can be performed between two wearable devices, a third-party device is not needed, the real-time performance of speech translation is improved, and therefore speech real-time translation is achieved.
Optionally, the controller 13 translates the first voice into a second voice, specifically:
the controller 13 sends a translation request to the translation server, wherein the translation request carries the first voice and the second voice identifier, and the translation request is used for the translation server to translate the first voice into the second voice corresponding to the second voice identifier;
the controller 13 receives the second speech returned by the translation server.
In this embodiment, the first wearable device may have a networking function, the first wearable device may be connected to a cellular network, the first wearable device may access the translation server through the base station, and the translation server may implement a speech translation function. Specifically, the first wearable device may send a translation request to the server, where the translation request carries a first voice and a second voice identifier, and the second voice identifier may be generated according to a language type selected by the first user on the first wearable device. The translation server translates the first voice into second voice corresponding to the second voice identification, and sends the translated second voice to the first wearable device.
The translation server translates the first voice into a second voice, which may specifically be:
and the translation server starts a voice recognition function, converts the first voice into a first text, translates the first text into a second text corresponding to the second language identifier, and generates a second voice according to the second text.
In the embodiments of the application, the wearable device can connect to a cellular network itself and does not need a third-party device (such as a mobile phone) to relay the voice translation, so voice translation can be performed anytime and anywhere. This allows voice translation to be carried out quickly and improves its timeliness, thereby realizing real-time voice translation.
Optionally, the microphone 11 is configured to detect whether a voice input is performed within a first preset time period after a first voice input by a user is collected;
the controller 13 is further configured to translate the first voice into the second voice when the microphone detects that there is no voice input within the first preset time period.
In this embodiment of the application, the first preset duration may be preset and stored in the nonvolatile memory of the first wearable device. For example, the first preset duration may be set to 2 seconds, 5 seconds, 10 seconds, or the like; the embodiments of the present application are not limited in this respect. The first preset duration can be understood as a pause duration, and may also be called a translation waiting duration: it is the pause that occurs while two people in a conversation wait for the wearable device to translate. When a pause exceeds the first preset duration, the wearable device concludes that the user is waiting for it to translate, and it can start to translate, transmit, and play the collected voice.
The size of the first preset duration can be determined according to different users. For example, the controller 13 may recognize a voiceprint of the user and calculate the age of the user from the voiceprint, and may set the first preset duration to 10 seconds when the age of the user falls in the aged age group and to 2 seconds when the age of the user falls in the young age group.
For another example, the controller may also identify the speech rate of the user and determine the first preset duration accordingly: if the detected speech rate falls in a first speech-rate interval (150-200 words/minute), the first preset duration is set to 2 seconds; if it falls in a second interval (60-100 words/minute), the first preset duration is set to 10 seconds; and if it falls in a third interval (100-150 words/minute), the first preset duration is set to 5 seconds. Generally, the faster the speech rate, the smaller the first preset duration may be set. Since speaking speeds differ greatly among users, the embodiment of the application can determine the pause duration from the user's speech rate and set different pause durations for users with different speaking speeds, meeting the speech translation and communication needs of a wide range of users and thereby improving the user experience.
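The mapping from speech rate to pause length might look like the following sketch; the interval boundaries and durations mirror the example figures above, while the default for rates outside the stated intervals is an added assumption.

```kotlin
// Pause-length policy: faster talkers get a shorter translation-wait pause.
fun firstPresetSeconds(wordsPerMinute: Int): Int = when (wordsPerMinute) {
    in 150..200 -> 2   // first speech-rate interval: fast talker, short wait
    in 100..149 -> 5   // third interval: moderate rate
    in 60..99   -> 10  // second interval: slow talker, long wait
    else        -> 5   // assumed default for rates outside the stated ranges
}
```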
In the embodiment of the application, the first preset duration can be set as the pause duration to control the wearable device to perform voice translation properly, and the intelligence of man-machine interaction is improved.
Optionally, the controller 13 is further configured to receive a speech translation instruction input by a user, and enter a speech translation mode;
the controller 13 is further configured to receive a voice selection instruction to be translated selected by the user, and select the second voice as the voice to be translated.
In the embodiment of the application, a touch area for detecting touch operations of the user can be arranged on the surface of the first wearable device. For example, a pressure sensor may be disposed in a preset region of the surface of the first wearable device, and the first wearable device may generate a speech translation instruction or a voice-to-be-translated selection instruction according to the pressing duration and the pressing force of the user in the touch region. For example, if the pressing duration is 1-2 seconds and the pressing force is 1-5 newtons, a speech translation instruction is generated; if the pressing duration is 3-5 seconds and the pressing force is 1-10 newtons, a voice-to-be-translated selection instruction is generated, and the language type corresponding to the currently selected voice to be translated is output through the loudspeaker 12. After entering the speech translation mode, the first wearable device opens the microphone to collect speech, and the language type of the translated speech (such as Chinese, English, French, German, Japanese, Korean, Russian, Spanish, Arabic, etc.) can then be selected.
For another example, the first wearable device may detect the number of taps by the user in the touch area within a unit time (e.g., 1 second or 2 seconds) and generate the corresponding control instruction according to the correspondence between tap counts and control instructions. For example, one tap corresponds to a speech translation instruction, and the first wearable device outputs a prompt tone through the speaker to indicate that it has entered the speech translation mode; two taps correspond to a voice-to-be-translated selection instruction, and the first wearable device outputs a prompt tone through the loudspeaker announcing the language type of the currently selected voice to be translated.
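Taken together with the exit gesture described later (three taps), the tap-count mapping could be expressed as in the following sketch; the enum names and the idea of ignoring other counts are illustrative assumptions.

```kotlin
enum class Command {
    ENTER_TRANSLATION_MODE,   // one tap
    SELECT_TARGET_LANGUAGE,   // two taps
    EXIT_TRANSLATION_MODE     // three taps (see the later embodiment)
}

// Taps are counted within a unit time such as 1 or 2 seconds; counts outside
// the mapping produce no instruction.
fun commandForTaps(tapsWithinWindow: Int): Command? = when (tapsWithinWindow) {
    1 -> Command.ENTER_TRANSLATION_MODE
    2 -> Command.SELECT_TARGET_LANGUAGE
    3 -> Command.EXIT_TRANSLATION_MODE
    else -> null
}
```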
In this way the user can trigger entering the speech translation mode, which improves the intelligence of human-machine interaction; no physical key is needed, which saves space on the first wearable device and improves space utilization.
Optionally, the microphone 11 is further configured to detect whether there is a voice input within a second preset time period and whether voice data sent by the second wearable device is received;
and the controller 13 is further configured to exit the speech translation mode when the microphone 11 detects that no speech input is made within the second preset time period and no speech data sent by the second wearable device is received.
In this embodiment of the application, the second preset duration may be set to 10 seconds, 20 seconds, 30 seconds, and the like, which is not limited in this embodiment of the application. The second preset duration is used for judging whether to exit the speech translation mode: when it is detected that no voice input has been made and no voice data sent by the second wearable device has been received for longer than the second preset duration, the speech translation mode is exited. After exiting the speech translation mode, the first wearable device turns off the microphone 11, which saves power consumption.
The second preset duration may be set by a timer in the first wearable device.
And the second preset time length is greater than the first preset time length.
The embodiment of the application can automatically exit from the voice translation mode, so that the power consumption of the wearable device is saved.
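A minimal sketch of this idle-exit rule, with the timer simplified to a polling check; the 20-second default and the class shape are assumptions, and the only constraint taken from the text is that the second preset duration exceeds the first.

```kotlin
class IdleExitPolicy(
    private val secondPresetMillis: Long = 20_000  // assumed value; must exceed the first preset duration
) {
    private var lastActivity = System.currentTimeMillis()

    fun onLocalVoiceInput() { lastActivity = System.currentTimeMillis() }
    fun onPeerVoiceReceived() { lastActivity = System.currentTimeMillis() }

    // True once neither local voice input nor peer voice data has arrived for
    // the whole second preset duration; the caller then exits the translation
    // mode and switches the microphone off to save power.
    fun shouldExit(now: Long = System.currentTimeMillis()): Boolean =
        now - lastActivity >= secondPresetMillis
}
```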
Optionally, the controller 13 is further configured to receive an instruction for exiting the speech translation mode, which is input by the user, and exit the speech translation mode.
For example, the first wearable device may detect the number of taps of the user in the touch area within a unit time (e.g., 1 second or 2 seconds), and generate a corresponding control instruction according to the correspondence between the number of taps and the control instruction. For example, the control instruction corresponding to tapping three times is an instruction to exit the speech translation mode, and the first wearable device outputs a prompt tone through the speaker to prompt the user to exit the speech translation mode.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a wearable device-based translation method according to an embodiment of the present disclosure. As shown in fig. 3, the wearable device-based translation method includes the following steps.
301, a first wearable device collects a first voice input by a user.
302, the first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, and the second wearable device is used for playing the second voice.
Optionally, step 302 may include the following step (11) and step (12).
(11) The first wearable device sends a translation request to a translation server, wherein the translation request carries a first voice and a second voice identifier, and the translation request is used for the translation server to translate the first voice into a second voice corresponding to the second voice identifier;
(12) the first wearable device receives the second voice returned by the translation server.
303, the first wearable device plays the second voice.
The specific implementation of the method shown in fig. 3 can refer to the embodiments of the apparatuses shown in fig. 1 to fig. 2, and is not described herein again.
In the embodiment of the application, speech translation can be performed between two wearable devices, a third-party device is not needed, the real-time performance of speech translation is improved, and therefore speech real-time translation is achieved.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another translation method based on a wearable device according to an embodiment of the present application. Fig. 4 is further optimized based on fig. 3. As shown in fig. 4, the translation method based on the wearable device includes the following steps.
401, a first wearable device collects a first voice input by a user.
402, a first wearable device detects whether voice input is performed within a first preset duration.
403, if it is detected that there is no voice input within the first preset duration, the first wearable device translates the first voice into a second voice, and sends the second voice to a second wearable device, where the second wearable device is configured to play the second voice.
404, the first wearable device plays the second voice.
Step 401 in the embodiment of the present application may refer to step 301 shown in fig. 3, and step 404 may refer to step 303 shown in fig. 3, which is not described herein again.
The specific implementation of the method shown in fig. 4 can refer to the embodiments of the apparatuses shown in fig. 1 to fig. 2, and is not described herein again.
In the embodiment of the application, speech translation can be performed between two wearable devices, a third-party device is not needed, the real-time performance of speech translation is improved, and therefore speech real-time translation is achieved. The first preset duration can be set as the pause duration to control the wearable device to perform voice translation properly, and the intelligence of man-machine interaction is improved.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating another translation method based on a wearable device according to an embodiment of the present application. Fig. 5 is further optimized based on fig. 3, and as shown in fig. 5, the translation method based on the wearable device includes the following steps.
501, a first wearable device receives a voice translation instruction input by a user and enters a voice translation mode.
502, the first wearable device receives a voice selection instruction to be translated selected by a user, and selects a second voice as the voice to be translated.
The first wearable device collects a first voice input by the user 503.
And 504, the first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, and the second wearable device is used for playing the second voice.
505, the first wearable device plays the second voice.
506, the first wearable device detects whether voice input exists within a second preset time period and whether voice data sent by the second wearable device is received.
507, if neither is the case, the first wearable device exits the speech translation mode.
Steps 503 to 505 in the embodiment of the present application may refer to steps 301 to 303 shown in fig. 3, which are not described herein again.
The specific implementation of the method shown in fig. 5 can refer to the embodiments of the apparatuses shown in fig. 1 to fig. 2, and is not described herein again.
In the embodiment of the application, speech translation can be performed between two wearable devices, a third-party device is not needed, the real-time performance of speech translation is improved, and therefore speech real-time translation is achieved. The voice translation mode can be automatically detected to exit, the microphone is closed, and power consumption can be saved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present application, and as shown in the drawing, the wearable device 600 includes a processor 601, a memory 602, a communication interface 603, and one or more programs, where the one or more programs are stored in the memory 602 and configured to be executed by the processor 601, and the programs include instructions for performing the following steps:
the method comprises the steps that a first wearable device collects first voice input by a user;
the first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, and the second wearable device is used for playing the second voice;
the first wearable device plays the second voice.
Optionally, in an aspect that the first wearable device translates the first voice into the second voice, the program is specifically configured to execute the following steps:
the first wearable device sends a translation request to a translation server, wherein the translation request carries a first voice and a second voice identifier, and the translation request is used for the translation server to translate the first voice into a second voice corresponding to the second voice identifier;
the first wearable device receives the second voice returned by the translation server.
Optionally, the program includes instructions for further performing the following steps:
the method comprises the steps that a first wearable device detects whether voice input exists within a first preset duration;
if not, the first wearable device executes the step of translating the first voice into the second voice.
Optionally, the program includes instructions for further performing the following steps:
the method comprises the steps that a first wearable device receives a voice translation instruction input by a user and enters a voice translation mode;
the first wearable device receives a voice selection instruction to be translated selected by a user, and selects the second voice as the voice to be translated.
Optionally, the program includes instructions for further performing the following steps:
the first wearable device detects whether voice input exists within a second preset time and whether voice data sent by the second wearable device is received;
and if not, exiting the voice translation mode.
The specific implementation of the apparatus shown in fig. 6 can refer to the apparatus embodiments shown in fig. 1 to fig. 2, and the detailed description thereof is omitted here.
By implementing the wearable device shown in fig. 6, speech translation can be performed between two wearable devices, without the need of a third-party device, so that the real-time performance of speech translation is improved, and thus, the real-time speech translation is realized.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a wearable device-based translation apparatus disclosed in an embodiment of the present application. The wearable device-based translation apparatus 700 includes a collecting unit 701, a translation unit 702, a sending unit 703 and a playing unit 704, where:
the collecting unit 701 is configured to collect a first voice input by a user.
A translating unit 702 is configured to translate the first speech into the second speech.
A sending unit 703, configured to send the second voice to a second wearable device, where the second wearable device is configured to play the second voice.
A playing unit 704, configured to play the second voice.
The translating Unit 702 may be a Processor or a controller (e.g., a Central Processing Unit (CPU), a general purpose Processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof), the collecting Unit 701 may be a microphone, the transmitting Unit 703 may be a wireless communication module (e.g., a bluetooth module), and the playing Unit 704 may be a speaker.
The specific implementation of the apparatus shown in fig. 7 can refer to the apparatus embodiments shown in fig. 1 to fig. 2, and the detailed description thereof is omitted here.
By implementing the wearable device shown in fig. 7, speech translation can be performed between two wearable devices, without the need of a third-party device, so that the real-time performance of speech translation is improved, and thus, the real-time speech translation is realized.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to perform part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes a wearable device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a wearable device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative; the above division of the units is only a division by logical function, and other divisions may be used in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part of it contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic or optical disk, and various other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application illustrates the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific implementation and the scope of application. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (6)

1. A wearable device, comprising a microphone, a speaker, a controller, a touch area, a fingerprint detection area disposed on a surface of the wearable device, and a fingerprint sensor, wherein:
the fingerprint sensor is configured to collect a fingerprint input by a user when the user presses the fingerprint detection area;
the controller is configured to verify the fingerprint, and to determine that verification passes when the fingerprint input by the user is detected to match a pre-stored fingerprint template;
the controller is further configured to detect the number of taps made by the user in the touch area per unit time, to determine from a correspondence between tap counts and control instructions whether the user has input a voice translation instruction, and to enter a voice translation mode upon receiving a voice translation instruction input by the user;
the controller is further configured to determine, from the duration and force of the user's presses in the touch area, whether to generate a to-be-translated voice selection instruction;
the controller is further configured to receive the to-be-translated voice selection instruction selected by the user and to select a second voice as the voice to be translated;
the microphone is configured to collect a first voice input by the user and to detect whether there is voice input within a first preset duration, wherein the first preset duration is adjusted according to the user's speech rate or age, the speech rate being obtained by the controller recognizing the user's voice, and the age being obtained by the controller recognizing the voiceprint of the user's voice collected by the microphone and calculating the age from that voiceprint;
the controller is configured to translate the first voice into the second voice and send the second voice to a second wearable device when the microphone detects no voice input within the first preset duration, the second wearable device being configured to play the second voice;
the speaker is configured to play the second voice;
the microphone is further configured to detect whether there is voice input within a second preset duration and whether voice data sent by the second wearable device has been received;
the controller is further configured to exit the voice translation mode when the microphone detects no voice input within the second preset duration and no voice data sent by the second wearable device has been received.
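
For illustration only, the following Python sketch shows one way the claim-1 control flow could be realized. The hardware wrappers (sensor, touch, mic, speaker, radio), the tap-to-command table, and all method names are assumptions introduced here; the claim itself does not specify any implementation.

```python
# Hypothetical sketch of the claim-1 flow; every interface below is an
# assumed stand-in, not an API defined by the patent.
TAP_COMMANDS = {2: "ENTER_TRANSLATION_MODE"}  # illustrative tap-count mapping

def run_translation_mode(sensor, touch, mic, speaker, radio,
                         first_timeout_s, second_timeout_s):
    # Verify the wearer's fingerprint against the pre-stored template.
    if not sensor.matches_stored_template(sensor.read_fingerprint()):
        return
    # Map taps per unit time in the touch area to a control instruction.
    if TAP_COMMANDS.get(touch.count_taps(window_s=1.0)) != "ENTER_TRANSLATION_MODE":
        return
    while True:
        # Record the first voice; stop once no input arrives within the
        # first preset duration (see the duration sketch under claim 3).
        first_voice = mic.record_until_silence(first_timeout_s)
        if first_voice is not None:
            second_voice = radio.translate(first_voice)  # claim 2: via the server
            radio.send_to_peer(second_voice)   # the second wearable device plays it
            speaker.play(second_voice)         # local playback as well
            continue
        # Exit the mode only when there is neither local speech nor peer
        # audio within the second preset duration.
        if not radio.peer_audio_received(second_timeout_s):
            break
```

The two timeouts correspond to the first and second preset durations of claim 1, and the final conjunction mirrors the claim's exit condition.
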
2. The wearable device according to claim 1, wherein the controller translates the first voice into the second voice specifically as follows:
the controller sends a translation request to a translation server, wherein the translation request carries the first voice and a second voice identifier and is used for the translation server to translate the first voice into the second voice corresponding to the second voice identifier; and
the controller receives the second voice returned by the translation server.
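
A minimal sketch of the claim-2 exchange, assuming an HTTP/JSON transport: the endpoint, the field names (first_voice, second_voice_id), and the base64 framing are invented for illustration, since the claim fixes only what the request carries, not the wire format.

```python
import base64
import json
import urllib.request

def request_translation(server_url: str, first_voice: bytes, second_voice_id: str) -> bytes:
    """Send the first voice plus the second-voice identifier to the
    translation server; return the translated (second) voice audio.
    Endpoint, field names, and base64 framing are illustrative assumptions."""
    payload = json.dumps({
        "first_voice": base64.b64encode(first_voice).decode("ascii"),
        "second_voice_id": second_voice_id,  # e.g. a target-language/voice tag
    }).encode("utf-8")
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return base64.b64decode(json.load(resp)["second_voice"])
```

Any transport with equivalent semantics (e.g. gRPC or a raw socket protocol) would fit the claim equally well.
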
3. A translation method based on a wearable device, the method comprising:
when a user presses a fingerprint detection area, a first wearable device collects, through a fingerprint sensor, a fingerprint input by the user, verifies the fingerprint, and determines that verification passes when the fingerprint input by the user matches a pre-stored fingerprint template;
the first wearable device detects the number of taps made by the user in a touch area per unit time and determines, from a correspondence between tap counts and control instructions, whether the user has input a voice translation instruction; determines, from the duration and force of the user's presses in the touch area, whether to generate a to-be-translated voice selection instruction; and enters a voice translation mode upon receiving a voice translation instruction input by the user;
the first wearable device receives the to-be-translated voice selection instruction selected by the user and selects a second voice as the voice to be translated;
the first wearable device collects a first voice input by the user;
the first wearable device detects whether there is voice input within a first preset duration, wherein the first preset duration is adjusted according to the user's speech rate or age, the speech rate being obtained by the controller recognizing the user's voice, and the age being obtained by the controller recognizing the voiceprint of the user's voice collected by a microphone and calculating the age from that voiceprint;
if not, the first wearable device translates the first voice into the second voice and sends the second voice to a second wearable device, the second wearable device being configured to play the second voice;
the first wearable device plays the second voice;
the first wearable device detects whether there is voice input within a second preset duration and whether voice data sent by the second wearable device has been received; if neither, the first wearable device exits the voice translation mode.
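
Claims 1 and 3 both state that the first preset duration is adjusted according to the user's speech rate or age (the age itself computed from the voiceprint), without fixing the mapping. The sketch below shows one plausible reading with illustrative constants: slower or older speakers get a longer silence window before translation is triggered.

```python
from typing import Optional

def first_preset_duration(speech_rate_wpm: Optional[float] = None,
                          age_years: Optional[float] = None,
                          base_s: float = 1.5) -> float:
    """Silence window that ends an utterance (the 'first preset duration').
    The 120 wpm reference, the 1.5x factor for older wearers, and the
    0.8-3.0 s clamp are all assumptions; the claims state only that the
    duration depends on speech rate or voiceprint-estimated age."""
    duration = base_s
    if speech_rate_wpm is not None:
        duration = base_s * (120.0 / max(speech_rate_wpm, 1.0))  # slower speech -> longer wait
    elif age_years is not None and age_years >= 60:
        duration = base_s * 1.5  # older wearers tend to pause longer
    return min(max(duration, 0.8), 3.0)

# Example: a 90-wpm speaker gets a 2.0 s window instead of the 1.5 s default.
assert abs(first_preset_duration(speech_rate_wpm=90) - 2.0) < 1e-9
```
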
4. The method of claim 3, wherein the first wearable device translates the first voice into the second voice specifically as follows:
the first wearable device sends a translation request to a translation server, wherein the translation request carries the first voice and a second voice identifier and is used for the translation server to translate the first voice into the second voice corresponding to the second voice identifier; and
the first wearable device receives the second voice returned by the translation server.
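
For completeness, a hypothetical call of the request_translation() sketch from claim 2, as the first wearable device might issue it in this method; the URL, placeholder audio, and voice identifier are invented.

```python
# Hypothetical usage of the claim-2 sketch; all concrete values are assumed.
recorded_audio_bytes = b"\x00\x01"  # placeholder for microphone audio data

second_voice = request_translation(
    "https://translation-server.example/translate",  # assumed endpoint
    recorded_audio_bytes,
    "en-US",  # second voice identifier, e.g. a language/voice tag
)
```
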
5. A wearable device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method of any one of claims 3-4.
6. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a wearable device to perform the method of any one of claims 3-4.
CN201810414740.8A 2018-05-03 2018-05-03 Translation method and related equipment Expired - Fee Related CN108710615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810414740.8A CN108710615B (en) 2018-05-03 2018-05-03 Translation method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810414740.8A CN108710615B (en) 2018-05-03 2018-05-03 Translation method and related equipment

Publications (2)

Publication Number Publication Date
CN108710615A CN108710615A (en) 2018-10-26
CN108710615B true CN108710615B (en) 2020-03-03

Family

ID=63867719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810414740.8A Expired - Fee Related CN108710615B (en) 2018-05-03 2018-05-03 Translation method and related equipment

Country Status (1)

Country Link
CN (1) CN108710615B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109462789A (en) * 2018-11-10 2019-03-12 东莞市华睿电子科技有限公司 Translation method for audio played by an earphone
CN109360549B (en) * 2018-11-12 2023-07-18 北京搜狗科技发展有限公司 Data processing method, wearable device and device for data processing
CN109787966B (en) * 2018-12-29 2020-12-01 北京金山安全软件有限公司 Monitoring method and device based on wearable device and electronic device
CN110099325B (en) * 2019-05-24 2020-06-05 歌尔科技有限公司 Wireless earphone box-entering detection method and device, wireless earphone and earphone product
CN110347215A (en) * 2019-08-02 2019-10-18 汕头大学 Wearable translation and information retrieval device and method of use
CN110558698A (en) * 2019-09-17 2019-12-13 临沂大学 Portable translator
CN111104042A (en) * 2019-12-27 2020-05-05 惠州Tcl移动通信有限公司 Human-computer interaction system and method and terminal equipment
CN111476040A (en) * 2020-03-27 2020-07-31 深圳光启超材料技术有限公司 Language output method, head-mounted device, storage medium, and electronic device
CN111739538B (en) * 2020-06-05 2022-04-26 北京搜狗科技发展有限公司 Translation method and device, earphone and server
CN111696552B (en) * 2020-06-05 2023-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN112394771A (en) * 2020-11-24 2021-02-23 维沃移动通信有限公司 Communication method, communication device, wearable device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246643A (en) * 2012-02-10 2013-08-14 株式会社东芝 Speech translation apparatus and speech translation method
CN104462070A (en) * 2013-09-19 2015-03-25 株式会社东芝 Speech translation system and speech translation method
CN106462571A (en) * 2014-04-25 2017-02-22 奥斯特豪特集团有限公司 Head-worn computing systems
CN106935240A (en) * 2017-03-24 2017-07-07 百度在线网络技术(北京)有限公司 Voice translation method, device, terminal device and cloud server based on artificial intelligence
CN206907022U (en) * 2017-06-05 2018-01-19 中国地质大学(北京) Wearable instant translation machine

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160267075A1 (en) * 2015-03-13 2016-09-15 Panasonic Intellectual Property Management Co., Ltd. Wearable device and translation system

Also Published As

Publication number Publication date
CN108710615A (en) 2018-10-26

Similar Documents

Publication Publication Date Title
CN108710615B (en) Translation method and related equipment
CN109067965B (en) Translation method, translation device, wearable device and storage medium
EP3598435B1 (en) Method for processing information and electronic device
CN108668009B (en) Input operation control method, device, terminal, earphone and readable storage medium
EP3562130B1 (en) Control method at wearable apparatus and related apparatuses
CN107978316A (en) The method and device of control terminal
US9838522B2 (en) Information processing device
CN108886653B (en) Earphone sound channel control method, related equipment and system
CN111432303B (en) Monaural headset, intelligent electronic device, method, and computer-readable medium
CN108595003A (en) Function control method and relevant device
CN108959273B (en) Translation method, electronic device and storage medium
CN108897516B (en) Wearable device volume adjustment method and related product
CN113747412A (en) Method for asking for help in emergency, related device, storage medium and program product
WO2022033176A1 (en) Audio play control method and apparatus, and electronic device and storage medium
WO2021103449A1 (en) Interaction method, mobile terminal and readable storage medium
CN108923810A (en) Interpretation method and relevant device
CN108668018B (en) Mobile terminal, volume control method and related product
GB2516075A (en) Sensor input recognition
CN108566221A (en) Call control method and relevant device
CN109116982A (en) Information broadcasting method, device and electronic device
CN109040425B (en) Information processing method and related product
CN108958481B (en) Equipment control method and related product
CN108829244B (en) Volume adjusting method and related equipment
WO2019084752A1 (en) Method for adjusting volume of earphone by voice and related product
CN207884861U (en) A kind of smart bluetooth communication bracelet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200303