CN108710615A - Translation method and related device - Google Patents

Translation method and related device

Info

Publication number
CN108710615A
CN108710615A (application CN201810414740.8A)
Authority
CN
China
Prior art keywords
voice
wearable device
translation
user
speech translation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810414740.8A
Other languages
Chinese (zh)
Other versions
CN108710615B (en)
Inventor
张海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810414740.8A
Publication of CN108710615A
Application granted
Publication of CN108710615B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/40 - Processing or translation of natural language
    • G06F40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/005 - Language recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification techniques
    • G10L17/22 - Interactive procedures; Man-machine interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a translation method and a related device. The method is applied to a wearable device that includes a microphone, a speaker, and a controller. The microphone is configured to collect a first voice input by a user; the controller is configured to translate the first voice into a second voice and send the second voice to a second wearable device, where the second wearable device is configured to play the second voice; and the speaker is configured to play the second voice. Real-time voice translation can be implemented by using the embodiments of the present application.

Description

Translation method and related device
Technical field
This application relates to the field of electronic technology, and more particularly to a translation method and a related device.
Background technology
With the maturation of wireless technology, scenarios in which a wearable device connects to electronic devices such as mobile phones through wireless technology are becoming more and more common. Through a wearable device, people can listen to music, make phone calls, and use various other functions.
Summary of the invention
Embodiments of the present application provide a translation method and a related device, which can implement real-time voice translation.
In a first aspect, an embodiment of the present application provides a wearable device, including a microphone, a speaker, and a controller, wherein:
the microphone is configured to collect a first voice input by a user;
the controller is configured to translate the first voice into a second voice and send the second voice to a second wearable device, where the second wearable device is configured to play the second voice;
the speaker is configured to play the second voice.
In a second aspect, an embodiment of the present application provides a translation method based on a wearable device, the method including:
collecting, by a first wearable device, a first voice input by a user;
translating, by the first wearable device, the first voice into a second voice, and sending the second voice to a second wearable device, where the second wearable device is configured to play the second voice;
playing, by the first wearable device, the second voice.
In a third aspect, an embodiment of the present application provides a translation apparatus based on a wearable device, applied to a wearable device, the translation apparatus including a collecting unit, a translation unit, a sending unit, and a playback unit, wherein:
the collecting unit is configured to collect a first voice input by a user;
the translation unit is configured to translate the first voice into a second voice;
the sending unit is configured to send the second voice to a second wearable device, where the second wearable device is configured to play the second voice;
the playback unit is configured to play the second voice.
In a fourth aspect, an embodiment of the present application provides a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any method of the second aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data interchange, and the computer program causes a wearable device to perform some or all of the steps described in any method of the second aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a wearable device to perform some or all of the steps described in any method of the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
In the embodiments of the present application, the wearable device includes a microphone, a speaker, and a controller. The microphone is configured to collect a first voice input by a user; the controller is configured to translate the first voice into a second voice and send the second voice to a second wearable device, where the second wearable device is configured to play the second voice; and the speaker is configured to play the second voice. In the embodiments of the present application, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
Description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1a is a schematic diagram of a network architecture disclosed in an embodiment of the present application;
Fig. 1b is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
Fig. 3 is a schematic flowchart of a translation method based on a wearable device disclosed in an embodiment of the present application;
Fig. 4 is a schematic flowchart of another translation method based on a wearable device disclosed in an embodiment of the present application;
Fig. 5 is a schematic flowchart of another translation method based on a wearable device disclosed in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a translation apparatus based on a wearable device disclosed in an embodiment of the present application.
Detailed description of embodiments
To enable a person skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The embodiments are described in detail below.
The terms "first", "second", "third", "fourth", and the like in the description, claims, and accompanying drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of the phrase in various places in the description does not necessarily refer to the same embodiment, nor does it refer to an independent or alternative embodiment that is mutually exclusive with other embodiments. A person skilled in the art understands, explicitly and implicitly, that an embodiment described herein may be combined with other embodiments.
The embodiments of the present application are described in detail below.
Referring to Fig. 1a, Fig. 1a is a schematic diagram of a network architecture disclosed in an embodiment of the present application. The network architecture shown in Fig. 1a may include a first wearable device 100 and a second wearable device 200, where the first wearable device 100 may be communicatively connected to the second wearable device 200 through a wireless network (for example, Bluetooth, infrared, or WiFi). Each of the first wearable device 100 and the second wearable device 200 may include a microphone, a speaker, a processing module (for example, a processor and a memory), and a communication module (for example, a Bluetooth module). In the network architecture shown in Fig. 1a, both the first wearable device 100 and the second wearable device 200 have a voice translation function, and voice data can be transmitted between the first wearable device 100 and the second wearable device 200. Voice translation can therefore be carried out between the two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
The wearable device may be a portable listening device (for example, a wireless headset), a smart band, a smart ring, a smart headband, a smart helmet, or the like. For ease of description, the wearable device in the following embodiments is described by taking a wireless headset as an example.
The wireless headset may be an ear-hook headset, an in-ear headset, or an over-ear headphone, which is not limited in the embodiments of the present application.
The wireless headset may be accommodated in a headset case. The headset case may include two accommodating cavities (a first accommodating cavity and a second accommodating cavity) whose size and shape are designed to receive a pair of wireless headsets (a first wireless headset and a second wireless headset), and one or more magnetic components arranged in the case, where the one or more magnetic components are used to magnetically attract and magnetically hold the pair of wireless headsets in the two accommodating cavities, respectively. The headset case may further include an ear cap. The size and shape of the first accommodating cavity are designed to receive the first wireless headset, and the size and shape of the second accommodating cavity are designed to receive the second wireless headset.
The wireless headset may include a headset housing, a rechargeable battery (for example, a lithium battery) arranged in the headset housing, a plurality of metal contacts for connecting the battery to a charging device, and a speaker assembly including a driver unit and a sound outlet, where the driver unit includes a magnet, a voice coil, and a diaphragm and is used to emit sound from the sound outlet, and the plurality of metal contacts are arranged on an outer surface of the headset housing.
In a possible implementation, the wireless headset may further include a touch area, which may be located on the outer surface of the headset housing. At least one touch sensor is arranged in the touch area to detect touch operations, and the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor can detect a change in self-capacitance and thereby recognize the touch operation.
In a possible implementation, the wireless headset may further include an acceleration sensor and a three-axis gyroscope, which may be arranged in the headset housing and are used to recognize the pick-up and put-down actions of the wireless headset.
In a possible implementation, the wireless headset may further include at least one barometric sensor, which may be arranged on the surface of the headset housing and is used to detect the in-ear air pressure after the wireless headset is worn. How tightly the wireless headset fits can be detected through the barometric sensor. When it is detected that the wireless headset is worn loosely, the wireless headset may send a prompt message to an electronic device (for example, a mobile phone) connected to the wireless headset, so as to warn the user that the wireless headset is at risk of falling out.
Referring to Fig. 1b, Fig. 1b is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application. The wearable device 100 includes a storage and processing circuit 710, and a communication circuit 720 and an audio component 740 connected to the storage and processing circuit 710. In some specific wearable devices, a display module 730 or a touch component may also be provided.
The wearable device 100 may include a control circuit, and the control circuit may include the storage and processing circuit 710. The storage and processing circuit 710 may include memory, such as hard-drive memory, non-volatile memory (such as flash memory or other electrically programmable read-only memory used to form a solid-state drive), volatile memory (such as static or dynamic random access memory), and the like, which is not limited in the embodiments of the present application. The processing circuit in the storage and processing circuit 710 may be used to control the operation of the wearable device 100. The processing circuit may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 710 may be used to run software in the wearable device 100, such as a Voice over Internet Protocol (VoIP) call application, a simultaneous interpretation function, a media playback application, and operating system functions. The software may be used to perform control operations such as image capture based on a camera, ambient light measurement based on an ambient light sensor, proximity measurement based on a proximity sensor, information display functions implemented by status indicators such as light-emitting-diode status indicator lamps, touch event detection based on a touch sensor, operations associated with wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the wearable device 100, which are not limited in the embodiments of the present application.
The wearable device 100 may further include an input-output circuit 750. The input-output circuit 750 may be used to enable the wearable device 100 to input and output data, that is, to allow the wearable device 100 to receive data from an external device and to output data to an external device. The input-output circuit 750 may further include a sensor 770. The sensor 770 may include an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (for example, a light-based touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors.
The input-output circuit 750 may also include a touch sensor array (that is, the display 730 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (for example, indium tin oxide (ITO) electrodes), or may be a touch sensor formed by other touch technologies, such as acoustic-wave touch, pressure-sensitive touch, resistive touch, or optical touch, which is not limited in the embodiments of the present application.
The wearable device 100 may further include an audio component 740. The audio component 740 may be used to provide audio input and output functions for the wearable device 100. The audio component 740 in the wearable device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 720 may be used to provide the wearable device 100 with the capability of communicating with external devices. The communication circuit 720 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio-frequency signals and/or optical signals. The wireless communication circuit in the communication circuit 720 may include a radio-frequency transceiver circuit, a power amplifier circuit, a low-noise amplifier, switches, filters, and antennas. For example, the wireless communication circuit in the communication circuit 720 may include a circuit for supporting near-field communication (NFC) by transmitting and receiving near-field coupled electromagnetic signals; for example, the communication circuit 720 may include a near-field communication antenna and a near-field communication transceiver. The communication circuit 720 may further include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and the like.
The wearable device 100 may further include a battery, a power management circuit, and other input-output units 760. The input-output units 760 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light-emitting diodes or other status indicators, and the like.
A user may input commands through the input-output circuit 750 to control the operation of the wearable device 100, and may use the output data of the input-output circuit 750 to receive status information and other outputs from the wearable device 100.
Based on the network architecture of Fig. 1a, a wearable device is disclosed. Referring to Fig. 2, Fig. 2 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application. The wearable device 100 includes a microphone 11, a speaker 12, and a controller 13, where the microphone 11 and the speaker 12 are connected to the controller 13, and:
the microphone 11 is configured to collect a first voice input by a user;
the controller 13 is configured to translate the first voice into a second voice and send the second voice to a second wearable device, where the second wearable device is configured to play the second voice;
the speaker 12 is configured to play the second voice.
The wearable device 100 in the embodiments of the present application may correspond to the first wearable device 100 in Fig. 1a, and the second wearable device may correspond to the second wearable device 200 in Fig. 1a.
In the embodiments of the present application, the controller 13 may include a processor and a memory. The processor is the control center of the wearable device: it connects the various parts of the entire wearable device through various interfaces and lines, and performs the various functions of the wearable device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the wearable device as a whole. Optionally, the processor may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor.
The memory may be used to store software programs and modules, and the processor performs the various functional applications and data processing of the wearable device by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function, and the like, and the data storage area may store data created according to the use of the wearable device, and the like. In addition, the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
In the embodiments of the present application, the wearable device 100 may include at least one microphone 11, and the microphone 11 can collect the voice uttered by a user. The embodiments of the present application are applicable to a scenario in which two people who speak different languages hold a voice conversation through two wearable devices. For example, a first user wears the first wearable device and a second user wears the second wearable device; the first user speaks a first language and the second user speaks a second language; the first user cannot understand the second language, and the second user cannot understand the first language. Both the first wearable device and the second wearable device include a microphone, a speaker, and a wireless communication module (for example, a Bluetooth module), and both have a voice collection function and a voice playback function.
When the first user conveys voice information to the second user, the microphone of the first wearable device collects the first voice input by the first user (the voice corresponding to the first language), the first wearable device translates the first voice into the second voice (the voice corresponding to the second language) and sends the second voice to the second wearable device, the second wearable device plays the second voice, and the first wearable device also plays the second voice. Here, the first voice is the voice input by the first user, and the second voice is the voice obtained after translation by the first wearable device.
The playback of the second voice by the second wearable device and the playback of the second voice by the speaker 12 may be performed at the same time. In this way, the user wearing the first wearable device (the first user) can know whether the first voice that he or she uttered has been fully translated and played. The first user may continue with voice input after the first wearable device finishes playing the second voice, or may wait for the translated voice in the first language corresponding to what the second user says.
When the second user conveys voice information to the first user, the microphone of the second wearable device collects the second voice input by the second user (the voice corresponding to the second language), the second wearable device translates the second voice into the first voice (the voice corresponding to the first language) and sends the first voice to the first wearable device, the first wearable device plays the first voice, and the second wearable device also plays the first voice. Here, the second voice is the voice input by the second user, and the first voice is the voice obtained after translation by the second wearable device.
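For illustration only, the per-device control loop described above can be sketched as follows. The primitives capture(), translate(), send(), poll(), and play() are assumed placeholders and do not come from the disclosure; the sketch simply mirrors the described flow of collecting a voice, translating it, sending it to the peer device, and playing it locally while also playing whatever the peer sends back.

    # Sketch of the controller loop on either wearable device (illustrative only).
    def controller_loop(mic, speaker, link, own_lang, peer_lang, translate):
        while True:
            # Outgoing direction: first voice -> second voice -> peer device, plus local playback.
            first_voice = mic.capture()
            if first_voice is not None:
                second_voice = translate(first_voice, src=own_lang, dst=peer_lang)
                link.send(second_voice)       # the second wearable device plays it
                speaker.play(second_voice)    # the local speaker also plays it

            # Incoming direction: translated voice produced and sent by the peer device.
            incoming = link.poll()
            if incoming is not None:
                speaker.play(incoming)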
In the embodiments of the present application, the microphone 11 and the speaker 12 may be kept on at all times, or may be turned on in response to a user operation. For example, a voice translation button may be provided on the first wearable device: when the user presses the voice translation button, the microphone 11 and the speaker 12 are turned on, and when the user presses the voice translation button again, the microphone 11 is turned off. Further, the voice translation button may also have a language selection function: pressing the voice translation button up or down turns the microphone 11 on or off, and pressing the voice translation button left or right switches the language type to be translated into. When the button is pressed left or right, a prompt tone indicating the currently selected translation language type may be output through the speaker of the first wearable device. In the embodiments of the present application, a single button can implement both the voice translation switch and the selection of the voice translation language type, which reduces the number of buttons on the first wearable device and lowers the material cost.
Optionally, a touch area for detecting a user touch operation may be provided on the surface of the first wearable device. For example, a pressure sensor may be provided in a preset area on the surface of the first wearable device, and the first wearable device may generate a corresponding control instruction according to the press duration and press force of the user in the touch area, so as to control whether to turn the microphone 11 on or off and to select the language type to be translated into. As another example, the first wearable device may detect the number of taps of the user in the touch area within a unit time (for example, 1 second or 2 seconds) and generate a corresponding control instruction according to a correspondence between the number of taps and control instructions. For example, after a single tap, the first wearable device outputs a prompt tone through the speaker to prompt the user that the voice translation mode has been entered. The embodiments of the present application can dispense with physical buttons, which saves space on the first wearable device and improves space utilization.
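As an illustration of the tap-count mapping just described, the sketch below counts the taps that fall inside a short window and looks the count up in a table of control instructions. The window length, tap counts, and instruction names are assumptions made for the example rather than values taken from the disclosure.

    import time

    # Hypothetical mapping from tap count (within a one-second window) to control instructions.
    TAP_COMMANDS = {
        1: "ENTER_TRANSLATION_MODE",   # single tap: enter the voice translation mode, play a prompt tone
        2: "SELECT_TARGET_LANGUAGE",   # double tap: announce or cycle the language to translate into
        3: "EXIT_TRANSLATION_MODE",    # triple tap: exit the voice translation mode
    }

    def classify_taps(tap_timestamps, window=1.0):
        """Map the number of taps seen within the last `window` seconds to an instruction."""
        now = time.monotonic()
        recent = [t for t in tap_timestamps if now - t <= window]
        return TAP_COMMANDS.get(len(recent))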
Optionally, a fingerprint detection area may also be provided on the surface of the first wearable device. When the user presses the fingerprint detection area, the fingerprint sensor of the first wearable device starts to work, collects the fingerprint input by the user, and verifies it. When it is detected that the fingerprint input by the user matches a pre-stored fingerprint template, the verification is determined to have passed, and the user is allowed to perform touch operations on the first wearable device. The embodiments of the present application can perform fingerprint security verification, prevent an unknown user from operating the first wearable device, and improve the security of the first wearable device.
Optionally, voiceprint verification may also be provided in the first wearable device, so that only voice that passes voiceprint verification is translated. After the microphone 11 collects the first voice input by the user, the controller 13 performs voiceprint verification on the first voice: it extracts a first voiceprint feature from the first voice and matches the first voiceprint feature against a pre-stored voiceprint feature template. When the first voiceprint feature matches the pre-stored voiceprint feature template, the verification is determined to have passed, and the controller 13 translates the first voice into the second voice and performs the subsequent operations. The embodiments of the present application can perform voiceprint verification, prevent an unknown user from operating the first wearable device, and improve the security of the first wearable device.
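A minimal sketch of this voiceprint gate is given below, assuming a generic feature extractor and a cosine-similarity comparison against the stored template; the threshold value and the helper names are illustrative assumptions, not part of the disclosure.

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify_and_translate(first_voice, extract_voiceprint, stored_template, translate, threshold=0.75):
        """Translate the first voice only if the speaker's voiceprint matches the enrolled template."""
        first_feature = extract_voiceprint(first_voice)          # first voiceprint feature
        if cosine_similarity(first_feature, stored_template) >= threshold:
            return translate(first_voice)                        # verification passed: translate
        return None                                              # verification failed: do not translate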
In the embodiments of the present application, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
Optionally, the controller 13 translates the first voice into the second voice specifically as follows:
the controller 13 sends a translation request to a translation server, where the translation request carries the first voice and a second voice identifier, and the translation request is used by the translation server to translate the first voice into the second voice corresponding to the second voice identifier;
the controller 13 receives the second voice returned by the translation server.
In the embodiments of the present application, the first wearable device may have network capability: the first wearable device may connect to a cellular network and reach the translation server through a base station, and the translation server implements the voice translation function. Specifically, the first wearable device may send a translation request to the server, where the translation request carries the first voice and the second voice identifier, and the second voice identifier may be generated according to the language type selected by the first user on the first wearable device. The translation server translates the first voice into the second voice corresponding to the second voice identifier and sends the translated second voice to the first wearable device.
The translation server translates the first voice into the second voice specifically as follows:
the translation server starts a speech recognition function, converts the first voice into first text, translates the first text into second text corresponding to the second language identifier, and generates the second voice from the second text.
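For illustration, the server-side pipeline just described (speech recognition, text translation, speech synthesis) could look roughly as follows. The request fields mirror the "first voice plus second voice identifier" structure of the embodiment, while the asr, mt, and tts objects and their method names are placeholders rather than a real API.

    def handle_translation_request(request, asr, mt, tts):
        """Sketch of the translation server: speech recognition -> text translation -> speech synthesis."""
        first_voice = request["first_voice"]       # audio carried in the translation request
        target_lang = request["second_voice_id"]   # identifier of the language to translate into

        first_text = asr.transcribe(first_voice)                  # convert the first voice into first text
        second_text = mt.translate(first_text, target_lang)       # translate the first text into second text
        second_voice = tts.synthesize(second_text, target_lang)   # generate the second voice from the second text
        return second_voice                                       # returned to the first wearable device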
In the embodiments of the present application, the wearable device can have the capability of connecting to a cellular network, so that voice translation can be performed anywhere and at any time without a third-party device (such as a mobile phone) acting as a relay, and the translation can be completed quickly, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
Optionally, the microphone 11 is further configured to detect, after collecting the first voice input by the user, whether there is voice input within a first preset duration;
the controller 13 is further configured to translate the first voice into the second voice when the microphone detects no voice input within the first preset duration.
In the embodiments of the present application, the first preset duration may be set in advance and stored in the non-volatile memory of the first wearable device. For example, the first preset duration may be set to 2 seconds, 5 seconds, 10 seconds, or the like, which is not limited in the embodiments of the present application. The first preset duration may be understood as a pause duration, or a waiting-for-translation duration: it is the pause during a conversation while the speakers wait for the wearable device to translate. When a pause exceeds the first preset duration, the user is considered to be waiting for the wearable device to translate, and the wearable device can start to translate, send, and play the collected voice.
The value of the first preset duration may be determined for different users. For example, the controller 13 may recognize the user's voiceprint and estimate the user's age from the voiceprint; when the user's age falls into an elderly age group, the first preset duration may be set to 10 seconds, and when the user's age falls into a young age group, the first preset duration may be set to 2 seconds.
As another example, the controller may recognize the user's speech rate and determine the preset duration according to the speech rate. If the user's speech rate falls into a first speech-rate range (150-200 words per minute), the first preset duration may be set to 2 seconds; if the speech rate falls into a second speech-rate range (60-100 words per minute), the first preset duration may be set to 10 seconds; and if the speech rate falls into a third speech-rate range (100-150 words per minute), the first preset duration may be set to 5 seconds. In general, the faster the speech rate, the smaller the first preset duration may be set. When different users speak at markedly different rates, the embodiments of the present application can determine the pause duration according to the user's speech rate and set different pause durations for users with different speech rates, which meets the voice translation and communication needs of various users and improves the user experience.
In the embodiments of the present application, the first preset duration can be set as a pause duration so that the wearable device performs voice translation at an appropriate time, which improves the intelligence of human-computer interaction.
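Using the example values above, the choice of pause duration and the resulting trigger condition could be sketched as follows; the function names and the exact thresholds are illustrative, not the claimed implementation.

    def pause_duration_for_rate(words_per_minute):
        """Map a measured speech rate to the first preset duration (example values from the text)."""
        if words_per_minute >= 150:      # 150-200 words per minute: fast speaker
            return 2.0
        if words_per_minute >= 100:      # 100-150 words per minute
            return 5.0
        return 10.0                      # 60-100 words per minute: slow speaker

    def should_translate(silence_seconds, words_per_minute):
        """Trigger translation once the pause exceeds the first preset duration."""
        return silence_seconds >= pause_duration_for_rate(words_per_minute)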
Optionally, the controller 13 is further configured to receive a voice translation instruction input by the user and enter a voice translation mode;
the controller 13 is further configured to receive a to-be-translated-voice selection instruction from the user and select the second voice as the voice to be translated.
In the embodiments of the present application, a touch area for detecting a user touch operation may be provided on the surface of the first wearable device. For example, a pressure sensor may be provided in a preset area on the surface of the first wearable device, and the first wearable device may generate the voice translation instruction or the to-be-translated-voice selection instruction according to the press duration and press force of the user in the touch area. For example, if the press duration is 1-2 seconds and the press force is 1-5 newtons, the voice translation instruction is generated; if the press duration is 3-5 seconds and the press force is 1-10 newtons, the to-be-translated-voice selection instruction is generated, and the language category of the currently selected voice to be translated is output through the speaker 12. After entering the voice translation mode, the first wearable device turns on the microphone and starts voice collection. After entering the voice translation mode, the language category of the voice to be translated can further be selected (for example, Chinese, English, French, German, Japanese, Korean, Russian, Spanish, Arabic, and so on).
As another example, the first wearable device may detect the number of taps of the user in the touch area within a unit time (for example, 1 second or 2 seconds) and generate the corresponding control instruction according to a correspondence between the number of taps and control instructions. For example, the control instruction corresponding to a single tap is the voice translation instruction, and the first wearable device outputs a prompt tone through the speaker to prompt the user that the voice translation mode has been entered; the control instruction corresponding to two taps is the to-be-translated-voice selection instruction, and the first wearable device outputs a prompt tone through the speaker to indicate the language category of the currently selected voice to be translated.
In the embodiments of the present application, the user can trigger whether to enter the voice translation mode, which improves the intelligence of human-computer interaction; and because no physical button is required, space on the first wearable device can be saved and space utilization improved.
Optionally, the microphone 11 is further configured to detect whether there is voice input within a second preset duration and whether voice data sent by the second wearable device is received;
the controller 13 is further configured to exit the voice translation mode when the microphone 11 detects no voice input within the second preset duration and no voice data sent by the second wearable device is received.
In the embodiments of the present application, the second preset duration may be set to 10 seconds, 20 seconds, 30 seconds, or the like, which is not limited in the embodiments of the present application. The second preset duration is used to determine whether the user exits the voice translation mode: when, for longer than the second preset duration, no voice input is detected and no voice data sent by the second wearable device is received, the voice translation mode is exited. After exiting the voice translation mode, the first wearable device turns off the microphone 11, which saves power.
The second preset duration may be set by a timer in the first wearable device.
The second preset duration is greater than the first preset duration.
In the embodiments of the present application, the voice translation mode can be exited automatically, thereby saving the power consumption of the wearable device.
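A minimal sketch of this idle-exit check is shown below, assuming monotonic timestamps for the most recent local voice input and the most recent voice data received from the peer device; the 30-second value is one of the example durations mentioned above.

    import time

    SECOND_PRESET_DURATION = 30.0   # example value; it should exceed the first preset duration

    def should_exit_translation_mode(last_voice_input_ts, last_peer_data_ts):
        """Exit when neither local voice input nor peer voice data arrived within the window."""
        now = time.monotonic()
        return (now - last_voice_input_ts > SECOND_PRESET_DURATION and
                now - last_peer_data_ts > SECOND_PRESET_DURATION)

    # On exit, the device would turn the microphone off to save power, as described above.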
Optionally, the controller 13 is further configured to receive an exit-voice-translation-mode instruction input by the user and exit the voice translation mode.
For example, the first wearable device may detect the number of taps of the user in the touch area within a unit time (for example, 1 second or 2 seconds) and generate the corresponding control instruction according to a correspondence between the number of taps and control instructions. For example, the control instruction corresponding to three taps is the exit-voice-translation-mode instruction, and the first wearable device outputs a prompt tone through the speaker to prompt the user that the voice translation mode has been exited.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a translation method based on a wearable device disclosed in an embodiment of the present application. As shown in Fig. 3, the translation method based on a wearable device includes the following steps.
301: A first wearable device collects a first voice input by a user.
302: The first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, where the second wearable device is configured to play the second voice.
Optionally, step 302 may include the following steps (11) and (12).
(11) The first wearable device sends a translation request to a translation server, where the translation request carries the first voice and a second voice identifier, and the translation request is used by the translation server to translate the first voice into the second voice corresponding to the second voice identifier.
(12) The first wearable device receives the second voice returned by the translation server.
303: The first wearable device plays the second voice.
For the specific implementation of the method shown in Fig. 3, reference may be made to the device embodiments shown in Fig. 1 to Fig. 2; details are not described herein again.
In the embodiments of the present application, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of another translation method based on a wearable device disclosed in an embodiment of the present application. Fig. 4 is further optimized on the basis of Fig. 3. As shown in Fig. 4, the translation method based on a wearable device includes the following steps.
401: A first wearable device collects a first voice input by a user.
402: The first wearable device detects whether there is voice input within a first preset duration.
403: If no voice input is detected within the first preset duration, the first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, where the second wearable device is configured to play the second voice.
404: The first wearable device plays the second voice.
Step 401 in this embodiment may refer to step 301 shown in Fig. 3, and step 404 may refer to step 303 shown in Fig. 3; details are not described herein again.
For the specific implementation of the method shown in Fig. 4, reference may be made to the device embodiments shown in Fig. 1 to Fig. 2; details are not described herein again.
In the embodiments of the present application, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation. The first preset duration can be set as a pause duration so that the wearable device performs voice translation at an appropriate time, which improves the intelligence of human-computer interaction.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of another translation method based on a wearable device disclosed in an embodiment of the present application. Fig. 5 is further optimized on the basis of Fig. 3. As shown in Fig. 5, the translation method based on a wearable device includes the following steps.
501: A first wearable device receives a voice translation instruction input by a user and enters a voice translation mode.
502: The first wearable device receives a to-be-translated-voice selection instruction from the user and selects a second voice as the voice to be translated.
503: The first wearable device collects a first voice input by the user.
504: The first wearable device translates the first voice into the second voice and sends the second voice to a second wearable device, where the second wearable device is configured to play the second voice.
505: The first wearable device plays the second voice.
506: The first wearable device detects whether there is voice input within a second preset duration and whether voice data sent by the second wearable device is received.
507: If not, the first wearable device exits the voice translation mode.
Steps 503 to 505 in this embodiment may refer to steps 301 to 303 shown in Fig. 3; details are not described herein again.
For the specific implementation of the method shown in Fig. 5, reference may be made to the device embodiments shown in Fig. 1 to Fig. 2; details are not described herein again.
In the embodiments of the present application, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation. Exiting the voice translation mode can be detected automatically and the microphone turned off, which saves power.
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. As shown in the figure, the wearable device includes a processor 601, a memory 602, a communication interface 603, and one or more programs, where the one or more programs are stored in the memory 602 and configured to be executed by the processor 601, and the programs include instructions for performing the following steps:
a first wearable device collects a first voice input by a user;
the first wearable device translates the first voice into a second voice and sends the second voice to a second wearable device, where the second wearable device is configured to play the second voice;
the first wearable device plays the second voice.
Optionally, in terms of translating the first voice into the second voice by the first wearable device, the programs specifically include instructions for performing the following steps:
the first wearable device sends a translation request to a translation server, where the translation request carries the first voice and a second voice identifier, and the translation request is used by the translation server to translate the first voice into the second voice corresponding to the second voice identifier;
the first wearable device receives the second voice returned by the translation server.
Optionally, the programs further include instructions for performing the following steps:
the first wearable device detects whether there is voice input within a first preset duration;
if not, the first wearable device performs the step of translating the first voice into the second voice.
Optionally, the programs further include instructions for performing the following steps:
the first wearable device receives a voice translation instruction input by the user and enters a voice translation mode;
the first wearable device receives a to-be-translated-voice selection instruction from the user and selects the second voice as the voice to be translated.
Optionally, the programs further include instructions for performing the following steps:
the first wearable device detects whether there is voice input within a second preset duration and whether voice data sent by the second wearable device is received;
if not, the voice translation mode is exited.
For the specific implementation of the device shown in Fig. 6, reference may be made to the device embodiments shown in Fig. 1 to Fig. 2; details are not described herein again.
With the wearable device shown in Fig. 6, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
Referring to Fig. 7, Fig. 7 is a schematic structural diagram of a translation apparatus based on a wearable device disclosed in an embodiment of the present application. The apparatus is applied to a wearable device. The translation apparatus 700 based on a wearable device includes a collecting unit 701, a translation unit 702, a sending unit 703, and a playback unit 704, wherein:
the collecting unit 701 is configured to collect a first voice input by a user;
the translation unit 702 is configured to translate the first voice into a second voice;
the sending unit 703 is configured to send the second voice to a second wearable device, where the second wearable device is configured to play the second voice;
the playback unit 704 is configured to play the second voice.
The translation unit 702 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The collecting unit 701 may be a microphone, the sending unit 703 may be a wireless communication module (for example, a Bluetooth module), and the playback unit 704 may be a speaker.
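The unit decomposition of the translation apparatus could be mirrored in software roughly as below; the class and method names are illustrative and are not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class TranslationApparatus:
        """Sketch of the collecting / translation / sending / playback unit split (illustrative)."""
        collecting_unit: object    # e.g. a microphone driver
        translation_unit: object   # e.g. code on a CPU/DSP, or a client of the translation server
        sending_unit: object       # e.g. a Bluetooth module
        playback_unit: object      # e.g. a speaker driver

        def run_once(self):
            first_voice = self.collecting_unit.collect()
            second_voice = self.translation_unit.translate(first_voice)
            self.sending_unit.send(second_voice)     # the second wearable device plays it
            self.playback_unit.play(second_voice)    # local playback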
For the specific implementation of the apparatus shown in Fig. 7, reference may be made to the device embodiments shown in Fig. 1 to Fig. 2; details are not described herein again.
With the translation apparatus shown in Fig. 7, voice translation can be carried out between two wearable devices without a third-party device, which improves the real-time performance of voice translation and thereby implements real-time voice translation.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data interchange, and the computer program causes a computer to perform some or all of the steps of any method described in the above method embodiments, where the computer includes a wearable device.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the computer includes a wearable device.
It should be noted that, for brevity, the foregoing method embodiments are expressed as a series of action combinations. However, a person skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application, some steps may be performed in other orders or simultaneously. In addition, a person skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not described in detail in an embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods in the embodiments of the present application. The aforementioned memory includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art can understand that all or some of the steps in the methods of the above embodiments may be completed by a program instructing relevant hardware, and the program may be stored in a computer-readable memory. The memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. At the same time, a person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (13)

1. A wearable device, comprising a microphone, a loudspeaker, and a controller, wherein:
the microphone is configured to collect a first voice input by a user;
the controller is configured to translate the first voice into a second voice and send the second voice to a second wearable device, the second wearable device being configured to play the second voice; and
the loudspeaker is configured to play the second voice.
2. The wearable device according to claim 1, wherein translating, by the controller, the first voice into the second voice comprises:
sending, by the controller, a translation request to a translation server, the translation request carrying the first voice and a second voice identifier, the translation request being used for the translation server to translate the first voice into the second voice corresponding to the second voice identifier; and
receiving, by the controller, the second voice returned by the translation server.
3. The wearable device according to claim 1 or 2, wherein:
the microphone is further configured to, after collecting the first voice input by the user, detect whether there is a voice input within a first preset duration; and
the controller is further configured to translate the first voice into the second voice when the microphone detects no voice input within the first preset duration.
4. The wearable device according to any one of claims 1 to 3, wherein:
the controller is further configured to receive a voice translation instruction input by the user and enter a voice translation mode; and
the controller is further configured to receive a to-be-translated voice selection instruction from the user, the second voice being selected as the voice to be translated.
5. The wearable device according to claim 4, wherein:
the microphone is further configured to detect whether there is a voice input within a second preset duration and whether voice data sent by the second wearable device is received; and
the controller is further configured to exit the voice translation mode when the microphone detects no voice input within the second preset duration and no voice data sent by the second wearable device is received.
6. A translation method based on a wearable device, the method comprising:
collecting, by a first wearable device, a first voice input by a user;
translating, by the first wearable device, the first voice into a second voice, and sending the second voice to a second wearable device, the second wearable device being configured to play the second voice; and
playing, by the first wearable device, the second voice.
7. The method according to claim 6, wherein the translating, by the first wearable device, the first voice into the second voice comprises:
sending, by the first wearable device, a translation request to a translation server, the translation request carrying the first voice and a second voice identifier, the translation request being used for the translation server to translate the first voice into the second voice corresponding to the second voice identifier; and
receiving, by the first wearable device, the second voice returned by the translation server.
8. The method according to claim 6 or 7, wherein after the collecting, by the first wearable device, the first voice input by the user and before the translating, by the first wearable device, the first voice into the second voice, the method further comprises:
detecting, by the first wearable device, whether there is a voice input within a first preset duration; and
if not, performing, by the first wearable device, the step of translating the first voice into the second voice.
9. The method according to any one of claims 6 to 8, wherein before the collecting, by the first wearable device, the first voice input by the user, the method further comprises:
receiving, by the first wearable device, a voice translation instruction input by the user and entering a voice translation mode; and
receiving, by the first wearable device, a to-be-translated voice selection instruction from the user, the second voice being selected as the voice to be translated.
10. The method according to claim 9, wherein after the playing, by the first wearable device, the second voice, the method further comprises:
detecting, by the first wearable device, whether there is a voice input within a second preset duration and whether voice data sent by the second wearable device is received; and
if not, exiting the voice translation mode.
11. A translation apparatus based on a wearable device, applied to a wearable device, the translation apparatus comprising a collecting unit, a translation unit, a sending unit, and a playing unit, wherein:
the collecting unit is configured to collect a first voice input by a user;
the translation unit is configured to translate the first voice into a second voice;
the sending unit is configured to send the second voice to a second wearable device, the second wearable device being configured to play the second voice; and
the playing unit is configured to play the second voice.
12. A wearable device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of the method according to any one of claims 6 to 10.
13. A computer-readable storage medium storing a computer program for electronic data interchange, wherein the computer program causes a wearable device to perform the method according to any one of claims 6 to 10.
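Below is a minimal, purely illustrative sketch of the overall flow recited in method claims 6 to 10: the first wearable device enters the voice translation mode, collects the first voice, treats a first preset duration without further voice input as the end of an utterance, translates and sends the second voice to the second wearable device while playing it locally, and exits the mode after a second preset duration with no voice input and no voice data from the second wearable device. The microphone/speaker/link interfaces, the translate callable, and the duration values are hypothetical stand-ins and are not part of the claims.

    # Illustrative sketch of claims 6-10; microphone, speaker, link_to_second_device
    # and translate are hypothetical stand-ins for device hardware and the step of claim 7.
    import time

    FIRST_PRESET_DURATION = 1.5    # seconds of silence ending an utterance (assumed value)
    SECOND_PRESET_DURATION = 30.0  # seconds of inactivity before exiting the mode (assumed)

    def run_voice_translation_mode(microphone, speaker, link_to_second_device, translate):
        """Run the voice translation mode loop on the first wearable device."""
        last_activity = time.monotonic()
        while True:
            first_voice = microphone.collect_voice()          # first voice input by the user
            if first_voice:
                # Claims 3/8: translate only if no further voice input arrives
                # within the first preset duration.
                if not microphone.has_voice_input(within=FIRST_PRESET_DURATION):
                    second_voice = translate(first_voice)      # e.g. via a translation server
                    link_to_second_device.send(second_voice)   # second device plays it
                    speaker.play(second_voice)                 # first device plays it as well
                last_activity = time.monotonic()
            elif link_to_second_device.has_incoming_voice_data():
                last_activity = time.monotonic()
            elif time.monotonic() - last_activity > SECOND_PRESET_DURATION:
                break  # claims 5/10: no activity within the second preset duration -> exit mode

Reading the first preset duration as an end-of-utterance silence window is one plausible interpretation; the application does not prescribe concrete duration values.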
CN201810414740.8A 2018-05-03 2018-05-03 Translation method and related equipment Expired - Fee Related CN108710615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810414740.8A CN108710615B (en) 2018-05-03 2018-05-03 Translation method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810414740.8A CN108710615B (en) 2018-05-03 2018-05-03 Translation method and related equipment

Publications (2)

Publication Number Publication Date
CN108710615A true CN108710615A (en) 2018-10-26
CN108710615B CN108710615B (en) 2020-03-03

Family

ID=63867719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810414740.8A Expired - Fee Related CN108710615B (en) 2018-05-03 2018-05-03 Translation method and related equipment

Country Status (1)

Country Link
CN (1) CN108710615B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246643A (en) * 2012-02-10 2013-08-14 株式会社东芝 Speech translation apparatus and speech translation method
CN104462070A (en) * 2013-09-19 2015-03-25 株式会社东芝 A speech translating system and a speech translating method
CN106462571A (en) * 2014-04-25 2017-02-22 奥斯特豪特集团有限公司 Head-worn computing systems
US20160267075A1 (en) * 2015-03-13 2016-09-15 Panasonic Intellectual Property Management Co., Ltd. Wearable device and translation system
CN106935240A (en) * 2017-03-24 2017-07-07 百度在线网络技术(北京)有限公司 Voice translation method, device, terminal device and cloud server based on artificial intelligence
CN206907022U (en) * 2017-06-05 2018-01-19 中国地质大学(北京) Easily worn formula instant translation machine

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109462789A (en) * 2018-11-10 2019-03-12 东莞市华睿电子科技有限公司 A kind of earphone plays the interpretation method of audio
CN109360549A (en) * 2018-11-12 2019-02-19 北京搜狗科技发展有限公司 A kind of data processing method, device and the device for data processing
CN109360549B (en) * 2018-11-12 2023-07-18 北京搜狗科技发展有限公司 Data processing method, wearable device and device for data processing
CN109787966A (en) * 2018-12-29 2019-05-21 北京金山安全软件有限公司 Monitoring method and device based on wearable device and electronic device
CN110099325A (en) * 2019-05-24 2019-08-06 歌尔科技有限公司 A kind of wireless headset enters box detection method, device, wireless headset and earphone products
WO2021023012A1 (en) * 2019-08-02 2021-02-11 汕头大学 Wearable translation and information retrieval apparatus and use method therefor
CN110558698A (en) * 2019-09-17 2019-12-13 临沂大学 Portable translator
CN111104042A (en) * 2019-12-27 2020-05-05 惠州Tcl移动通信有限公司 Human-computer interaction system and method and terminal equipment
CN111476040A (en) * 2020-03-27 2020-07-31 深圳光启超材料技术有限公司 Language output method, head-mounted device, storage medium, and electronic device
CN111696552A (en) * 2020-06-05 2020-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN111739538A (en) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 Translation method and device, earphone and server
CN111739538B (en) * 2020-06-05 2022-04-26 北京搜狗科技发展有限公司 Translation method and device, earphone and server
CN111696552B (en) * 2020-06-05 2023-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN112394771A (en) * 2020-11-24 2021-02-23 维沃移动通信有限公司 Communication method, communication device, wearable device and readable storage medium

Also Published As

Publication number Publication date
CN108710615B (en) 2020-03-03

Similar Documents

Publication Publication Date Title
CN108710615A (en) Interpretation method and relevant device
CN109005480A (en) Information processing method and related product
WO2021184549A1 (en) Monaural earphone, intelligent electronic device, method and computer readable medium
CN109067965A (en) Interpretation method, translating equipment, wearable device and storage medium
WO2018045536A1 (en) Sound signal processing method, terminal, and headphones
CN108595003A (en) Function control method and relevant device
CN108108142A (en) Voice information processing method, device, terminal device and storage medium
CN208227260U (en) A kind of smart bluetooth earphone and bluetooth interactive system
CN108810693A (en) Apparatus control method and Related product
CN108668009B (en) Input operation control method, device, terminal, earphone and readable storage medium
US9838522B2 (en) Information processing device
CN108289244A (en) Video caption processing method, mobile terminal and computer readable storage medium
CN108923810A (en) Interpretation method and relevant device
CN110070863A (en) A kind of sound control method and device
CN108769387A (en) Application control method and relevant device
CN109561420A (en) A kind of method and relevant device of emergency help
CN108683799A (en) Wearable device lookup method and relevant device
CN106328176B (en) A kind of method and apparatus generating song audio
CN103186232A (en) Voice keyboard device
CN108541080A Method for reconnection between a first electronic device and a second electronic device, and related product
CN108959273A (en) Interpretation method, electronic device and storage medium
CN110097875A Electronic device, method, and medium for voice interaction wake-up based on microphone signal
CN106878390A (en) Electronic pet interaction control method, device and wearable device
CN109144454A (en) double-sided screen display control method and related product
CN112230877A (en) Voice operation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (Granted publication date: 20200303)