CN108683790B - Voice processing method and related product


Info

Publication number
CN108683790B
Authority
CN
China
Prior art keywords
recording
target
sound signal
head
preset
Prior art date
Legal status
Active
Application number
CN201810368538.6A
Other languages
Chinese (zh)
Other versions
CN108683790A
Inventor
郭富豪
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810368538.6A
Publication of CN108683790A
Application granted
Publication of CN108683790B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/02: Constructional features of telephone sets
    • H04M 1/21: Combinations with auxiliary equipment, e.g. with clocks or memoranda pads
    • H04M 1/64: Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
    • H04M 1/65: Recording arrangements for recording a message from the calling party
    • H04M 1/656: Recording arrangements for recording a message from the calling party for recording conversations
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces with means for local support of applications that increase the functionality
    • H04M 1/72409: User interfaces with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M 1/72412: User interfaces with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H04M 1/7243: User interfaces with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72433: User interfaces with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones

Abstract

The application discloses a voice processing method and related products, applied to a wearable device worn on the head of a user. The method includes the following steps: collecting motion parameters of the user's head; generating a recording instruction when the motion parameters meet a preset condition; and executing a recording operation in response to the recording instruction. With the embodiments of the application, a recording instruction can be generated from the movement of the user's head to implement a recording function, which improves the intelligence and convenience of the wearable device as well as the user experience.

Description

Voice processing method and related product
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a speech processing method and a related product.
Background
With the maturation of wireless technology, wireless earphones are connected to electronic devices such as mobile phones in more and more scenarios. People can realize various functions, such as listening to music and making calls, through a wireless earphone. However, current wireless earphones offer only a single, limited set of functions, which degrades the user experience.
Disclosure of Invention
The embodiment of the application provides a voice processing method and a related product, which can realize a recording function and improve the intelligence and convenience of wearable equipment.
In a first aspect, embodiments of the present application provide a wearable device worn on the head of a user, the wearable device including a storage and processing circuit, and a sensor and an audio component connected to the storage and processing circuit, wherein,
the sensor is used for acquiring the motion parameters of the head of the user;
the storage and processing circuit is used for generating a recording instruction when the motion parameter meets a preset condition;
and the audio component is used for executing a recording operation in response to the recording instruction.
In a second aspect, an embodiment of the present application provides a speech processing method, which is applied to a wearable device, where the wearable device is worn on a head of a user, and the method includes:
collecting motion parameters of the head of the user;
when the motion parameters meet preset conditions, generating a recording instruction;
and executing a recording operation in response to the recording instruction.
In a third aspect, an embodiment of the present application provides a speech processing apparatus, which is applied to a wearable device, where the wearable device is worn on a head of a user, and the speech processing apparatus includes a collecting unit, a generating unit, and an executing unit, where:
the acquisition unit is used for acquiring the motion parameters of the head of the user;
the generating unit is used for generating a recording instruction when the motion parameters meet preset conditions;
and the execution unit is used for responding to the recording instruction and executing the recording operation.
In a fourth aspect, embodiments of the present application provide a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods in the second aspect of the present application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any one of the methods of the second aspect of the present application. The computer program product may be a software installation package.
In the embodiments of this application, the wearable device is worn on the user's head and collects motion parameters of the user's head; when the motion parameters meet a preset condition, a recording instruction is generated, and a recording operation is executed in response to that instruction. A recording instruction is thus generated from the user's head movement to implement a recording function, which improves the intelligence and convenience of the wearable device as well as the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
fig. 1B is an illustrative diagram of a wireless headset disclosed in an embodiment of the present application;
FIG. 1C is a schematic diagram of a network architecture for speech processing according to an embodiment of the present application;
FIG. 1D is a schematic flow chart diagram illustrating a speech processing method disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another speech processing method disclosed in the embodiments of the present application;
fig. 3 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application;
fig. 4A is a schematic structural diagram of a speech processing apparatus disclosed in an embodiment of the present application;
fig. 4B is a schematic structural diagram of another speech processing apparatus disclosed in the embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the technical solutions, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Electronic devices may include various handheld devices, vehicle mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication capabilities, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal), and so forth. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The wearable device may include at least one of the following: wireless headsets, brain wave acquisition devices, Augmented Reality (AR)/Virtual Reality (VR) devices, smart earrings/ear studs, smart hearing aids, smart hairpins, smart glasses, and the like. The wireless headset may communicate through wireless fidelity (Wi-Fi) technology, Bluetooth technology, visible light communication technology, invisible light communication technology (infrared communication technology, ultraviolet communication technology), and the like. For convenience of explanation, the wearable device in the following embodiments is described by taking a wireless headset as an example.
The wireless earphone can be an ear-hanging earphone, an earplug earphone or a headphone, and the embodiment of the application is not limited.
The wireless headset may be housed in a headset case, which may include two receiving cavities (a first receiving cavity and a second receiving cavity) sized and shaped to receive a pair of wireless headsets (a first wireless headset and a second wireless headset), and one or more magnetic components disposed within the case for magnetically attracting and respectively securing the pair of wireless headsets in the two receiving cavities. The headset case may further include a case cover. The first receiving cavity is sized and shaped to receive the first wireless headset, and the second receiving cavity is sized and shaped to receive the second wireless headset. The wireless headset may include a headset housing, a rechargeable battery (e.g., a lithium battery) disposed within the headset housing, a plurality of metal contacts disposed on an exterior surface of the headset housing for connecting the battery to a charging device, and a speaker assembly including a directional sound port and a driver unit; the driver unit includes a magnet, a voice coil, and a diaphragm and emits sound through the directional sound port. In one possible implementation, the wireless headset may further include a touch area, which may be located on an outer surface of the headset housing; at least one touch sensor is disposed in the touch area for detecting touch operations, and the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor can detect a change in self-capacitance to recognize the touch operation.
In one possible implementation, the wireless headset may further include an acceleration sensor and a triaxial gyroscope, the acceleration sensor and the triaxial gyroscope may be disposed within the headset housing, and the acceleration sensor and the triaxial gyroscope are used to identify a picking up action and a taking down action of the wireless headset.
In a possible implementation manner, the wireless headset may further include at least one air pressure sensor, which may be disposed on a surface of the headset housing and configured to detect the air pressure in the ear after the wireless headset is worn. The wearing tightness of the wireless earphone can thus be detected through the air pressure sensor. When it is detected that the wireless earphone is worn loosely, the wireless earphone can send prompt information to an electronic device connected with it, so as to prompt the user that the wireless earphone is at risk of falling out.
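To make the looseness check concrete, the following minimal sketch is illustrative only and not part of the patent text; the threshold value, function names, and prompt text are all assumptions:

```python
# Illustrative sketch only: the threshold, names, and prompt text are hypothetical.
IN_EAR_PRESSURE_MIN_PA = 101_400  # assumed slight overpressure for a snug in-ear fit

def is_worn_tightly(pressure_pa: float) -> bool:
    """Judge wearing tightness from the in-ear air pressure reading."""
    return pressure_pa >= IN_EAR_PRESSURE_MIN_PA

def on_pressure_sample(pressure_pa: float, send_prompt) -> None:
    # When the fit is loose, prompt the connected electronic device that
    # the earphone is at risk of falling out.
    if not is_worn_tightly(pressure_pa):
        send_prompt("Wireless earphone is worn loosely and may fall out.")
```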
Referring to fig. 1A, fig. 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application. The wearable device 100 includes a storage and processing circuit 110, and a sensor 170 and an audio component 140 connected to the storage and processing circuit 110. Optionally, a specific form of the wearable device shown in fig. 1A may refer to fig. 1B. The wearable device 100 is specifically as follows:
the wearable device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. The processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the wearable device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the wearable device 100, such as an internet browsing application, a Voice Over Internet Protocol (VOIP) phone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on touch sensors, functionality associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in wearable device 100, to name a few, embodiments of the present application are not limited.
The wearable device 100 may also include input-output circuitry 150. The input-output circuitry 150 may be used to enable the wearable device 100 to input and output data, i.e., to allow the wearable device 100 to receive data from an external device and to output data to an external device. The input-output circuitry 150 may further include the sensor 170. The sensor 170 may include an ambient light sensor, a light- or capacitance-based proximity sensor, an ultrasonic sensor, a radar sensor, a touch sensor (e.g., a light-based touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, and other sensors.
Input-output circuitry 150 may also include one or more displays, such as display 130. Display 130 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, displays using other display technologies. Display 130 may include an array of touch sensors (i.e., display 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The audio component 140 may be used to provide audio input and output functionality for the wearable device 100. The audio components 140 in the wearable device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sounds.
The communication circuit 120 may be used to provide the wearable device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in the communication circuit 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in the communication circuit 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near-field coupled electromagnetic signals; for example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communication circuit 120 may also include a cellular telephone transceiver and antenna, wireless local area network transceiver circuitry and antenna, and so forth.
The wearable device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through the input-output circuitry 150 to control the operation of the wearable device 100, and may use the output data of the input-output circuitry 150 to receive status information and other outputs from the wearable device 100.
Based on the wearable device described in fig. 1A above, the following functions may be implemented:
the sensor 170 is used for acquiring motion parameters of the head of the user;
the storage and processing circuit 110 is configured to generate a recording instruction when the motion parameter meets a preset condition;
the audio component 140 is configured to respond to the recording instruction and perform a recording operation.
In the embodiments of this application, applied to a wearable device worn on the user's head, the device collects motion parameters of the user's head; when the motion parameters meet a preset condition, a recording instruction is generated, and a recording operation is executed in response to that instruction. A recording instruction is thus generated from the user's head movement to implement a recording function, which improves the intelligence and convenience of the wearable device as well as the user experience.
In an embodiment of the present application, the sensor 170 may include at least one of the following: an acceleration sensor, a brain wave acquisition device, a proximity sensor, a light-sensitive sensor, an ultrasonic sensor, a radar sensor, a pressure sensor, a temperature sensor, a displacement sensor, a nerve sensor, a muscle sensor, and the like, and the audio component 140 may include at least a recording chip and a voice signal processing circuit.
In one possible example, the motion parameters include a first motion parameter of a first head organ and a second motion parameter of a second head organ;
in terms of generating the recording instruction, the storage and processing circuit 110 is specifically configured to:
determining a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between the motion parameter of the first head organ and the recording range parameter;
determining a target sound recording object corresponding to the second motion parameter according to a preset second mapping relation between the motion parameter of the second head organ and the sound recording object;
and generating a recording instruction according to the target recording range parameter and the target recording object.
In one possible example, the motion parameters include: a target amplitude and a target direction of facial muscle movement; the storage and processing circuit 110 is further specifically configured to:
and when the target amplitude is in a preset amplitude range and the target direction is in a preset direction range, determining that the motion parameter meets the preset condition.
In one possible example, in connection with performing the recording operation, the audio component 140 is specifically configured to:
when an environment sound signal is received, filtering the environment sound signal to obtain a reference sound signal;
performing analog-to-digital conversion on the reference sound signal to obtain a target sound signal;
dividing the target sound signal into a plurality of sound signal segments according to the time sequence;
respectively determining the energy value of each sound signal segment in the plurality of sound signal segments to obtain a plurality of energy values;
and selecting a target energy value larger than a preset energy threshold value from the plurality of energy values, and splicing the sound signal segments corresponding to the target energy value according to the time sequence to obtain a final recording segment.
In one possible example, the wearable device is a wireless headset including a master headset and a slave headset, the recording operation being performed by the master headset and/or the slave headset.
The wearable device described in fig. 1A can be used to perform the following speech processing method, specifically as follows:
the sensor 170 collects the motion parameters of the user's head;
the storage and processing circuit 110 generates a recording instruction when the motion parameter meets a preset condition;
the audio module 140 responds to the recording command to perform a recording operation.
In one possible example, please refer to fig. 1C, which is a schematic diagram of a network architecture for speech processing disclosed in the embodiment of the present application. The network architecture shown in fig. 1C may include an electronic device and a wearable device, where the wearable device may be communicatively connected to the electronic device through a wireless network (e.g., Bluetooth, infrared, wireless fidelity (Wi-Fi), visible light communication technology, or invisible light communication technology). It should be noted that the number of wireless headsets may be one or two, and the embodiment of the present application is not limited in this regard. In the network architecture shown in fig. 1C, the wearable device and the electronic device can successfully connect only within a certain distance. Taking a wireless Bluetooth headset as an example, if its maximum transmit power is 2.5 mW, the headset and the electronic device can connect successfully only within a range of about 10 meters; when the distance between them exceeds 10 meters, they are generally disconnected. Within this range, the electronic device can communicate with the wearable device: it can control the wearable device to implement the recording function, or the wearable device can transmit the recorded audio file to the electronic device after recording.
Referring to fig. 1D, based on the wearable device described in fig. 1A or the network architecture for speech processing described in fig. 1C, fig. 1D is a schematic flow chart of a speech processing method disclosed in the embodiment of the present application. As shown in fig. 1D, the speech processing method is applied to a wearable device worn on the head of a user, and includes the following steps.
101. Acquiring the motion parameters of the head of the user.
The motion parameters of the user's head may include at least one of the following: muscle movement parameters, nerve movement parameters, eye movement parameters, head movement parameters, respiratory movement parameters, and the like.
Optionally, the muscle movement parameters may include facial expressions; specifically, the muscle movement parameters can be acquired and the user's expression analyzed from them. As another example, the muscle movement parameters may further include at least one of the following: the direction of muscle movement, the location of muscle movement (e.g., eye muscles, lip muscles, throat muscles, ear muscles, etc.), the amplitude of muscle movement, and the like.
Optionally, the nerve movement parameters may include at least one of the following: the neuron type (e.g., speech, motion, etc.), the neuron activity (e.g., the user's brain waves may be collected, with activity reflected by brain wave energy), and the neuron activity region (e.g., left-brain activity, right-brain activity, etc.).
Optionally, the eye movement parameters may be at least one of the following: blink movement parameters (monocular and binocular) and eyeball movement parameters (monocular and binocular). The blink movement parameters may specifically be the blink frequency and the blink amplitude (e.g., closing the eyes fully and squinting have different amplitudes), and the eyeball movement parameters may specifically be: moving the eyes left and right, moving the eyes up and down, eye wandering (e.g., combined up-and-down or left-and-right movement), showing the whites of the eyes, and the like.
Optionally, the head movement parameters may include at least one of the following: up-and-down movement parameters (e.g., nodding, lowering the head, raising the head, bowing), left-and-right movement parameters (e.g., tilting the head to the left, tilting the head to the right), and head rotation parameters (e.g., rotating the head clockwise to the left, clockwise to the right, clockwise upward, or clockwise downward).
Optionally, the respiratory motion parameters may include at least one of: number of breaths, breathing frequency, breathing amplitude (e.g., shallow and deep breaths are not the same), and the like.
102. Generating a recording instruction when the motion parameters meet a preset condition.
The preset condition may be set by system default or by the user. For example, the motion parameter may be the number of nods: when the number of nods is greater than a preset number, the recording instruction is generated, where the preset number may be set by the user or by system default.
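As an illustration of such a condition check, a nod-count trigger might look like the sketch below; the function name and default value are hypothetical, and the patent does not prescribe this code:

```python
# Illustrative sketch: a nod-count trigger; names and the default are hypothetical.
def should_generate_recording_instruction(nod_count: int, preset_count: int = 3) -> bool:
    """The preset condition is met once the user has nodded more than preset_count times."""
    return nod_count > preset_count

# Example: three nods do not trigger recording, a fourth nod does.
assert not should_generate_recording_instruction(3)
assert should_generate_recording_instruction(4)
```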
Optionally, the motion parameters comprise a first motion parameter of a first head organ and a second motion parameter of a second head organ; in the step 102, generating the recording instruction may include the following steps:
21. determining a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between the motion parameter of the first head organ and the recording range parameter;
22. determining a target sound recording object corresponding to the second motion parameter according to a preset second mapping relation between the motion parameter of the second head organ and the sound recording object;
23. and generating a recording instruction according to the target recording range parameter and the target recording object.
The first head organ and the second head organ may include, but are not limited to, at least one of the following: eyes, nose, mouth, respiratory tract, ears, cheeks, eyebrows, etc. For example, the first head organ may be the left eye, and the second head organ may be the right eye. The recording range parameters may include: the recording distance, the recording direction, the recording sound volume, the recording sound frequency, the recording duration, and the like. The recording object may be understood as the identity of the speaker (e.g., in class, recording only the teacher's voice; in a meeting, recording only a particular person's voice), the number of speakers (e.g., 5 persons), or the type of sound source (e.g., person, cat, dog, wind, etc.).
Optionally, the recording distance may be understood as recording only sound within a certain distance range, for example, within 5 meters. In a specific implementation, the wearable device may be a wireless headset including a left earphone and a right earphone. Because both earphones receive the environmental sound, the same sound arrives at the left earphone and the right earphone at slightly different times; the distance between the sound source and the wireless headset can therefore be identified through the time difference between the two, enabling selective recording of sound. The recording direction is understood to mean recording only sound coming from a certain direction: sound is a wave that propagates along a certain trajectory and thus has a propagation direction. The recording sound frequency may be understood as recording only sound within a certain frequency range. The recording duration may be understood as how long the recording should last; for example, if it is 10 minutes, recording may stop after 10 minutes.
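To make the arrival-time-difference idea concrete, here is a minimal illustrative sketch, not the patent's implementation; the spacing constant and function name are assumptions. With two earphones, the time difference most directly yields the direction of arrival under a far-field approximation, and range gating can then keep only sounds matching the configured recording direction or distance:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air
EAR_SPACING_M = 0.18        # hypothetical spacing between left and right earphones

def direction_from_time_difference(delay_s: float) -> float:
    """Estimate the direction of arrival (radians from straight ahead) from the
    difference between the arrival times at the left and right earphones.
    Far-field approximation: delay = spacing * sin(angle) / speed_of_sound."""
    s = delay_s * SPEED_OF_SOUND_M_S / EAR_SPACING_M
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.asin(s)
```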
In a specific implementation, the motion parameters include a first motion parameter of a first head organ and a second motion parameter of a second head organ. A preset first mapping relation between motion parameters of the first head organ and recording range parameters, and a preset second mapping relation between motion parameters of the second head organ and recording objects, can be stored in the electronic device in advance. The target recording range parameter corresponding to the first motion parameter can then be determined from the first mapping relation, the target recording object corresponding to the second motion parameter can be determined from the second mapping relation, and a recording instruction can be generated according to the target recording range parameter and the target recording object. Recording is thus personalized to the target recording range parameter and the target recording object, which on the one hand reduces memory consumption and on the other hand makes the recording more focused and convenient for post-processing, improving the user experience.
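A minimal sketch of the two mapping relations and the resulting recording instruction follows; it is illustrative only, and all dictionary entries, field names, and class names are assumptions rather than the patent's data:

```python
from dataclasses import dataclass
from typing import Optional

# All mapping entries below are hypothetical examples, not values from the patent.
FIRST_MAPPING = {  # motion parameter of first head organ -> recording range parameters
    "left_eye_double_blink": {"distance_m": 5.0, "duration_s": 600},
    "left_eye_long_close": {"distance_m": 2.0, "duration_s": 300},
}
SECOND_MAPPING = {  # motion parameter of second head organ -> recording object
    "right_eye_double_blink": "teacher",
    "right_eye_long_close": "meeting_speaker",
}

@dataclass
class RecordingInstruction:
    range_params: dict
    target_object: str

def generate_recording_instruction(first_motion: str, second_motion: str) -> Optional[RecordingInstruction]:
    """Look up the target recording range parameter and target recording object,
    then combine them into a recording instruction."""
    range_params = FIRST_MAPPING.get(first_motion)
    target_object = SECOND_MAPPING.get(second_motion)
    if range_params is None or target_object is None:
        return None  # motion parameters not found in the preset mappings
    return RecordingInstruction(range_params, target_object)
```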
Optionally, the motion parameters include: target amplitude and target direction of facial muscle movement; between the above step 101 and step 102, the following steps may be further included:
and when the target amplitude is in a preset amplitude range and the target direction is in a preset direction range, determining that the motion parameter meets the preset condition.
The preset amplitude range is set by the user or by system default, and the preset direction range may likewise be set by the user or by system default. For example, if the user puckers the mouth to the right, the amplitude and direction corresponding to the puckering action can be detected, and it can then be determined whether they meet the preset condition; if they do, a recording instruction is generated, and if not, no recording instruction is generated. In this way, a recording instruction can be generated quickly through a head action to start recording.
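For instance, the amplitude-and-direction check might be sketched as follows; the ranges and names are hypothetical assumptions:

```python
# Illustrative sketch: the preset ranges below are hypothetical user/system settings.
PRESET_AMPLITUDE_RANGE = (0.3, 1.0)         # normalized facial-muscle movement amplitude
PRESET_DIRECTION_RANGE_DEG = (60.0, 120.0)  # movement direction, in degrees

def meets_preset_condition(target_amplitude: float, target_direction_deg: float) -> bool:
    """The motion parameter meets the preset condition when the target amplitude and
    target direction both fall within their preset ranges."""
    lo_a, hi_a = PRESET_AMPLITUDE_RANGE
    lo_d, hi_d = PRESET_DIRECTION_RANGE_DEG
    return lo_a <= target_amplitude <= hi_a and lo_d <= target_direction_deg <= hi_d
```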
103. Executing a recording operation in response to the recording instruction.
The recording operation is executed as directed by the recording instruction. The recording function is thus realized through head actions, which greatly improves the operational intelligence and convenience of the wearable device.
Optionally, in the step 103, the executing the recording operation may include the following steps:
31. when an environment sound signal is received, filtering the environment sound signal to obtain a reference sound signal;
32. performing analog-to-digital conversion on the reference sound signal to obtain a target sound signal;
33. dividing the target sound signal into a plurality of sound signal segments according to the time sequence;
34. respectively determining the energy value of each sound signal segment in the plurality of sound signal segments to obtain a plurality of energy values;
35. and selecting a target energy value larger than a preset energy threshold value from the plurality of energy values, and splicing the sound signal segments corresponding to the target energy value according to the time sequence to obtain a final recording segment.
The environmental sound signal includes not only the voices of different people but also natural sounds, i.e., sounds not made by people, such as animal sounds, wind, rain, and the like. The filtering process may include at least one of the following: wavelet transform, median filtering, bilateral filtering, band-pass filtering, low-pass filtering, high-pass filtering, etc., and the preset energy threshold may be set by the user or by system default. Different sounds have different frequencies and waveforms. Therefore, after the environmental sound signal is received, it is filtered to obtain the reference sound signal, which filters out the natural sounds; the reference sound signal is then converted from analog to digital to obtain the target sound signal. The target sound signal is divided into a plurality of sound signal segments in time order, and the energy value of each segment is determined to obtain a plurality of energy values. Target energy values larger than the preset energy threshold can then be selected from these energy values; segments below the threshold can be understood as pauses (a speaker does not produce sound at every moment and may pause or think). The sound signal segments corresponding to the target energy values are spliced in time order to obtain the final recording segment. In this way, exactly the recording the user wants can be obtained, improving the user experience.
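The segment-energy selection and splicing can be sketched as follows; this is an illustrative NumPy sketch under assumed parameters, not the patent's implementation:

```python
import numpy as np

def extract_recording(target_signal: np.ndarray,
                      segment_len: int = 1600,       # assumed: 100 ms segments at 16 kHz
                      energy_threshold: float = 1e-3) -> np.ndarray:
    """Divide the digitized target signal into time-ordered segments, keep only the
    segments whose energy exceeds the preset threshold (dropping pauses), and splice
    the kept segments in time order into the final recording segment."""
    n_segments = len(target_signal) // segment_len
    segments = target_signal[: n_segments * segment_len].reshape(n_segments, segment_len)
    energies = (segments.astype(np.float64) ** 2).mean(axis=1)  # one energy value per segment
    kept = segments[energies > energy_threshold]                # boolean mask preserves time order
    return kept.reshape(-1) if kept.size else np.array([], dtype=target_signal.dtype)
```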
The voice processing method described in this embodiment is applied to a wearable device worn on the head of a user. Motion parameters of the user's head are collected; when they meet a preset condition, a recording instruction is generated, and a recording operation is executed in response to it. A recording instruction is thus generated from the user's head movement to implement a recording function, improving the intelligence and convenience of the wearable device as well as the user experience.
In one possible example, the wearable device is a wireless headset including a master headset and a slave headset, and the recording operation is performed by the master headset and/or the slave headset. Recording can thus be carried out by the master earphone alone, by the slave earphone alone, or by both together.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating a speech processing method according to an embodiment of the present application. As shown in fig. 2, the speech processing method is applied to a wearable device worn on the head of a user, and includes the following steps.
201. The wearable device collects motion parameters of the user's head, the wearable device comprising a master earphone and a slave earphone.
202. The wearable device generates a recording instruction when the motion parameters meet a preset condition.
203. The wearable device responds to the recording instruction and controls the master earphone and the slave earphone to execute the recording operation.
For the detailed description of the steps 201 to 203, reference may be made to the speech processing method described in the above fig. 1D, and details are not repeated here.
204. When the wearable device answers an incoming call, the master earphone is controlled to answer the call, and the slave earphone continues to execute the recording operation.
When a call is answered, the master earphone can be controlled to answer it while the slave earphone continues the recording operation; after the call is hung up, the master earphone and the slave earphone can of course both resume the recording operation. Both earphones can thus perform the recording operation, and the slave earphone keeps recording when a call comes in, which improves the intelligence and convenience of recording with wireless earphones.
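A control-flow sketch of this master/slave behavior is given below; it is illustrative only, and the earphone objects are assumed to expose record, stop_recording, and answer_call methods:

```python
# Illustrative sketch of the master/slave control flow around an incoming call.
class HeadsetPair:
    def __init__(self, master, slave):
        self.master, self.slave = master, slave

    def start_recording(self):
        # Both earphones execute the recording operation.
        self.master.record()
        self.slave.record()

    def on_call_answered(self):
        # The master earphone answers the call; the slave keeps recording.
        self.master.stop_recording()
        self.master.answer_call()

    def on_call_hung_up(self):
        # After hang-up, the master earphone may resume recording as well.
        self.master.record()
```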
The voice processing method described in this embodiment is applied to a wearable device that includes a master earphone and a slave earphone and is worn on the head of a user. Motion parameters of the user's head are collected; when they meet a preset condition, a recording instruction is generated, and in response the master earphone and the slave earphone are controlled to execute the recording operation; when an incoming call is answered, the master earphone is controlled to answer it while the slave earphone continues recording. A recording instruction is thus generated from the user's head movement to implement a recording function, improving the intelligence and convenience of the wearable device as well as the user experience.
Referring to fig. 3, fig. 3 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present application, and as shown in the drawing, the wearable device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
collecting motion parameters of the head of the user;
when the motion parameters meet preset conditions, generating a recording instruction;
and executing a recording operation in response to the recording instruction.
The wearable device described in this embodiment is worn on the head of a user. It collects motion parameters of the user's head, generates a recording instruction when the motion parameters meet a preset condition, and executes a recording operation in response to the instruction. A recording instruction is thus generated from the user's head movement to implement a recording function, improving the intelligence and convenience of the wearable device as well as the user experience.
In one possible example, the motion parameters include a first motion parameter of a first head organ and a second motion parameter of a second head organ;
in the aspect of generating the recording instruction, the program includes instructions for performing the following steps:
determining a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between the motion parameter of the first head organ and the recording range parameter;
determining a target sound recording object corresponding to the second motion parameter according to a preset second mapping relation between the motion parameter of the second head organ and the sound recording object;
and generating a recording instruction according to the target recording range parameter and the target recording object.
In one possible example, the motion parameters include: target amplitude and target direction of facial muscle movement; the program further includes instructions for performing the steps of:
and when the target amplitude is in a preset amplitude range and the target direction is in a preset direction range, determining that the motion parameter meets the preset condition.
In one possible example, in the performing the recording operation, the program includes instructions for performing the following steps:
when an environment sound signal is received, filtering the environment sound signal to obtain a reference sound signal;
performing analog-to-digital conversion on the reference sound signal to obtain a target sound signal;
dividing the target sound signal into a plurality of sound signal segments according to the time sequence;
respectively determining the energy value of each sound signal segment in the plurality of sound signal segments to obtain a plurality of energy values;
and selecting a target energy value larger than a preset energy threshold value from the plurality of energy values, and splicing the sound signal segments corresponding to the target energy value according to the time sequence to obtain a final recording segment.
In one possible example, the wearable device is a wireless headset including a master headset and a slave headset, and the recording operation is performed by the master headset and/or the slave headset.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation. It is understood that, to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of each example described in connection with the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of a speech processing apparatus disclosed in an embodiment of the present application, and is applied to a wearable device, the speech processing apparatus 400 includes an acquisition unit 401, a generation unit 402, and an execution unit 403, where:
the acquisition unit 401 is configured to acquire motion parameters of the head of the user;
the generating unit 402 is configured to generate a recording instruction when the motion parameter meets a preset condition;
the execution unit 403 is configured to respond to the recording instruction and execute a recording operation.
In one possible example, the motion parameters include a first motion parameter of a first head organ and a second motion parameter of a second head organ;
the instruction for generating a sound recording, where the generating unit 402 is specifically configured to:
determining a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between the motion parameter of the first head organ and the recording range parameter;
determining a target sound recording object corresponding to the second motion parameter according to a preset second mapping relation between the motion parameter of the second head organ and the sound recording object;
and generating a recording instruction according to the target recording range parameter and the target recording object.
In one possible example, the motion parameters include: a target amplitude and a target direction of facial muscle movement. As shown in fig. 4B, which depicts a further modified structure of the speech processing apparatus of fig. 4A, the apparatus may further include, compared with fig. 4A, a determining unit 404, specifically as follows:
the determining unit 404 is configured to determine that the motion parameter meets the preset condition when the target amplitude is within a preset amplitude range and the target direction is within a preset direction range.
In one possible example, in terms of the executing the recording operation, the executing unit 403 is specifically configured to:
when an environment sound signal is received, filtering the environment sound signal to obtain a reference sound signal;
performing analog-to-digital conversion on the reference sound signal to obtain a target sound signal;
dividing the target sound signal into a plurality of sound signal segments according to the time sequence;
respectively determining the energy value of each sound signal segment in the plurality of sound signal segments to obtain a plurality of energy values;
and selecting a target energy value larger than a preset energy threshold value from the plurality of energy values, and splicing the sound signal segments corresponding to the target energy value according to the time sequence to obtain a final recording segment.
In one possible example, the wearable device is a wireless headset including a master headset and a slave headset, and the recording operation is performed by the master headset and/or the slave headset.
The voice processing apparatus described in this embodiment is applied to a wearable device worn on the head of a user. Motion parameters of the user's head are collected; when they meet a preset condition, a recording instruction is generated, and a recording operation is executed in response to it. A recording instruction is thus generated from the user's head movement to implement a recording function, improving the intelligence and convenience of the wearable device as well as the user experience.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, and a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include a flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The embodiments of the present application have been described in detail above; specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the application scope. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A wearable device worn on a user's head, comprising a storage and processing circuit, and a sensor and an audio component connected to the storage and processing circuit, wherein:
the sensor is configured to acquire motion parameters of the user's head, wherein the motion parameters comprise a first motion parameter of a first head organ and a second motion parameter of a second head organ;
the storage and processing circuit is configured to generate a recording instruction when the motion parameters meet a preset condition;
the audio component is configured to perform a recording operation in response to the recording instruction;
wherein, in generating the recording instruction, the storage and processing circuit is specifically configured to:
determine a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between motion parameters of the first head organ and recording range parameters, wherein the recording range parameter comprises a recording distance, a recording direction, a recording volume, a recording frequency and a recording duration;
determine a target recording object corresponding to the second motion parameter according to a preset second mapping relation between motion parameters of the second head organ and recording objects; and
generate the recording instruction according to the target recording range parameter and the target recording object.
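For illustration only, and not as part of the claims: the two-mapping generation step recited above amounts to a pair of table look-ups whose results are combined into one instruction. The following minimal Python sketch shows that shape; every name, dictionary key, and parameter value in it is a hypothetical placeholder, since the claims do not fix any concrete data structure.

    from dataclasses import dataclass

    # Hypothetical container for the recording range parameters listed in
    # claim 1: distance, direction, volume, frequency and duration.
    @dataclass
    class RecordingRange:
        distance_m: float
        direction_deg: float
        volume_db: float
        frequency_hz: int
        duration_s: float

    # Preset first mapping relation: motion parameter of the first head
    # organ -> recording range parameters (keys and values are invented).
    FIRST_MAPPING = {
        "eyebrow_raise": RecordingRange(2.0, 0.0, 60.0, 16000, 30.0),
        "eyebrow_frown": RecordingRange(1.0, 180.0, 50.0, 8000, 10.0),
    }

    # Preset second mapping relation: motion parameter of the second head
    # organ -> recording object (likewise invented).
    SECOND_MAPPING = {
        "mouth_open": "wearer",
        "mouth_closed": "surroundings",
    }

    def generate_recording_instruction(first_param: str, second_param: str) -> dict:
        # Look up the target recording range parameter and the target
        # recording object, then combine them into one instruction.
        target_range = FIRST_MAPPING[first_param]
        target_object = SECOND_MAPPING[second_param]
        return {"range": target_range, "object": target_object}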
2. The wearable device of claim 1, wherein the motion parameters comprise a target amplitude and a target direction of a facial muscle movement, and the storage and processing circuit is further specifically configured to:
determine that the motion parameters meet the preset condition when the target amplitude is within a preset amplitude range and the target direction is within a preset direction range.
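Again purely as an illustration, the preset condition of claim 2 reduces to two interval tests; the numeric ranges below are invented placeholders, not values taken from the application.

    # Hypothetical preset ranges for the facial muscle movement.
    PRESET_AMPLITUDE_RANGE = (0.5, 3.0)    # e.g. millimetres of displacement
    PRESET_DIRECTION_RANGE = (30.0, 60.0)  # e.g. degrees from a reference axis

    def meets_preset_condition(target_amplitude: float, target_direction: float) -> bool:
        # The condition holds only when both values fall inside their ranges.
        amp_ok = PRESET_AMPLITUDE_RANGE[0] <= target_amplitude <= PRESET_AMPLITUDE_RANGE[1]
        dir_ok = PRESET_DIRECTION_RANGE[0] <= target_direction <= PRESET_DIRECTION_RANGE[1]
        return amp_ok and dir_ok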
3. The wearable device of claim 1, wherein, in performing the recording operation, the audio component is specifically configured to:
filter an ambient sound signal, when the ambient sound signal is received, to obtain a reference sound signal;
perform analog-to-digital conversion on the reference sound signal to obtain a target sound signal;
divide the target sound signal into a plurality of sound signal segments in time order;
determine an energy value of each of the plurality of sound signal segments to obtain a plurality of energy values; and
select, from the plurality of energy values, target energy values greater than a preset energy threshold, and splice the sound signal segments corresponding to the target energy values in time order to obtain a final recording segment.
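The recording operation of claim 3 (filter, digitise, segment in time order, measure segment energy, keep the loud segments, splice) can be sketched as below. The moving-average filter, the 16-bit quantisation standing in for the analog-to-digital step, and the segment length and energy threshold are all assumptions made for illustration, as the claim does not specify them.

    import numpy as np

    def perform_recording_operation(ambient: np.ndarray, sample_rate: int = 16000,
                                    segment_s: float = 0.5,
                                    energy_threshold: float = 0.01) -> np.ndarray:
        # Filter the ambient sound signal (samples assumed in [-1, 1]) to
        # obtain a reference sound signal; a short moving average stands in
        # for the unspecified filter.
        kernel = np.ones(5) / 5.0
        reference = np.convolve(ambient, kernel, mode="same")

        # "Analog-to-digital conversion": quantise to 16-bit integers and
        # scale back, yielding the target sound signal.
        target = np.round(reference * 32767).astype(np.int16) / 32767.0

        # Divide the target sound signal into segments in time order.
        seg_len = int(segment_s * sample_rate)
        segments = [target[i:i + seg_len] for i in range(0, len(target), seg_len)]

        # Determine the energy value of each segment (mean squared amplitude).
        energies = [float(np.mean(seg ** 2)) for seg in segments]

        # Keep the segments whose energy exceeds the preset threshold and
        # splice them in time order into the final recording segment.
        kept = [seg for seg, e in zip(segments, energies) if e > energy_threshold]
        return np.concatenate(kept) if kept else np.zeros(0)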
4. The wearable device according to any one of claims 1-3, wherein the wearable device is a wireless headset comprising a master headset and a slave headset, and the recording operation is performed by the master headset and/or the slave headset.
5. A speech processing method, applied to a wearable device worn on a user's head, the method comprising:
acquiring motion parameters of the user's head, wherein the motion parameters comprise a first motion parameter of a first head organ and a second motion parameter of a second head organ;
generating a recording instruction when the motion parameters meet a preset condition; and
performing a recording operation in response to the recording instruction;
wherein generating the recording instruction comprises:
determining a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between motion parameters of the first head organ and recording range parameters, wherein the recording range parameter comprises a recording distance, a recording direction, a recording volume, a recording frequency and a recording duration;
determining a target recording object corresponding to the second motion parameter according to a preset second mapping relation between motion parameters of the second head organ and recording objects; and
generating the recording instruction according to the target recording range parameter and the target recording object.
6. The method of claim 5, wherein the motion parameters comprise a target amplitude and a target direction of a facial muscle movement, and the method further comprises:
determining that the motion parameters meet the preset condition when the target amplitude is within a preset amplitude range and the target direction is within a preset direction range.
7. The method of claim 5, wherein performing the recording operation comprises:
filtering an ambient sound signal, when the ambient sound signal is received, to obtain a reference sound signal;
performing analog-to-digital conversion on the reference sound signal to obtain a target sound signal;
dividing the target sound signal into a plurality of sound signal segments in time order;
determining an energy value of each of the plurality of sound signal segments to obtain a plurality of energy values; and
selecting, from the plurality of energy values, target energy values greater than a preset energy threshold, and splicing the sound signal segments corresponding to the target energy values in time order to obtain a final recording segment.
8. The method of any one of claims 5-7, wherein the wearable device is a wireless headset comprising a master headset and a slave headset, and the recording operation is performed by the master headset and/or the slave headset.
9. A speech processing apparatus, applied to a wearable device worn on a user's head, the apparatus comprising an acquisition unit, a generation unit and an execution unit, wherein:
the acquisition unit is configured to acquire motion parameters of the user's head, wherein the motion parameters comprise a first motion parameter of a first head organ and a second motion parameter of a second head organ;
the generation unit is configured to generate a recording instruction when the motion parameters meet a preset condition;
the execution unit is configured to perform a recording operation in response to the recording instruction;
wherein the generation unit is specifically configured to:
determine a target recording range parameter corresponding to the first motion parameter according to a preset first mapping relation between motion parameters of the first head organ and recording range parameters, wherein the recording range parameter comprises a recording distance, a recording direction, a recording volume, a recording frequency and a recording duration;
determine a target recording object corresponding to the second motion parameter according to a preset second mapping relation between motion parameters of the second head organ and recording objects; and
generate the recording instruction according to the target recording range parameter and the target recording object.
10. A wearable device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 5-8.
11. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of any one of claims 5-8.
CN201810368538.6A 2018-04-23 2018-04-23 Voice processing method and related product Active CN108683790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810368538.6A CN108683790B (en) 2018-04-23 2018-04-23 Voice processing method and related product

Publications (2)

Publication Number Publication Date
CN108683790A CN108683790A (en) 2018-10-19
CN108683790B CN108683790B (en) 2020-09-22

Family

ID=63801248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810368538.6A Active CN108683790B (en) 2018-04-23 2018-04-23 Voice processing method and related product

Country Status (1)

Country Link
CN (1) CN108683790B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110286748A (en) * 2019-05-23 2019-09-27 深圳前海达闼云端智能科技有限公司 The function of headset equipment determines method, apparatus, system, medium and equipment
CN111488212A (en) * 2020-04-16 2020-08-04 歌尔科技有限公司 Recording method and device of wearable device and wearable device
CN112134989A (en) * 2020-10-23 2020-12-25 珠海格力电器股份有限公司 Recording method and device of terminal equipment
CN113709654A (en) * 2021-08-27 2021-11-26 维沃移动通信(杭州)有限公司 Recording method, recording apparatus, recording device and readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102543063A (en) * 2011-12-07 2012-07-04 华南理工大学 Method for estimating speech speed of multiple speakers based on segmentation and clustering of speakers
CN104238732A (en) * 2013-06-24 2014-12-24 由田新技股份有限公司 Device, method and computer readable recording medium for detecting facial movements to generate signals
CN105161100A (en) * 2015-08-24 2015-12-16 联想(北京)有限公司 Control method and electronic device
CN106774914A (en) * 2016-12-26 2017-05-31 苏州欧菲光科技有限公司 The control method and Wearable of Wearable
CN107172526A (en) * 2017-07-19 2017-09-15 联想(北京)有限公司 A kind of intelligent earphone and control method
CN107678541A (en) * 2017-09-20 2018-02-09 深圳市科迈爱康科技有限公司 Intelligent glasses and its information gathering and transmission method, computer-readable recording medium

Also Published As

Publication number Publication date
CN108683790A (en) 2018-10-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant