CN111698600A - Processing execution method and device and readable medium - Google Patents


Info

Publication number
CN111698600A
Authority
CN
China
Prior art keywords
target, earphone, head, user, motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010507491.4A
Other languages
Chinese (zh)
Inventor
赵楠
崔文华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN202010507491.4A
Publication of CN111698600A
Priority to PCT/CN2021/074913 (published as WO2021244058A1)
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1091 Details not provided for in groups H04R1/1008 - H04R1/1083
    • H04R29/00 Monitoring arrangements; Testing arrangements
    • H04R29/001 Monitoring arrangements; Testing arrangements for loudspeakers
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10 Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups

Abstract

The embodiments of the present application provide a processing execution method, a processing execution apparatus, and a readable medium. The method comprises the following steps: detecting the working state of an earphone; identifying the motion state of the user through an acceleration sensor built into the earphone; determining the target process associated with that motion state in that working state; and executing the target process. The earphone can thus trigger a variety of target processes according to the user's motion state, without requiring the user to trigger them by hand action or speech, which improves the convenience of triggering. Moreover, because the same motion state can correspond to different target processes in different working states of the earphone, more target processes can be triggered on the earphone, solving the problem that in some scenarios it is difficult for the user to trigger a target process by hand action or speech.

Description

Processing execution method and device and readable medium
Technical Field
The present application relates to the field of wireless earphone technology, and in particular, to a processing execution method, a processing execution apparatus, and a machine-readable medium.
Background
With the development of wireless earphone technology, True Wireless Stereo (TWS) earphones are becoming the preferred choice for consumers. The left and right earphone bodies of a true wireless earphone are completely separate, with no exposed wire, which is why it is called "true wireless". Compared with a traditional wireless earphone, the connection of a true wireless earphone involves not only signal transmission between the earphone and the signal transmitting device, but also a wireless connection between the main earphone and the auxiliary earphone.
True wireless earphones are convenient and easy to store, and their sound quality and battery life keep improving; however, because a true wireless earphone is small, enabling it to provide more functions is an urgent problem. For example, adjusting the volume, answering or rejecting a call, and waking up a voice assistant cannot be triggered in scenarios where hand operation or speaking is inconvenient.
Disclosure of Invention
In view of the above problems, the embodiments of the present application provide a processing execution method, a processing execution apparatus, and a machine-readable medium that overcome, or at least partially solve, the above problems of limited earphone functions and inconvenient triggering.
In order to solve the above problems, the present application discloses a processing execution method applied to an earphone, comprising:
detecting the working state of the earphone;
identifying the motion state of a user through an acceleration sensor arranged in the earphone;
determining a target process associated with the motion state in the working state;
the target process is executed.
Optionally, the identifying, by an acceleration sensor built in the headset, a motion state of the user includes:
acquiring acceleration information of the earphone through the acceleration sensor;
determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
Optionally, the head movement comprises at least one of: head-up, head-down, nodding, head-shaking, head-down to the left-lower side, head-down to the right-lower side, head-up to the left-upper side, head-up to the right-upper side; the whole body movement includes at least one of: walking, running, standing and sitting.
Optionally, the determining the motion state of the user according to the acceleration information includes at least one of:
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement frequency meets a preset frequency requirement;
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement speed meets a preset speed requirement;
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement duration meets a preset duration requirement;
and determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement amplitude meets a preset amplitude requirement.
Optionally, the executing the target process includes at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking a target audio in a call, starting playback of a first audio, pausing playback of the first audio, ending playback of the first audio, switching to play a second audio while the first audio is playing, turning up the volume, turning down the volume, starting a recording, ending a recording, pausing a recording, turning on a voice processing function, turning off the voice processing function, and issuing a voice prompt to the user, wherein the voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
Optionally, before the voice prompt is issued to the user, the method further includes:
identifying the current motion state of a user through an acceleration sensor built in the earphone;
and determining whether the current motion state meets a preset requirement, and if so, executing the step of issuing the voice prompt to the user.
Optionally, the determining the motion state of the user according to the acceleration information includes:
sending the acceleration information to an earphone accommodating device or a cloud server;
and receiving the motion state determined by the earphone accommodating device or the cloud server according to the acceleration information.
Optionally, the executing the target process includes:
and sending the processing instruction corresponding to the target processing to an earphone accommodating device or a cloud server so as to control the earphone accommodating device or the cloud server to execute the target processing.
The embodiment of the present application further discloses a processing execution device, which is applied to an earphone, and includes:
the state acquisition module is used for detecting the working state of the earphone;
the state identification module is used for identifying the motion state of a user through an acceleration sensor arranged in the earphone;
the processing determining module is used for determining target processing associated with the motion state in the working state;
and the processing execution module is used for executing the target processing.
Optionally, the state identification module includes:
the information acquisition submodule is used for acquiring the acceleration information of the earphone through the acceleration sensor;
the state determining submodule is used for determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
Optionally, the head movement comprises at least one of: head-up, head-down, nodding, head-shaking, head-down to the left-lower side, head-down to the right-lower side, head-up to the left-upper side, head-up to the right-upper side; the whole body movement includes at least one of: walking, running, standing and sitting.
Optionally, the state determination submodule is specifically configured to at least one of:
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement frequency meets a preset frequency requirement;
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement speed meets a preset speed requirement;
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement duration meets a preset duration requirement;
and determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement amplitude meets a preset amplitude requirement.
Optionally, the processing execution module is configured to at least one of:
answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking a target audio in a call, starting playback of a first audio, pausing playback of the first audio, ending playback of the first audio, switching to play a second audio while the first audio is playing, turning up the volume, turning down the volume, starting a recording, ending a recording, pausing a recording, turning on a voice processing function, turning off the voice processing function, and issuing a voice prompt to the user, wherein the voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
Optionally, the apparatus further comprises:
the current state identification module is used for identifying the current motion state of the user through an acceleration sensor arranged in the earphone before the voice prompt is carried out on the user;
and the state determining module is used for determining whether the current motion state meets a preset requirement, and if so, executing the step of issuing the voice prompt to the user.
Optionally, the state identification module includes:
the information sending submodule is used for sending the acceleration information to the earphone storage device or the cloud server;
and the state receiving submodule is used for receiving the motion state determined by the earphone accommodating device or the cloud server according to the acceleration information.
Optionally, the processing execution module includes:
and the instruction sending submodule is used for sending the processing instruction corresponding to the target processing to an earphone storage device or a cloud server so as to control the earphone storage device or the cloud server to execute the target processing.
The embodiment of the application also discloses a device for processing execution, which comprises a memory and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs are configured to be executed by one or more processors and comprise instructions for:
detecting the working state of the earphone;
identifying the motion state of a user through an acceleration sensor arranged in the earphone;
determining a target process associated with the motion state in the working state;
the target process is executed.
Optionally, the identifying, by an acceleration sensor built in the headset, a motion state of the user includes:
acquiring acceleration information of the earphone through the acceleration sensor;
determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
Optionally, the head movement comprises at least one of: head-up, head-down, nodding, head-shaking, head-down to the left-lower side, head-down to the right-lower side, head-up to the left-upper side, head-up to the right-upper side; the whole body movement includes at least one of: walking, running, standing and sitting.
Optionally, the determining the motion state of the user according to the acceleration information includes at least one of:
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement frequency meets a preset frequency requirement;
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement speed meets a preset speed requirement;
determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement duration meets a preset duration requirement;
and determining, according to the acceleration information, that the user performs a target head movement or a target whole-body movement whose movement amplitude meets a preset amplitude requirement.
Optionally, the executing the target process includes at least one of:
answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking a target audio in a call, starting playback of a first audio, pausing playback of the first audio, ending playback of the first audio, switching to play a second audio while the first audio is playing, turning up the volume, turning down the volume, starting a recording, ending a recording, pausing a recording, turning on a voice processing function, turning off the voice processing function, and issuing a voice prompt to the user, wherein the voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
Optionally, before the voice prompt is issued to the user, the instructions further include operations for:
identifying the current motion state of a user through an acceleration sensor built in the earphone;
and determining whether the current motion state meets a preset requirement, and if so, executing the step of issuing the voice prompt to the user.
Optionally, the determining the motion state of the user according to the acceleration information includes:
sending the acceleration information to an earphone accommodating device or a cloud server;
and receiving the motion state determined by the earphone accommodating device or the cloud server according to the acceleration information.
Optionally, the executing the target process includes:
and sending the processing instruction corresponding to the target processing to an earphone accommodating device or a cloud server so as to control the earphone accommodating device or the cloud server to execute the target processing.
The embodiments of the present application also disclose a machine-readable medium storing instructions which, when executed by one or more processors, cause a device to perform the processing execution method described above.
The embodiment of the application has the following advantages:
In summary, according to the embodiments of the present application, the working state of the earphone is detected, the motion state of the user is identified through the acceleration sensor built into the earphone, the target process associated with that motion state in that working state is determined, and the target process is executed. The earphone can therefore trigger a variety of target processes according to the user's motion state, without requiring the user to trigger them by hand action or speech, which improves the convenience of triggering.
Drawings
FIG. 1 illustrates a flow chart of steps of one process execution method embodiment of the present application;
FIG. 2 is a flow chart illustrating the steps of another process execution method embodiment of the present application;
FIG. 3 is a block diagram illustrating an embodiment of a process execution apparatus of the present application;
FIG. 4 is a block diagram illustrating an apparatus for processing execution in accordance with an exemplary embodiment;
FIG. 5 is a block diagram of a server in some embodiments of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to FIG. 1, a flowchart of the steps of an embodiment of a processing execution method according to the present application is shown. The method is applied to an earphone and specifically includes the following steps:
step 101, detecting the working state of the earphone.
In this embodiment of the present application, the working state of the earphone includes: playing audio; being in a telephone call; receiving an incoming call; standing by; being in an audio call, a video call, or speech input in instant messaging software; speech input in an input method; and the like, or any other suitable working state, which is not limited in this embodiment of the present application.
In this embodiment of the present application, the working state of the earphone may be detected in several ways. For example, when the earphone is connected to a device such as a mobile terminal through Bluetooth, the data transmitted between them can be examined; from an identifier carried in that data, or from the amount of data sent and/or received by the earphone within a certain time, it can be determined whether the earphone is currently playing audio, in a call, receiving an incoming call, or standing by. When the earphone is connected to the Internet directly through a mobile communication chip, the data it sends or receives can be examined in the same way. Alternatively, the program currently running on the earphone can be inspected directly to determine its working state. Any suitable implementation may be used; this embodiment of the present application does not limit it.
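The two detection strategies above, an identifier carried in the transmitted data and a data-volume heuristic, can be sketched as a small classifier. This is an illustrative sketch, not the patent's implementation: the tag names and the traffic threshold are assumptions.

```python
from enum import Enum, auto
from typing import Optional

class WorkingState(Enum):
    PLAYING_AUDIO = auto()
    IN_CALL = auto()
    INCOMING_CALL = auto()
    STANDBY = auto()

# Hypothetical identifiers carried in the transmitted data; the patent does
# not specify a concrete protocol, so these tags are illustrative only.
STREAM_TAGS = {
    "media_stream": WorkingState.PLAYING_AUDIO,
    "call_active": WorkingState.IN_CALL,
    "call_ring": WorkingState.INCOMING_CALL,
}

def detect_working_state(stream_tag: Optional[str],
                         bytes_per_second: float) -> WorkingState:
    """Classify the earphone's working state from an identifier in the
    transmitted data, falling back to a data-volume heuristic when no
    identifier is present."""
    if stream_tag in STREAM_TAGS:
        return STREAM_TAGS[stream_tag]
    # Fallback: sustained traffic suggests audio playback, near-zero traffic
    # suggests standby (the 1 KiB/s threshold is an assumption).
    return WorkingState.PLAYING_AUDIO if bytes_per_second > 1024 else WorkingState.STANDBY
```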
And 102, identifying the motion state of the user through an acceleration sensor built in the earphone.
In this embodiment of the present application, an acceleration sensor (G-sensor) is a sensor that senses the acceleration of an object and converts it into a usable output signal. The main sensing principle is that a sensitive element in the sensor deforms under acceleration; the deformation is measured and converted into an output voltage by an associated circuit. An acceleration sensor built into the earphone can therefore be used to identify the motion state of the user: movements such as nodding, raising the head, shaking the head, lowering the head, walking, running, standing, and sitting down are converted into electrical signals by the G-sensor, and different electrical signals correspond to different motion states.
For example, the acceleration sensor may be a three-axis acceleration sensor, which measures spatial acceleration and can comprehensively reflect the motion of an object. By measuring the components of the acceleration signal generated by the user's head on three coordinate axes, a three-axis acceleration sensor can accurately detect that signal for a wide range of possible head actions.
In this embodiment of the present application, an acceleration sensor is built into the earphone and an algorithm for identifying the user's motion state is embedded in the earphone, giving it the ability to recognize the motion state. This can be implemented in several ways. For example, after the acceleration sensor collects an electrical signal, the signal can be matched against the electrical signals of multiple known motion states; if it matches the signal of a target motion state, the user's current motion state is identified as that target motion state. Alternatively, the electrical signal can first be converted into acceleration information in the form of magnitude and direction, and the current motion state determined from that information.
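As a minimal sketch of the second approach (classifying from acceleration magnitudes), a threshold rule over a window of three-axis samples might look like the following. The axis conventions and thresholds are assumptions for illustration; a production recognizer would instead match against signals of known motion states.

```python
def classify_head_motion(samples):
    """Classify a short window of (ax, ay, az) accelerometer samples, in g,
    into a coarse head motion. Assumed conventions: x is left-right and z is
    up-down relative to the wearer's head; 0.3 g is an illustrative
    movement threshold."""
    ax = [s[0] for s in samples]
    az = [s[2] for s in samples]
    # A nod shows up as alternating up/down acceleration on z;
    # a head shake as alternating left/right acceleration on x.
    x_swing = max(ax) - min(ax)
    z_swing = max(az) - min(az)
    if x_swing < 0.3 and z_swing < 0.3:
        return "still"
    return "nod" if z_swing >= x_swing else "shake"
```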
Step 103, determining the target processing associated with the motion state in the working state.
In this embodiment of the present application, the target process includes answering or rejecting a call, marking a target audio in the call, turning the volume up or down, turning a voice processing function on or off, issuing a voice prompt to the user, and the like, or any other suitable processing, which is not limited in this embodiment of the present application.
The call includes, but is not limited to, a telephone call, or an audio or video call in instant messaging software. The target audio in the call includes audio from either party. Voice processing covers recognizing and understanding audio and producing corresponding feedback, including converting audio into text or commands, recognizing the voice information it contains, and responding according to that understanding, for example a voice assistant function; any other suitable voice processing may be used, which is not limited in this embodiment of the present application. The voice prompts include, but are not limited to, a sedentary reminder and a prolonged head-down reminder.
In this embodiment of the present application, motion states are associated with target processes. After the earphone identifies a motion state, it determines the associated target process; the same motion state may be associated with different target processes in different working states. For example, the earphone may associate shaking the head with rejecting a call while in a call or receiving an incoming call, and associate shaking the head with switching songs while playing audio.
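The association amounts to a lookup keyed on both the working state and the motion state, which can be sketched as a small dispatch table. The state and action names below are illustrative, not taken from the patent text.

```python
# The same motion can map to different target processes depending on the
# earphone's working state, so the key is the (working state, motion) pair.
ACTION_TABLE = {
    ("in_call", "shake"): "reject_call",
    ("in_call", "nod"): "answer_call",
    ("playing_audio", "shake"): "switch_song",
    ("standby", "quick_turn_left_down"): "wake_voice_assistant",
}

def target_process(working_state: str, motion_state: str):
    """Return the target process associated with this motion in this working
    state, or None when the motion is not bound in that state."""
    return ACTION_TABLE.get((working_state, motion_state))
```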
Step 104, executing the target processing.
In this embodiment of the present application, after the target process associated with the motion state in the current working state is determined, the target process is executed. Some target processes can be executed on the earphone alone; for others, the earphone sends the instruction to be executed to a connected device such as a mobile terminal or an earphone storage device, or to a cloud server through the Internet, i.e., the earphone controls that device or server to execute the target process.
For example, during a call, nodding can trigger answering the call, shaking the head can trigger rejecting it, and nodding can mark a target audio in the call. While audio is playing, shaking the head can trigger switching songs, raising the head twice can turn the volume up, and lowering the head twice can turn it down. In the standby state, a quick turn of the head to the lower left can trigger waking up the voice processing function.
In this embodiment of the present application, optionally, the executing the target process includes at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking a target audio in a call, starting playback of a first audio, pausing playback of the first audio, ending playback of the first audio, switching to play a second audio while the first audio is playing, turning up the volume, turning down the volume, starting a recording, ending a recording, pausing a recording, turning on a voice processing function, turning off the voice processing function, and issuing a voice prompt to the user, wherein the voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
When a target audio in a call is marked, the target audio needs to be stored and labelled. For example, during a call, the user nods a first time, and the audio after the nod is stored on the earphone, the earphone storage device, the mobile terminal, or the cloud server; when the user nods a second time, the storage of the target audio ends and the target audio is marked as important content. The target audio can also be marked with other labels: different motion states can be set to correspond to different labels, and the target audio is marked with the label corresponding to the recognized motion state.
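The nod-to-mark flow above can be sketched as a small state machine: the first nod begins capturing the call audio, and the second nod ends capture and attaches a label. The label mapping below is a hypothetical example of associating different motion states with different labels.

```python
class CallAudioMarker:
    """Sketch of the nod-to-mark flow: first recognized motion starts
    capture, second one stops it and labels the captured audio."""

    LABELS = {"nod": "important"}  # hypothetical motion-to-label mapping

    def __init__(self):
        self.recording = False
        self.buffer = []
        self.marked = []  # list of (label, audio_chunks)

    def on_motion(self, motion):
        if motion not in self.LABELS:
            return
        if not self.recording:
            # First nod: start storing the call audio from here on.
            self.recording, self.buffer = True, []
        else:
            # Second nod: end storage and mark the segment with its label.
            self.recording = False
            self.marked.append((self.LABELS[motion], self.buffer))

    def on_audio(self, chunk):
        if self.recording:
            self.buffer.append(chunk)
```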
Processing during audio playback may include starting playback, pausing playback, ending playback, switching the played audio, or any other suitable processing, which is not limited in this embodiment of the present application. The first audio and the second audio may be stored on the earphone, the earphone storage device, the cloud server, or the mobile terminal.
When the first audio or the second audio is stored on the earphone, the earphone can directly start playing the first audio, pause it, end it, or switch to playing the second audio. When the audio is stored on the earphone storage device, the earphone sends the corresponding processing instruction (start, pause, stop, switch, and so on) to the storage device over Bluetooth, thereby controlling the storage device to execute it. When the audio is stored on the cloud server, the earphone sends the instruction to the cloud server through the mobile communication chip, thereby controlling the cloud server to execute it. When the audio is stored on the mobile terminal, the earphone sends the instruction to the mobile terminal over Bluetooth, thereby controlling the mobile terminal to execute it.
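The routing in this paragraph, where a command is executed locally when the audio is on the earphone and otherwise forwarded over Bluetooth or the mobile network, can be sketched as follows. The location names and the `send` transport callback are illustrative assumptions, not part of the patent.

```python
def dispatch_play_command(command, audio_location, send):
    """Route a playback command based on where the audio is stored.

    `send(transport, command)` is a hypothetical transport callback standing
    in for the Bluetooth link or the mobile communication chip."""
    if audio_location == "earphone":
        # Audio stored locally: the earphone executes the command itself.
        return ("local", command)
    # Cloud server is reached via the mobile network; the storage device and
    # the mobile terminal are reached via Bluetooth.
    transport = "mobile_network" if audio_location == "cloud_server" else "bluetooth"
    send(transport, command)
    return ("remote", transport)
```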
The voice processing function includes the algorithms, databases, computing resources, and the like invoked for processing speech in audio, or any other suitable content related to voice processing, which is not limited in this embodiment of the present application.
The speech processing function can call the computing resources of the earphone, the earphone storage device, the cloud server, and the like, which is not limited in this embodiment of the present application. For example, one speech processing function calls only the local computing resources of the earphone and recognizes speech in the audio with a speech recognition model stored on the earphone; because that model stores speech features extracted from pre-collected audio, the speech it can recognize is limited to those features, and the recognition speed is limited by the local computing resources. Another speech processing function uploads the audio to the cloud server, calls computing resources on the server, and uses a server-side speech recognition model to recognize and understand the speech in the audio and make corresponding feedback; no longer limited by the local computing resources and local sample library, it can achieve better speech processing results and return more complex and varied results.
Before the earphone recognizes the motion state associated with starting the speech processing function, the audio data it collects is not handed to the speech processing function; after the associated motion state is recognized, collected audio data is handed to the speech processing function for processing. A motion state may also be associated with closing the speech processing function, so that the function is closed when that motion state is recognized. For example, the collected audio data is the user saying "what is the current temperature"; the earphone and the cloud server process the audio data with the speech processing function, obtain the processing result "the current temperature is 25 degrees", send the result to the earphone, and play it.
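A minimal sketch of gating collected audio on recognized motion states might look like the following; the specific motion names and the `process_speech` placeholder are hypothetical, standing in for whatever local or cloud speech processing is configured.

```python
class SpeechGate:
    """Decides whether collected audio is handed to the speech processing
    function, toggled by recognized motion states."""

    START_MOTION = "head_down_lower_left"   # hypothetical associated motions
    STOP_MOTION = "head_up_upper_right"

    def __init__(self):
        self.enabled = False

    def on_motion(self, motion: str) -> None:
        # Recognizing the associated motion opens or closes the gate.
        if motion == self.START_MOTION:
            self.enabled = True
        elif motion == self.STOP_MOTION:
            self.enabled = False

    def handle_audio(self, audio_chunk: bytes):
        # Audio collected before the start motion is recognized is dropped;
        # afterwards it is forwarded to the speech processing function.
        if not self.enabled:
            return None
        return process_speech(audio_chunk)

def process_speech(audio_chunk: bytes) -> str:
    # Placeholder for the local or cloud speech processing function.
    return f"processed {len(audio_chunk)} bytes"
```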
The voice prompt for the user includes at least one of the following: a sedentary prompt and a prolonged head-lowering prompt. When the user is identified as having been sitting for a long time, the sedentary prompt is played on the earphone; when the user is identified as having lowered the head for a long time, the prolonged head-lowering prompt is played on the earphone, reminding the user of the unhealthy posture. For example, the played voice may be: "You have been sitting for more than an hour", "You have been lowering your head for more than half an hour", and so on.
In this embodiment of the present application, optionally, before performing the voice prompt for the user, the method may further include: identifying the current motion state of the user through the acceleration sensor built into the earphone; and determining whether the current motion state meets a preset requirement, and if so, executing the step of performing the voice prompt for the user.
A voice prompt is usually based on the user's motion state recognized over a period of time: when the earphone judges that the motion state over that period meets a certain condition, it prompts the user by voice. By that moment, however, the user may already have changed the earlier motion state and no longer need the prompt. To avoid unnecessarily disturbing the user, before the voice prompt is issued, the current motion state of the user is identified and checked against a preset requirement; the step of performing the voice prompt is executed only if the current motion state meets the preset requirement, and is skipped otherwise.
The preset requirement may correspond to the type of voice prompt. For example, before a sedentary prompt, the preset requirement is that the user has not stood up or moved significantly: when it is detected that the user has walked fewer than a preset number of steps within a certain time and is currently not moving, the voice prompt is played on the earphone. Before a prolonged head-lowering prompt, the preset requirement is that the degree to which the user is currently lowering the head exceeds a preset threshold. The preset requirement for any prompt can be set according to actual needs, which is not limited in this embodiment of the present application.
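The preset-requirement checks described above could be sketched as simple predicates; the step threshold and pitch threshold defaults below are illustrative assumptions, to be tuned to actual needs.

```python
def should_play_sedentary_prompt(steps_in_window: int, is_moving: bool,
                                 step_threshold: int = 20) -> bool:
    """The sedentary prompt fires only if the user walked fewer than the
    preset number of steps in the window and is currently not moving."""
    return steps_in_window < step_threshold and not is_moving

def should_play_head_down_prompt(current_pitch_deg: float,
                                 pitch_threshold_deg: float = 30.0) -> bool:
    """The prolonged head-lowering prompt fires only if the current
    head-down angle still exceeds the preset threshold."""
    return current_pitch_deg > pitch_threshold_deg
```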
In this embodiment of the present application, optionally, one implementation of executing the target processing includes: sending a processing instruction corresponding to the target processing to the earphone storage device or the cloud server, so as to control the earphone storage device or the cloud server to execute the target processing.
The earphone can be connected with the earphone storage device or the cloud server; for example, it can be connected to the earphone storage device through Bluetooth, or to the cloud server over a network through a built-in mobile communication chip.
In some cases, the target processing is not executed directly by the earphone; instead, a processing instruction corresponding to the target processing is sent to the earphone storage device or the cloud server to control that device to execute the target processing. The processing instruction may include audio data collected by the earphone or the earphone storage device, which is delivered to the earphone storage device or the cloud server and processed there using the speech processing function.
For example, the earphone sends a processing instruction corresponding to pausing the playback of the first audio to the earphone storage device, and the earphone storage device pauses the playback of the first audio. Limited by its volume or battery, the earphone may be unable to store much data or to implement functions with high computing requirements; performing the target processing on the earphone storage device or the cloud server, which have stronger computing capability, solves this problem and extends the computing capability of the earphone.
In this embodiment of the present application, the target processing may be executed by the cloud server as well as by the earphone storage device: after the earphone storage device obtains the processing instruction, it sends the instruction to the cloud server, the cloud server executes the target processing corresponding to the instruction, and then sends the processing result back to the earphone storage device. When the computing capability of the earphone storage device is insufficient for the target processing, the processing instruction can be sent to the cloud server; for example, the cloud server translates the audio data to obtain target audio data or target text data in the target language. The target processing may include any suitable processing, which is not limited in this embodiment of the present application. The cloud server can provide stronger computing power, so more complex target processing can be executed and more complex functions realized, while the power consumption of the earphone storage device is reduced and the processing speed is increased.
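The relay described above — the storage device executes the target processing itself when able, otherwise forwards the instruction to the cloud and passes the result back — might be sketched like this. `execute_locally` and `cloud_execute` are stand-ins for whatever processing the instruction names, not real APIs.

```python
def execute_locally(instruction: dict) -> str:
    # Placeholder for processing the storage device can handle itself.
    return f"local:{instruction['op']}"

def cloud_execute(instruction: dict) -> str:
    # Placeholder for heavier processing on the cloud server, e.g. translation.
    return f"cloud:{instruction['op']}"

def relay_instruction(instruction: dict, case_can_handle) -> dict:
    """The earphone storage device executes the target processing itself when
    its computing capability suffices; otherwise it relays the instruction
    to the cloud server and returns the cloud's result."""
    if case_can_handle(instruction):
        return {"handled_by": "storage_device",
                "result": execute_locally(instruction)}
    return {"handled_by": "cloud_server",
            "result": cloud_execute(instruction)}
```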
In summary, according to this embodiment of the present application, the operating state of the earphone is detected, the motion state of the user is identified through the acceleration sensor built into the earphone, the target processing associated with that motion state in that operating state is determined, and the target processing is executed. The earphone thus triggers multiple kinds of target processing according to the user's motion state, without requiring the user to trigger them by hand motions or speech, which improves the convenience of triggering various target processing.
Referring to fig. 2, a flowchart illustrating the steps of another embodiment of a processing execution method according to the present application is shown. The method is applied to an earphone and specifically includes the following steps:
step 201, detecting the working state of the earphone.
In the embodiment of the present application, a specific implementation manner of this step may refer to the description in the foregoing embodiment, and is not described herein again.
Step 202, acquiring acceleration information of the earphone through the acceleration sensor.
In this embodiment of the present application, the acceleration information includes the magnitude and direction of the acceleration, and the acceleration sensor can collect the current acceleration information of the earphone. For some motion states, only the current acceleration information needs to be determined; for others, the acceleration information over a period of time must be determined. Accordingly, the acceleration information of the earphone is collected at the current moment or over a period of time.
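Keeping both the latest sample (for states that need only the current reading) and a sliding window of recent samples (for states evaluated over a period) might be implemented as below; the window length and sample format are illustrative assumptions.

```python
from collections import deque
import time

class AccelBuffer:
    """Holds the most recent acceleration samples so motion states that need
    a window of data (e.g. prolonged head lowering) can be evaluated, while
    the latest sample serves states that only need the current reading."""

    def __init__(self, window_s: float = 5.0):
        self.window_s = window_s
        self.samples = deque()          # (timestamp, (ax, ay, az)) tuples

    def add(self, ax: float, ay: float, az: float, t: float = None) -> None:
        t = time.monotonic() if t is None else t
        self.samples.append((t, (ax, ay, az)))
        # Discard samples older than the window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def current(self):
        # The current acceleration information, or None before any sample.
        return self.samples[-1][1] if self.samples else None

    def window(self):
        # All samples within the most recent period of time.
        return [s for _, s in self.samples]
```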
Step 203, determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
In the embodiment of the present application, the motion state includes at least one of head motion, whole body motion, motion frequency, motion speed, motion duration, motion amplitude, or any other suitable state information, which is not limited in the embodiment of the present application. The above-mentioned motion state of the user can be determined from the collected acceleration information.
The head movement may be nodding, shaking the head, lowering the head, raising the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, raising the head to the upper right, or any other applicable head movement performed while the user wears the earphone, which is not limited in this embodiment of the present application.
The whole-body exercise can be walking, running, standing, sitting and the like when the user wears the earphone, or any other suitable whole-body exercise, which is not limited in the embodiment of the application.
The motion frequency may be a motion frequency of head motion or whole body motion, for example, the number of times of nodding the head in unit time, the number of times of shaking the head in unit time, the number of times of lowering the head at the lower left in unit time, the number of steps in walking in unit time, or any other suitable motion frequency, which is not limited in this embodiment of the present application.
The movement speed may be the speed of a head movement or a whole-body movement, for example, the speed of lowering the head to the lower left, the walking speed, the running speed, or any other suitable movement speed, which is not limited in the embodiments of the present application.
The exercise duration may be a head exercise duration or a whole body exercise duration, for example, a head lowering duration, a sitting duration, or the like, or any other suitable exercise duration, which is not limited in the embodiments of the present application.
The motion amplitude may be the amplitude of a head movement or a whole-body movement, for example, the angle below the normal position when the head is lowered, or the angle swept from left to right when the head is shaken, or any other suitable motion amplitude, which is not limited in the embodiments of the present application.
In embodiments of the present application, optionally, the head movement comprises at least one of: head-up, head-down, nodding, head-shaking, head-down to the left-lower side, head-down to the right-lower side, head-up to the left-upper side, head-up to the right-upper side; whole body movement includes at least one of: walking, running, standing and sitting.
Movements such as lowering the head, raising the head, shaking the head, and nodding are common in daily life, so using them as triggers easily causes false triggering. Movements such as lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right are uncommon; these simple head movements avoid the false-triggering problem while remaining easy for the user to complete, which is convenient for the user.
In this embodiment of the present application, optionally, determining the motion state of the user according to the acceleration information may be implemented in at least one of the following ways:
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement whose movement frequency meets a preset frequency requirement.
There may be many kinds of head movement and whole-body movement, and one or more of them are preset as the target head movement or target whole-body movement. Each target head movement or target whole-body movement is associated with a corresponding target processing.
The preset frequency requirement may be that the motion frequency is consistent with a certain preset frequency, or that the motion frequency exceeds a certain preset frequency, or any other suitable preset frequency requirement, which is not limited in the embodiment of the present application.
For example, in the working state of a call or of playing audio: if it is determined from the acceleration information that the user raised the head twice in unit time, the user is judged to have performed the target head movement of raising the head twice, and the corresponding target processing may be turning the volume up; if it is determined from the acceleration information that the user lowered the head twice in unit time, the user is judged to have performed the target head movement of lowering the head twice, and the corresponding target processing may be turning the volume down.
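Counting a repeated movement within the unit-time window could be sketched as below; the motion-to-processing mapping reproduces only the example above and is illustrative, not a fixed part of the method.

```python
def classify_double_motion(events, window_s: float = 1.0):
    """events: chronological list of (timestamp, motion) pairs.

    Returns the associated target processing if the same motion occurred
    twice within the unit-time window, otherwise None.  The mapping of
    motions to processing is an illustrative assumption."""
    TARGETS = {"head_up": "volume_up", "head_down": "volume_down"}
    for i in range(len(events) - 1):
        (t1, m1), (t2, m2) = events[i], events[i + 1]
        if m1 == m2 and m1 in TARGETS and (t2 - t1) <= window_s:
            return TARGETS[m1]
    return None
```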
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement whose movement speed meets a preset speed requirement.
The preset speed requirement may be that the movement speed is within a preset speed range, or that the movement speed exceeds a certain preset speed, or any other suitable preset speed requirement, which is not limited in this embodiment of the present application. Requiring the movement speed of the target head movement or whole-body movement to meet the preset speed requirement avoids false triggering by unintentional movements of the user and reduces processing errors.
For example, if it is determined from the acceleration information that the user lowered the head to the lower left and the speed of the movement exceeded a preset speed, the user is judged to have quickly lowered the head to the lower left, and the corresponding target processing may be waking up a voice assistant on the earphone or the mobile phone.
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement whose movement duration meets a preset time requirement.
The preset time requirement may be that the movement duration exceeds a preset time duration, or may be that the movement duration exceeds a certain proportion of the detection time duration, or any other suitable preset time requirement, which is not limited in the embodiment of the present application.
For example, if it is determined from the acceleration information that the user lowered the head within a given detection period and the duration of the head-lowering movement exceeded 95% of that period, the user is judged to have lowered the head for a long time, and the corresponding target processing may be a voice prompt for the user.
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement whose movement amplitude meets a preset amplitude requirement.
The preset amplitude requirement may be that the motion amplitude exceeds the preset amplitude, or the motion amplitude is within a preset amplitude range, or any other suitable preset amplitude requirement, which is not limited in the embodiment of the present application.
For example, if it is determined from the acceleration information that the user lowered the head within a certain time and the movement amplitude of the head lowering exceeded a preset angle, the user is judged to have lowered the head for a long time, and the corresponding target processing may be a voice prompt for the user.
The above implementations of determining the user's motion state may be used singly or in combination. For example, it may be determined from the acceleration information that the user has performed a target head movement or target whole-body movement whose movement duration meets the preset time requirement and whose movement amplitude meets the preset amplitude requirement; in that case both the duration and the amplitude conditions must hold.
For example, if it is determined from the acceleration information that, within a given detection period, the user lowered the head, the duration of the head-lowering movement exceeded 95% of that period, and the amplitude of the movement exceeded a preset angle, the user is judged to have lowered the head for a long time, and the corresponding target processing may be a voice prompt for the user.
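A combined duration-and-amplitude check, as in the example above, might be sketched like this; the 95% fraction and the 30-degree amplitude threshold are illustrative values, and the pitch-angle samples are assumed to have been derived from the acceleration information.

```python
def is_prolonged_head_lowering(pitch_samples, period_s: float,
                               sample_interval_s: float,
                               min_fraction: float = 0.95,
                               min_angle_deg: float = 30.0) -> bool:
    """pitch_samples: head-down angles (degrees) sampled at a fixed interval
    over the detection period.

    Returns True only when the head stayed lowered past the preset angle
    for at least min_fraction of the period, i.e. both the duration and
    the amplitude requirements hold."""
    if not pitch_samples:
        return False
    # Samples where the amplitude requirement is met.
    lowered = [p for p in pitch_samples if p > min_angle_deg]
    lowered_time = len(lowered) * sample_interval_s
    return lowered_time >= min_fraction * period_s
```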
In this embodiment, optionally, one implementation of determining the motion state of the user according to the acceleration information may include: sending the acceleration information to the earphone storage device or the cloud server; and receiving the motion state determined by the earphone storage device or the cloud server according to the acceleration information.
In some cases, limited by its volume or battery, the earphone cannot determine the user's motion state on its own, so it sends the collected acceleration information to the earphone storage device or the cloud server. The earphone and the earphone storage device can be connected through Bluetooth, over which the acceleration information is transmitted. The earphone can also be equipped with a mobile communication chip, through which it connects to a private network or the Internet and sends the acceleration information to the cloud server. The earphone storage device or the cloud server then determines the user's motion state from the acceleration information in the manner described above. Having the earphone storage device or the cloud server do this extends the computing capability of the earphone and solves the problem that the earphone cannot implement functions with high computing requirements.
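The exchange with the earphone storage device or cloud server could use any serialization over Bluetooth or the mobile network; a minimal JSON sketch follows, with all field names assumed for illustration.

```python
import json

def build_accel_message(device_id: str, samples) -> bytes:
    """Serialize collected acceleration samples for transmission over
    Bluetooth (to the storage device) or the mobile network (to the cloud).
    Field names here are illustrative, not a defined protocol."""
    payload = {"device": device_id,
               "samples": [list(s) for s in samples]}
    return json.dumps(payload).encode("utf-8")

def parse_motion_reply(raw: bytes) -> str:
    """Decode the motion state determined remotely from the acceleration
    information and sent back to the earphone."""
    return json.loads(raw.decode("utf-8"))["motion_state"]
```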
Step 204, determining the target processing associated with the motion state in the working state.
In the embodiment of the present application, a specific implementation manner of this step may refer to the description in the foregoing embodiment, and is not described herein again.
Step 205, executing the target process.
In the embodiment of the present application, a specific implementation manner of this step may refer to the description in the foregoing embodiment, and is not described herein again.
In summary, according to this embodiment of the present application, the operating state of the earphone is detected, the acceleration information of the earphone is collected through the acceleration sensor, and the motion state of the user is determined from that information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude; the target processing associated with the motion state in the operating state is then determined and executed. The earphone thus triggers multiple kinds of target processing according to the user's motion state, without requiring the user to trigger them by hand motions or speech, which improves the convenience of triggering. Because the same motion state can correspond to different target processing in different operating states of the earphone, more kinds of target processing can be triggered on the earphone, solving the problem that target processing is difficult for the user to trigger by hand motions or speech in some scenarios.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the embodiments of the present application are not limited by the described sequence of actions, because some steps may be performed in other sequences or simultaneously according to the embodiments of the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to fig. 3, a block diagram of an embodiment of a processing execution device according to the present application is shown, and is applied to a headset, and specifically, the processing execution device may include:
a state obtaining module 301, configured to detect a working state of the earphone;
a state identification module 302, configured to identify a motion state of a user through an acceleration sensor built in the earphone;
a process determining module 303, configured to determine a target process associated with the motion state in the working state;
a process executing module 304, configured to execute the target process.
In this embodiment of the application, optionally, the state identification module includes:
the information acquisition submodule is used for acquiring the acceleration information of the earphone through the acceleration sensor;
the state determining submodule is used for determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
In an embodiment of the present application, optionally, the head movement comprises at least one of: head-up, head-down, nodding, head-shaking, head-down to the left-lower side, head-down to the right-lower side, head-up to the left-upper side, head-up to the right-upper side; the whole body movement includes at least one of: walking, running, standing and sitting.
In this embodiment of the application, optionally, the state determination sub-module is specifically configured to at least one of:
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement frequency of the target head movement or the target whole-body movement meets the requirement of preset frequency;
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement speed of the target head movement or the target whole-body movement meets the preset speed requirement;
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement duration of the target head movement or the target whole-body movement meets the preset time requirement;
and determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement amplitude of the target head movement or the target whole-body movement meets the preset amplitude requirement.
In this embodiment of the present application, optionally, the processing execution module is configured for at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play the first audio, pausing the first audio, ending playback of the first audio, switching to play the second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, starting the speech processing function, closing the speech processing function, and performing a voice prompt for the user, where the voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-lowering prompt.
In this embodiment of the present application, optionally, the apparatus further includes:
the current state identification module is used for identifying the current motion state of the user through an acceleration sensor arranged in the earphone before the voice prompt is carried out on the user;
and the state determining module is used for determining whether the current motion state meets a preset requirement or not, and if the current motion state meets the preset requirement, executing a step of carrying out voice prompt on the user.
In this embodiment of the application, optionally, the state identification module includes:
the information sending submodule is used for sending the acceleration information to the earphone storage device or the cloud server;
and the state receiving submodule is used for receiving the motion state determined by the earphone storage device or the cloud server according to the acceleration information.
In this embodiment of the application, optionally, the processing execution module includes:
and the instruction sending submodule is used for sending the processing instruction corresponding to the target processing to the earphone storage device or the cloud server, so as to control the earphone storage device or the cloud server to execute the target processing.
In summary, according to this embodiment of the present application, the operating state of the earphone is detected, the motion state of the user is identified through the acceleration sensor built into the earphone, the target processing associated with that motion state in that operating state is determined, and the target processing is executed. The earphone thus triggers multiple kinds of target processing according to the user's motion state, without requiring the user to trigger them by hand motions or speech, which improves the convenience of triggering various target processing.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
FIG. 4 is a block diagram illustrating an apparatus 400 for processing execution in accordance with an example embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 402 may include one or more processors 420 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply components 406 provide power to the various components of device 400. The power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 400 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of status assessment for the apparatus 400. For example, the sensor component 414 can detect the open/closed state of the device 400 and the relative positioning of components, such as the display and keypad of the apparatus 400. The sensor component 414 can also detect a change in the position of the apparatus 400 or a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions executable by a processor of the apparatus 400 to perform the above-described method, is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 5 is a schematic diagram of a server in some embodiments of the invention. The server 1900, which may vary widely in configuration or performance, may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors), memory 1932, and one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. The memory 1932 and the storage media 1930 may provide transient or persistent storage. The programs stored in a storage medium 1930 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 1922 may be configured to communicate with the storage medium 1930 and to execute, on the server 1900, the series of instruction operations in the storage medium 1930.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
A non-transitory computer-readable storage medium in which instructions, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform a process execution method, the method comprising:
detecting the working state of the earphone;
identifying the motion state of a user through an acceleration sensor arranged in the earphone;
determining a target process associated with the motion state in the working state;
executing the target process.
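The four steps above can be sketched as a small dispatch table. Everything in this sketch is an illustrative assumption, not part of the claimed method: the enum values, the action names, and the `ACTION_TABLE` mapping are all hypothetical.

```python
from enum import Enum, auto

class WorkingState(Enum):
    """Working states the earphone might report (illustrative values)."""
    IDLE = auto()
    IN_CALL = auto()
    PLAYING_AUDIO = auto()

class Motion(Enum):
    """Motion states a built-in acceleration sensor might recognize."""
    NOD = auto()
    SHAKE = auto()
    RAISE_HEAD = auto()

# Hypothetical mapping: the same motion can trigger different target
# processes depending on the working state in which it occurs.
ACTION_TABLE = {
    (WorkingState.IN_CALL, Motion.NOD): "answer_call",
    (WorkingState.IN_CALL, Motion.SHAKE): "reject_call",
    (WorkingState.PLAYING_AUDIO, Motion.SHAKE): "switch_to_second_audio",
}

def target_process(working_state: WorkingState, motion: Motion):
    """Determine the target process associated with the motion state
    under the current working state; None means no associated process."""
    return ACTION_TABLE.get((working_state, motion))
```

Under this sketch, a nod during a call resolves to answering the call, while the same nod in the idle state resolves to no target process at all, which is the point of conditioning the lookup on the working state.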
Optionally, the identifying the motion state of the user through an acceleration sensor built in the earphone includes:
acquiring acceleration information of the earphone through the acceleration sensor;
determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
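One plausible way to turn raw acceleration information into the motion frequency, duration, and amplitude descriptors mentioned above is sketched below. The sample format, the magnitude-based processing, and the zero-crossing frequency estimate are assumptions made for illustration; the patent does not specify how the motion state is computed.

```python
import math

def motion_features(samples, dt):
    """Derive simple motion descriptors from 3-axis acceleration samples.

    samples: list of (ax, ay, az) tuples in m/s^2
    dt: sampling interval in seconds
    """
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    mean = sum(mags) / len(mags)
    amplitude = max(mags) - min(mags)
    # Crude frequency estimate: count zero crossings of the mean-removed
    # magnitude signal; each full oscillation contributes two crossings.
    centered = [m - mean for m in mags]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration = dt * (len(mags) - 1)
    frequency = (crossings / 2) / duration if duration > 0 else 0.0
    return {"amplitude": amplitude, "frequency": frequency, "duration": duration}
```

Feeding this a window of samples recorded during a repeated nod would yield a frequency near the nodding rate and an amplitude proportional to how vigorously the head moves.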
Optionally, the head movement comprises at least one of: head-up, head-down, nodding, head-shaking, head-down to the left-lower side, head-down to the right-lower side, head-up to the left-upper side, head-up to the right-upper side; the whole body movement includes at least one of: walking, running, standing and sitting.
Optionally, the determining the motion state of the user according to the acceleration information includes at least one of:
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement frequency of the target head movement or the target whole-body movement meets a preset frequency requirement;
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement speed of the target head movement or the target whole-body movement meets the preset speed requirement;
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement duration of the target head movement or the target whole-body movement meets the preset time requirement;
and determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement amplitude of the target head movement or the target whole-body movement meets the preset amplitude requirement.

Optionally, the executing the target process includes at least one of: answering a call, rejecting the call, hanging up the call, marking a target audio in the call, starting to play a first audio, pausing the playing of the first audio, ending the playing of the first audio, switching to playing a second audio, turning up the volume, turning down the volume, starting a voice processing function, closing the voice processing function, and voice prompting the user while the first audio is played, wherein the voice prompt comprises at least one of the following: a sedentary prompt and a prolonged head-lowering prompt.
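The four "meets the preset requirement" determinations listed above are alternatives ("at least one of"), so the gate over the extracted motion features can be as simple as the sketch below. The threshold values and the feature-dictionary shape are invented for illustration only.

```python
# Hypothetical preset requirements; a real product would tune these values.
PRESETS = {
    "min_frequency": 1.0,   # Hz
    "min_speed": 0.5,       # m/s
    "min_duration": 0.3,    # s
    "min_amplitude": 1.5,   # m/s^2
}

def is_target_motion(features, presets=PRESETS):
    """Accept a candidate head or whole-body motion as the target motion
    when at least one preset requirement is satisfied."""
    return (
        features.get("frequency", 0.0) >= presets["min_frequency"]
        or features.get("speed", 0.0) >= presets["min_speed"]
        or features.get("duration", 0.0) >= presets["min_duration"]
        or features.get("amplitude", 0.0) >= presets["min_amplitude"]
    )
```

An implementation could also require all four conditions, or weight them; the disjunction here simply mirrors the "at least one of" wording.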
Optionally, before the voice prompting of the user, the operations further include:
identifying the current motion state of a user through an acceleration sensor built in the earphone;
and determining whether the current motion state meets a preset requirement, and if so, performing the step of voice prompting the user.
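The re-check before prompting can be sketched as a small guard function. The "sitting" label, the 45-minute threshold, and the prompt text are assumptions; the patent only requires that the current motion state meet a preset requirement before the prompt is played.

```python
def maybe_voice_prompt(current_motion, state_seconds, play_prompt,
                       threshold_seconds=45 * 60):
    """Re-check the current motion state before prompting; only play the
    sedentary reminder when the preset requirement (here, an assumed
    45-minute sitting threshold) is met.

    play_prompt: callable that speaks the given text through the earphone.
    """
    if current_motion == "sitting" and state_seconds >= threshold_seconds:
        play_prompt("You have been sitting for a while; consider standing up.")
        return True
    return False
```

The same shape works for the prolonged head-lowering prompt, with "head_down" as the motion label and a shorter threshold.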
Optionally, the determining the motion state of the user according to the acceleration information includes:
sending the acceleration information to an earphone accommodating device or a cloud server;
and receiving the motion state determined by the earphone accommodating device or the cloud server according to the acceleration information.
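Offloading the classification to the storage case or a cloud server reduces what the earphone itself must compute. A minimal sketch follows; the JSON message format and the `send`/`recv` callables are placeholders for the real Bluetooth or network transport, which the patent does not specify.

```python
import json

def classify_remotely(acc_samples, send, recv):
    """Serialize the acceleration information, send it to the earphone
    storage case or a cloud server, and return the motion state that the
    remote side reports back."""
    send(json.dumps({"type": "acceleration", "samples": acc_samples}).encode())
    reply = json.loads(recv().decode())
    return reply["motion_state"]
```

The same request/reply shape also fits the later embodiment in which the earphone forwards a processing instruction so that the case or server executes the target process itself.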
Optionally, the executing the target process includes:
and sending the processing instruction corresponding to the target processing to an earphone accommodating device or a cloud server so as to control the earphone accommodating device or the cloud server to execute the target processing.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
A processing execution method, a processing execution apparatus, a device for processing execution, and a machine-readable medium have been described in detail above. The principles and embodiments of the present application are explained herein using specific examples, and the above descriptions are provided only to help understand the method and core ideas of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A processing execution method applied to an earphone, comprising:
detecting the working state of the earphone;
identifying the motion state of a user through an acceleration sensor arranged in the earphone;
determining a target process associated with the motion state in the working state;
executing the target process.
2. The method of claim 1, wherein the identifying the motion state of the user through an acceleration sensor built in the earphone comprises:
acquiring acceleration information of the earphone through the acceleration sensor;
determining the motion state of the user according to the acceleration information; wherein the motion state comprises at least one of head motion, whole body motion, motion frequency, motion speed, motion duration and motion amplitude.
3. The method of claim 2, wherein the head motion comprises at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head toward the lower left, lowering the head toward the lower right, raising the head toward the upper left, and raising the head toward the upper right; and the whole-body motion comprises at least one of: walking, running, standing, and sitting.
4. The method of claim 2, wherein determining the motion state of the user based on the acceleration information comprises at least one of:
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement frequency of the target head movement or the target whole-body movement meets a preset frequency requirement;
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement speed of the target head movement or the target whole-body movement meets the preset speed requirement;
determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement duration of the target head movement or the target whole-body movement meets the preset time requirement;
and determining that the user performs target head movement or target whole-body movement according to the acceleration information, wherein the movement amplitude of the target head movement or the target whole-body movement meets the preset amplitude requirement.
5. The method of claim 1, wherein the executing the target process comprises at least one of: answering a call, rejecting the call, hanging up the call, calling back, canceling an outgoing call, marking a target audio in the call, starting to play a first audio, pausing the playing of the first audio, ending the playing of the first audio, switching to playing a second audio, turning up the volume, turning down the volume, starting recording, ending recording, pausing recording, starting a voice processing function, closing the voice processing function, and voice prompting the user while the first audio is played, wherein the voice prompt comprises at least one of the following: a sedentary prompt and a prolonged head-lowering prompt.
6. The method of claim 5, wherein prior to said voice prompting the user, the method further comprises:
identifying the current motion state of a user through an acceleration sensor built in the earphone;
and determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, executing a step of carrying out voice prompt on the user.
7. The method of claim 2, wherein determining the motion state of the user based on the acceleration information comprises:
sending the acceleration information to an earphone accommodating device or a cloud server;
and receiving the motion state determined by the earphone accommodating device or the cloud server according to the acceleration information.
8. The method of claim 1, wherein the performing the target process comprises:
and sending the processing instruction corresponding to the target processing to an earphone accommodating device or a cloud server so as to control the earphone accommodating device or the cloud server to execute the target processing.
9. A processing execution apparatus applied to an earphone, comprising:
the state acquisition module is used for detecting the working state of the earphone;
the state identification module is used for identifying the motion state of a user through an acceleration sensor arranged in the earphone;
the processing determining module is used for determining target processing associated with the motion state in the working state;
and the processing execution module is used for executing the target processing.
10. An apparatus for processing execution, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
detecting the working state of the earphone;
identifying the motion state of a user through an acceleration sensor arranged in the earphone;
determining a target process associated with the motion state in the working state;
executing the target process.
11. A machine-readable medium having stored thereon instructions which, when executed by one or more processors, cause an apparatus to perform the processing execution method as recited in any one of claims 1 to 8.
CN202010507491.4A 2020-06-05 2020-06-05 Processing execution method and device and readable medium Pending CN111698600A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010507491.4A CN111698600A (en) 2020-06-05 2020-06-05 Processing execution method and device and readable medium
PCT/CN2021/074913 WO2021244058A1 (en) 2020-06-05 2021-02-02 Process execution method, device, and readable medium

Publications (1)

Publication Number Publication Date
CN111698600A 2020-09-22

Family

ID=72479635


Country Status (2)

Country Link
CN (1) CN111698600A (en)
WO (1) WO2021244058A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542963A (en) * 2021-07-21 2021-10-22 RealMe重庆移动通信有限公司 Sound mode control method, device, electronic equipment and storage medium
WO2021244058A1 (en) * 2020-06-05 2021-12-09 北京搜狗科技发展有限公司 Process execution method, device, and readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2451187A2 (en) * 2010-11-05 2012-05-09 Sony Ericsson Mobile Communications AB Headset with accelerometers to determine direction and movements of user head and method
CN104199655A (en) * 2014-08-27 2014-12-10 深迪半导体(上海)有限公司 Audio switching method, microprocessor and earphones
CN104361016A (en) * 2014-10-15 2015-02-18 广东小天才科技有限公司 Method and device for adjusting music play effect according to sport state
CN106535057A (en) * 2016-12-07 2017-03-22 歌尔科技有限公司 Operation switching method and system for master device and slave device
CN108540669A (en) * 2018-04-20 2018-09-14 Oppo广东移动通信有限公司 Wireless headset, the control method based on headset detection and Related product

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104718766A (en) * 2013-10-08 2015-06-17 清科有限公司 Multifunctional sport headset
US20170195795A1 (en) * 2015-12-30 2017-07-06 Cyber Group USA Inc. Intelligent 3d earphone
CN105848029A (en) * 2016-03-31 2016-08-10 乐视控股(北京)有限公司 Earphone, and control method and device thereof
CN107708009B (en) * 2017-09-27 2020-01-31 广东小天才科技有限公司 Wireless earphone operation method and device, wireless earphone and storage medium
CN110246285A (en) * 2019-05-29 2019-09-17 苏宁智能终端有限公司 Cervical vertebra moving reminding method, device, earphone and storage medium
CN111698600A (en) * 2020-06-05 2020-09-22 北京搜狗科技发展有限公司 Processing execution method and device and readable medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200922