WO2021244058A1 - Processing execution method, apparatus, and readable medium (一种处理执行方法、装置和可读介质) - Google Patents

Processing execution method, apparatus, and readable medium

Info

Publication number
WO2021244058A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
movement
head
user
motion
Prior art date
Application number
PCT/CN2021/074913
Other languages
English (en)
French (fr)
Inventor
Zhao Nan (赵楠)
Cui Wenhua (崔文华)
Original Assignee
Beijing Sogou Technology Development Co., Ltd. (北京搜狗科技发展有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co., Ltd. (北京搜狗科技发展有限公司)
Publication of WO2021244058A1 publication Critical patent/WO2021244058A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00: Details of transducers, loudspeakers or microphones
    • H04R 1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1091: Details not provided for in groups H04R1/1008 - H04R1/1083
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 29/00: Monitoring arrangements; Testing arrangements
    • H04R 29/001: Monitoring arrangements; Testing arrangements for loudspeakers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2201/00: Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R 2201/10: Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups

Definitions

  • This application relates to the technical field of true wireless earphones, and in particular to a processing execution method, a processing execution apparatus, a device for processing execution, and a machine-readable medium.
  • TWS stands for True Wireless Stereo.
  • The left and right earphone bodies of true wireless earphones are completely separated, with no trace of exposed wires, which is why they are called "true wireless" earphones.
  • The connection of true wireless earphones involves not only signal transmission between the earphone and the signal transmitting device, but also the wireless connection between the primary and secondary earphones.
  • True wireless earphones are convenient to use and easy to store, and their sound quality and battery life keep improving.
  • In view of the above, the embodiments of the present application propose a processing execution method, a processing execution apparatus, a device for processing execution, and a machine-readable medium that overcome or at least partially solve the foregoing problems.
  • The embodiments of the present application can solve the problems that earphones have few functions and that those functions are inconvenient to trigger.
  • this application discloses a processing execution method applied to earphones, including:
  • Optionally, recognizing the user's movement state through the acceleration sensor built into the earphone includes:
  • the motion state includes at least one of head motion, body motion, motion frequency, motion speed, motion duration, and motion amplitude.
  • The head movement includes at least one of the following: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole body movement includes at least one of the following: walking, running, standing, and sitting.
  • Optionally, determining the movement state of the user according to the acceleration information includes at least one of the following:
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement frequency of the target head movement or the target whole body movement meets a preset frequency requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement speed of the target head movement or the target whole body movement meets a preset speed requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement duration of the target head movement or the target whole body movement meets a preset time requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement amplitude of the target head movement or the target whole body movement meets a preset amplitude requirement.
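The four alternatives above share one shape: a target motion is detected, then a measured property of it (frequency, speed, duration, or amplitude) is checked against a preset requirement. A minimal sketch in Python; all names, fields, and thresholds are invented for illustration, not taken from the application:

```python
# Hypothetical sketch: check a detected target motion against preset
# requirements expressed as (min, max) ranges per measured property.

def meets_requirements(motion, presets):
    """motion: dict of measured values; presets: dict of (lo, hi) ranges."""
    for key, (lo, hi) in presets.items():
        value = motion.get(key)
        if value is None or not (lo <= value <= hi):
            return False
    return True

# e.g. a nod measured at 2 Hz for 1.5 s satisfies an illustrative preset
nod = {"frequency_hz": 2.0, "duration_s": 1.5, "amplitude_deg": 20.0}
presets = {"frequency_hz": (1.0, 3.0), "duration_s": (0.5, 2.0)}
assert meets_requirements(nod, presets)
```

Each claimed alternative corresponds to a `presets` dict with a single key; combining keys checks several requirements at once.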
  • The execution of the target processing includes at least one of the following: answering the call, rejecting the call, hanging up the call, calling back, canceling the outgoing call, marking the target audio in the call, starting to play the first audio, pausing the playback of the first audio, ending the playback of the first audio, switching to the second audio while the first audio is playing, turning up the volume, turning down the volume, starting recording, ending recording, pausing recording, turning on the voice processing function, turning off the voice processing function, and giving a voice prompt to the user.
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • Optionally, before the voice prompt is given to the user, the method further includes:
  • Optionally, determining the motion state of the user according to the acceleration information includes:
  • the execution of the target processing includes:
  • the processing instruction corresponding to the target processing is sent to the earphone storage device or the cloud server to control the earphone storage device or the cloud server to execute the target processing.
  • the embodiment of the application also discloses a processing execution device, which is applied to a headset, and includes:
  • a state acquisition module used to detect the working state of the headset
  • the state recognition module is used to recognize the movement state of the user through the acceleration sensor built in the earphone;
  • a processing determination module configured to determine the target processing associated with the motion state in the working state
  • the processing execution module is used to execute the target processing.
  • the state recognition module includes:
  • An information collection module configured to collect acceleration information of the headset through the acceleration sensor
  • the state determination module is used to determine the movement state of the user according to the acceleration information; wherein the movement state includes at least one of head movement, whole body movement, movement frequency, movement speed, movement duration, and movement amplitude .
  • The head movement includes at least one of the following: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole body movement includes at least one of the following: walking, running, standing, and sitting.
  • The state determining module is specifically used for at least one of the following:
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement frequency of the target head movement or the target whole body movement meets a preset frequency requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement speed of the target head movement or the target whole body movement meets a preset speed requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement duration of the target head movement or the target whole body movement meets a preset time requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement amplitude of the target head movement or the target whole body movement meets a preset amplitude requirement.
  • processing execution module is used for at least one of the following:
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • the device further includes:
  • a current state recognition module, configured to recognize the user's current movement state through the acceleration sensor built into the earphone before the voice prompt is given to the user;
  • a state judgment module, used to determine whether the current motion state meets the preset requirement, and if it does, to perform the step of giving the voice prompt to the user.
  • the state recognition module includes:
  • An information sending module for sending the acceleration information to the earphone storage device or the cloud server;
  • the state receiving module is configured to receive the motion state determined by the earphone storage device or the cloud server according to the acceleration information.
  • processing execution module includes:
  • the instruction sending module is used to send the processing instruction corresponding to the target processing to the earphone storage device or the cloud server to control the earphone storage device or the cloud server to execute the target processing.
  • The embodiment of the present application also discloses a device for processing execution, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs include instructions for performing the following operations:
  • Optionally, recognizing the user's movement state through the acceleration sensor built into the earphone includes:
  • the motion state includes at least one of head motion, body motion, motion frequency, motion speed, motion duration, and motion amplitude.
  • The head movement includes at least one of the following: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole body movement includes at least one of the following: walking, running, standing, and sitting.
  • Optionally, determining the movement state of the user according to the acceleration information includes at least one of the following:
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement frequency of the target head movement or the target whole body movement meets a preset frequency requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement speed of the target head movement or the target whole body movement meets a preset speed requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement duration of the target head movement or the target whole body movement meets a preset time requirement;
  • determining, according to the acceleration information, that the user has performed a target head movement or a target whole body movement, where the movement amplitude of the target head movement or the target whole body movement meets a preset amplitude requirement.
  • the execution of the target processing includes at least one of the following:
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • the operation instruction further includes:
  • Optionally, determining the motion state of the user according to the acceleration information includes:
  • the execution of the target processing includes:
  • the processing instruction corresponding to the target processing is sent to the earphone storage device or the cloud server to control the earphone storage device or the cloud server to execute the target processing.
  • The embodiment of the present application also discloses a machine-readable medium on which instructions are stored, which, when executed by one or more processors, cause the device to execute the processing execution method as described above.
  • The embodiment of the present application also discloses a computer program, including computer-readable code, which, when run on a computing processing device, causes the computing processing device to execute the processing execution method according to any one of the foregoing.
  • In the embodiments of the present application, the acceleration sensor built into the headset is used to identify the user's movement state, and the target processing associated with that movement state in the current working state is determined and executed.
  • Executing the target processing this way enables the headset to trigger a variety of target processing according to the user's motion state, without requiring hand motions or speech from the user, which improves the convenience of triggering the various target processing.
  • Because the target processing associated with the same motion state can differ across the headset's working states,
  • more target processing can be triggered on the headset, which solves the problem that in some scenarios it is difficult for the user to trigger target processing through hand movements or speech.
  • Fig. 1 shows a flow chart of steps of an embodiment of a processing execution method of the present application
  • Fig. 2 shows a flow chart of the steps of another embodiment of the processing execution method of the present application
  • Fig. 3 shows a structural block diagram of an embodiment of a processing execution device of the present application
  • Fig. 4 is a block diagram showing a device for processing execution according to an exemplary embodiment
  • FIG. 5 is a schematic diagram of the structure of the server in some embodiments of the present application.
  • Referring to Fig. 1, there is shown a step flow chart of an embodiment of a processing execution method of the present application, which is applied to a headset and may specifically include the following steps:
  • Step 101 Detect the working state of the headset.
  • The working state of the headset includes the working state of playing audio, the in-call working state, the incoming-call working state, the standby working state, the working state of an audio or video call or voice input in instant messaging software, the working state of voice input in an input method, and so on, or any other applicable working state; the embodiment of the present application does not limit this.
  • For example, the data transmitted between the headset and devices such as a mobile terminal is detected; based on the identifier carried in that data, or the amount of data sent and/or received by the headset within a certain period, it can be determined whether the headset is currently playing audio, in a call, receiving an incoming call, or in standby.
  • When the headset connects to the Internet directly through a mobile communication chip, the data it sends or receives is detected, and based on the identifier carried in that data, or the amount of data the headset sends and receives within a certain period, it can likewise be determined whether the headset is currently playing audio, in a call, receiving an incoming call, in standby, and so on.
  • Alternatively, the program currently running on the headset can be detected directly to determine whether the headset is currently playing audio, in a call, receiving an incoming call, in standby, and so on. Any applicable implementation may be used; the embodiment of the present application does not limit this.
  • Step 102 Identify the user's motion state through the acceleration sensor built into the earphone.
  • An acceleration sensor refers to a sensor that senses the acceleration of an object and converts it into a usable output signal.
  • The main sensing method is to measure the deformation of a sensitive element inside the sensor under the action of acceleration, and to use a related circuit to convert that deformation into a voltage output.
  • The acceleration sensor is built into the headset and can be used to identify the user's movement state. For example, movement changes such as the user nodding, raising the head, shaking the head, lowering the head, walking, running, standing, or sitting can be converted into electrical signals by the G-sensor in the headset, and different electrical signals correspond to different motion states.
  • The aforementioned acceleration sensor may be a three-axis acceleration sensor, which measures spatial acceleration and can fully reflect the nature of the object's motion. It measures the components, on the three coordinate axes, of the acceleration signal generated by the user's head; even when there are multiple possibilities for the user's head movement, the three-axis acceleration sensor can accurately detect the generated acceleration signal.
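For a rough idea of what a three-axis reading gives you: the magnitude of the three components recovers the total acceleration, and at rest the gravity vector can be used to estimate head tilt. A small illustrative sketch; the axis orientation and units are assumptions, not something the application specifies:

```python
import math

def magnitude(ax, ay, az):
    """Total acceleration from the three axis components (m/s^2 assumed)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def pitch_deg(ax, ay, az):
    # Angle of the y axis above the horizontal plane; assumes the sensor's
    # y axis roughly follows the wearer's head pitch.
    return math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))

# At rest with gravity entirely on z, magnitude is ~9.81 and pitch is 0
assert abs(magnitude(0.0, 0.0, 9.81) - 9.81) < 1e-9
assert abs(pitch_deg(0.0, 0.0, 9.81)) < 1e-9
```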
  • the acceleration sensor is built into the headset, and an algorithm for recognizing the user's motion state is embedded in the headset, so that the headset has the ability to recognize the user's motion state.
  • There are many ways to recognize the user's motion state. For example, after the acceleration sensor collects an electrical signal, the signal is matched against the electrical signals of multiple motion states; if it matches the electrical signal of a target motion state, the user's current motion state is identified as that target motion state. Alternatively, after the acceleration sensor collects the electrical signal, the signal is converted into acceleration information, expressed as the magnitude and direction of the acceleration, and the user's current motion state is determined from that acceleration information. Any applicable implementation may be used; the embodiments of this application do not limit this.
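The first approach described above, matching the collected signal against stored signals for multiple motion states, can be pictured as a nearest-template classifier. Everything in this sketch (the templates, the distance metric, the tolerance) is an illustrative assumption:

```python
# Hypothetical nearest-template matching: compare a collected signal to a
# per-state reference signal and pick the closest one within a tolerance.

def classify(signal, templates, max_dist=5.0):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_state, best_d = None, float("inf")
    for state, ref in templates.items():
        d = dist(signal, ref)
        if d < best_d:
            best_state, best_d = state, d
    # reject signals too far from every template
    return best_state if best_d <= max_dist else None

templates = {"nod": [0, 5, 10, 5, 0], "shake": [0, -5, 5, -5, 0]}
assert classify([0, 4, 9, 5, 1], templates) == "nod"
```

A real implementation would work on windows of three-axis samples and a trained model rather than toy 1-D templates, but the matching structure is the same.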
  • Step 103 Determine the target processing associated with the motion state in the working state.
  • The target processing includes answering or rejecting a call, marking the target audio in a call, turning the volume up or down, turning the voice processing function on or off, giving voice prompts to the user, and so on, or any other applicable processing; this is not limited in the embodiment of this application.
  • calls include but are not limited to telephone calls, audio calls or video calls in instant messaging software, etc.
  • the target audio in the call includes the audio of the own party or the other party.
  • Voice processing covers recognizing and understanding audio and giving corresponding feedback, including converting audio into corresponding text or commands, recognizing the voice information in the audio, and responding based on that understanding, or any other applicable voice processing; this is not limited in this embodiment of the application.
  • Examples of the voice assistant function include, but are not limited to, voice prompts that remind the user of sitting for a long time or of bowing the head for a long time.
  • The motion state is associated with the target processing.
  • After the headset recognizes the motion state, it determines the target processing associated with that motion state; the target processing associated with the same motion state in different working states can be different. For example, when the headset is in the in-call or incoming-call working state, the target processing associated with shaking the head is set to rejecting the call, while in the working state of playing audio, the target processing associated with shaking the head is set to switching songs.
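The association just described can be pictured as a lookup keyed on the pair (working state, motion state). The two entries for shaking the head come from the text's own example; the other entries and all identifier names are invented for illustration:

```python
# Hypothetical (working_state, motion_state) -> target_processing table.
ACTIONS = {
    ("incoming_call", "shake_head"): "reject_call",
    ("incoming_call", "nod_head"): "answer_call",
    ("playing_audio", "shake_head"): "switch_song",
    ("playing_audio", "raise_head_twice"): "volume_up",
}

def target_processing(working_state, motion_state):
    # None means this motion triggers nothing in this working state
    return ACTIONS.get((working_state, motion_state))

assert target_processing("incoming_call", "shake_head") == "reject_call"
assert target_processing("playing_audio", "shake_head") == "switch_song"
```

The same motion key appearing under two working states is exactly the "same motion, different processing" behavior the passage describes.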
  • Step 104 execute the target processing.
  • After the target processing associated with the motion state in the current working state is determined, the target processing is executed.
  • Some target processing can be executed independently on the headset; for other target processing, the headset sends the instructions to be executed to a connected device such as a mobile terminal, to the headset storage device, or via the Internet to a cloud server. That is, the headset controls the mobile terminal or other device, the earphone storage device, or the cloud server.
  • For example, the user nodding can trigger answering a call, shaking the head can trigger rejecting a call, and nodding during a call can mark the target audio in the call.
  • The user shaking the head can trigger switching songs, raising the head twice can trigger turning the volume up, and lowering the head twice can trigger turning the volume down.
  • The user can trigger waking up the voice processing function by lowering the head to the lower left at a faster speed.
  • Optionally, executing the target processing includes at least one of the following: answering the call, rejecting the call, hanging up the call, calling back, canceling the outgoing call, marking the target audio in the call, starting to play the first audio, pausing the playback of the first audio, ending the playback of the first audio, switching to the second audio while the first audio is playing, turning up the volume, turning down the volume, starting recording, ending recording, pausing recording, turning on the voice processing function, turning off the voice processing function, and giving a voice prompt to the user.
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • Different motion states can be set to correspond to different tags, and the tag corresponding to the motion state is used to mark the target audio.
  • the processing during audio playback may include starting playback, pausing playback, ending playback, switching played audio, etc., or any other applicable processing, which is not limited in the embodiment of the present application.
  • the first audio and the second audio can be stored on the headset, or on the headset storage device, or on the cloud server or on the mobile terminal.
  • the headset can directly start playing the first audio, or pause the first audio, or end the playback of the first audio, or switch to play when the first audio is played Second audio and other processing.
  • For example, the earphone can send the processing instructions corresponding to starting, pausing, or stopping playback, or switching the played audio, to the earphone storage device via Bluetooth, thereby controlling the earphone storage device to execute the corresponding processing.
  • The headset can send the processing instructions corresponding to starting, pausing, or stopping playback, or switching the played audio, to the cloud server through the mobile communication chip, thereby controlling the cloud server to execute the corresponding processing.
  • The headset can send the processing instructions such as start playing, pause playing, stop playing, and switch the played audio to the mobile terminal via Bluetooth, thereby controlling the mobile terminal to execute the corresponding processing.
  • the voice processing function includes algorithms, databases, and computing resources called for processing voice in audio, or any other applicable content related to voice processing, which is not limited in the embodiment of the present application.
  • the voice processing function can call computing resources such as earphones, earphone storage boxes, cloud servers, etc., which are not limited in the embodiment of the present application.
  • One kind of voice processing function calls only the local computing resources of the headset and uses the headset's local voice recognition model to recognize the voice in the audio. The voice recognition model stores voice features extracted from pre-collected audio, so the recognizable voice is relatively limited to the voice features in the local model, and the recognition speed is limited by the local computing resources.
  • Another kind of voice processing function uses a cloud server: the audio is uploaded to the cloud server, the computing resources on the cloud server are called, and a voice recognition model is used to recognize the speech in the audio, understand it, and give corresponding feedback. It is no longer limited to local computing resources and sample libraries, so it can achieve better speech processing effects and obtain more complex and diverse results.
  • Before the headset recognizes the motion state associated with turning on the voice processing function, audio data collected by the headset is not handed over to the voice processing function; after that motion state is recognized, collected audio data is handed over to the voice processing function for processing. A certain motion state can also be set to be associated with turning off the voice processing function, so that when that motion state is recognized, the voice processing function is turned off. For example, if the collected audio data is the user saying "what is the temperature today", the headset and the cloud server use the voice processing function to process the audio data, obtain the processing result "today's temperature is 25 degrees", send the result to the headset, and play it.
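The on/off behavior described above amounts to a gate in front of the audio pipeline: captured audio is discarded while the function is off and forwarded once it is on. A minimal sketch, assuming a simple boolean flag and a stub in place of the real recognition/cloud call:

```python
class Headset:
    """Toy model of gating audio behind a voice-processing flag."""

    def __init__(self):
        self.voice_processing_on = False
        self.results = []

    def on_motion(self, motion):
        # motion names are invented; they stand for whatever motion states
        # were associated with turning the function on or off
        if motion == "turn_on_voice":
            self.voice_processing_on = True
        elif motion == "turn_off_voice":
            self.voice_processing_on = False

    def on_audio(self, audio):
        if self.voice_processing_on:
            self.results.append(f"processed:{audio}")  # stand-in for cloud call

h = Headset()
h.on_audio("what is the temperature today")   # ignored: function is off
h.on_motion("turn_on_voice")
h.on_audio("what is the temperature today")   # now handed to processing
assert h.results == ["processed:what is the temperature today"]
```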
  • The user is given a voice prompt.
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • When it is recognized that the user has been sitting for a long time, the sedentary prompt voice is played on the headset; when it is recognized that the user has been bowing the head for a long time, the prolonged head-down prompt voice is played on the headset, to remind the user of the unhealthy state.
  • For example, the voice played is: "You have been sitting for more than an hour" or "You have been bowing your head for more than half an hour".
  • Optionally, before the voice prompt is given to the user, the method may further include: recognizing the user's current motion state through the acceleration sensor built into the headset; and determining whether the current motion state meets a preset requirement, and if it does, executing the step of giving the voice prompt to the user.
  • The preset requirement corresponds to the voice prompt.
  • For example, before the sedentary prompt, the preset requirement is that the user has not recently stood up or moved significantly: if, within a certain period, the user is detected to have walked fewer than a preset number of steps and is currently not active, the sedentary prompt voice is played on the headset. Before the prolonged head-down prompt, the preset requirement is that the user's current degree of head-lowering exceeds a preset threshold.
  • The preset requirements can be set according to actual needs; the embodiment of the present application does not limit this.
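The sedentary-prompt precondition above (few steps within a window, currently inactive, sitting long enough) can be sketched as a simple predicate. The thresholds here are invented; as the text says, the actual preset requirements are set according to need:

```python
# Hypothetical precondition check before playing a sedentary prompt.
def should_play_sedentary_prompt(steps_in_window, sitting_minutes,
                                 max_steps=20, min_minutes=60):
    # prompt only if the user barely moved AND has been seated long enough
    return steps_in_window < max_steps and sitting_minutes >= min_minutes

assert should_play_sedentary_prompt(steps_in_window=5, sitting_minutes=75)
assert not should_play_sedentary_prompt(steps_in_window=200, sitting_minutes=75)
```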
  • an implementation manner of executing the target processing includes: sending a processing instruction corresponding to the target processing to a headset storage device or a cloud server to control the headset storage device Or the cloud server executes the target processing.
  • the headset can establish a connection with the headset storage device or a cloud server.
  • the headset can establish a connection with the headset storage device via Bluetooth, or the headset can connect to a cloud server on the network through a built-in mobile communication chip.
  • In this implementation, the headset does not perform the target processing directly; the processing instruction corresponding to the target processing is sent from the headset to the headset storage device or the cloud server, to control the headset storage device or the cloud server to perform the target processing.
  • The processing instruction may include audio data collected by the earphone or the earphone storage device; the audio data is delivered to the earphone storage device or the cloud server and processed by the voice processing function.
  • the earphone sends a processing instruction corresponding to the pause of playing the first audio to the earphone storage device, and the earphone storage device pauses the playback of the first audio.
  • The headset is limited by its volume and power, and cannot store much data or implement functions that require higher computing power.
  • Using a headset storage device or a cloud server with stronger computing power to perform the target processing solves the problem that functions with high computing power requirements
  • cannot be realized on the headset, and expands the computing power available to the headset.
  • When the target processing is executed, in addition to being performed by the headset storage device, it can also be executed by the cloud server:
  • after the headset storage device obtains the processing instruction,
  • the processing instruction is sent to the cloud server, the cloud server
  • executes the target processing corresponding to the processing instruction, and the cloud server then sends the processing result back to the earphone storage device.
  • For example, when translating the voice in collected audio data, the processing instruction can be sent to the cloud server,
  • and the cloud server performs the translation based on the audio data to obtain target audio data or target text data in the target language. Any applicable processing may be included;
  • the embodiment of this application does not limit this.
  • the cloud server can provide more powerful computing capabilities, so that more complex target processing can be performed, more and more complex functions can be implemented, the power consumption of the earphone storage device can be reduced, and the processing speed can be increased.
  • In the embodiments of the present application, the acceleration sensor built into the headset is used to identify the user's movement state, and the target processing associated with that movement state in the current working state is determined and executed.
  • Executing the target processing this way enables the headset to trigger a variety of target processing according to the user's motion state, without requiring hand motions or speech from the user, which improves the convenience of triggering the various target processing.
  • Because the target processing associated with the same motion state can differ across the headset's working states,
  • more target processing can be triggered on the headset, which solves the problem that in some scenarios it is difficult for the user to trigger target processing through hand movements or speech.
  • Step 201: Detect the working state of the headset.
  • Step 202: Collect acceleration information of the earphone through the acceleration sensor.
  • The acceleration information includes the magnitude and direction of the acceleration.
  • The acceleration sensor can collect the current acceleration information of the earphone. Some motion states can be determined from the acceleration information at a single moment, while others require acceleration information collected over a period of time. Accordingly, the acceleration information of the headset is collected at the current moment or over a period of time.
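As a sketch of how such collection might be organized (a hypothetical data structure; the application does not specify one), the earphone could keep a rolling window of accelerometer samples, serving both single-moment and period-of-time determinations:

```python
from collections import deque

class AccelBuffer:
    """Rolling window of accelerometer samples; a hypothetical sketch,
    since the application does not specify how acceleration information
    over a period of time is stored."""
    def __init__(self, window_seconds=2.0):
        self.window = window_seconds
        self.samples = deque()  # entries: (timestamp, ax, ay, az)

    def add(self, timestamp, ax, ay, az):
        self.samples.append((timestamp, ax, ay, az))
        # Keep only samples within the recent window, for motion states
        # that require acceleration information over a period of time.
        while self.samples and timestamp - self.samples[0][0] > self.window:
            self.samples.popleft()

    def latest(self):
        # For motion states determined from the current moment alone.
        return self.samples[-1] if self.samples else None
```

With such a buffer, a single-moment check reads `latest()` while a period-of-time check iterates over `samples`.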
  • Step 203: Determine the movement state of the user according to the acceleration information, where the movement state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
  • The movement state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, movement amplitude, or any other applicable state information, which is not limited in this embodiment.
  • the above-mentioned motion state of the user can be determined according to the collected acceleration information.
  • The head movement can be a movement of the user's head such as nodding, shaking the head, lowering the head, raising the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, raising the head to the upper right, or any other applicable head movement; the embodiments of this application do not limit this.
  • The whole-body movement may be walking, running, standing, sitting, or another movement performed by the user while wearing the headset, or any other applicable whole-body movement, which is not limited in the embodiment of the present application.
  • The movement frequency can be the frequency of a head movement or a whole-body movement, for example, the number of nods per unit time, the number of head shakes per unit time, the number of times the head is lowered to the lower left per unit time, or the number of steps per unit time when walking, or any other applicable movement frequency, which is not limited in the embodiment of the present application.
  • The movement speed may be the speed of a head movement or a whole-body movement, for example, the speed of lowering the head to the lower left, the speed of walking, the speed of running, or any other suitable movement speed, which is not limited in the embodiment of this application.
  • The movement duration may be the duration of a head movement or a whole-body movement, for example, the duration of lowering the head or the duration of sitting, or any other applicable movement duration, which is not limited in the embodiment of the application.
  • The movement amplitude can be the amplitude of a head movement or a whole-body movement, for example, the downward angle of the head relative to the normal position when lowering the head, or the angle spanned from left to right when shaking the head, or any other applicable movement amplitude; the embodiments of the present application do not limit this.
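One hedged way to estimate such a head-movement amplitude (the axis convention and the use of the gravity vector are illustrative assumptions; the application does not prescribe an algorithm) is to derive a head pitch angle from the accelerometer's gravity component:

```python
import math

def pitch_from_gravity(ax, ay, az):
    """Estimate the head pitch angle in degrees from a static accelerometer
    reading (ax, ay, az), treating the reading as the gravity vector.
    The axis convention is an illustrative assumption."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
```

Under this convention, a level reading of (0, 0, 9.8) yields 0 degrees, and the angle grows as the head tilts forward, giving a number that can be compared against a preset amplitude.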
  • The head movement includes at least one of the following: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right.
  • The whole-body movement includes at least one of the following: walking, running, standing, and sitting.
  • Lowering the head, raising the head, shaking the head, and nodding are relatively common head movements and are therefore prone to false triggers, whereas lowering the head to the lower left or lower right and raising it to the upper left or upper right are less common but still relatively simple head movements; using them can avoid the problem of false triggering, and the user can easily complete them, which is convenient for the user.
  • the implementation manner of determining the motion state of the user according to the acceleration information may include at least one of the following:
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement.
  • Head movements and whole-body movements can include multiple types, and one or more of them are preset as target head movements or target whole-body movements; the preset target head movements or target whole-body movements are associated with the various target processing.
  • The preset frequency requirement may be that the movement frequency is consistent with a certain preset frequency, or that the movement frequency exceeds a certain preset frequency, or any other applicable preset frequency requirement, which is not limited in the embodiment of the application.
  • For example, if it is determined according to the acceleration information that the user raised his head and the number of head raises in a unit time is two, it is determined that the user raised his head twice, and the corresponding target processing can be turning the volume up.
  • If it is determined according to the acceleration information that the user lowered his head and the number of head lowers in a unit time is two, it is determined that the user performed a target head movement, namely lowering the head twice, and the corresponding target processing can be turning the volume down.
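A minimal sketch of such a frequency check (the unit time, the required count, and the action name are illustrative assumptions, not values fixed by this application):

```python
def count_recent_events(event_times, now, unit_time=1.5):
    """Count detected target head movements (e.g. head raises) whose
    timestamps fall within the most recent unit of time."""
    return sum(1 for t in event_times if 0 <= now - t <= unit_time)

def frequency_triggered_action(event_times, now, action="volume_up",
                               required_count=2, unit_time=1.5):
    """Return the associated target processing when the movement frequency
    meets the preset frequency requirement, otherwise None."""
    if count_recent_events(event_times, now, unit_time) >= required_count:
        return action
    return None
```

For example, two head raises detected 0.2 s and 1.0 s into a 1.5 s window would satisfy the requirement and return the associated action.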
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement speed of the target head movement or target whole-body movement meets a preset speed requirement.
  • The preset speed requirement may be that the movement speed is within a preset speed range, or that the movement speed exceeds a certain preset speed, or any other applicable preset speed requirement, which is not limited in the embodiment of the present application.
  • Requiring the movement speed of the target head movement or whole-body movement to meet the preset speed requirement can avoid false triggers caused by some unintentional movements of the user and reduce processing errors.
  • For example, the corresponding target processing can be waking up the voice assistant of the headset or the mobile phone.
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement duration of the target head movement or target whole-body movement meets a preset time requirement.
  • The preset time requirement may be that the movement duration exceeds a preset duration, or that the movement duration exceeds a certain percentage of the detection duration, or any other applicable preset time requirement, which is not limited in this embodiment of the application.
  • For example, if within a certain period of time it is determined according to the acceleration information that the user lowered his head and the duration of the head-down movement exceeds 95% of that period, it is determined that the user has been lowering his head for a long time, and the corresponding target processing can be giving the user a voice prompt.
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
  • the preset amplitude requirement may be that the motion amplitude exceeds the preset amplitude, or the motion amplitude is within the preset amplitude range, or any other applicable preset amplitude requirements, which are not limited in the embodiment of the present application.
  • the corresponding target processing may be a voice prompt to the user.
  • When determining the user's motion state, a single method may be used, or multiple methods may be combined. For example, it is determined according to the acceleration information that the user performed a target head movement or a target whole-body movement, that the movement duration of the target head movement or target whole-body movement meets the preset time requirement, and that the movement amplitude of the target head movement or target whole-body movement meets the preset amplitude requirement, so that requirements are imposed on both the movement duration and the movement amplitude at the same time.
  • For example, if it is determined according to the acceleration information that the user lowered his head, that the duration of the head-down movement exceeds 95% of a certain period, and that the amplitude of the head-down movement exceeds a preset angle, it is determined that the user has been lowering his head for a long time, and the corresponding target processing can be a voice prompt to the user.
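The combined duration-plus-amplitude check for a prolonged head-down state could be sketched as follows (the 95% ratio follows the example above; the 30-degree preset angle and the assumption of uniformly sampled pitch estimates are illustrative, not prescribed by the application):

```python
def is_prolonged_head_down(pitch_samples, duration_ratio=0.95, preset_angle=30.0):
    """pitch_samples: uniformly sampled (timestamp, pitch_deg) estimates of
    the head-down angle over the observation period. Returns True only when
    both the duration requirement and the amplitude requirement are met."""
    if not pitch_samples:
        return False
    down = [p for _, p in pitch_samples if p >= preset_angle]
    # Duration requirement: head down for more than 95% of the period
    # (approximated by the fraction of samples, assuming uniform sampling).
    duration_ok = len(down) / len(pitch_samples) > duration_ratio
    # Amplitude requirement: the head-down angle exceeds the preset angle.
    amplitude_ok = max(p for _, p in pitch_samples) > preset_angle
    return duration_ok and amplitude_ok
```

Because both conditions must hold, a brief deep nod or a long but shallow tilt would not trigger the voice prompt.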
  • An implementation manner of determining the motion state of the user according to the acceleration information may include: sending the acceleration information to the earphone storage device or the cloud server, and receiving the motion state determined by the earphone storage device or the cloud server according to the acceleration information.
  • In some cases the headset cannot independently determine the user's motion state; the headset can then send the collected acceleration information to the headset storage device or the cloud server.
  • the earphone and the earphone storage device can be connected via Bluetooth, and the acceleration information is transmitted via Bluetooth.
  • the headset may be equipped with a mobile communication chip, which is connected to a dedicated network or the Internet through the mobile communication chip, and the acceleration information is sent to the cloud server through the mobile communication chip.
  • The earphone storage device or the cloud server determines the user's motion state according to the acceleration information; the method of determining the motion state is as described above. After determining the motion state, the earphone storage device or the cloud server sends it to the earphone.
  • The earphone storage device or the cloud server is thus used to expand the computing power of the earphone, solving the problem that the earphone cannot realize functions that require high computing power.
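A transport-agnostic sketch of this offloading (the JSON payload shape and the `send`/`receive` callables are stand-ins for the Bluetooth or mobile-network transport, which the application leaves open):

```python
import json

def offload_motion_detection(accel_samples, send, receive):
    """The earphone serializes its acceleration information and sends it to
    the earphone storage device (e.g. over Bluetooth) or to a cloud server
    (via a mobile communication chip), then receives the motion state that
    the peer determined from the acceleration information."""
    send(json.dumps({"accel": accel_samples}))
    return json.loads(receive())["motion_state"]
```

The earphone itself stays thin: it only serializes sensor data and consumes the returned motion state, while the computationally heavy classification runs on the peer.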
  • Step 204: Determine the target processing associated with the motion state in the working state.
  • Step 205: Execute the target processing.
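Steps 204 and 205 amount to a lookup keyed on both the working state and the motion state. A minimal sketch (the state names and the concrete associations are illustrative examples drawn from this description, not a table fixed by the application):

```python
# Hypothetical associations between (working state, motion state) pairs and
# target processing; the entries below are illustrative examples only.
ACTION_TABLE = {
    ("incoming_call", "nod"): "answer_call",
    ("incoming_call", "shake_head"): "reject_call",
    ("playing_audio", "raise_head_twice"): "volume_up",
    ("playing_audio", "lower_head_twice"): "volume_down",
    ("idle", "prolonged_head_down"): "voice_prompt",
}

def determine_target_processing(working_state, motion_state):
    """The same motion state can be associated with different target
    processing under different working states of the headset."""
    return ACTION_TABLE.get((working_state, motion_state))
```

Keying on the pair is what lets one gesture mean different things in different working states, as the summary below emphasizes.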
  • The acceleration information of the earphone is collected through the acceleration sensor, and the user's motion state is determined according to the acceleration information.
  • The movement state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude. Determining the target processing associated with the movement state in the working state and executing that target processing enables the headset to trigger multiple target processing according to the user's motion state, without requiring hand movements or speech from the user, which improves the convenience of triggering the various target processing.
  • The same motion state of the user can also correspond to different target processing in different working states of the headset.
  • The headset can thus be used to trigger more target processing, which solves the problem that it is difficult for users to trigger target processing through hand movements or speech in some scenarios.
  • Referring to FIG. 3, there is shown a structural block diagram of an embodiment of a processing execution apparatus of the present application, which is applied to a headset and may specifically include:
  • the state acquisition module 301 is used to detect the working state of the headset
  • the state recognition module 302 is used to recognize the movement state of the user through the acceleration sensor built in the earphone;
  • the processing determination module 303 is configured to determine the target processing associated with the motion state in the working state
  • the processing execution module 304 is configured to execute the target processing.
  • the state recognition module includes:
  • An information collection module configured to collect acceleration information of the headset through the acceleration sensor
  • the state determination module is used to determine the movement state of the user according to the acceleration information, where the movement state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
  • The head movement includes at least one of the following: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole-body movement includes at least one of the following: walking, running, standing, and sitting.
  • the state determination module is specifically used for at least one of the following:
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
  • The processing execution module is used for at least one of the following: answering the call, rejecting the call, hanging up the call, calling back, canceling the outgoing call, marking the target audio in the call, starting to play the first audio, pausing the first audio, ending the first audio, switching to the second audio while playing the first audio, turning up the volume, turning down the volume, starting recording, ending recording, pausing recording, turning on the voice processing function, turning off the voice processing function, and giving the user a voice prompt.
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • the device further includes:
  • the current state recognition module is configured to recognize the current movement state of the user through the acceleration sensor built in the earphone before the voice prompt to the user;
  • the state determination module is used to determine whether the current exercise state meets the preset requirements, and if the current exercise state meets the preset requirements, perform the step of giving voice prompts to the user.
  • the state recognition module includes:
  • An information sending module for sending the acceleration information to the earphone storage device or the cloud server;
  • the state receiving module is configured to receive the motion state determined by the earphone storage device or the cloud server according to the acceleration information.
  • the processing execution module includes:
  • the instruction sending module is used to send the processing instruction corresponding to the target processing to the earphone storage device or the cloud server to control the earphone storage device or the cloud server to execute the target processing.
  • The acceleration sensor built into the headset is used to identify the user's movement state, and the target processing associated with that movement state in the current working state is determined.
  • Executing the target processing enables the headset to trigger a variety of target processing according to the user's motion state, without requiring hand movements or speech from the user, which improves the convenience of triggering the various target processing.
  • Moreover, the same motion state of the user can correspond to different target processing under different working states of the headset.
  • Triggering of more target processing is thus achieved on the headset, which solves the problem that it is difficult for the user to trigger target processing through hand movements or speech in some scenarios.
  • the device embodiments described above are merely illustrative.
  • The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place, or they may be distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
  • Fig. 4 is a block diagram showing a device 400 for processing execution according to an exemplary embodiment.
  • the device 400 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power supply component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, And communication component 416.
  • the processing component 402 generally controls the overall operations of the device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing element 402 may include one or more processors 420 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 402 may include one or more modules to facilitate the interaction between the processing component 402 and other components.
  • the processing component 402 may include a multimedia module to facilitate the interaction between the multimedia component 408 and the processing component 402.
  • the memory 404 is configured to store various types of data to support the operation of the device 400. Examples of these data include instructions for any application or method operating on the device 400, contact data, phone book data, messages, pictures, videos, etc.
  • The memory 404 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 406 provides power to various components of the device 400.
  • the power supply component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 400.
  • the multimedia component 408 includes a screen that provides an output interface between the device 400 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 410 is configured to output and/or input audio signals.
  • the audio component 410 includes a microphone (MIC), and when the device 400 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 404 or transmitted via the communication component 416.
  • the audio component 410 further includes a speaker for outputting audio signals.
  • the I/O interface 412 provides an interface between the processing component 402 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 414 includes one or more sensors for providing the device 400 with various aspects of status assessment.
  • The sensor component 414 can detect the on/off status of the device 400 and the relative positioning of components, for example, the display and keypad of the device 400.
  • The sensor component 414 can also detect a position change of the device 400 or of a component of the device 400, the presence or absence of contact between the user and the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400.
  • the sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices.
  • the device 400 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 416 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The apparatus 400 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to implement the above method.
  • non-transitory computer-readable storage medium including instructions, such as the memory 404 including instructions, and the foregoing instructions may be executed by the processor 420 of the device 400 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • Fig. 5 is a schematic diagram of the structure of a server in some embodiments of the present application.
  • The server 1900 may vary greatly due to different configurations or performance, and may include one or more central processing units (CPU) 1922 (for example, one or more processors), a memory 1932, and one or more storage media 1930 (for example, one or more mass storage devices) for storing the application program 1942 or the data 1944.
  • the memory 1932 and the storage medium 1930 may be short-term storage or persistent storage.
  • the program stored in the storage medium 1930 may include one or more modules (not shown in the figure), and each module may include a series of command operations on the server.
  • the central processing unit 1922 may be configured to communicate with the storage medium 1930, and execute a series of instruction operations in the storage medium 1930 on the server 1900.
  • The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, and so on.
  • a non-transitory computer-readable storage medium When instructions in the storage medium are executed by a processor of a device (smart terminal or server), the device can execute a processing execution method.
  • the method includes:
  • Recognizing the movement state of the user through the acceleration sensor built into the earphone includes:
  • the motion state includes at least one of head motion, body motion, motion frequency, motion speed, motion duration, and motion amplitude.
  • the head movement includes at least one of the following: head up, head down, head nodding, head shaking, head down to the left and bottom, head down to the right, head up to the left and head up to the right; the whole body movement It includes at least one of the following: walking, running, standing, and sitting.
  • the determining the movement state of the user according to the acceleration information includes at least one of the following:
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
  • According to the acceleration information, it is determined that the user performed a target head movement or a target whole-body movement, and the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
  • Executing the target processing includes at least one of the following: answering the call, rejecting the call, hanging up the call, marking the target audio in the call, starting to play the first audio, pausing the first audio, ending the playback of the first audio, switching to the second audio while playing the first audio, turning up the volume, turning down the volume, turning on the voice processing function, turning off the voice processing function, and giving the user a voice prompt.
  • The voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
  • the operation instruction further includes:
  • the determining the motion state of the user according to the acceleration information includes:
  • the execution of the target processing includes:
  • the processing instruction corresponding to the target processing is sent to the earphone storage device or the cloud server to control the earphone storage device or the cloud server to execute the target processing.
  • The embodiments of the present application can be provided as methods, devices, or computer program products. Therefore, the embodiments of the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present application may take the form of computer program products implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the instruction device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing terminal equipment, so that a series of operation steps are executed on the computer or other programmable terminal equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Abstract

The embodiments of the present application provide a processing execution method, apparatus, and readable medium. The method includes: detecting the working state of the headset; identifying the user's motion state through an acceleration sensor built into the headset; determining the target processing associated with the motion state in the working state; and executing the target processing, so that the headset can trigger a variety of target processing according to the user's motion state, without requiring the user to trigger it through hand movements or speech, which improves the convenience of triggering the various target processing. Moreover, the same motion state of the user can correspond to different target processing in different working states of the headset, so that the triggering of more target processing is realized on the headset, which solves the problem that it is difficult for the user to trigger target processing through hand movements or speech in some scenarios.

Description

A Processing Execution Method, Apparatus, and Readable Medium
This application claims priority to the Chinese invention patent application No. 202010507491.4, filed on June 5, 2020, entitled "A Processing Execution Method, Apparatus, and Readable Medium".
Technical Field
This application relates to the technical field of true wireless earphones, and in particular to a processing execution method, a processing execution apparatus, an apparatus for processing execution, and a machine-readable medium.
Background
With the development of wireless earphone technology, true wireless stereo (TWS) earphones have gradually become the preferred choice of consumers. The left and right earphone bodies of a true wireless earphone are completely separated, with no exposed wires at all, making it a wireless earphone in the true sense. Compared with traditional "wireless earphones", the connection of a true wireless earphone involves not only signal transmission between the earphones and the signal-transmitting device, but also a wireless connection between the primary and secondary earphones.
True wireless earphones are convenient and easy to store, and their sound quality and battery life keep improving. However, because true wireless earphones are small, enabling them to realize more functions is an urgent problem to be solved. For example, adjusting the volume, answering or rejecting a call, and waking up a voice assistant cannot be triggered in some scenarios where it is inconvenient to use one's hands or to speak.
Summary
In view of the above problems, the embodiments of the present application propose a processing execution method, a processing execution apparatus, an apparatus for processing execution, and a machine-readable medium that overcome the above problems or at least partially solve them; the embodiments of the present application can solve the problem that the earphone has few functions and is inconvenient to trigger.
To solve the above problems, the present application discloses a processing execution method applied to an earphone, including:
detecting the working state of the earphone;
identifying the user's motion state through an acceleration sensor built into the earphone;
determining the target processing associated with the motion state in the working state;
executing the target processing.
Optionally, identifying the user's motion state through the acceleration sensor built into the earphone includes:
collecting acceleration information of the earphone through the acceleration sensor;
determining the user's motion state according to the acceleration information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
Optionally, the head movement includes at least one of the following: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole-body movement includes at least one of the following: walking, running, standing, and sitting.
Optionally, determining the user's motion state according to the acceleration information includes at least one of the following:
determining, according to the acceleration information, that the user performed a target head movement or a target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
determining, according to the acceleration information, that the user performed a target head movement or a target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
determining, according to the acceleration information, that the user performed a target head movement or a target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
determining, according to the acceleration information, that the user performed a target head movement or a target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
Optionally, executing the target processing includes at least one of the following: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play a first audio, pausing the first audio, ending the playback of the first audio, switching to a second audio while playing the first audio, turning up the volume, turning down the volume, starting a recording, ending a recording, pausing a recording, turning on a voice processing function, turning off a voice processing function, and giving the user a voice prompt, where the voice prompt includes at least one of the following: a sedentary prompt and a prolonged head-down prompt.
Optionally, before giving the user the voice prompt, the method further includes:
identifying the user's current motion state through the acceleration sensor built into the earphone;
determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, performing the step of giving the user the voice prompt.
Optionally, determining the user's motion state according to the acceleration information includes:
sending the acceleration information to an earphone storage device or a cloud server;
receiving the motion state determined by the earphone storage device or the cloud server according to the acceleration information.
Optionally, executing the target processing includes:
sending the processing instruction corresponding to the target processing to the earphone storage device or the cloud server, so as to control the earphone storage device or the cloud server to execute the target processing.
An embodiment of the present application further discloses a processing execution apparatus, applied to an earphone, including:
a state acquisition module, configured to detect the working state of the earphone;
a state recognition module, configured to recognize the user's motion state through an acceleration sensor built into the earphone;
a processing determination module, configured to determine the target processing associated with the motion state under the working state;
a processing execution module, configured to execute the target processing.
Optionally, the state recognition module includes:
an information collection module, configured to collect acceleration information of the earphone through the acceleration sensor;
a state determination module, configured to determine the user's motion state according to the acceleration information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
Optionally, the head movement includes at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole-body movement includes at least one of: walking, running, standing, and sitting down.
Optionally, the state determination module is specifically configured for at least one of:
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
Optionally, the processing execution module is configured for at least one of:
answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, where the voice prompt includes at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
Optionally, the apparatus further includes:
a current-state recognition module, configured to recognize the user's current motion state through the acceleration sensor built into the earphone before the voice prompt is given to the user;
a state determination module, configured to determine whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, to perform the step of giving the user the voice prompt.
Optionally, the state recognition module includes:
an information sending module, configured to send the acceleration information to an earphone storage case or a cloud server;
a state receiving module, configured to receive the motion state determined by the earphone storage case or cloud server according to the acceleration information.
Optionally, the processing execution module includes:
an instruction sending module, configured to send the processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
An embodiment of the present application further discloses an apparatus for processing execution, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
detecting the working state of the earphone;
recognizing the user's motion state through an acceleration sensor built into the earphone;
determining the target processing associated with the motion state under the working state;
executing the target processing.
Optionally, recognizing the user's motion state through the acceleration sensor built into the earphone includes:
collecting acceleration information of the earphone through the acceleration sensor;
determining the user's motion state according to the acceleration information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
Optionally, the head movement includes at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole-body movement includes at least one of: walking, running, standing, and sitting down.
Optionally, determining the user's motion state according to the acceleration information includes at least one of:
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
Optionally, executing the target processing includes at least one of:
answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, where the voice prompt includes at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
Optionally, before giving the user the voice prompt, the instructions for the operations further include:
recognizing the user's current motion state through the acceleration sensor built into the earphone;
determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, performing the step of giving the user the voice prompt.
Optionally, determining the user's motion state according to the acceleration information includes:
sending the acceleration information to an earphone storage case or a cloud server;
receiving the motion state determined by the earphone storage case or cloud server according to the acceleration information.
Optionally, executing the target processing includes:
sending the processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
An embodiment of the present application further discloses a machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the processing execution method described above.
An embodiment of the present invention further discloses a computer program, including computer-readable code which, when run on a computing processing device, causes the computing processing device to perform the processing execution method according to any of the above.
The embodiments of the present application include the following advantages:
In summary, according to the embodiments of the present application, by detecting the working state of the earphone, recognizing the user's motion state through the acceleration sensor built into the earphone, determining the target processing associated with the motion state under the working state, and executing the target processing, the earphone can trigger a variety of target processing according to the user's motion state without requiring the user to trigger it by hand movements or speech. This improves the convenience of triggering various target processing. Moreover, the same motion state of the user can correspond to different target processing under different working states of the earphone, so that the earphone can trigger more kinds of target processing, solving the problem that in some scenarios it is difficult for the user to trigger target processing by hand movements or speech.
The above description is only an overview of the technical solution of this application. In order that the technical means of this application can be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features, and advantages of this application more obvious and understandable, specific embodiments of this application are set forth below.
Brief Description of the Drawings
FIG. 1 shows a flowchart of the steps of an embodiment of a processing execution method of the present application;
FIG. 2 shows a flowchart of the steps of another embodiment of a processing execution method of the present application;
FIG. 3 shows a structural block diagram of an embodiment of a processing execution apparatus of the present application;
FIG. 4 is a block diagram of an apparatus for processing execution according to an exemplary embodiment;
and FIG. 5 is a schematic structural diagram of a server in some embodiments of the present application.
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of this application.
Referring to FIG. 1, a flowchart of the steps of an embodiment of a processing execution method of the present application is shown. The method is applied to an earphone and may specifically include the following steps:
Step 101: Detect the working state of the earphone.
In this embodiment of the application, the working state of the earphone includes the working state of playing audio, of being in a phone call, of an incoming call, of standby, of an audio call, video call, or voice input in instant-messaging software, of voice input in an input method, and so on, or any other applicable working state, which is not limited in this embodiment.
In this embodiment, the working state of the earphone can be detected in several ways. For example, when the earphone is connected to a device such as a mobile terminal via Bluetooth, the data transmitted between the earphone and the device can be inspected: according to identifiers carried in the data transmitted from the device, or the volume of data sent and/or received by the earphone within a certain period, it can be determined whether the earphone is currently playing audio, in a call, receiving an incoming call, on standby, etc. When the earphone connects directly to the Internet through a mobile communication chip, the data sent or received by the earphone can be inspected in the same way, using the identifiers carried in that data or the volume of data sent and/or received within a certain period. Alternatively, the program currently running on the earphone can be inspected directly to determine whether it is playing audio, in a call, receiving an incoming call, on standby, and so on. Any applicable implementation may be used, which is not limited in this embodiment.
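A minimal sketch of the identifier-based working-state detection described above. The packet field `id` and the `STATE_HINTS` table are hypothetical names, not part of the patent; a real implementation would key off whatever the Bluetooth profile or firmware actually exposes.

```python
# Hypothetical mapping from packet identifiers to working states.
STATE_HINTS = {
    "a2dp_stream": "playing_audio",
    "hfp_call":    "in_call",
    "ring_event":  "incoming_call",
}

def detect_working_state(packets):
    """Infer the earphone's working state from identifiers carried in
    recently exchanged packets; fall back to 'standby' if none match."""
    for pkt in reversed(packets):          # most recent packet wins
        hint = STATE_HINTS.get(pkt.get("id"))
        if hint:
            return hint
    return "standby"
```

A traffic-volume heuristic (steady high throughput suggests audio streaming, near-zero suggests standby) could be layered on top of the same lookup.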
Step 102: Recognize the user's motion state through the acceleration sensor built into the earphone.
In this embodiment, an acceleration sensor (G-sensor) is a sensor that senses the acceleration of the object being measured and converts the acceleration into a usable output signal. Its main sensing principle is that when the sensitive element in the sensor deforms under acceleration, the amount of deformation is measured, and related circuitry converts that deformation into a voltage output. Building an acceleration sensor into the earphone makes it possible to recognize the user's motion state: for example, movements such as nodding, raising the head, shaking the head, lowering the head, walking, running, standing, and sitting down can all be converted by the earphone's G-sensor into electrical signals, with different electrical signals corresponding to different motion states.
For example, the acceleration sensor may be a three-axis acceleration sensor, which measures spatial acceleration and can fully reflect the nature of an object's movement. By measuring the components of the acceleration signal produced by the user's head along three coordinate axes, a three-axis acceleration sensor can accurately detect the head's acceleration signal even when many different head movements are possible.
In this embodiment, by building the acceleration sensor into the earphone and embedding a motion-state recognition algorithm in the earphone, the earphone gains the ability to recognize the user's motion state. This recognition can be implemented in several ways. For example, after the acceleration sensor collects an electrical signal, the signal can be matched against the electrical signals of various motion states; if it matches the electrical signal of a target motion state, the user's current motion state is recognized as that target motion state. Alternatively, the collected electrical signal can be converted into acceleration information, expressed as the magnitude and direction of acceleration, and the user's current motion state is determined from that acceleration information. Any applicable implementation may be used, which is not limited in this embodiment.
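The "match against per-state signal templates" approach above can be sketched as nearest-template classification. The waveforms and the distance threshold below are invented for illustration; a production recognizer would use real recorded templates or a trained model.

```python
import math

# Hypothetical one-axis signal templates for two head movements.
TEMPLATES = {
    "nod":   [0.0, 1.0, 0.0, -1.0, 0.0],
    "shake": [1.0, -1.0, 1.0, -1.0, 1.0],
}

def classify_motion(signal, max_dist=1.0):
    """Return the motion state whose template waveform is closest to the
    captured sensor signal, or None if nothing is close enough."""
    best, best_d = None, float("inf")
    for name, template in TEMPLATES.items():
        d = math.dist(signal, template)    # Euclidean distance
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_dist else None
```

The rejection threshold `max_dist` plays the role of avoiding accidental triggers: signals that resemble no template yield no motion state at all.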
Step 103: Determine the target processing associated with the motion state under the working state.
In this embodiment, target processing includes answering or rejecting a call, marking target audio in a call, turning the volume up or down, enabling or disabling a speech processing function, giving the user a voice prompt, and so on, or any other applicable processing, which is not limited in this embodiment.
Here, a call includes but is not limited to a phone call and an audio or video call in instant-messaging software. The target audio in a call includes the audio of either party. Speech processing includes recognizing and understanding audio and producing corresponding feedback, such as converting the audio into corresponding text or commands, recognizing the speech information in the audio and responding according to the understanding, or any other applicable speech processing, which is not limited in this embodiment; a voice assistant function is one example. Voice prompts include but are not limited to prompts reminding the user of prolonged sitting or prolonged head-lowering.
In this embodiment, a motion state is associated with target processing: after the earphone recognizes a motion state, it determines the target processing associated with that motion state, and the same motion state can be associated with different target processing under different working states. For example, in the in-call or incoming-call working state, the target processing associated with shaking the head can be set to rejecting the call, while in the audio-playing working state, the target processing associated with shaking the head can be set to switching songs.
Step 104: Execute the target processing.
In this embodiment, after the target processing associated with the motion state under the working state is determined, that target processing is executed. Some target processing can be executed independently on the earphone; for other target processing, the earphone sends the instruction to be executed to a connected device such as a mobile terminal, to the earphone storage case, or over the Internet to a cloud server. That is, the earphone controls the mobile terminal, storage case, or cloud server to execute it.
For example, in the in-call working state, a nod can trigger answering the call, a head shake can trigger rejecting the call, and a nod can mark target audio in the call. In the audio-playing working state, a head shake can trigger switching songs, raising the head twice can trigger turning the volume up, and lowering the head twice can trigger turning it down. In the standby working state, quickly lowering the head to the lower left can trigger waking the speech processing function.
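The association described above, where the same motion triggers different processing depending on the working state, can be sketched as a lookup keyed by the (working state, motion state) pair. The concrete pairings below merely restate the examples in the text and are not a fixed specification.

```python
# (working state, motion state) -> target processing, per the examples.
ACTION_MAP = {
    ("in_call",       "nod"):                  "answer_call",
    ("in_call",       "shake_head"):           "reject_call",
    ("playing_audio", "shake_head"):           "switch_track",
    ("playing_audio", "raise_head_twice"):     "volume_up",
    ("playing_audio", "lower_head_twice"):     "volume_down",
    ("standby",       "fast_lower_head_left"): "wake_voice_assistant",
}

def target_processing(working_state, motion_state):
    """Return the associated target processing, or None if the pair
    has no configured action (no accidental trigger)."""
    return ACTION_MAP.get((working_state, motion_state))
```

Because the key includes the working state, "shake_head" maps to rejecting a call in one state and switching tracks in another, which is exactly the reuse the method relies on.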
In this embodiment, optionally, executing the target processing includes at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, where the voice prompt includes at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
When marking target audio in a call, the target audio needs to be stored and then marked. For example, during a call, when the user nods for the first time, the target audio following that nod is stored on the earphone, the storage case, the mobile terminal, or a cloud server; when the user nods a second time, storage of the target audio ends and the target audio is marked as important content. The target audio can also be marked with other labels: different motion states can be set to correspond to different labels, and the target audio is marked with the label corresponding to the motion state.
Processing during audio playback can include starting playback, pausing playback, ending playback, switching the audio being played, and so on, or any other applicable processing, which is not limited in this embodiment. The first audio and second audio can be stored on the earphone, on the storage case, on a cloud server, or on the mobile terminal.
When the first or second audio is stored on the earphone, the earphone can directly execute starting to play the first audio, pausing it, ending its playback, or switching to the second audio while the first is playing. When the first or second audio is stored on the storage case, the earphone can send the processing instruction for starting, pausing, stopping, or switching playback to the storage case via Bluetooth, thereby controlling the storage case to execute the corresponding processing. When the first or second audio is stored on a cloud server, the earphone can send the corresponding processing instruction to the cloud server via a mobile communication chip, thereby controlling the cloud server to execute the processing. When the first or second audio is stored on the mobile terminal, the earphone can send the corresponding processing instruction to the mobile terminal via Bluetooth, thereby controlling the mobile terminal to execute the processing.
The speech processing function includes the algorithms, databases, and computing resources invoked to process the speech in audio, or any other applicable content related to speech processing, which is not limited in this embodiment.
The speech processing function can invoke the computing resources of the earphone, the storage case, a cloud server, and so on, without limitation. For example, one speech processing function invokes only the earphone's local computing resources and uses a local speech recognition model to recognize the speech in the audio; the model stores speech features extracted from pre-collected audio, the recognizable speech is relatively limited to the features in the local model, and the recognition speed is limited by local computing resources. Another speech processing function uses a cloud server: the audio is uploaded to the cloud server, the server's computing resources are invoked, a speech recognition model recognizes and understands the speech in the audio, and corresponding feedback is produced. No longer limited by local computing resources and sample libraries, this can achieve better speech-processing results and more complex and diverse outcomes.
Before the earphone recognizes the motion state associated with enabling the speech processing function, audio data it collects is not handed to the speech processing function; after that point, collected audio data is handed to the speech processing function for processing. A motion state can also be associated with disabling the speech processing function: when that motion state is recognized, the function is turned off. For example, if the collected audio data is the user saying "What's the temperature today?", the earphone and the cloud server process it with the speech processing function, obtain the result "It is 25 degrees today", send the result to the earphone, and play it.
As for giving the user a voice prompt, the voice prompt includes at least one of a prolonged-sitting prompt and a prolonged head-lowering prompt. When the user is recognized as having been sitting for a long time, the earphone plays a prolonged-sitting prompt; when the user is recognized as having kept the head lowered for a long time, the earphone plays a prolonged head-lowering prompt, reminding the user of the unhealthy state. For example, the spoken prompt may be "You have been sitting for over an hour" or "You have kept your head down for over half an hour".
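The storage-location-dependent dispatch above can be sketched as a routing table from audio location to transport and target. The route names and callback signatures are illustrative stand-ins for the Bluetooth and cellular links the text describes.

```python
# audio location -> (transport, target); "local" means run on the earbud.
ROUTES = {
    "earbud": ("local",     None),
    "case":   ("bluetooth", "case"),
    "phone":  ("bluetooth", "phone"),
    "cloud":  ("cellular",  "cloud"),
}

def dispatch(command, audio_location, send, run_locally):
    """Execute a playback command where the audio actually lives:
    locally on the earbud, or forwarded over the matching transport."""
    transport, target = ROUTES[audio_location]
    if transport == "local":
        return run_locally(command)
    return send(transport, target, command)
```

Injecting `send` and `run_locally` keeps the routing decision testable separately from the radio stack.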
In this embodiment, optionally, before giving the user the voice prompt, the method may further include: recognizing the user's current motion state through the acceleration sensor built into the earphone; determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, performing the step of giving the user the voice prompt.
When a voice prompt is to be given, the user's motion state has often already been monitored for some time, and the prompt is issued only when the earphone judges that the motion state over that period meets certain conditions. Sometimes, however, the user is at that moment already changing the earlier motion state and no longer needs the prompt. To avoid unnecessary disturbance, before giving the voice prompt the earphone recognizes the user's current motion state and then determines whether it meets the preset requirement; the step of giving the voice prompt is performed only if the requirement is met, and skipped otherwise.
The preset requirement can correspond to the voice prompt. For example, before a prolonged-sitting prompt, the preset requirement may be that the user has not currently stood up or moved substantially: when it is detected that within a certain period the user has walked fewer than a preset number of steps and is not currently active, the voice reminder is played on the earphone. Before a prolonged head-lowering prompt, the preset requirement may be that the current degree of head-lowering exceeds a preset threshold. Any applicable preset requirement can be set according to actual needs, which is not limited in this embodiment.
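The sedentary-prompt guard described above reduces to two checks over the detection window. The default threshold is made up for illustration; the patent only requires "fewer than a preset number of steps" and "not currently active".

```python
def should_prompt_sedentary(steps_in_window, currently_active,
                            max_steps=50):
    """Prompt about prolonged sitting only if the user took fewer than
    `max_steps` steps over the check window AND is not moving right now
    (i.e., has not just stood up or started exercising)."""
    return steps_in_window < max_steps and not currently_active
```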
In this embodiment, optionally, one implementation of executing the target processing includes: sending the processing instruction corresponding to the target processing to the earphone storage case or a cloud server, to control the storage case or cloud server to execute the target processing.
The earphone can establish a connection with the storage case or a cloud server. For example, the earphone can connect to the storage case via Bluetooth, or connect to a cloud server on a network through a built-in mobile communication chip.
In some situations, the earphone does not execute the target processing directly: the processing instruction corresponding to the target processing on the earphone is sent to the storage case or cloud server, which is thereby controlled to execute the target processing. The processing instruction can include audio data collected by the earphone or the storage case; the audio data is handed over to the storage case or cloud server, which processes it using the speech processing function.
For example, the earphone sends the processing instruction for pausing the first audio to the storage case, and the storage case pauses playback of the first audio. Limited by its size or battery, the earphone cannot store much data or implement functions that demand high computing power; using the more capable storage case or cloud server for the target processing solves the problem that computation-heavy functions cannot rely on the earphone alone, and expands the earphone's computing capability.
In this embodiment, when executing the target processing, besides the storage case, a cloud server can also execute it: after obtaining the processing instruction, the storage case sends it to the cloud server, the cloud server executes the target processing corresponding to the instruction, and then sends the processing result back to the storage case. When the storage case's computing capability is insufficient to execute the target processing, it can forward the processing instruction to the cloud server. For example, the cloud server translates the audio data to obtain target audio data or target text data in a target language; any applicable processing may be involved, which is not limited in this embodiment. The cloud server can provide far greater computing power, so it can execute more complex target processing, implement more and more complex functions, reduce the storage case's power consumption, increase processing speed, and so on.
In summary, according to this embodiment of the application, by detecting the working state of the earphone, recognizing the user's motion state through the acceleration sensor built into the earphone, determining the target processing associated with the motion state under the working state, and executing the target processing, the earphone can trigger a variety of target processing according to the user's motion state without requiring the user to trigger it by hand movements or speech. This improves the convenience of triggering various target processing. Moreover, the same motion state of the user can correspond to different target processing under different working states of the earphone, so that the earphone can trigger more kinds of target processing, solving the problem that in some scenarios it is difficult for the user to trigger target processing by hand movements or speech.
Referring to FIG. 2, a flowchart of the steps of another embodiment of a processing execution method of the present application is shown. The method is applied to an earphone and may specifically include the following steps:
Step 201: Detect the working state of the earphone.
In this embodiment, for the specific implementation of this step, reference may be made to the description in the foregoing embodiment, which is not repeated here.
Step 202: Collect acceleration information of the earphone through the acceleration sensor.
In this embodiment, acceleration information includes the magnitude and direction of acceleration, and the acceleration sensor can collect the earphone's current acceleration information. Some motion states can be determined from the current acceleration information alone, while others can only be determined from acceleration information gathered over a period of time. The earphone's acceleration information is therefore collected either at the current moment or over a period of time.
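Since some motion states need a window of readings rather than a single sample, a bounded buffer of recent accelerometer samples is a natural way to hold the collected acceleration information. This is a sketch under that assumption; the window size is arbitrary.

```python
from collections import deque

class AccelWindow:
    """Keep only the most recent `size` (ax, ay, az) samples; older
    samples drop off automatically as new ones arrive."""

    def __init__(self, size=3):
        self.samples = deque(maxlen=size)

    def add(self, ax, ay, az):
        self.samples.append((ax, ay, az))

    def latest(self):
        return self.samples[-1]
```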
Step 203: Determine the user's motion state according to the acceleration information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
In this embodiment, the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude, or any other applicable state information, which is not limited in this embodiment. The above motion states of the user can be determined from the collected acceleration information.
Head movement may be nodding, shaking the head, lowering the head, raising the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, raising the head to the upper right, etc., performed while the user wears the earphone, or any other applicable head movement, which is not limited in this embodiment.
Whole-body movement may be walking, running, standing, sitting down, etc., performed while the user wears the earphone, or any other applicable whole-body movement, which is not limited in this embodiment.
Movement frequency may be the frequency of a head movement or whole-body movement, for example the number of nods completed per unit time, the number of head shakes per unit time, the number of lower-left head-lowerings per unit time, the number of steps per unit time while walking, etc., or any other applicable movement frequency, which is not limited in this embodiment.
Movement speed may be the speed of a head movement or whole-body movement, for example the speed of lowering the head to the lower left, the walking speed, or the running speed, or any other applicable movement speed, which is not limited in this embodiment.
Movement duration may be the duration of a head movement or whole-body movement, for example how long the head stays lowered or how long the user stays seated, or any other applicable movement duration, which is not limited in this embodiment.
Movement amplitude may be the amplitude of a head movement or whole-body movement, for example the angle by which the head is lowered relative to the normal position, or the angle swept from side to side when shaking the head, or any other applicable movement amplitude, which is not limited in this embodiment.
In this embodiment, optionally, head movement includes at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; whole-body movement includes at least one of: walking, running, standing, and sitting down.
Lowering the head, raising the head, shaking the head, nodding, and the like are fairly common actions and, as head movements, are prone to accidental triggering. Lowering the head to the lower left or lower right and raising the head to the upper left or upper right are less common yet still simple head movements, so they can avoid the problem of accidental triggering while remaining easy for the user to perform, making them convenient to use.
In this embodiment, optionally, determining the user's motion state according to the acceleration information can be implemented in at least one of the following ways:
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement.
Head movements and whole-body movements can be of many kinds; one or several of them are preset as target head movements or target whole-body movements, and these target movements are set to be associated with various target processing.
The preset frequency requirement may be that the movement frequency matches a certain preset frequency, or that the movement frequency exceeds a certain preset frequency, etc., or any other applicable preset frequency requirement, which is not limited in this embodiment.
For example, in the in-call or audio-playing working state, if it is determined from the acceleration information that the user raised the head and did so twice within a unit time, the user is determined to have raised the head twice, and the corresponding target processing can be turning the volume up; if it is determined from the acceleration information that the user lowered the head and did so twice within a unit time, the user is determined to have performed the target head movement of lowering the head twice, and the corresponding target processing can be turning the volume down.
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement.
The preset speed requirement may be that the movement speed falls within a preset speed range, or that it exceeds a certain preset speed, etc., or any other applicable preset speed requirement, which is not limited in this embodiment. Requiring the speed of the target head movement or whole-body movement to meet a preset speed requirement can avoid accidental triggering caused by some unintentional movements of the user and reduce processing errors.
For example, if it is determined from the acceleration information that the user lowered the head to the lower left at a speed exceeding a preset speed, the user is determined to have quickly lowered the head to the lower left, and the corresponding target processing can be waking the voice assistant on the earphone or the phone.
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement.
The preset time requirement may be that the movement duration exceeds a preset length of time, or that it exceeds a certain proportion of the detection window, or any other applicable preset time requirement, which is not limited in this embodiment.
For example, if within a certain period it is determined from the acceleration information that the user lowered the head and the head stayed lowered for more than 95% of that period, the user is determined to have kept the head lowered for a long time, and the corresponding target processing can be giving the user a voice prompt.
Determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
The preset amplitude requirement may be that the movement amplitude exceeds a preset amplitude, or that it falls within a preset amplitude range, or any other applicable preset amplitude requirement, which is not limited in this embodiment.
For example, if within a certain period it is determined from the acceleration information that the user lowered the head and the head-lowering amplitude exceeds a preset angle, the user is determined to have kept the head lowered for a long time, and the corresponding target processing can be giving the user a voice prompt.
Of the above ways of determining the user's motion state, a single one may be used, or several may be combined. For example, it may be determined from the acceleration information that the user performed a target head movement or target whole-body movement, that the movement duration of the target movement meets the preset time requirement, and that its movement amplitude meets the preset amplitude requirement, imposing requirements on both the movement duration and the movement amplitude at the same time.
For example, if within a certain period it is determined from the acceleration information that the user lowered the head, that the head stayed lowered for more than 95% of that period, and that the head-lowering amplitude exceeds a preset angle, the user is determined to have kept the head lowered for a long time, and the corresponding target processing can be giving the user a voice prompt.
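The combined duration-plus-amplitude check in the prolonged head-lowering example can be sketched as follows. The 95% ratio comes from the text; the 30-degree angle threshold and the sample encoding are assumptions made for the sketch.

```python
def is_prolonged_head_down(samples, window_s,
                           min_ratio=0.95, min_angle_deg=30.0):
    """Combined duration + amplitude requirement.

    samples: list of (seconds, head_angle_deg) segments covering the
    detection window. A segment counts as "head down" only if its angle
    meets the amplitude threshold; the movement qualifies only if such
    segments cover at least `min_ratio` of the window.
    """
    down_time = sum(t for t, angle in samples if angle >= min_angle_deg)
    return down_time >= min_ratio * window_s
```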
In this embodiment, optionally, one implementation of determining the user's motion state according to the acceleration information can include: sending the acceleration information to the earphone storage case or a cloud server; and receiving the motion state determined by the storage case or cloud server according to the acceleration information.
In some situations, limited by the earphone's size or battery, the earphone cannot determine the user's motion state on its own, and can instead send the collected acceleration information to the storage case or a cloud server. The earphone and the storage case can be connected via Bluetooth, with the acceleration information transmitted over Bluetooth. The earphone can also have a mobile communication chip, connect to a dedicated network or the Internet through it, and send the acceleration information to a cloud server through the chip. The storage case or cloud server determines the user's motion state from the acceleration information, using the ways of determining the motion state described above, and after determining the motion state sends it back to the earphone. Using the storage case or cloud server expands the earphone's computing capability and solves the problem that the earphone cannot implement functions with high computing demands.
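The offload path above, where the earbud ships raw samples out and receives the decided motion state back, can be sketched with an injected round-trip function. `send_and_wait` and the request/reply field names are stand-ins; the patent leaves the Bluetooth and cellular protocol unspecified.

```python
def recognize_remotely(samples, send_and_wait):
    """Ship raw acceleration samples to the storage case or cloud
    server and return the motion state it decides on."""
    request = {"type": "classify_motion", "samples": samples}
    reply = send_and_wait(request)          # Bluetooth/cellular round trip
    return reply["motion_state"]
```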
Step 204: Determine the target processing associated with the motion state under the working state.
In this embodiment, for the specific implementation of this step, reference may be made to the description in the foregoing embodiment, which is not repeated here.
Step 205: Execute the target processing.
In this embodiment, for the specific implementation of this step, reference may be made to the description in the foregoing embodiment, which is not repeated here.
In summary, according to this embodiment of the application, by detecting the working state of the earphone, collecting the earphone's acceleration information through the acceleration sensor, determining the user's motion state from the acceleration information (where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude), determining the target processing associated with the motion state under the working state, and executing the target processing, the earphone can trigger a variety of target processing according to the user's motion state without requiring the user to trigger it by hand movements or speech. This improves the convenience of triggering various target processing. Moreover, the same motion state of the user can correspond to different target processing under different working states of the earphone, so that the earphone can trigger more kinds of target processing, solving the problem that in some scenarios it is difficult for the user to trigger target processing by hand movements or speech.
It should be noted that, for simplicity of description, the method embodiments are all expressed as combinations of a series of actions; however, those skilled in the art should know that the embodiments of this application are not limited by the described order of actions, because according to these embodiments some steps can be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of this application.
Referring to FIG. 3, a structural block diagram of an embodiment of a processing execution apparatus of the present application is shown. The apparatus is applied to an earphone and may specifically include:
a state acquisition module 301, configured to detect the working state of the earphone;
a state recognition module 302, configured to recognize the user's motion state through an acceleration sensor built into the earphone;
a processing determination module 303, configured to determine the target processing associated with the motion state under the working state;
a processing execution module 304, configured to execute the target processing.
In this embodiment, optionally, the state recognition module includes:
an information collection module, configured to collect acceleration information of the earphone through the acceleration sensor;
a state determination module, configured to determine the user's motion state according to the acceleration information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
In this embodiment, optionally, the head movement includes at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole-body movement includes at least one of: walking, running, standing, and sitting down.
In this embodiment, optionally, the state determination module is specifically configured for at least one of:
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
In this embodiment, optionally, the processing execution module is configured for at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, where the voice prompt includes at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
In this embodiment, optionally, the apparatus further includes:
a current-state recognition module, configured to recognize the user's current motion state through the acceleration sensor built into the earphone before the voice prompt is given to the user;
a state determination module, configured to determine whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, to perform the step of giving the user the voice prompt.
In this embodiment, optionally, the state recognition module includes:
an information sending module, configured to send the acceleration information to an earphone storage case or a cloud server;
a state receiving module, configured to receive the motion state determined by the earphone storage case or cloud server according to the acceleration information.
In this embodiment, optionally, the processing execution module includes:
an instruction sending module, configured to send the processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
In summary, according to this embodiment of the application, by detecting the working state of the earphone, recognizing the user's motion state through the acceleration sensor built into the earphone, determining the target processing associated with the motion state under the working state, and executing the target processing, the earphone can trigger a variety of target processing according to the user's motion state without requiring the user to trigger it by hand movements or speech. This improves the convenience of triggering various target processing. Moreover, the same motion state of the user can correspond to different target processing under different working states of the earphone, so that the earphone can trigger more kinds of target processing, solving the problem that in some scenarios it is difficult for the user to trigger target processing by hand movements or speech.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they can be located in one place or distributed over multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement it without creative effort.
FIG. 4 is a block diagram of an apparatus 400 for processing execution according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 402 may include one or more processors 420 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 402 may include one or more modules that facilitate interaction between the processing component 402 and other components; for example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation of the apparatus 400. Examples of such data include instructions for any application or method operating on the apparatus 400, contact data, phonebook data, messages, pictures, videos, and so on. The memory 404 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 406 provides power to the various components of the apparatus 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensors can sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the apparatus 400 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC) configured to receive external audio signals when the apparatus 400 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode. The received audio signal can be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor component 414 can detect the open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; it can also detect a change in position of the apparatus 400 or one of its components, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and temperature changes of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 416 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 404 including instructions, which can be executed by the processor 420 of the apparatus 400 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
FIG. 5 is a schematic structural diagram of a server in some embodiments of the present application. The server 1900 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 1922 (for example, one or more processors), a memory 1932, and one or more storage media 1930 (for example, one or more mass storage devices) storing application programs 1942 or data 1944. The memory 1932 and the storage medium 1930 may be transient or persistent storage. The programs stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Furthermore, the central processing unit 1922 may be configured to communicate with the storage medium 1930 and execute, on the server 1900, the series of instruction operations in the storage medium 1930.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
A non-transitory computer-readable storage medium is provided: when the instructions in the storage medium are executed by a processor of an apparatus (a smart terminal or a server), the apparatus is enabled to perform a processing execution method, the method including:
detecting the working state of the earphone;
recognizing the user's motion state through an acceleration sensor built into the earphone;
determining the target processing associated with the motion state under the working state;
executing the target processing.
Optionally, recognizing the user's motion state through the acceleration sensor built into the earphone includes:
collecting acceleration information of the earphone through the acceleration sensor;
determining the user's motion state according to the acceleration information, where the motion state includes at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
Optionally, the head movement includes at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; the whole-body movement includes at least one of: walking, running, standing, and sitting down.
Optionally, determining the user's motion state according to the acceleration information includes at least one of:
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
Optionally, executing the target processing includes at least one of: answering a call, rejecting a call, hanging up a call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, where the voice prompt includes at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
Optionally, before giving the user the voice prompt, the instructions for the operations further include:
recognizing the user's current motion state through the acceleration sensor built into the earphone;
determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, performing the step of giving the user the voice prompt.
Optionally, determining the user's motion state according to the acceleration information includes:
sending the acceleration information to an earphone storage case or a cloud server;
receiving the motion state determined by the earphone storage case or cloud server according to the acceleration information.
Optionally, executing the target processing includes:
sending the processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts of the embodiments, reference can be made between them.
Those skilled in the art should understand that the embodiments of this application can be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of this application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of this application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of this application are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the embodiments of this application have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of this application.
Finally, it should also be noted that in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that includes that element.
The processing execution method, processing execution apparatus, apparatus for processing execution, and machine-readable medium provided by this application have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of this application, and the descriptions of the above embodiments are only intended to help understand the method of this application and its core idea. At the same time, for a person of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of this application. In summary, the contents of this specification should not be construed as limiting this application.

Claims (26)

  1. A processing execution method, applied to an earphone, comprising:
    detecting the working state of the earphone;
    recognizing a user's motion state through an acceleration sensor built into the earphone;
    determining the target processing associated with the motion state under the working state;
    executing the target processing.
  2. The method according to claim 1, wherein recognizing the user's motion state through the acceleration sensor built into the earphone comprises:
    collecting acceleration information of the earphone through the acceleration sensor;
    determining the user's motion state according to the acceleration information, wherein the motion state comprises at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
  3. The method according to claim 2, wherein the head movement comprises at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; and the whole-body movement comprises at least one of: walking, running, standing, and sitting down.
  4. The method according to claim 2, wherein determining the user's motion state according to the acceleration information comprises at least one of:
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
  5. The method according to claim 1, wherein executing the target processing comprises at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, wherein the voice prompt comprises at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
  6. The method according to claim 5, wherein before giving the user the voice prompt, the method further comprises:
    recognizing the user's current motion state through the acceleration sensor built into the earphone;
    determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, performing the step of giving the user the voice prompt.
  7. The method according to claim 2, wherein determining the user's motion state according to the acceleration information comprises:
    sending the acceleration information to an earphone storage case or a cloud server;
    receiving the motion state determined by the earphone storage case or cloud server according to the acceleration information.
  8. The method according to claim 1, wherein executing the target processing comprises:
    sending a processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
  9. A processing execution apparatus, applied to an earphone, comprising:
    a state acquisition module, configured to detect the working state of the earphone;
    a state recognition module, configured to recognize a user's motion state through an acceleration sensor built into the earphone;
    a processing determination module, configured to determine the target processing associated with the motion state under the working state;
    a processing execution module, configured to execute the target processing.
  10. The apparatus according to claim 9, wherein the state recognition module comprises:
    an information collection module, configured to collect acceleration information of the earphone through the acceleration sensor;
    a state determination module, configured to determine the user's motion state according to the acceleration information, wherein the motion state comprises at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
  11. The apparatus according to claim 10, wherein the head movement comprises at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; and the whole-body movement comprises at least one of: walking, running, standing, and sitting down.
  12. The apparatus according to claim 10, wherein the state determination module is specifically configured for at least one of:
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
  13. The apparatus according to claim 9, wherein the processing execution module is configured for at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, wherein the voice prompt comprises at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
  14. The apparatus according to claim 13, wherein the apparatus further comprises:
    a current-state recognition module, configured to recognize the user's current motion state through the acceleration sensor built into the earphone before the voice prompt is given to the user;
    a state determination module, configured to determine whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, to perform the step of giving the user the voice prompt.
  15. The apparatus according to claim 10, wherein the state recognition module comprises:
    an information sending module, configured to send the acceleration information to an earphone storage case or a cloud server;
    a state receiving module, configured to receive the motion state determined by the earphone storage case or cloud server according to the acceleration information.
  16. The apparatus according to claim 9, wherein the processing execution module comprises:
    an instruction sending module, configured to send the processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
  17. An apparatus for processing execution, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for performing the following operations:
    detecting the working state of the earphone;
    recognizing a user's motion state through an acceleration sensor built into the earphone;
    determining the target processing associated with the motion state under the working state;
    executing the target processing.
  18. The apparatus according to claim 17, wherein recognizing the user's motion state through the acceleration sensor built into the earphone comprises:
    collecting acceleration information of the earphone through the acceleration sensor;
    determining the user's motion state according to the acceleration information, wherein the motion state comprises at least one of head movement, whole-body movement, movement frequency, movement speed, movement duration, and movement amplitude.
  19. The apparatus according to claim 18, wherein the head movement comprises at least one of: raising the head, lowering the head, nodding, shaking the head, lowering the head to the lower left, lowering the head to the lower right, raising the head to the upper left, and raising the head to the upper right; and the whole-body movement comprises at least one of: walking, running, standing, and sitting down.
  20. The apparatus according to claim 18, wherein determining the user's motion state according to the acceleration information comprises at least one of:
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement frequency of the target head movement or target whole-body movement meets a preset frequency requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement speed of the target head movement or target whole-body movement meets a preset speed requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement duration of the target head movement or target whole-body movement meets a preset time requirement;
    determining, according to the acceleration information, that the user has performed a target head movement or target whole-body movement, and that the movement amplitude of the target head movement or target whole-body movement meets a preset amplitude requirement.
  21. The apparatus according to claim 17, wherein executing the target processing comprises at least one of: answering a call, rejecting a call, hanging up a call, calling back, canceling an outgoing call, marking target audio in a call, starting to play first audio, pausing playback of the first audio, ending playback of the first audio, switching to playing second audio while the first audio is playing, turning the volume up, turning the volume down, starting recording, ending recording, pausing recording, enabling a speech processing function, disabling the speech processing function, and giving the user a voice prompt, wherein the voice prompt comprises at least one of: a prolonged-sitting prompt and a prolonged head-lowering prompt.
  22. The apparatus according to claim 21, wherein before giving the user the voice prompt, the instructions for the operations further comprise:
    recognizing the user's current motion state through the acceleration sensor built into the earphone;
    determining whether the current motion state meets a preset requirement, and if the current motion state meets the preset requirement, performing the step of giving the user the voice prompt.
  23. The apparatus according to claim 18, wherein determining the user's motion state according to the acceleration information comprises:
    sending the acceleration information to an earphone storage case or a cloud server;
    receiving the motion state determined by the earphone storage case or cloud server according to the acceleration information.
  24. The apparatus according to claim 17, wherein executing the target processing comprises:
    sending the processing instruction corresponding to the target processing to an earphone storage case or a cloud server, to control the earphone storage case or cloud server to execute the target processing.
  25. A machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the processing execution method according to one or more of claims 1 to 8.
  26. A computer program, comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to perform the processing execution method according to any one of claims 1 to 8.
PCT/CN2021/074913 2020-06-05 2021-02-02 Processing execution method, apparatus and readable medium WO2021244058A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010507491.4A CN111698600A (zh) 2020-06-05 2020-06-05 Processing execution method, apparatus and readable medium
CN202010507491.4 2020-06-05

Publications (1)

Publication Number Publication Date
WO2021244058A1 true WO2021244058A1 (zh) 2021-12-09

Family

ID=72479635

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/074913 WO2021244058A1 (zh) 2020-06-05 2021-02-02 Processing execution method, apparatus and readable medium

Country Status (2)

Country Link
CN (1) CN111698600A (zh)
WO (1) WO2021244058A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698600A (zh) * 2020-06-05 2020-09-22 Beijing Sogou Technology Development Co., Ltd. Processing execution method, apparatus and readable medium
CN113542963B (zh) * 2021-07-21 2022-12-20 RealMe重庆移动通信有限公司 Sound mode control method and apparatus, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104718766A (zh) * 2013-10-08 2015-06-17 清科有限公司 Multifunctional sports earphone
CN105848029A (zh) * 2016-03-31 2016-08-10 乐视控股(北京)有限公司 Earphone and control method and apparatus therefor
US20170195795A1 (en) * 2015-12-30 2017-07-06 Cyber Group USA Inc. Intelligent 3d earphone
CN107708009A (zh) * 2017-09-27 2018-02-16 Guangdong Genius Technology Co., Ltd. Operation method and apparatus of wireless earphone, wireless earphone, and storage medium
CN108540669A (zh) * 2018-04-20 2018-09-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Wireless earphone, earphone-detection-based control method, and related products
CN110246285A (zh) * 2019-05-29 2019-09-17 Suning Smart Terminal Co., Ltd. Cervical-spine activity prompting method and apparatus, earphone, and storage medium
CN111698600A (zh) * 2020-06-05 2020-09-22 Beijing Sogou Technology Development Co., Ltd. Processing execution method, apparatus and readable medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9237393B2 (en) * 2010-11-05 2016-01-12 Sony Corporation Headset with accelerometers to determine direction and movements of user head and method
CN104199655A (zh) * 2014-08-27 2014-12-10 深迪半导体(上海)有限公司 Audio switching method, microprocessor, and earphone
CN104361016B (zh) * 2014-10-15 2018-05-29 Guangdong Genius Technology Co., Ltd. Method and apparatus for adjusting music playback effect according to motion state
CN106535057B (zh) * 2016-12-07 2022-04-12 GoerTek Technology Co., Ltd. Method and system for switching operation between a master device and a slave device


Also Published As

Publication number Publication date
CN111698600A (zh) 2020-09-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21818318

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.03.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21818318

Country of ref document: EP

Kind code of ref document: A1