CN111248915A - Processing method and device and electronic equipment


Info

Publication number
CN111248915A
Authority
CN
China
Prior art keywords
behavior data
condition
actions
action
user
Prior art date
Legal status
Granted
Application number
CN201911398471.1A
Other languages
Chinese (zh)
Other versions
CN111248915B (en)
Inventor
马彬强 (Ma Binqiang)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911398471.1A
Publication of CN111248915A
Application granted
Publication of CN111248915B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Dentistry (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a processing method, a processing apparatus, and an electronic device. The method comprises: acquiring first behavior data, where the first behavior data is detected by a bone conduction sensor and comprises at least two actions; determining a control instruction corresponding to the first behavior data when the at least two actions satisfy a first condition, where the first condition relates at least to action type and action interval time; and executing a corresponding first operation according to the control instruction. Because the bone conduction sensor detects behavior data in which the user makes no obvious sound and exhibits no obvious movement, and the user's control intent for the electronic device can be determined from predefined behavior data, the method helps the user convey information or control the device autonomously in scenarios where speaking or visible actions are inconvenient, enriching the control forms of the device and improving the user experience.

Description

Processing method and device and electronic equipment
Technical Field
The present application relates to data processing technologies, and in particular, to a processing method and apparatus, and an electronic device.
Background
In some scenarios it is inconvenient for a user to make a sound or perform a visible action: for example, the user is held hostage and needs help, must remain silent, or has both hands occupied and cannot operate the device. In such circumstances, if the user wants to act on an intention, information transfer or device control would ordinarily require speaking or making a specific gesture. Because of these practical constraints, many users cannot convey information or control a device when they need to, and thus cannot call for help in a dangerous situation or operate an electronic device conveniently.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
A processing method, comprising:
acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions;
determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time;
and executing corresponding first operation according to the control instruction.
Optionally, the at least two actions satisfying the first condition includes: determining that the first condition is satisfied when the at least two actions belong to preset action types and the time interval between any two of the at least two actions is smaller than a first threshold.
Optionally, before acquiring the first behavior data, the method further includes:
acquiring action and/or behavior data input by a user;
configuring the first condition in accordance with the action and/or behavior data.
Optionally, if a vibration frequency generated when the user enters the action data is detected by the bone conduction sensor, configuring the first condition according to the action and/or behavior data includes:
configuring the first condition at least according to the vibration frequency detected by the bone conduction sensor when the user enters the action data.
Optionally, after executing the first operation according to the control instruction, the method further includes:
outputting voice prompt information so that the user can enter second behavior data according to the voice prompt information, where the voice prompt information includes the correspondence between behavior data and control instructions.
Optionally, after executing the first operation according to the control instruction, the method further includes:
acquiring second behavior data, wherein the second behavior data is the same as or different from the first behavior data;
and determining a corresponding control instruction based on the second behavior data, and executing a corresponding second operation according to the control instruction.
Optionally, after the acquiring the first behavior data, the method further includes:
determining, based on the first behavior data, whether the duration of the first behavior is greater than a second threshold;
if the duration of the first behavior is not greater than the second threshold, further determining whether the at least two actions satisfy the first condition.
Optionally, the at least two actions include tapping the teeth, grinding the teeth, and/or opening and closing the mouth.
The application also discloses a processing apparatus, includes:
the data acquisition module is used for acquiring first behavior data, the first behavior data are detected and acquired through a bone conduction sensor, and the first behavior data comprise at least two actions;
the instruction determining module is used for determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time;
and the operation execution module is used for executing corresponding first operation according to the control instruction.
The application also discloses an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the executable instructions comprise: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions; determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time; and executing corresponding first operation according to the control instruction.
As can be seen from the foregoing technical solutions, compared with the prior art, the embodiments of the present application disclose a processing method, a processing apparatus, and an electronic device. The method includes: acquiring first behavior data, where the first behavior data is detected by a bone conduction sensor and comprises at least two actions; determining a control instruction corresponding to the first behavior data when the at least two actions satisfy a first condition, where the first condition relates at least to action type and action interval time; and executing a corresponding first operation according to the control instruction. Because the bone conduction sensor detects behavior data in which the user makes no obvious sound and exhibits no obvious movement, and the user's control intent can be determined from predefined behavior data, the method helps the user convey information or control the device autonomously in scenarios where speaking or visible actions are inconvenient, enriching the control forms of the device and improving the user experience.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present application; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flow chart of a processing method disclosed in an embodiment of the present application;
FIG. 2 is a schematic view of wearable glasses provided with bone conduction sensors;
FIG. 3 is a flow chart of another processing method disclosed in an embodiment of the present application;
FIG. 4 is a flow chart of another processing method disclosed in an embodiment of the present application;
FIG. 5 is a flow chart of a fourth processing method disclosed in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
Fig. 1 is a flowchart of a processing method disclosed in an embodiment of the present application, and referring to fig. 1, the data processing method may include:
step 101: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions.
The bone conduction sensor may be disposed in a head-mounted wearable device, which may be, but is not limited to, various types of earphones, glasses, and the like. In this application, the bone conduction sensor can detect actions of the user's teeth or other movable bones of the head. To obtain accurate detection data, and considering the physiological structure of the human body, the bone conduction sensor can be placed near the ear on the head-mounted wearable device. Fig. 2 is a schematic view of wearable glasses provided with a bone conduction sensor. For ease of understanding, the sensor is drawn visibly in Fig. 2; in practice it may be placed on the side of the glasses close to the user's skin or embedded inside the device, invisible from the outside.
The first behavior data may describe actions of the user's teeth or other movable bones of the head. Since everyone inadvertently moves the teeth or other head bones in daily life, the first behavior data is defined to include at least two actions in order to avoid false detection. The at least two actions may be the same action, such as tapping the teeth twice in succession; different actions, such as tapping the teeth and grinding the teeth; or a partial combination of the same actions, such as tapping the teeth twice and grinding once.
It should be noted that the actions included in the first behavior data are all actions that make no distinct sound and present no distinct visible movement, so the processing method disclosed in this application can be applied in scenarios where sound and movement are restricted, which is convenient for users.
Step 102: determining a control instruction corresponding to the first behavior data when the at least two actions satisfy a first condition, where the first condition relates at least to action type and action interval time.
The first condition may relate, but is not limited, to at least the action type and the action interval time. That is, to satisfy the first condition, the action types included in the first behavior data must meet the requirement, and the intervals between the at least two actions must also meet the requirement.
Suppose two successive taps of the teeth correspond to a device wake-up command, but the user happened to tap the teeth once earlier, for example inadvertently after a sigh, and then taps once again five minutes later. Cumulatively the teeth have been tapped twice, but this is obviously not an intention to wake the device. Conversely, if the user makes two actions whose interval meets the requirement but whose types are not the preset action types, no preconfigured control instruction is matched either. Thus, the first condition described in this application relates to at least the action type and the action interval time.
Step 103: and executing corresponding first operation according to the control instruction.
After the control instruction corresponding to the first behavior data is determined, a corresponding first operation may be performed according to the control instruction, so that the electronic device (the head-mounted wearable device) acts according to the user's intention. For example, if a user facing an assailant taps the teeth three times in succession, the Bluetooth headset worn by the user directly sends distress information to an alarm center or a preconfigured number. The distress information may contain positioning data, so that the recipient can learn the user's location in time and initiate a rescue.
In this embodiment, the processing method uses the bone conduction sensor to detect behavior data in which the user makes no obvious sound and exhibits no obvious movement, and determines the user's control intent for the electronic device from predefined behavior data. This helps the user convey information or control the device autonomously in scenarios where speaking or visible actions are inconvenient, enriches the control forms of the device, and improves the user experience.
In the above embodiment, the at least two actions satisfying the first condition may include: determining that the first condition is satisfied when the at least two actions belong to preset action types and the time interval between any two of the at least two actions is smaller than a first threshold.
For example, suppose the preset action types include tapping the teeth and grinding the teeth. If the user performs one tooth tap followed by one mouth opening-and-closing action, the behavior includes two actions, but opening and closing the mouth is not a preset action type, so the at least two actions do not satisfy the first condition. As another example, if the user taps the teeth and grinds the teeth 2 seconds later, both actions belong to the preset action types, but because the interval between them is too long, the at least two actions again do not satisfy the first condition. Finally, if the first threshold is 1 second and the user taps the teeth twice in succession, tapping is a preset action type and the interval between the two taps is less than 1 second, so the at least two actions satisfy the first condition.
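The first-condition check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the action names, the 1-second threshold, and the reading of "any two actions" as consecutive actions are all assumptions.

```python
from typing import List, Tuple

# Hypothetical preset action types and first threshold (assumptions).
PRESET_TYPES = {"tap_teeth", "grind_teeth"}
FIRST_THRESHOLD_S = 1.0  # maximum allowed interval between consecutive actions

def satisfies_first_condition(actions: List[Tuple[str, float]]) -> bool:
    """actions: (action_type, timestamp_in_seconds) pairs, in chronological order."""
    if len(actions) < 2:
        return False  # the first behavior data must contain at least two actions
    # Every action must belong to a preset action type.
    if any(kind not in PRESET_TYPES for kind, _ in actions):
        return False
    # Each interval between consecutive actions must stay below the threshold.
    times = [t for _, t in actions]
    return all(b - a < FIRST_THRESHOLD_S for a, b in zip(times, times[1:]))
```

With these assumed values, two taps 0.5 s apart satisfy the condition, while a tap followed by a mouth opening, or two taps 2 s apart, do not.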
Fig. 3 is a flowchart of another processing method disclosed in the embodiment of the present invention, and referring to fig. 3, the processing method may include:
step 301: action and/or behavior data entered by a user is obtained.
Even for the same action or behavior, the data detected by the bone conduction sensor differs from person to person. For example, because tooth density and movement speed differ, the vibration frequency and duration detected while an elderly person grinds the teeth may differ from those of a young person. Therefore, in this application, before the head-mounted wearable device containing the bone conduction sensor is put into use, the action and/or behavior data entered by the user may be collected; subsequent detections are then judged against this data, within an allowable error range, to decide whether the user has performed a preset action or behavior.
In addition, when the bone conduction sensor detects the user's tooth taps or grinding actions, these are easily confused with everyday actions such as eating. The action and/or behavior data entered by the user is therefore acquired first and used as a reference for subsequently distinguishing, more precisely, the user's deliberate special actions from everyday inadvertent ones.
Step 302: configuring the first condition in accordance with the action and/or behavior data.
The first condition is configured according to the entered action and/or behavior data and serves as the standard for subsequently judging the user's behavior data or actions. As mentioned above, this effectively distinguishes deliberate special actions from everyday unconscious ones and improves the accuracy of action or behavior recognition.
Step 303: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions.
Step 304: and determining a control instruction corresponding to the first action data when the at least two actions meet a first condition.
Wherein the first condition is related to at least an action type and an action interval time.
Step 305: and executing corresponding first operation according to the control instruction.
In this embodiment, for each user, the standard conditions for subsequently discriminating the user's tooth actions or behaviors are configured by collecting and analyzing the data detected by the bone conduction sensor while that user performs a specific action or behavior. This effectively avoids misrecognitions and erroneous operations and improves the user experience.
In the above embodiment, if the vibration frequency generated when the user enters the action data is detected by the bone conduction sensor, configuring the first condition according to the action and/or behavior data may include: configuring the first condition at least according to that vibration frequency.
As mentioned above, the data detected by the bone conduction sensor differs when different people perform the same action, and also differs from the user's everyday inadvertent tooth movements. By entering the user's own action or behavior data, action and behavior recognition can be configured individually for each user.
In a specific implementation, the process of configuring the first condition may include the following:
1) The user can enter a tapping pattern (including the vibration frequency and the time intervals) so that a deliberate tapping action can be distinguished more accurately from everyday eating actions.
2) The interval between two tooth taps is set and recorded, and divided into short intervals and long intervals (the two can be distinguished during recording).
3) The long and short intervals are encoded and combined in different ways, such as one long and one short, one short and one long, two long, or two short. A long interval corresponds to code 0 and a short interval to code 1.
4) The tooth-grinding action may also be involved: the entry can be changed to one grind and one tap, with the tap and the grind corresponding to codes 0 and 1 respectively.
5) The resulting code is used for wake-up.
6) When the user enters the long and short tap intervals, a redundancy tolerance of 10%-20% (user-settable) is applied.
7) If a recorded long or short tap interval plus the tolerance exceeds the behavior recognition time, a configuration conflict is prompted.
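The interval-to-code scheme in steps 2)-6) can be sketched as below. This is a hedged illustration: the reference interval values (0.4 s short, 1.2 s long), the 15% tolerance, and the enrolled wake-up code are invented for the example, not taken from the patent.

```python
# Classify a detected tap interval as short (code "1") or long (code "0"),
# using the enrolled reference intervals and a user-settable tolerance.
def classify_interval(interval_s, short_ref, long_ref, tolerance=0.15):
    """Return '1' for a short interval, '0' for a long one, None if neither matches."""
    if abs(interval_s - short_ref) <= short_ref * tolerance:
        return "1"
    if abs(interval_s - long_ref) <= long_ref * tolerance:
        return "0"
    return None

def intervals_to_code(intervals, short_ref=0.4, long_ref=1.2):
    digits = [classify_interval(i, short_ref, long_ref) for i in intervals]
    return None if None in digits else "".join(digits)

# Wake up only when the detected code equals the enrolled code,
# e.g. "10" meaning one short interval followed by one long interval.
def should_wake(intervals, enrolled_code="10"):
    return intervals_to_code(intervals) == enrolled_code
```

An interval that falls outside both tolerance bands yields no code, so ambiguous taps (such as chewing) never match the enrolled wake-up code.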
Fig. 4 is a flowchart of another processing method disclosed in the embodiment of the present invention, and as shown in fig. 4, the processing method may include:
step 401: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions.
Step 402: and determining a control instruction corresponding to the first action data when the at least two actions meet a first condition.
Wherein the first condition is related to at least an action type and an action interval time.
Step 403: and executing corresponding first operation according to the control instruction.
Step 404: and outputting voice prompt information so that a user can input second behavior data according to the voice prompt information.
The voice prompt information comprises the corresponding relation between the behavior data and the control instruction.
For example, when the user clenches the teeth, the sensor receives the vibration of the tooth contact. When the user taps twice quickly, the system wakes up and prompts the user, by voice through the earphone, for the next operation. For example: tap the teeth twice quickly again to play music, tap three times quickly to dial an emergency call, or tap once to cancel; more complex instructions can also be given by voice. Of course, in a specific implementation, the emergency call may also be dialed directly when the user taps the teeth three times quickly while the device is not yet awake.
Step 405: and acquiring second behavior data, wherein the second behavior data is the same as or different from the first behavior data.
The first behavior data alone may not complete the operation the user intends, and further behavior control may be required. For example, the user wants an ear-worn device in the sleep state to play a song: the user wakes the device through the first behavior and then, after wake-up, starts playback through the second behavior.
Step 406: and determining a corresponding control instruction based on the second behavior data, and executing a corresponding second operation according to the control instruction.
Considering that in some scenarios the user cannot realize a control intention through a single round of behavior data, in this embodiment the final control intention is realized through at least two rounds of behavior data entry. Controlling the electronic device without obvious sound or obvious movement is more covert, convenient for the user, and simple to operate.
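The two-stage interaction of this embodiment (first behavior wakes the device, a voice prompt announces the mapping, a second behavior selects the operation) can be sketched as a small state machine. The tap counts and the operation names follow the earphone example above but are still illustrative assumptions, not the patent's concrete design.

```python
# Minimal two-stage controller: stage 1 wakes the device, stage 2 maps a
# second behavior (here, a tap count) to an operation.
class ToothControl:
    WAKE_TAPS = 2
    SECOND_STAGE = {2: "play_music", 3: "dial_emergency", 1: "cancel"}

    def __init__(self):
        self.awake = False

    def on_taps(self, count):
        if not self.awake:
            if count == self.WAKE_TAPS:
                self.awake = True
                # Corresponds to the voice prompt announcing the mapping.
                return "prompt: tap 2x to play music, 3x for emergency, 1x to cancel"
            return None  # ignore behavior that does not match the wake-up code
        self.awake = False  # the second behavior completes the interaction
        return self.SECOND_STAGE.get(count)
```

Before wake-up, non-matching behavior is ignored; after wake-up, the same tap count can mean something different, which is exactly why the second behavior data "may be the same as or different from" the first.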
Fig. 5 is a flowchart of a fourth processing method disclosed in the embodiment of the present invention, and with reference to fig. 5, the processing method may include:
step 501: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions.
Step 502: it is determined whether the duration of the first activity is greater than a second threshold based on the first activity data, and if not, step 503 is entered.
In some cases, the user unconsciously performs habitual actions such as grinding the teeth. If the user grinds repeatedly, for longer than any grinding sequence in the preset behavior data, the user cannot be considered to intend to control the device by grinding. For example, suppose the execution time of every preset behavior is at most 3 seconds. If the detected duration of the user's first behavior exceeds 3 seconds, it is treated as an unintended operation and no subsequent processing is performed; if it does not exceed 3 seconds, processing continues.
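The duration gate can be sketched as follows, assuming (as in the example above) that 3 seconds is the longest execution time among all enrolled behaviors; the threshold value and the timestamp representation are illustrative.

```python
# Reject behavior that lasts longer than the longest enrolled behavior,
# treating it as an unintentional habit such as prolonged tooth grinding.
SECOND_THRESHOLD_S = 3.0  # assumed longest execution time of any enrolled behavior

def passes_duration_gate(action_timestamps):
    """action_timestamps: seconds at which each detected action occurred."""
    if not action_timestamps:
        return False
    duration = max(action_timestamps) - min(action_timestamps)
    return duration <= SECOND_THRESHOLD_S
```

Only behavior that passes this gate goes on to the first-condition check of step 503.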
Step 503: and determining a control instruction corresponding to the first action data when the at least two actions meet a first condition.
Wherein the first condition is related to at least an action type and an action interval time.
Step 504: and executing corresponding first operation according to the control instruction.
In this embodiment, the duration of the first behavior is used to judge whether it is a control behavior reflecting the user's real intention; processing based on the first behavior data continues only after habitual, inadvertent behaviors have been excluded, which improves the accuracy of behavior recognition.
In the various embodiments described above, the at least two actions may include, but are not limited to, tapping the teeth, grinding the teeth, and/or opening and closing the mouth. Different actions produce different vibration frequencies, so the type of action performed can be determined by identifying the vibration frequency and matching it against the preset correspondence between frequencies and actions; the user's operation instruction is then determined from the correspondence between recognized actions and instructions. For example, suppose tapping the teeth corresponds to a first vibration frequency, grinding corresponds to a second vibration frequency, and the preset configuration maps one tap followed by one grind to a device-shutdown instruction. If the bone conduction sensor detects the first frequency and then the second frequency within 3 seconds, the system first determines from the two frequencies that the user performed the tap-then-grind behavior, and then generates and executes the shutdown instruction.
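The frequency-to-action-to-instruction chain just described can be sketched as two table lookups. The frequency bands and the tap-then-grind shutdown mapping are assumptions invented for the example, not values from the patent.

```python
# Assumed vibration-frequency bands (Hz) for each action type.
FREQ_BANDS = {
    "tap_teeth": (100.0, 300.0),
    "grind_teeth": (20.0, 100.0),
}
# Assumed mapping from a recognized action sequence to a control instruction.
INSTRUCTIONS = {("tap_teeth", "grind_teeth"): "shut_down_device"}

def recognize_action(freq_hz):
    """Map one detected vibration frequency to an action type, or None."""
    for action, (lo, hi) in FREQ_BANDS.items():
        if lo <= freq_hz < hi:
            return action
    return None

def instruction_for(freqs):
    """Map a sequence of detected frequencies to a control instruction, or None."""
    sequence = tuple(recognize_action(f) for f in freqs)
    return INSTRUCTIONS.get(sequence)
```

An unrecognized frequency or an unconfigured action sequence yields no instruction, mirroring the requirement that only preset action types trigger operations.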
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
The foregoing embodiments describe the method in detail. Since the method of the present application can be implemented by various types of apparatuses, the application also discloses an apparatus, described in the following specific embodiments.
Fig. 6 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention, and referring to fig. 6, the processing apparatus 60 may include:
the data acquisition module 601 is configured to acquire first behavior data, where the first behavior data is acquired through bone conduction sensor detection, and the first behavior data includes at least two actions.
An instruction determining module 602, configured to determine a control instruction corresponding to the first behavior data if the at least two actions satisfy a first condition, where the first condition is at least related to an action type and an action interval time.
The operation executing module 603 is configured to execute a corresponding first operation according to the control instruction.
In this embodiment, the processing apparatus uses the bone conduction sensor to detect behavior data in which the user makes no obvious sound and exhibits no obvious movement, and determines the user's control intent for the electronic device from predefined behavior data, thereby helping the user convey information or control the device autonomously in scenarios where speaking or visible actions are inconvenient, enriching the control forms of the device, and improving the user experience.
In other implementations, the processing device may further include an action entry module for acquiring action and/or behavior data entered by the user, and a condition configuration module for configuring the first condition according to the entered action and/or behavior data. In this way, for each user, the reference conditions used to subsequently discriminate that user's tooth actions or behaviors are configured by collecting and analyzing the data detected by the bone conduction sensor while the user performs specific actions or behaviors, which effectively avoids misrecognition and erroneous operation and improves the user experience.
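The per-user configuration above can be sketched as follows. The idea of deriving a per-user vibration-frequency range from enrollment samples comes from the embodiments; the particular statistics (mean plus or minus a tolerance band) and all numeric values are illustrative assumptions:

```python
# Hypothetical sketch of the action entry / condition configuration modules:
# the user performs a specific action several times, the vibration frequencies
# detected by the bone conduction sensor are collected, and the first condition
# is configured as a per-user acceptance range. The tolerance factor is assumed.
from statistics import mean, stdev

def configure_first_condition(sample_frequencies_hz, tolerance=3.0):
    """Derive a (low, high) vibration-frequency range from enrollment samples."""
    m = mean(sample_frequencies_hz)
    s = stdev(sample_frequencies_hz)
    return (m - tolerance * s, m + tolerance * s)

def matches_condition(frequency_hz, condition):
    """Check whether a newly detected vibration frequency falls in the range."""
    low, high = condition
    return low <= frequency_hz <= high

# Enrollment: the same user taps their teeth five times.
condition = configure_first_condition([118.0, 121.5, 119.2, 120.8, 122.1])
```

A later detection is then accepted only if its frequency falls within this user-specific range, which is one way the configured condition could suppress false recognitions.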
In other implementations, the processing device may further include a voice prompt module configured to output voice prompt information after the operation execution module executes the first operation according to the control instruction, so that the user can input second behavior data according to the prompt; the voice prompt information includes the correspondence between behavior data and control instructions. Considering that in some scenarios the user cannot express a complete control intent through a single input of behavior data, in this embodiment the user's final control intent is realized through at least two inputs of behavior data. Because the electronic device is controlled without obvious sound or obvious motion, the control is more concealed, while remaining convenient and simple for the user to operate.
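A minimal sketch of this two-stage interaction, with an entirely hypothetical prompt text and behavior-to-instruction mapping, might be:

```python
# Hypothetical sketch of the voice prompt module: after the first operation,
# the device announces the correspondence between behavior data and control
# instructions, then interprets the second behavior data the user enters.
# All behavior names and instruction names are illustrative assumptions.
PROMPT_MAP = {
    "tap_twice": "ANSWER_CALL",
    "grind_once": "REJECT_CALL",
}

def voice_prompt():
    """Build the voice prompt text containing the behavior/instruction mapping."""
    lines = [f"{behavior} -> {instruction}" for behavior, instruction in PROMPT_MAP.items()]
    return "; ".join(lines)

def handle_second_behavior(behavior):
    """Map the user's second behavior data to its control instruction, if any."""
    return PROMPT_MAP.get(behavior)

print(voice_prompt())            # announced via text-to-speech in practice
print(handle_second_behavior("tap_twice"))
```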
In other implementations, the processing device further includes a time determination module configured to determine, based on the first behavior data, whether a duration of the first behavior is greater than a second threshold. Judging the duration of the first behavior determines whether it is a control behavior that matches the user's real intent; by continuing to process the first behavior data only after the user's habitual behaviors have been excluded, the accuracy of behavior recognition is improved.
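Following the logic of the time determination module (see also claim 7, where processing continues only when the duration is not greater than the second threshold), a sketch of this duration gate might look like the following; the threshold value and action representation are assumptions:

```python
# Hypothetical sketch of the time determination module: behavior data whose
# total duration exceeds the second threshold is treated as habitual behavior
# and discarded; shorter behavior data is passed on for instruction
# determination. Actions are (action_type, timestamp_in_seconds) pairs.
SECOND_THRESHOLD_S = 2.0  # assumed value

def duration_of(behavior_data):
    """Duration of the first behavior = time span covered by its actions."""
    times = [t for _, t in behavior_data]
    return max(times) - min(times)

def gate_by_duration(behavior_data):
    """Return the behavior data for further processing, or None if habitual."""
    if duration_of(behavior_data) > SECOND_THRESHOLD_S:
        return None  # likely a habitual behavior, not a control intent
    return behavior_data

short = [("tap_teeth", 0.0), ("tap_teeth", 0.5)]   # passes the gate
long_ = [("grind_teeth", 0.0), ("grind_teeth", 5.0)]  # filtered out
```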
For specific implementation of each module of the processing apparatus, reference may be made to detailed descriptions of related contents in the method embodiments, and details are not repeated here.
Further, the present application also discloses an electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the executable instructions comprise: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions; determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time; and executing corresponding first operation according to the control instruction.
Any one of the processing devices described in the above embodiments includes a processor and a memory, and the data acquisition module, the instruction determination module, the operation execution module, the action entry module, the condition configuration module, the voice prompt module, the time determination module, and the like in the above embodiments may be stored in the memory as program modules, and the processor executes the program modules stored in the memory to implement corresponding functions.
An embodiment of the present application further provides a computer storage medium, in which computer-executable instructions are stored, and when executed by a processor, the computer storage medium implements the following operations: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions; determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time; and executing corresponding first operation according to the control instruction.
The processor includes a kernel, and the kernel retrieves the corresponding program module from the memory. One or more kernels may be provided, and the processing described above is implemented by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
An embodiment of the present application provides a processor configured to execute a program, where the program, when run, performs the processing method described in the foregoing embodiments.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts among the embodiments, reference may be made to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and for relevant details reference may be made to the description of the method.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing, comprising:
acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions;
determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time;
and executing corresponding first operation according to the control instruction.
2. The processing method of claim 1, the at least two actions satisfying a first condition, comprising: and determining that a first condition is met under the condition that the at least two actions belong to preset action types and the time interval between any two actions in the at least two actions is smaller than a first threshold value.
3. The processing method according to claim 1, further comprising, before acquiring the first behavior data:
acquiring action and/or behavior data input by a user;
configuring the first condition in accordance with the action and/or behavior data.
4. The processing method according to claim 3, wherein a vibration frequency generated when the user enters the action data is detected and acquired by the bone conduction sensor, and the configuring the first condition according to the action and/or behavior data comprises:
configuring the first condition at least according to the vibration frequency detected and acquired by the bone conduction sensor when the user enters the action data.
5. The processing method according to claim 1, further comprising, after the executing of the first operation according to the control instruction:
and outputting voice prompt information so that a user can input second behavior data according to the voice prompt information, wherein the voice prompt information comprises the corresponding relation between the behavior data and the control instruction.
6. The processing method according to claim 1, further comprising, after the executing of the first operation according to the control instruction:
acquiring second behavior data, wherein the second behavior data is the same as or different from the first behavior data;
and determining a corresponding control instruction based on the second behavior data, and executing a corresponding second operation according to the control instruction.
7. The processing method according to claim 1, further comprising, after said acquiring first behavior data:
determining, based on the first behavior data, whether a duration of a first behavior is greater than a second threshold;
and in the event that the duration of the first behavior is not greater than the second threshold, further determining whether the at least two actions satisfy the first condition.
8. The processing method of any one of claims 1-7, wherein the at least two actions include tapping the teeth, grinding the teeth, and/or opening and closing the mouth.
9. A processing apparatus, comprising:
the data acquisition module is used for acquiring first behavior data, the first behavior data are detected and acquired through a bone conduction sensor, and the first behavior data comprise at least two actions;
the instruction determining module is used for determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time;
and the operation execution module is used for executing corresponding first operation according to the control instruction.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the executable instructions comprise: acquiring first behavior data, wherein the first behavior data is acquired through bone conduction sensor detection, and the first behavior data comprises at least two actions; determining a control instruction corresponding to the first behavior data under the condition that the at least two actions meet a first condition, wherein the first condition is at least related to action types and action interval time; and executing corresponding first operation according to the control instruction.
CN201911398471.1A 2019-12-30 2019-12-30 Processing method and device and electronic equipment Active CN111248915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398471.1A CN111248915B (en) 2019-12-30 2019-12-30 Processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111248915A true CN111248915A (en) 2020-06-09
CN111248915B CN111248915B (en) 2021-08-17

Family

ID=70943938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911398471.1A Active CN111248915B (en) 2019-12-30 2019-12-30 Processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111248915B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114360206A (en) * 2022-03-21 2022-04-15 荣耀终端有限公司 Intelligent alarm method, earphone, terminal and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116405A (en) * 2013-03-06 2013-05-22 胡三清 Real-time detection and control device and method for brain and muscle electricity in tooth movement states
CN103412640A (en) * 2013-05-16 2013-11-27 胡三清 Device and method for character or command input controlled by teeth
CN104020868A (en) * 2013-02-28 2014-09-03 联想(北京)有限公司 Information processing method and electronic equipment
CN104317388A (en) * 2014-09-15 2015-01-28 联想(北京)有限公司 Interaction method and wearable electronic equipment
US20150174468A1 (en) * 2012-01-19 2015-06-25 Nike, Inc. Action Detection and Activity Classification
CN104921732A (en) * 2015-06-12 2015-09-23 联想(北京)有限公司 Method for determining food intake action of user and electronic device
CN108337914A (en) * 2015-10-06 2018-07-27 柯尼卡美能达株式会社 Movement detection system, movement detection device, movement detection method and movement detection program
CN110187767A (en) * 2019-05-31 2019-08-30 奥佳华智能健康科技集团股份有限公司 A kind of massage armchair gestural control system and method
US10455324B2 (en) * 2018-01-12 2019-10-22 Intel Corporation Apparatus and methods for bone conduction context detection



Also Published As

Publication number Publication date
CN111248915B (en) 2021-08-17

Similar Documents

Publication Publication Date Title
US9414964B2 (en) Earplug for selectively providing sound to a user
CN107743289B (en) Intelligent sound box control method in intelligent household scene
JP6365939B2 (en) Sleep assist system
EP2120712B1 (en) Arrangement and method to wake up a sleeping subject at an advantageous time instant associated with natural arousal
US20150258301A1 (en) Sleep state management by selecting and presenting audio content
KR101839396B1 (en) System and method for device action and configuration based on user context detection from sensors in peripheral devices
WO2017133099A1 (en) Information processing method, device, wearable device and storage medium
CN105163180A (en) Play control method, play control device and terminal
CN104991755B (en) A kind of information processing method and electronic equipment
WO2011100890A1 (en) Reminding method of environmental sound and mobile terminal thereof
CN106527723B (en) Mobile terminal alarm-clock control method, device and mobile terminal
JP2009265818A (en) Driver awakening equipment
CN110035358B (en) Vehicle-mounted audio output device, audio output control method, and recording medium
US11100913B2 (en) Information security/privacy via a decoupled security cap to an always listening assistant device
CN111248915B (en) Processing method and device and electronic equipment
CN112487235A (en) Audio resource playing method and device, intelligent terminal and storage medium
CN107515765B (en) Alarm turn-off method, system and terminal equipment
CN110740550B (en) Control method and device for story accompanying lamp, story accompanying lamp and storage medium
CN106648540B (en) Music switching method and device
CN105791522A (en) Electronic alarm clock, electronic alarm clock shutdown method and terminal
US10666796B2 (en) Method and device for setting up a voice call
CN109275070B (en) Early warning prompting method and device for preventing microphone from being damaged and terminal equipment
CN111292771A (en) Method and device for controlling audio and video equipment and terminal equipment
CN111415442A (en) Access control method, electronic device and storage medium
CN111316623A (en) Alarm clock ringing method and system of intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant