CN113407046A - User action recognition method and device, electronic equipment and storage medium

Info

Publication number
CN113407046A
Authority
CN
China
Prior art keywords
user
action
sequence
data
determining
Prior art date
Legal status
Pending
Application number
CN202110726719.3A
Other languages
Chinese (zh)
Inventor
谢昂
黄翀宇
罗晨
鲁威
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202110726719.3A
Publication of CN113407046A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The disclosure relates to a user action recognition method and device, an electronic device and a storage medium, wherein the method includes the following steps: acquiring motion data of the equipment held by a user at the moments of abrupt change of the user action; determining a user posture data variation sequence based on that motion data; determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence; and determining a user action recognition result based on the first similarity. In essence, the method captures the motion data of the equipment held by the user only at the moments of abrupt change of the user action and then recognizes the user action on the basis of that data alone, so the amount of data that needs to be reported can be substantially reduced, the sensor data reporting rate is lowered, and the performance requirements on both the equipment held by the user and the terminal are reduced. The resulting data sequence is shorter, which shortens the time spent calculating the first similarity and reduces the space required to store the data sequence.

Description

User action recognition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction technologies, and in particular, to a user action recognition method and apparatus, an electronic device, and a storage medium.
Background
User action recognition is a means of computationally interpreting a user's body language. Current user action recognition methods mainly rely on components such as cameras or millimeter wave radar chips.
If user action recognition relies on a millimeter wave radar chip and/or a camera, high precision is required of those components. Moreover, obtaining the final action recognition result from the collected raw data involves a complex calculation process that places high demands on the hardware of the computing equipment. These factors make equipment capable of user action recognition expensive, which hinders large-scale deployment and adoption.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, the present disclosure provides a user action recognition method, apparatus, electronic device, and storage medium.
In a first aspect, the present disclosure provides a user action recognition method, including:
acquiring motion data of equipment held by a user at the moment of sudden change of user action;
determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of the abrupt change of the user action;
determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and determining a user action recognition result based on the first similarity.
In a second aspect, the present disclosure further provides a standard action recording method, including:
acquiring motion data of equipment held by a user at the moment of sudden change of the user action in the process of finishing a preset standard action by the user;
determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of the abrupt change of the user action;
and recording the user posture data variation sequence as the standard action posture data variation sequence.
In a third aspect, the present disclosure further provides a user action recognition apparatus, including:
the first acquisition module is used for acquiring motion data of equipment held by a user at the moment of sudden change of user action;
the first sequence determination module is used for determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of the abrupt change of the user action;
the similarity determining module is used for determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and the recognition result determining module is used for determining a user action recognition result based on the first similarity.
In a fourth aspect, the present disclosure also provides a standard action recording apparatus, including:
the second acquisition module is used for acquiring the motion data of the equipment held by the user at the moment of sudden change of the user action in the process of finishing the preset standard action by the user;
the second sequence determination module is used for determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of the abrupt change of the user action;
and the recording module is used for recording the user posture data variation sequence as the standard action posture data variation sequence.
In a fifth aspect, the present disclosure also provides an electronic device, including:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the user action recognition method and/or the standard action recording method as described above.
In a sixth aspect, the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the user action recognition method and/or the standard action recording method as described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
according to the technical scheme provided by the embodiment of the disclosure, in the whole process of user action identification, the millimeter wave chip and the camera are not required to be relied on for user action identification, the whole identification and calculation process is simple, the requirement on hardware of the computing equipment is low, and the requirements of large-scale landing and popularization can be met.
In the technical scheme provided by the embodiment of the disclosure, the motion data of the equipment held by the user is acquired at the moments of abrupt change of the user action, and the user posture data variation sequence is determined based on that motion data. In essence, only the motion data of the equipment held by the user at the moments of abrupt change is captured, and the subsequent user action recognition is based on that data alone, so the amount of data that needs to be reported can be substantially reduced, the sensor data reporting rate is lowered, and the performance requirements on the equipment held by the user and on the terminal are reduced. The data sequence formed by the technical scheme provided by the embodiment of the disclosure is shorter, which helps shorten the time spent calculating the first similarity and reduces the space required to store the data sequence.
The technical scheme provided by the embodiment of the disclosure can recognize specific actions completed by the user in space (such as drawing a horizontal stroke, drawing a vertical stroke, drawing a wavy line, drawing a circle, writing a digit, writing an English letter, and the like). The user action recognition method can be applied to controlling equipment through specific actions completed by the user during human-computer interaction, such as unlocking the equipment, switching the display interface of the equipment, or controlling a character in a game to complete a certain task.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is an application scenario diagram of a user action recognition method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a user action recognition method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of another user action recognition method provided by the embodiment of the present disclosure;
fig. 4 is a flowchart of a standard action recording method provided by the embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a user action recognition device in an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a standard action recording apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
Fig. 1 is an application scenario diagram of a user action recognition method according to an embodiment of the present disclosure. The user action recognition method provided by the application can be applied to the application environment shown in fig. 1. The user action recognition method is applied to a user action recognition system. The user action recognition system comprises a terminal 1 and a device 2 held by a user.
Alternatively, the terminal 1 and the device 2 held by the user may be integrated into a single body or may be separately provided. If the terminal 1 and the device 2 held by the user are separately set, the terminal 1 and the device 2 held by the user communicate with each other through a network.
The device 2 held by the user is equipped with a sensor which can be used for collecting the motion data of the user. The user-held device 2 may be, but is not limited to, a smartphone, an air mouse, a gamepad, a wearable device, and the like.
The terminal 1 is used for recognizing user actions during human-computer interaction. Specifically, the terminal 1 is configured to acquire motion data of the equipment held by the user at the moment of abrupt change of the user action; determine a user posture data variation sequence based on that motion data; determine a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence; and determine a user action recognition result based on the first similarity. The terminal 1 may specifically be, but is not limited to, a smart phone, a palm computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one machine, a smart home device, and the like.
Fig. 2 is a flowchart of a user action recognition method provided in an embodiment of the present disclosure, referring to fig. 2, the method includes:
and S110, acquiring motion data of the equipment held by the user at the moment of sudden change of the user action.
An abrupt change of the user action means that the moving direction of the user's limb changes greatly while the user completes a certain action. Illustratively, if the user draws a wavy line in the air with the right hand, the moments at which the user's right hand reaches a peak or a trough of the drawn wavy line can be regarded as moments of abrupt change of the user action.
When acquiring the motion data of the device held by the user, the motion data may be acquired by an inertial sensor installed in the device held by the user. In this case, the motion data of the device held by the user refers to the raw data collected by the inertial sensor.
There are various implementation methods of this step, and for example, the implementation method of this step may be: the equipment held by the user periodically collects and records the motion data of the equipment held by the user through an inertial sensor arranged in the equipment; then the equipment held by the user selects the motion data of the equipment held by the user at the moment of sudden change of the user action from all the collected motion data; and finally reporting the selected motion data of the equipment held by the user at the moment of sudden change of the user action to the terminal, so that the terminal can acquire the motion data of the equipment held by the user at the moment of sudden change of the user action.
Further, the device held by the user selects motion data of the device held by the user at the moment when the user action changes suddenly from all the collected motion data, specifically, the moving direction of the limb of the user at each collection moment is determined based on all the collected motion data; if the included angle between the moving direction of the user limb at the qth acquisition moment and the moving direction of the user limb at the qth-1 acquisition moment is larger than a preset included angle, determining that the moment of sudden change of the user action is the qth acquisition moment; and taking the motion data of the equipment held by the user at the qth acquisition moment as the motion data of the equipment held by the user at the moment when the user action suddenly changes. Wherein q is a positive integer greater than or equal to 2.
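As an illustrative, non-authoritative sketch of this selection step in Python, the fragment below assumes the raw samples have already been converted into unit movement-direction vectors; the `directions` array, the function name, and the 60° preset included angle are all hypothetical, not taken from the disclosure:

```python
import numpy as np

def find_mutation_moments(directions, preset_angle_deg=60.0):
    """Select the collection moments at which the user action changes abruptly.

    `directions` is an (N, 3) array of unit vectors giving the moving
    direction of the user's limb at each collection moment (index q here
    is 0-based, so q >= 1 corresponds to the patent's q >= 2).
    """
    cos_limit = np.cos(np.radians(preset_angle_deg))
    mutation_indices = []
    for q in range(1, len(directions)):
        # The included angle between successive directions exceeds the
        # preset angle exactly when its cosine falls below cos(preset angle).
        if np.dot(directions[q], directions[q - 1]) < cos_limit:
            mutation_indices.append(q)
    return mutation_indices
```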
Alternatively, the implementation method of this step may further include: the method comprises the steps that equipment held by a user firstly judges whether the current moment is the moment of user action mutation; if so, recording the motion data of the equipment held by the user at the current moment, and reporting the recorded motion data to the terminal, so that the terminal can acquire the motion data of the equipment held by the user at the moment when the user action is suddenly changed.
The essence of this step is to capture the motion data of the equipment held by the user when the user action changes abruptly and to use the motion data at those moments as the basis for user action recognition, rather than periodically and continuously collecting the motion data of the equipment held by the user and reporting all of the collected motion data to the terminal.
And S120, determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of abrupt change of the user action.
The user posture data refers to data capable of reflecting the body motion of the user. Optionally, the motion data of the equipment held by the user may be directly used as the user posture data; attitude angle data, rotation matrix data, or rotation vector data may also be employed as the user posture data. The attitude angle data, rotation matrix data, or rotation vector data can be regarded as the result of compressing the raw data collected by the sensor (i.e., the motion data of the equipment held by the user). Adopting attitude angle data, rotation matrix data, or rotation vector data as the user posture data reduces the complexity of subsequently calculating the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence, and improves calculation efficiency.
Compared with rotation matrix data or rotation vector data, attitude angle data, when selected as the user posture data, more directly reflects the changes in the user's limb movements.
Further, if attitude angle data is used as the user posture data, this step may be implemented as follows: determining attitude angle data of the equipment held by the user at the moment of abrupt change of the user action, based on the motion data of the equipment held by the user at that moment, where the attitude angle includes at least one of an azimuth angle, a pitch angle, and a roll angle; and determining a user attitude angle variation sequence based on a plurality of continuously determined attitude angle data.
The azimuth angle, the pitch angle, and the roll angle are all defined in the world coordinate system. Specifically, the azimuth (Azimuth) is the horizontal angle between the current pointing direction of the device held by the user and magnetic north. The pitch angle (Pitch) is the up-and-down tilt angle between the plane of the device held by the user and the ground plane. The roll angle (Roll) is the side-to-side tilt angle between the plane of the device held by the user and the ground plane.
The "determining the attitude angle data of the device held by the user at the moment of sudden change of the user's motion based on the motion data of the device held by the user at the moment of sudden change of the user's motion" may specifically be that the raw data collected by an accelerometer, a gyroscope, and a magnetometer, which are built in an inertial sensor, is subjected to data fusion processing based on a kalman filter, so as to obtain the attitude angle data of the device held by the user at the moment of sudden change of the user's motion.
The "determining the sequence of the change amount of the user attitude angle based on the plurality of continuously determined attitude angle data" may specifically be that, in the plurality of continuously determined attitude angle data, the attitude angle data at the moment of the abrupt change of the user action at any time is subtracted from the attitude angle data at the moment of the abrupt change of the user action at the previous time to obtain a series of change amounts of the attitude angle data; and arranging a series of attitude angle data variable quantities according to the time sequence of the user action mutation moment to obtain a user attitude angle variable quantity sequence.
For example, suppose that while the user completes a certain action, the moments of abrupt change of the user action are T1, T2, T3, T4, ..., Tn. At time T1 the azimuth angle is a1, the pitch angle is p1, and the roll angle is r1. At time T2 the azimuth angle is a2, the pitch angle is p2, and the roll angle is r2. At time T3 the azimuth angle is a3, the pitch angle is p3, and the roll angle is r3. At time T4 the azimuth angle is a4, the pitch angle is p4, and the roll angle is r4. ... At time Tn the azimuth angle is an, the pitch angle is pn, and the roll angle is rn. The azimuth angle variation sequence is then: a2-a1, a3-a2, a4-a3, ..., an-a(n-1). The pitch angle variation sequence is: p2-p1, p3-p2, p4-p3, ..., pn-p(n-1). The roll angle variation sequence is: r2-r1, r3-r2, r4-r3, ..., rn-r(n-1).
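A minimal sketch of this differencing step, assuming the attitude angles at the successive mutation moments T1..Tn have already been gathered into an (n, 3) array (the names are illustrative):

```python
import numpy as np

def attitude_variation_sequences(angles):
    """`angles` is an (n, 3) array whose rows are (azimuth, pitch, roll) at
    the successive mutation moments T1..Tn, in time order. Row i of the
    result is angles[i+1] - angles[i], matching the a2-a1, a3-a2, ... example.
    """
    deltas = np.diff(angles, axis=0)   # shape (n-1, 3)
    azimuth_seq = deltas[:, 0]         # a2-a1, ..., an-a(n-1)
    pitch_seq = deltas[:, 1]           # p2-p1, ..., pn-p(n-1)
    roll_seq = deltas[:, 2]            # r2-r1, ..., rn-r(n-1)
    return azimuth_seq, pitch_seq, roll_seq
```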
S130, determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence.
The preset standard action posture data variation sequence refers to a standard action posture data variation sequence stored in the terminal in advance, before the user action recognition method provided by the disclosure is executed.
The standard action posture data variation sequence is obtained through learning and recording. Optionally, the steps of the method for learning the standard action posture data variation sequence are similar to S110-S130 described above. Specifically, the terminal may send the user an instruction to complete a certain action, such as the instruction "please complete the action of drawing a circle with the right hand", and the user draws a circle with the right hand after receiving the instruction. During the process of the user drawing the circle with the right hand, the motion data of the equipment held by the user at the moments of abrupt change of the user action is acquired; a user posture data variation sequence is determined based on that motion data; and the user posture data variation sequence is recorded as the standard action posture data variation sequence. The learning and recording of the standard action posture data variation sequence is thus completed.
The execution subject that learns and records the standard action posture data variation sequence may be the same as or different from the execution subject of the user action recognition method provided by the present disclosure. If the two are different, the recorded standard action posture data variation sequence may be sent to the execution subject of the user action recognition method.
Various algorithms can be used to determine the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence. For example, the first similarity may be determined by one or more of a Hausdorff Distance algorithm, a Dynamic Time Warping (DTW) algorithm, a discrete Fréchet Distance algorithm, and a Longest Common Subsequence algorithm.
Compared with the other algorithms, the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence determined based on the dynamic time warping algorithm is more accurate. This is because, in practice, different users need different amounts of time to complete the same action. The dynamic time warping algorithm can automatically warp the time series (i.e., perform local scaling on the time axis) so that the shapes of the two sequences match as closely as possible, yielding the greatest possible similarity.
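As a sketch of the idea, a textbook dynamic time warping distance between two one-dimensional variation sequences can be computed as below. A smaller distance means a higher similarity; the mapping from distance to a similarity score (here 1/(1 + distance)) is an assumption, since the disclosure does not fix that mapping:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            # Extending the cheapest of the three admissible paths locally
            # stretches or compresses the time axis, so sequences completed
            # at different speeds can still be aligned.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def dtw_similarity(seq_a, seq_b):
    # An assumed distance-to-similarity mapping, for illustration only.
    return 1.0 / (1.0 + dtw_distance(seq_a, seq_b))
```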
Further, if the attitude angle variation sequence includes an azimuth angle variation sequence, a pitch angle variation sequence, and a roll angle variation sequence, this step may be implemented as follows: determining a second similarity between the azimuth angle variation sequence of the equipment held by the user and a preset standard action azimuth angle variation sequence; determining a third similarity between the pitch angle variation sequence of the equipment held by the user and a preset standard action pitch angle variation sequence; determining a fourth similarity between the roll angle variation sequence of the equipment held by the user and a preset standard action roll angle variation sequence; and determining the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence based on the second similarity, the third similarity, and the fourth similarity.
Further, there are various specific methods for determining the first similarity based on the second similarity, the third similarity, and the fourth similarity, and the present application does not limit this. Optionally, the geometric mean of the second, third, and fourth similarities is taken as the first similarity; or the arithmetic mean of the second, third, and fourth similarities is taken as the first similarity; or the sum of the squares of the second, third, and fourth similarities is first obtained, then the square root of that sum is taken, and the square root is used as the first similarity.
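The three combination schemes just listed can be written directly, e.g. in Python (the function name and the default choice are illustrative assumptions):

```python
import math

def first_similarity(s2, s3, s4, mode="arithmetic"):
    """Combine the azimuth (s2), pitch (s3) and roll (s4) similarities
    into the first similarity using one of the three listed schemes."""
    if mode == "geometric":
        return (s2 * s3 * s4) ** (1.0 / 3.0)
    if mode == "arithmetic":
        return (s2 + s3 + s4) / 3.0
    # Square root of the sum of the squares.
    return math.sqrt(s2 ** 2 + s3 ** 2 + s4 ** 2)
```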
And S140, determining a user action recognition result based on the first similarity.
There are various methods for implementing this step, and the present application does not limit this. Optionally, a similarity threshold may be set in advance; if the first similarity is greater than the similarity threshold, the user action is determined to be the standard action mentioned in S130; otherwise, the user action is determined not to be that standard action. The goal of recognizing the user action is thus achieved.
Further, a database containing the posture data variation sequences of a plurality of different preset standard actions may be set up. When the technical scheme provided by the present application is executed, the user posture data variation sequence obtained in S120 is compared with each standard action posture data variation sequence one by one to determine which standard action in the database the user action corresponds to, thereby finally recognizing the user action, as the sketch below illustrates.
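A hypothetical sketch covering both decision schemes, the single-action threshold test and the one-by-one database comparison (the 0.8 threshold and all names are assumptions):

```python
def recognize_action(user_seq, standard_db, similarity_fn, threshold=0.8):
    """Compare the user posture data variation sequence against every
    standard action in `standard_db` (a dict: action name -> variation
    sequence) and return the best match whose first similarity exceeds
    the preset threshold, or None if no standard action matches.
    """
    best_name, best_sim = None, threshold
    for name, standard_seq in standard_db.items():
        sim = similarity_fn(user_seq, standard_seq)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```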
According to the technical scheme provided by the embodiment of the disclosure, the whole process of user action recognition does not need to rely on a millimeter wave radar chip or a camera, the whole recognition and calculation process is simple, the hardware requirements on the computing equipment are low, and the scheme can therefore meet the needs of large-scale deployment and adoption.
As can be understood by those skilled in the art, if the moments of abrupt change of the user action are not identified, the motion data of the equipment held by the user is periodically and continuously collected, and the user action is recognized on the basis of all of the collected motion data. This approach results in a large amount of data to be reported and requires a high sensor data reporting rate (greater than 100 Hz), which places higher performance requirements on the equipment held by the user and on the terminal. Moreover, the data sequence formed in this way is too long, which causes problems such as a long time spent on the first similarity calculation and a large space required for storing the data sequence, increasing the calculation and storage burden.
In the technical scheme provided by the embodiment of the disclosure, the motion data of the equipment held by the user is acquired at the moments of abrupt change of the user action, and the user posture data variation sequence is determined based on that motion data. In essence, only the motion data of the equipment held by the user at the moments of abrupt change is captured, and the subsequent user action recognition is based on that data alone, so the amount of data that needs to be reported can be substantially reduced, the sensor data reporting rate is lowered, and the performance requirements on the equipment held by the user and on the terminal are reduced. The data sequence formed by the technical scheme provided by the embodiment of the disclosure is shorter, which helps shorten the time spent calculating the first similarity and reduces the space required to store the data sequence.
The technical scheme can recognize specific actions completed by the user in space (such as drawing a horizontal stroke, drawing a vertical stroke, drawing a wavy line, drawing a circle, writing a digit, writing an English letter, and the like).
The user action identification method can be applied to a scene that a user completes a specific action to control equipment in the human-computer interaction process.
For example, the user first writes "M" in space, and the action of writing "M" is entered into the terminal system as the unlocking action (this process is the learning and recording of the standard action posture data variation sequence). Subsequently, when the user needs to unlock the terminal, the user writes "M" in space again, and if the terminal recognizes that the user action is consistent with the standard action, the terminal is unlocked.
For another example, in the step of setting the correspondence between actions and instructions, the user writes "V" in space, and the action of writing "V" is taken as the action corresponding to the "determination" instruction; draws a semicircle in space, and the action of drawing a semicircle is taken as the action corresponding to the "return" instruction; draws a line segment upward in space, and that action is taken as the action corresponding to the "move the system focus up" instruction; draws a line segment downward in space, and that action is taken as the action corresponding to the "move the system focus down" instruction; draws a line segment to the left in space, and that action is taken as the action corresponding to the "move the system focus left" instruction; and draws a line segment to the right in space, and that action is taken as the action corresponding to the "move the system focus right" instruction. After the setting of the correspondence between actions and instructions (i.e., the learning and recording of the standard action posture data variation sequences) is completed, the database stores six standard actions: writing "V", drawing a semicircle, drawing a line segment upward, drawing a line segment downward, drawing a line segment to the left, and drawing a line segment to the right. Subsequently, suppose a certain page of an electronic book is displayed on the terminal interface and the user again draws a line segment to the left in space. The terminal recognizes the user action using the user action recognition method provided by the disclosure, identifies it as the action of drawing a line segment to the left, which corresponds to the "move the system focus left" instruction, and accordingly switches the displayed content to the next page of the electronic book.
The user action recognition method can also be applied to human-computer interaction games. Illustratively, a game package includes a plurality of standard action posture data variation sequences. During the game, the system continuously gives prompts for a series of actions (the actions given by the system are the standard actions included in the game package), the user completes the corresponding actions according to the prompts, the system recognizes the actions completed by the user, and the system scores according to whether the actions completed by the user are consistent with the required actions.
The user action recognition method can also be applied to character input scenarios. Note, however, that in the above technical solution, recognition is performed using preset standard action posture data variation sequences, and the number of recognizable actions depends mainly on the number of standard actions in the database. Therefore, in such a case, the posture data variation sequences of the actions corresponding to all characters to be written need to be learned during the learning and recording process.
Those skilled in the art will appreciate that a zero crossing of the angular velocity of the device held by the user usually indicates a substantial change in the moving direction of the user's limb. Here, an "angular velocity zero crossing" means that the angular velocity changes from a positive value to a negative value, or from a negative value to a positive value. Accordingly, the zero crossing moments of the angular velocity of the device held by the user can be used as the moments of abrupt change of the user action. The advantage of this approach is that the angular velocity can be acquired directly from the gyroscope in the inertial sensor: it is the gyroscope's raw measurement, so no further data processing is needed to judge whether the current moment is a moment of abrupt change of the user action, which reduces the complexity of the user action recognition method.
Furthermore, the gyroscope in the inertial sensor is used to acquire the angular velocity of the device held by the user rotating around each coordinate axis of the gyroscope's three-dimensional coordinate system, and any moment at which the angular velocity around any one of the coordinate axes crosses zero is treated as a moment of abrupt change of the user action. Compared with a scheme that only treats a moment as an abrupt change when the angular velocities around all three coordinate axes cross zero simultaneously, this arrangement ensures that no important motion data is omitted, and improves the accuracy of subsequent user action recognition. A sketch of this per-axis zero-crossing test follows.
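A minimal Python sketch, under the assumption that the gyroscope samples arrive as an (N, 3) array of angular velocities about its three axes (the names are illustrative):

```python
import numpy as np

def zero_crossing_moments(gyro):
    """Return the sample indices that count as moments of abrupt change.

    `gyro` is an (N, 3) array of angular velocities about the gyroscope's
    three coordinate axes at successive sample moments. Per the text, a
    sample is a mutation moment when the angular velocity about ANY single
    axis changes sign, not only when all three axes cross zero together.
    """
    signs = np.sign(gyro)
    # A sign flip between samples i-1 and i on at least one axis.
    flipped = (signs[1:] * signs[:-1] < 0).any(axis=1)
    return np.nonzero(flipped)[0] + 1
```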
Fig. 3 is a flowchart of another user action recognition method according to an embodiment of the present disclosure. Fig. 3 is a specific example of fig. 2. The user holds equipment in which an inertial sensor is installed. Inertial sensors include accelerometers, gyroscopes, and magnetometers. Referring to fig. 3, the method includes:
S210, in the process that the user completes a certain action, the inertial sensor in the equipment held by the user continuously collects motion data.
And S220, judging whether at least one angular velocity zero crossing point exists in the three angular velocities acquired by the gyroscope at the current moment by the equipment held by the user, and if so, executing S230.
And S230, the equipment held by the user reports the motion data acquired at the angular velocity zero crossing point moment to the terminal.
S240, the terminal receives motion data of the angular velocity zero crossing point moment, and determines attitude angle data of equipment held by a user at the angular velocity zero crossing point moment based on the received motion data of the angular velocity zero crossing point moment; the attitude angles include an azimuth angle, a pitch angle, and a roll angle.
And S250, determining a user attitude angle variation sequence based on a plurality of continuously determined attitude angle data.
S260, determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and S270, determining a user action recognition result based on the first similarity.
According to the above technical scheme, the zero crossing moments of the angular velocity of the device held by the user are used as the moments of abrupt change of the user action. This method of determining the moments of abrupt change is simple, which helps shorten the time spent judging those moments and further shortens the overall time consumed in executing the user action recognition method.
Fig. 4 is a flowchart of a standard action recording method according to an embodiment of the present disclosure. The present embodiment is applicable to a situation where a terminal performs standard action recording before performing human-computer interaction, and the method may be executed by a standard action recording device, where the device may be implemented in a software and/or hardware manner, and the device may be configured in an electronic device, for example, a terminal, specifically including but not limited to a smart phone, a palm computer, a tablet computer, a wearable device with a display screen, a desktop, a notebook computer, an all-in-one machine, a smart home device, and the like.
As shown in fig. 4, the method may specifically include:
s310, acquiring motion data of the equipment held by the user at the moment when the user action changes suddenly in the process of finishing the preset standard action.
In this step, the preset standard action refers to a pre-specified action. Illustratively, the user is instructed to complete a right-handed circle-drawing action: when executing this step, the terminal sends the user the instruction "please complete the action of drawing a circle with the right hand", the user draws a circle with the right hand after receiving the instruction, and the motion data of the equipment held by the user at the moments of abrupt change of the user action during the circle drawing is acquired.
Optionally, the zero-crossing point time of the angular velocity of the device held by the user is used as the user action abrupt change time.
Optionally, the motion data of the device held by the user at the moment when the user action changes suddenly is acquired through an inertial sensor.
Optionally, a gyroscope in the inertial sensor is used for acquiring the angular velocity of the device held by the user rotating around each coordinate axis in a gyroscope three-dimensional coordinate system; the zero crossing time of the angular velocity of the equipment held by the user rotating around any coordinate axis in the gyroscope three-dimensional coordinate system is the sudden change time of the user action.
The specific implementation method of this step is similar to S110, and is not described here again.
And S320, determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of abrupt change of the user action.
Further, determining attitude angle data of the equipment held by the user at the moment of sudden change of the user action based on the motion data of the equipment held by the user at the moment of sudden change of the user action; the attitude angle comprises at least one of an azimuth angle, a pitch angle and a roll angle; based on a plurality of attitude angle data determined successively, a sequence of user attitude angle variations is determined.
The specific implementation method of this step is similar to S120, and is not described here again.
And S330, recording the user posture data variation sequence as the standard action posture data variation sequence.
The essence of the above technical scheme is that the user is first instructed to complete a preset standard action, and the terminal learns the standard action posture data variation sequence while the user completes that action. During subsequent human-computer interaction, this standard action posture data variation sequence is used as the reference for user action recognition, so the amount of data that needs to be reported can be substantially reduced, the sensor data reporting rate is lowered, and the performance requirements on the equipment held by the user and on the terminal are reduced. The data sequence formed by the technical scheme provided by the embodiment of the disclosure is shorter, which helps shorten the time spent calculating the first similarity during user action recognition and reduces the space required to store the data sequence.
The above technical scheme can recognize specific actions completed by the user in space (such as drawing a horizontal stroke, drawing a vertical stroke, drawing a wavy line, drawing a circle, writing a digit, writing an English letter, and the like). The user action recognition method can be applied to controlling equipment through specific actions completed by the user during human-computer interaction, such as unlocking the equipment, switching the display interface of the equipment, or controlling a character in a game to complete a certain task.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Fig. 5 is a schematic structural diagram of a user action recognition device in an embodiment of the present disclosure. The user action recognition device provided by the embodiment of the disclosure can be configured in a terminal. Referring to fig. 5, the user motion recognition apparatus specifically includes:
a first obtaining module 410, configured to obtain motion data of a device held by a user at a moment when a user action changes abruptly;
a first sequence determination module 420, configured to determine a user posture data variation sequence based on the motion data of the device held by the user at the moment of the abrupt change of the user action;
a similarity determining module 430, configured to determine a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and the recognition result determining module 440 is configured to determine a user action recognition result based on the first similarity.
Further, the zero-crossing point time of the angular speed of the device held by the user is used as the user action sudden change time.
Further, the first obtaining module 410 is configured to obtain, through the inertial sensor, motion data of the device held by the user at a moment when the user performs an abrupt change of motion.
Further, a gyroscope in the inertial sensor is used for acquiring the angular speed of the equipment held by the user rotating around each coordinate axis in the gyroscope three-dimensional coordinate system;
and the zero-crossing moments of the angular velocity of the equipment held by the user rotating around any coordinate axis in the gyroscope three-dimensional coordinate system are all the moments of sudden change of the user action.
Further, the first sequence determination module 420 is configured to determine, based on the motion data of the device held by the user at the moment of the abrupt change of the user action, attitude angle data of the device held by the user at the moment of the abrupt change of the user action; the attitude angle comprises at least one of an azimuth angle, a pitch angle, and a roll angle;
and determining a user attitude angle variation sequence based on a plurality of continuously determined attitude angle data.
Further, if the attitude angle variation sequence includes an azimuth angle variation sequence, a pitch angle variation sequence, and a roll angle variation sequence, the similarity determining module 430 is configured to:
determining a second similarity between the azimuth angle variation sequence of the equipment held by the user and the preset standard action azimuth angle variation sequence;
determining a third similarity between the pitch angle variation sequence of the equipment held by the user and the preset standard action pitch angle variation sequence;
determining a fourth similarity between the roll angle variation sequence of the equipment held by the user and the preset standard action roll angle variation sequence;
and determining the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence based on the second similarity, the third similarity, and the fourth similarity.
Further, the similarity determining module 430 is configured to:
and determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence based on a dynamic time warping algorithm.
The user action recognition device provided by the embodiment of the present disclosure can execute the steps of the user action recognition method provided by the embodiments of the present disclosure, with the same execution steps and beneficial effects, which are not described again here.
Fig. 6 is a schematic structural diagram of a standard action recording apparatus in an embodiment of the present disclosure. The standard action recording apparatus provided by the embodiment of the disclosure can be configured in a terminal. Referring to fig. 6, the standard action recording apparatus specifically includes:
a second obtaining module 510, configured to obtain motion data of a device held by a user at a moment when a user action changes abruptly in a process of completing a preset standard action by the user;
a second sequence determination module 520, configured to determine a user posture data variation sequence based on the motion data of the device held by the user at the moment of the abrupt change of the user action;
a recording module 530, configured to record the user posture data variation sequence as the standard action posture data variation sequence.
Further, the zero-crossing point time of the angular speed of the device held by the user is used as the user action sudden change time.
Further, the second obtaining module 510 is configured to obtain, through the inertial sensor, motion data of the device held by the user at a moment when the user performs an abrupt change of the motion.
Further, a gyroscope in the inertial sensor is used for acquiring the angular speed of the equipment held by the user rotating around each coordinate axis in the gyroscope three-dimensional coordinate system;
and the zero-crossing moments of the angular velocity of the equipment held by the user rotating around any coordinate axis in the gyroscope three-dimensional coordinate system are all the moments of sudden change of the user action.
Further, a second sequence determining module 520 is configured to:
determining attitude angle data of the equipment held by the user at the moment of the abrupt change of the user action based on the motion data of the equipment held by the user at the moment of the abrupt change of the user action; the attitude angle comprises at least one of an azimuth angle, a pitch angle, and a roll angle;
and determining a user attitude angle variation sequence based on a plurality of continuously determined attitude angle data.
The standard action recording apparatus provided in the embodiment of the present disclosure can perform the steps of the standard action recording method provided in the embodiments of the present disclosure, with the same execution steps and beneficial effects, which are not described here again.
Fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now specifically to fig. 7, a schematic diagram of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), a wearable electronic device, and the like, and fixed terminals such as a digital TV, a desktop computer, a smart home device, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 1000 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage apparatus 1008 into a random access memory (RAM) 1003, so as to implement the user action recognition method or the standard action recording method of the embodiments of the present disclosure. In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are also stored. The processing apparatus 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1007 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1008 including, for example, magnetic tape, hard disk, and the like; and a communication device 1009. The communications apparatus 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information. While fig. 7 illustrates an electronic device 1000 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart, thereby implementing the user action recognition method or the standard action recording method as above. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 1009, or installed from the storage means 1008, or installed from the ROM 1002. The computer program, when executed by the processing device 1001, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. By contrast, in the present disclosure, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer readable program code is carried. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring motion data of equipment held by a user at the moment of sudden change of user action;
determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action;
determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and determining a user action recognition result based on the first similarity.
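By way of example only, the four steps above can be sketched in Python under the following assumptions: the angular velocity and attitude angle of the equipment held by the user arrive as synchronized per-sample arrays from an inertial sensor; the zero-crossing moments of the angular velocity serve as the moments of sudden change of the user action; the similarity is expressed as a dynamic time warping distance, smaller values indicating greater similarity; and all names and the threshold value are illustrative, not part of the disclosed method.

```python
import numpy as np

def sudden_change_moments(gyro):
    """Indices at which any axis of the angular velocity changes sign,
    i.e. crosses zero, between consecutive samples."""
    gyro = np.asarray(gyro)                       # shape (T, 3): x, y, z axes
    flipped = np.signbit(gyro[1:]) != np.signbit(gyro[:-1])
    return np.flatnonzero(flipped.any(axis=1)) + 1

def variation_sequence(attitude, moments):
    """Differences of the attitude angles (azimuth, pitch, roll), taken
    only at the sudden-change moments."""
    sampled = np.asarray(attitude)[moments]       # shape (K, 3)
    return np.diff(sampled, axis=0)               # shape (K - 1, 3)

def dtw_distance(a, b):
    """Plain O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def recognize(gyro, attitude, template, threshold=5.0):
    """Returns True when the user's variation sequence is close enough to
    the pre-recorded standard-action template; threshold is illustrative."""
    seq = variation_sequence(attitude, sudden_change_moments(gyro))
    return dtw_distance(seq, template) <= threshold
```

Because only the samples taken at the sudden-change moments enter the compared sequences, the warping step above runs over short inputs; its cost is quadratic only in the number of sudden-change moments, not in the raw sampling rate.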
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring motion data of equipment held by a user at the moment of sudden change of the user action while the user performs a preset standard action;
determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action;
and recording the user posture data variation sequence as the standard action posture data variation sequence.
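Purely as an illustration of this recording branch, the sketch below reuses the hypothetical helpers from the previous example: the variation sequence computed while the user performs the standard action is simply stored as the template against which later attempts are compared.

```python
def record_standard_action(gyro, attitude):
    """Sketch of the recording method: compute the variation sequence while
    the user performs the preset standard action and keep it as the
    template.  Reuses sudden_change_moments() and variation_sequence()
    from the sketch above."""
    return variation_sequence(attitude, sudden_change_moments(gyro))

# Illustrative usage: record the template once, then recognize later attempts.
# template = record_standard_action(gyro_recording, attitude_recording)
# matched = recognize(gyro_live, attitude_live, template)
```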
Optionally, when the one or more programs are executed by the electronic device, the electronic device may also perform other steps of the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware, and in some cases the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the user action recognition methods or standard action recording methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the user action recognition methods or standard action recording methods provided by the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the user action recognition method or the standard action recording method described above.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A user action recognition method, characterized by comprising:
acquiring motion data of equipment held by a user at the moment of sudden change of user action;
determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action;
determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and determining a user action recognition result based on the first similarity.
2. The method of claim 1, wherein
a zero-crossing moment of the angular velocity of the equipment held by the user is taken as the moment of sudden change of the user action.
3. The method of claim 2, wherein acquiring the motion data of the equipment held by the user at the moment of sudden change of the user action comprises:
acquiring, through an inertial sensor, the motion data of the equipment held by the user at the moment of sudden change of the user action.
4. The method of claim 3, wherein
a gyroscope in the inertial sensor is used to acquire the angular velocity of the equipment held by the user rotating around each coordinate axis of the gyroscope's three-dimensional coordinate system;
and the zero-crossing moments of the angular velocity of the equipment held by the user around any coordinate axis of the gyroscope's three-dimensional coordinate system are all taken as moments of sudden change of the user action.
5. The method of claim 4, wherein determining the user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action comprises:
determining attitude angle data of the equipment held by the user at the moment of sudden change of the user action based on the motion data of the equipment held by the user at that moment, wherein the attitude angle comprises at least one of an azimuth angle, a pitch angle, and a roll angle;
and determining a user attitude angle variation sequence based on a plurality of successively determined attitude angle data.
6. The method of claim 5, wherein, if the attitude angle variation sequence comprises an azimuth angle variation sequence, a pitch angle variation sequence, and a roll angle variation sequence,
determining the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence comprises:
determining a second similarity between the azimuth angle variation sequence of the equipment held by the user and a preset standard action azimuth angle variation sequence;
determining a third similarity between the pitch angle variation sequence of the equipment held by the user and a preset standard action pitch angle variation sequence;
determining a fourth similarity between the roll angle variation sequence of the equipment held by the user and a preset standard action roll angle variation sequence;
and determining the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence based on the second similarity, the third similarity, and the fourth similarity.
7. The method of claim 1, wherein determining the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence comprises:
determining the first similarity between the user posture data variation sequence and the preset standard action posture data variation sequence based on a dynamic time warping algorithm.
8. A standard action recording method, comprising:
acquiring motion data of equipment held by a user at the moment of sudden change of the user action while the user performs a preset standard action;
determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action;
and recording the user posture data variation sequence as the standard action posture data variation sequence.
9. A user action recognition device, comprising:
the first acquisition module is used for acquiring motion data of equipment held by a user at the moment of sudden change of user action;
the first sequence determination module is used for determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action;
the similarity determining module is used for determining a first similarity between the user posture data variation sequence and a preset standard action posture data variation sequence;
and the recognition result determining module is used for determining a user action recognition result based on the first similarity.
10. A standard motion recording device, comprising:
the second acquisition module is used for acquiring motion data of equipment held by a user at the moment of sudden change of the user action while the user performs the preset standard action;
the second sequence determination module is used for determining a user posture data variation sequence based on the motion data of the equipment held by the user at the moment of sudden change of the user action;
and the recording module is used for recording the user posture data variation sequence as the standard action posture data variation sequence.
11. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
12. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the method of any one of claims 1-8.
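As an editorial illustration of claims 6 and 7 above (this code is not part of the claim language), the second, third, and fourth similarities can each be computed by dynamic time warping over one attitude-angle column and then fused into the first similarity. The sketch assumes the two variation sequences are numpy arrays of shape (K, 3); the 1 / (1 + d) distance-to-similarity mapping and the equal weights are assumptions, and dtw_distance() is the helper defined in the earlier sketch.

```python
def fused_similarity(user_seq, std_seq, weights=(1.0 / 3, 1.0 / 3, 1.0 / 3)):
    """Splits the (K, 3) variation sequences into azimuth, pitch, and roll
    columns, scores each pair with dtw_distance() from the sketch above,
    and fuses the per-angle scores into a single similarity."""
    sims = []
    for axis in range(3):                      # 0: azimuth, 1: pitch, 2: roll
        d = dtw_distance(user_seq[:, axis:axis + 1],
                         std_seq[:, axis:axis + 1])
        sims.append(1.0 / (1.0 + d))           # second, third, fourth similarities
    return sum(w * s for w, s in zip(weights, sims))   # the first similarity
```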
CN202110726719.3A 2021-06-29 2021-06-29 User action recognition method and device, electronic equipment and storage medium Pending CN113407046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110726719.3A CN113407046A (en) 2021-06-29 2021-06-29 User action recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113407046A true CN113407046A (en) 2021-09-17

Family

ID=77680094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110726719.3A Pending CN113407046A (en) 2021-06-29 2021-06-29 User action recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113407046A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108015A (en) * 2017-11-20 2018-06-01 电子科技大学 A kind of action gesture recognition methods based on mobile phone gyroscope and dynamic time warping
CN108319421A (en) * 2018-01-29 2018-07-24 维沃移动通信有限公司 A kind of display triggering method and mobile terminal
CN111750919A (en) * 2020-07-02 2020-10-09 陕西师范大学 Identity authentication method and apparatus using multi-axis sensor and accelerometer
CN112212861A (en) * 2020-09-21 2021-01-12 哈尔滨工业大学(深圳) Track restoration method based on single inertial sensor

Similar Documents

Publication Publication Date Title
CN106951484B (en) Picture retrieval method and device, computer equipment and computer readable medium
CN110413812B (en) Neural network model training method and device, electronic equipment and storage medium
CN111552888A (en) Content recommendation method, device, equipment and storage medium
CN110147533B (en) Encoding method, apparatus, device and storage medium
CN111107280B (en) Special effect processing method and device, electronic equipment and storage medium
US20230093983A1 (en) Control method and device, terminal and storage medium
CN112306235A (en) Gesture operation method, device, equipment and storage medium
CN109829431B (en) Method and apparatus for generating information
CN110069126B (en) Virtual object control method and device
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN111368668A (en) Three-dimensional hand recognition method and device, electronic equipment and storage medium
CN115880719A (en) Gesture depth information generation method, device, equipment and computer readable medium
CN113407046A (en) User action recognition method and device, electronic equipment and storage medium
CN111258413A (en) Control method and device of virtual object
CN115690845A (en) Motion trail prediction method and device
CN113706606A (en) Method and device for determining position coordinates of spaced gestures
CN110263743B (en) Method and device for recognizing images
CN114202799A (en) Method and device for determining change speed of controlled object, electronic equipment and storage medium
CN113778078A (en) Positioning information generation method and device, electronic equipment and computer readable medium
CN113191257A (en) Order of strokes detection method and device and electronic equipment
CN110717467A (en) Head pose estimation method, device, equipment and storage medium
CN111209050A (en) Method and device for switching working mode of electronic equipment
CN111103967A (en) Control method and device of virtual object
CN112306223B (en) Information interaction method, device, equipment and medium
CN110197230B (en) Method and apparatus for training a model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination