CN111228792B - Motion recognition method, device, computer equipment and storage medium for motion sensing game - Google Patents


Info

Publication number
CN111228792B
Authority
CN
China
Prior art keywords
axis
user
area
gesture
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010038130.XA
Other languages
Chinese (zh)
Other versions
CN111228792A (en)
Inventor
张书臣
罗晓喆
俞知渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shimi Network Technology Co ltd
Original Assignee
Shenzhen Shimi Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shimi Network Technology Co ltd
Priority to CN202010038130.XA
Publication of CN111228792A
Application granted
Publication of CN111228792B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212 Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a motion recognition method, device, computer equipment, and storage medium for a motion sensing game. The method comprises: obtaining configuration data describing how the user wears the smart wearable device; acquiring detection signals from a sensor of the smart wearable device worn on the user's hand to obtain detection data; performing rounded-corner (fillet) calculation on the detection data to obtain a calculation result; comparing the detection data, the calculation result, and the configuration data against the corresponding thresholds in a preset action library to determine the user's gesture; determining a recognition result according to the user's gesture; and generating a corresponding game effect according to the recognition result and sending it to a terminal for display. Because the detection signals are acquired by sensors, recognition accuracy is improved; because the gestures in the preset action library are simple and easy to perform, the user's learning period for the gestures is shortened and the user experience is enhanced.

Description

Motion recognition method, device, computer equipment and storage medium for motion sensing game
Technical Field
The present invention relates to motion sensing games, and more particularly to a motion recognition method, device, computer equipment, and storage medium for a motion sensing game.
Background
Human-computer interaction technology refers to technology that enables effective interaction between people and machines through input and output devices. Existing human-computer interaction is generally carried out through external devices such as a mouse, keyboard, touch screen, or handle, with the machine system responding accordingly. For example, when a user wants to operate a game on a terminal device, the user must click the game or perform other operations through keys or a touch screen. Against this background, motion sensing (somatosensory) games have become an important part of game development.
Existing user-action recognition methods for somatosensory games generally project structured light onto the user and obtain a 3D model of the user at each time point; the 3D models are then analyzed to obtain the user's gesture information at each time point. Such recognition is easily disturbed by the environment and prone to failure. Other methods use a camera or special equipment to capture the user's actions and analyze the direction of movement from the captured result, but they cannot accurately capture small movements, or require special training of the user to achieve accurate capture, so the user experience is poor.
Therefore, it is necessary to design a new method to improve recognition accuracy, simplify the learning period of the gesture by the user, and enhance the experience of the user.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a motion recognition method, a motion recognition device, computer equipment and a storage medium for a motion sensing game.
In order to achieve the above purpose, the present invention adopts the following technical scheme. A motion recognition method for a motion sensing game includes:
acquiring configuration data describing how the user wears the smart wearable device;
acquiring detection signals from a sensor of the smart wearable device worn on the user's hand to obtain detection data;
performing rounded-corner (fillet) calculation on the detection data to obtain a calculation result;
comparing the detection data, the calculation result, and the configuration data against the corresponding thresholds in a preset action library to determine the user's gesture;
determining a recognition result according to the user's gesture;
and generating a corresponding game effect according to the recognition result and sending it to a terminal for display.
The further technical scheme is as follows: the detection data comprise three-dimensional coordinate origins and three-dimensional coordinate data detected by a plurality of continuous and stable sensors.
The further technical scheme is as follows: the calculating the rounded corners according to the detection data to obtain a calculation result includes:
Determining a fillet section according to the three-dimensional coordinate data;
calculating the area of the fillet section to obtain the area to be judged;
calculating the wave crest and the wave trough of the three-dimensional coordinate data to obtain a peak value;
and integrating the area to be judged and the peak value to obtain a calculation result.
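A minimal sketch of steps S131 to S134, assuming list-of-tuples input and a bounding-rectangle area approximation; the patent does not give exact formulas, and the function and variable names here are illustrative:

```python
def fillet_calculation(samples):
    """Sketch of steps S131-S134 (illustrative, not the patent's exact formulas).

    `samples` is a list of (x, y, z) tuples: continuous coordinates detected
    by the sensor, relative to the three-dimensional coordinate origin.
    """
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    zs = [p[2] for p in samples]

    # Peak-to-peak span of one axis over the whole track.
    def span(values):
        return max(values) - min(values)

    # S131/S132: project the track onto the three two-dimensional sections
    # (Panel1 = X-Y, Panel2 = X-Z, Panel3 = Z-Y) and take the area of the
    # bounding rectangle of each section as the "area to be judged".
    area_panel1 = span(xs) * span(ys)  # first area to be judged
    area_panel2 = span(xs) * span(zs)  # second area to be judged
    area_panel3 = span(zs) * span(ys)  # third area to be judged

    # S133: peaks (crests) and troughs of each axis give the peak values.
    peaks = {"x": (max(xs), min(xs)),
             "y": (max(ys), min(ys)),
             "z": (max(zs), min(zs))}

    # S134: integrate the areas to be judged and the peak values.
    return {"areas": (area_panel1, area_panel2, area_panel3), "peaks": peaks}
```

The areas capture the amplitude of direction change on each plane, while the peak values disambiguate the direction, matching the roles the two quantities play in the comparison step.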
The further technical scheme is as follows: the areas to be judged comprise a first area to be judged of a cross section formed by an X axis and a Y axis, a second area to be judged of a cross section formed by an X axis and a Z axis, and a third area to be judged of a cross section formed by a Z axis and a Y axis.
The further technical scheme is as follows: the comparing according to the detection data, the identification result and the configuration data with a threshold value corresponding to a preset action library to determine the gesture of the user comprises:
judging whether the configuration data is that a user wears the intelligent wearable device by left hand;
if the configuration data is that the user wears the intelligent wearable device by the left hand, judging whether the first area to be judged is gradually increased from the origin coordinate to a corresponding threshold value in a preset action library along the X axis;
if the first area to be judged is gradually increased to a corresponding threshold value in a preset action library along the X axis from the origin coordinate, the gesture of the user is left-hand inclined lifting;
If the first area to be judged is not gradually increased to the corresponding threshold value in the preset action library along the X axis from the original point coordinate, judging whether the third judging area is gradually increased to the corresponding threshold value in the preset action library from the original point coordinate to the area formed by the Z axis and the Y axis;
if the third judging area gradually increases to a corresponding threshold value in a preset action library from the origin coordinate to a region formed by the Z axis and the Y axis, the gesture of the user is left hand vertical up-and-down swing;
if the third judging area is not gradually increased from the original point coordinate to the region formed by the Z axis and the Y axis to the corresponding threshold value in the preset action library, judging whether the original point coordinate moves to the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis;
if the origin coordinate moves to a section formed by an X axis and a Y axis and a section formed by an X axis and a Z axis, the gesture of the user swings back and forth;
if the origin coordinates do not move to the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis, the gesture of the user is a combined action;
if the configuration data is not that the intelligent wearable device is worn by the left hand of the user, judging whether the second area to be judged is gradually increased from the origin coordinate to a corresponding threshold value in a preset action library along the Z axis;
If the second area to be determined is gradually increased from the origin coordinate to a corresponding threshold value in a preset action library along the Z axis, the gesture of the user is inclined and lifted by the right hand;
if the second area to be judged is not gradually increased from the original point coordinate to the corresponding threshold value in the preset action library along the Z axis, judging whether the third judging area is gradually increased from the original point coordinate to the area formed by the Z axis and the Y axis to the corresponding threshold value in the preset action library;
if the third judging area gradually increases to a corresponding threshold value in a preset action library from the origin coordinate to a region formed by the Z axis and the Y axis, the gesture of the user is left hand vertical up-and-down swing;
and if the third judging area is not gradually increased from the original point coordinate to the region formed by the Z axis and the Y axis to the corresponding threshold value in the preset action library, executing the judgment on whether the original point coordinate moves to the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis.
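The branching above can be sketched as a small decision function. The dictionary keys ("hand", "inclined_lift", "vertical_swing") and the simple area-versus-threshold comparison are illustrative assumptions; the patent only states that each area grows from the origin up to the corresponding threshold in the preset action library:

```python
def determine_gesture(config, areas, origin_moved, thresholds):
    """Sketch of the comparison flow; key names are illustrative assumptions.

    `areas` holds the first, second, and third areas to be judged;
    `origin_moved` says whether the origin coordinate moved within the
    X-Y and X-Z sections.
    """
    first, second, third = areas
    left = config.get("hand") == "left"

    # Left hand checks Panel1 (X-Y) growth along X; right hand checks
    # Panel2 (X-Z) growth along Z.
    if left and first >= thresholds["inclined_lift"]:
        return "left-hand inclined lifting"
    if not left and second >= thresholds["inclined_lift"]:
        return "right-hand inclined lifting"
    # Both hands then check Panel3 (Z-Y) growth.
    if third >= thresholds["vertical_swing"]:
        return ("left-hand" if left else "right-hand") + " vertical up-and-down swing"
    # Finally, origin movement across the X-Y and X-Z sections.
    if origin_moved:
        return "back-and-forth swing"
    return "combined action"
```

Note that the combined action is the fall-through case: it is reached only when no single-plane pattern matched.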
The further technical scheme is as follows: the determining the recognition result according to the user gesture comprises the following steps:
when the gesture of the user is left-hand inclined lifting and right-hand inclined lifting, the identification result is defensive action;
when the gesture of the user is left-hand vertical up-and-down swing and right-hand vertical up-and-down swing, the identification result is continuous attack and riot action;
When the gesture of the user swings back and forth, the identification result is continuous light effect;
when the user gesture is a combined action, the recognition result is a magic attack.
The further technical scheme is as follows: the threshold value corresponding to the preset action library is formed by gesture action data which are input in advance and are adjusted in real time based on the action data of the user each time.
The invention also provides a motion recognition device for a somatosensory game, which comprises:
a configuration data acquisition unit, used for acquiring configuration data describing how the user wears the smart wearable device;
a detection data acquisition unit, used for acquiring detection signals from the sensor of the smart wearable device worn on the user's hand to obtain detection data;
a calculation unit, used for performing the rounded-corner calculation on the detection data to obtain a calculation result;
a gesture determining unit, used for comparing the detection data, the calculation result, and the configuration data against the corresponding thresholds in the preset action library to determine the user's gesture;
a recognition result determining unit, used for determining a recognition result according to the user's gesture;
and an effect generation unit, used for generating a corresponding game effect according to the recognition result and sending it to a terminal for display.
The invention also provides a computer device which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the method when executing the computer program.
The present invention also provides a storage medium storing a computer program which, when executed by a processor, implements the above-described method.
Compared with the prior art, the invention has the following beneficial effects. The sensor detects the user's action signals; the rounded-corner areas are calculated from the detection data and compared against the thresholds of the preset action library corresponding to different users, so that the user's gesture is determined; the recognition result is then determined from the gesture, and the effect corresponding to the recognition result is displayed on the terminal. Because the detection signals are acquired by sensors, recognition accuracy is improved; because the gestures in the preset action library are simple, the user's learning period for the gestures is shortened and the user experience is enhanced.
The invention is further described below with reference to the drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of a motion recognition method for a motion sensing game according to an embodiment of the present invention;
FIG. 2 is a flowchart of a motion recognition method for a motion sensing game according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a frontal attack gesture provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a defensive posture provided by an embodiment of the invention;
FIG. 5 is a schematic diagram of a side attack gesture provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a lateral killing attack provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of a punch attack according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an attack special effect provided by an embodiment of the present invention;
FIG. 9 is a schematic block diagram of a motion recognition device 300 for a motion sensing game according to an embodiment of the present invention;
fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of a motion recognition method for a motion sensing game according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of the method. The method is applied to a server. The server exchanges data with the terminal and with the smart wearable device. The smart wearable device is worn on the user's hand and carries a chip based on a Bluetooth module, on which sensors such as a gravity sensor, a gyroscope sensor, and a geomagnetic sensor are integrated. These sensors detect the user's action signals in real time, and the server analyzes the data to identify the action corresponding to the user's gesture, so that the corresponding game effect is presented at the terminal. In this way recognition accuracy is improved, the user's learning period for the gestures is shortened, and the user experience is enhanced.
Fig. 2 is a flowchart of a motion recognition method for a motion sensing game according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S160.
S110, acquiring configuration data of wearing the intelligent wearable device by the user.
In this embodiment, the configuration data includes information of a location where the user wears the smart wearable device and a user level.
Before the player enters the game, prompts guide the player to set which hand the device is worn on; the player's setting parameters are written into a configuration table to form the configuration data used for subsequent recognition, and the user's configuration data can be displayed on the terminal in real time.
S120, acquiring detection signals of a sensor of the intelligent wearable device worn on the hand of the user to obtain detection data.
In this embodiment, the detection data refers to a signal detected by a sensor in the smart wearable device worn on the hand of the user, and is data for recognizing the gesture of the user.
Specifically, the detection data include a three-dimensional coordinate origin and a plurality of continuous, stable three-dimensional coordinate data detected by the sensor.
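For illustration, one detection sample might be modelled as follows. The field names, including the acceleration field used later to distinguish attack effects, are assumptions, not the patent's actual data format:

```python
from dataclasses import dataclass


@dataclass
class DetectionSample:
    """One continuous, stable sensor reading (illustrative field names)."""
    t: float            # timestamp of the reading
    x: float            # coordinate relative to the three-dimensional origin
    y: float
    z: float
    accel: float = 0.0  # acceleration, used later for attack-effect recognition


# The detection data is then the origin plus a series of such samples.
ORIGIN = DetectionSample(t=0.0, x=0.0, y=0.0, z=0.0)
track = [ORIGIN, DetectionSample(t=0.02, x=0.1, y=0.3, z=0.0)]
```

A plurality of such continuous samples forms the track on which the rounded-corner calculation of step S130 operates.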
S130, performing fillet calculation according to the detection data to obtain a calculation result.
In the present embodiment, the calculation result refers to the areas of the rounded-corner sections obtained by combining the continuous detection data into three two-dimensional planes according to the X, Y, and Z axes, together with the peaks and troughs of the continuous detection data.
In one embodiment, the step S130 may include specific steps S131-S134.
S131, determining the round-corner section according to the three-dimensional coordinate data.
The rounded corner is the track of continuous points produced by numerical variation on two coordinate axes; when these continuous tracks are connected in series, the rounded corner is obtained, and the area of the bounding rectangle of the section is calculated to represent the area of the rounded-corner section and to express the amplitude of direction change. The three two-dimensional coordinate sets are the section Panel1 formed by the X axis and the Y axis, the section Panel2 formed by the X axis and the Z axis, and the section Panel3 formed by the Z axis and the Y axis.
S132, calculating the area of the round corner section to obtain the area to be judged.
In this embodiment, the areas to be determined are the areas corresponding to the section Panel1 formed by the X axis and the Y axis, the section Panel2 formed by the X axis and the Z axis, and the section Panel3 formed by the Z axis and the Y axis.
In this embodiment, the area to be determined includes a first area to be determined of a cross section formed by an X-axis and a Y-axis, a second area to be determined of a cross section formed by an X-axis and a Z-axis, and a third area to be determined of a cross section formed by a Z-axis and a Y-axis.
And S133, calculating the wave crest and the wave trough of the three-dimensional coordinate data to obtain a peak value.
The peak value is determined in order to more accurately determine the corresponding direction.
S134, integrating the area to be determined and the peak value to obtain a calculation result.
When the first area to be determined, of the Panel1 section, continuously increases from the origin coordinate (0, 0) up to the threshold at which the action in the preset action library is rightward, the direction is identified as rightward; when the second area to be determined, of the Panel2 section, continuously increases from the origin coordinate (0, 0) up to the leftward threshold, the direction is identified as leftward; when the third area to be determined, of the Panel3 section, continuously increases from the origin coordinate (0, 0) up to the downward threshold, the direction is identified as downward; and when the value continuously increases from the origin coordinate (0, 0) along the X axis up to the upward threshold, the direction is identified as upward. The coordinate values of the correction parameters in the four directions are leftward (0, 0, Z), rightward (0, Y, 0), upward (X, 0, 0), and downward (0, Z, Y).
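The four-direction identification just described can be sketched as follows, assuming each quantity is simply compared against a named threshold once it has grown from the origin; the argument and key names are illustrative:

```python
def identify_direction(area_panel1, area_panel2, area_panel3, x_growth, thresholds):
    """Sketch of the four-direction identification; names are assumptions.

    Each quantity grows continuously from the origin (0, 0) until it reaches
    the matching threshold in the preset action library.
    """
    if area_panel1 >= thresholds["right"]:
        return "right"   # Panel1 (X-Y) area reached the rightward threshold
    if area_panel2 >= thresholds["left"]:
        return "left"    # Panel2 (X-Z) area reached the leftward threshold
    if area_panel3 >= thresholds["down"]:
        return "down"    # Panel3 (Z-Y) area reached the downward threshold
    if x_growth >= thresholds["up"]:
        return "up"      # growth along the X axis reached the upward threshold
    return "none"
```

In a real implementation the checks would also verify that the growth is monotonic from the origin, which this sketch omits for brevity.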
And S140, comparing the detection data, the identification result and the configuration data with a threshold value corresponding to a preset action library to determine the gesture of the user.
Specifically, the thresholds corresponding to the preset action library are formed from gesture action data entered in advance and adjusted in real time based on the user's action data each time. The sensors are limited by hardware capability in real-time monitoring and real-time identification: when the gravity sensor, gyroscope sensor, and geomagnetic sensor are integrated on a chip based on a Bluetooth module, recognition accuracy becomes a problem, so the gesture is determined by calling the thresholds of a gesture library based on player data analysis and a recognition algorithm.
Specifically, the coordinate continuous-track comparison analysis identifies and gives feedback by comparing each action of the player against the action data of the pre-entered action library, and the identification is made accurate through data comparison training and historical records. The main actions of the action library are eight simple, easy-to-remember gestures: forward, backward, leftward, rightward, overturn, acceleration, deceleration, and burst. This simplifies the player's learning period for the gestures, makes the game easy to pick up, and achieves an excellent somatosensory experience through the simple mode of mapping gestures to actions, improving the user experience.
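The patent states that the thresholds are adjusted in real time from each user action but does not disclose the update rule. One plausible sketch is an exponential moving average that nudges a library threshold toward the player's observed data; the smoothing factor `alpha` and the function name are assumptions:

```python
def adjust_threshold(current, observed, alpha=0.1):
    """Nudge a preset-action-library threshold toward the value observed in
    the player's latest action. Exponential moving average; `alpha` is an
    assumed smoothing factor, not specified by the patent."""
    return (1.0 - alpha) * current + alpha * observed


# Example: the library threshold drifts toward a player who swings harder.
threshold = 100.0
for observed_area in (120.0, 118.0, 122.0):
    threshold = adjust_threshold(threshold, observed_area)
```

A small `alpha` keeps the library stable against one-off outliers while still personalizing the thresholds over many actions.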
In the present embodiment, the user gesture refers to an action gesture that the user actually operates when playing a game.
In an embodiment, the step S140 may include the following steps S140a to S140l.
S140a, judging whether the configuration data indicate that the user wears the smart wearable device on the left hand;
S140b, if the configuration data indicate that the user wears the smart wearable device on the left hand, judging whether the first area to be judged gradually increases along the X axis from the origin coordinate up to the corresponding threshold in the preset action library;
S140c, if the first area to be judged gradually increases along the X axis from the origin coordinate up to the corresponding threshold, the user's gesture is a left-hand inclined lifting;
S140d, if not, judging whether the third area to be judged gradually increases from the origin coordinate, in the region formed by the Z axis and the Y axis, up to the corresponding threshold in the preset action library;
S140e, if the third area to be judged gradually increases from the origin coordinate, in the region formed by the Z axis and the Y axis, up to the corresponding threshold, the user's gesture is a left-hand vertical up-and-down swing;
S140f, if not, judging whether the origin coordinate moves within the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis;
S140g, if the origin coordinate moves within the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis, the user's gesture is a back-and-forth swing;
S140h, if the origin coordinate does not move within those sections, the user's gesture is a combined action;
S140i, if the configuration data indicate that the user does not wear the smart wearable device on the left hand, judging whether the second area to be judged gradually increases along the Z axis from the origin coordinate up to the corresponding threshold in the preset action library;
S140j, if the second area to be judged gradually increases along the Z axis from the origin coordinate up to the corresponding threshold, the user's gesture is a right-hand inclined lifting;
S140k, if not, judging whether the third area to be judged gradually increases from the origin coordinate, in the region formed by the Z axis and the Y axis, up to the corresponding threshold in the preset action library;
S140l, if the third area to be judged gradually increases from the origin coordinate, in the region formed by the Z axis and the Y axis, up to the corresponding threshold, the user's gesture is a right-hand vertical up-and-down swing;
and if not, step S140f is executed.
Left-hand inclined lifting: the player wears the device on the left hand, and the continuous data track of the first area to be judged, with the rounded-corner section calculated as Panel1, is identified as a rightward displacement of the left hand; at this moment the values change continuously from the origin coordinates (0, 0, 0) to (X, 0, 0), and the combination is identified as left-hand inclined lifting. Right-hand inclined lifting: the player wears the device on the right hand, and the continuous data track of the second area to be judged, with the rounded-corner section calculated as Panel2, is identified as a leftward displacement of the right hand; at this moment the values change continuously from the origin coordinates (0, 0, 0) to (0, 0, Z), and the combination is identified as right-hand inclined lifting. In the algorithm calculation result, these two user gestures are identified as defense, as shown in fig. 4; the combined recognition result of a continuous evasion action is added in the game, and the controlled character's defensive posture is displayed.
Left-hand vertical up-and-down swing: the player wears the device on the left hand, and the continuous data track of the third area to be judged, with the rounded-corner section calculated as Panel3, is identified as a vertical up-and-down displacement of the left hand; at this moment the values change continuously from the origin coordinates (0, 0, 0) to (0, Z, Y), and the combination is identified as a left-hand vertical up-and-down swing. Right-hand vertical up-and-down swing: the player wears the device on the right hand, and the continuous data track of the third area to be judged, with the rounded-corner section calculated as Panel3, is identified as a vertical up-and-down displacement of the right hand; at this moment the values change continuously from the origin coordinates (0, 0, 0) to (0, Z, Y), and the combination is identified as a right-hand vertical up-and-down swing. In the algorithm calculation result, these two user gestures are identified as attacks, and effects such as continuous attack and burst are identified from the continuous acceleration data results.
In addition, together with the acceleration data acquired by the sensor and the identification process, it can be determined whether the gesture is a frontal attack, corresponding to the left hand and the right hand both swinging vertically downward, as shown in fig. 3; a side attack, identified as the left hand and the right hand flicking downward at a 45-degree incline, as shown in fig. 5; or a hook (punch) attack, identified as the left hand and the right hand swinging vertically upward, as shown in fig. 7.
In addition, the user gesture of swinging back and forth is recognized in the algorithm's calculation result as a continuous light effect that casts magic on the attack target, where magic refers to the light-effect presentation during an in-game attack. Specifically, when the player swings horizontally back and forth, continuous data tracks run from the origin coordinates (0, 0) toward the section Panel1 corresponding to the first judging area and the section Panel2 corresponding to the second judging area, and reaching the trigger value releases the light effect of the magic attack.
In addition, the user gesture corresponding to the combined attack effect is a combination of a horizontal back-and-forth swing and a left- or right-hand vertical up-and-down swing, which is recognized as a magic attack effect preset in the game. In cooperation with the acceleration data acquired by the sensor and the recognition process, it can be determined whether the left hand and the right hand swinging horizontally identifies a horizontal sweep attack gesture, as shown in fig. 6, or a turn with an attack special effect, as shown in fig. 8.
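A minimal sketch of the horizontal back-and-forth detection that triggers the light effect: the X coordinate of the track repeatedly leaves the origin, alternating between the Panel1 side and the Panel2 side. The sign-change counting and the amplitude parameter are assumptions for illustration only.

```python
def is_back_and_forth(x_values, min_swings=2, amplitude=0.5):
    """Count alternating excursions of the X coordinate away from the
    origin; enough alternations count as a back-and-forth swing."""
    directions = []
    for x in x_values:
        if abs(x) < amplitude:
            continue  # still near the origin, not a full swing yet
        side = 1 if x > 0 else -1
        if not directions or directions[-1] != side:
            directions.append(side)
    return len(directions) >= min_swings

print(is_back_and_forth([0.0, 0.8, 0.2, -0.9, 0.1, 0.7]))  # True
print(is_back_and_forth([0.0, 0.8, 0.9, 0.7]))             # False: one side only
```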
S150, determining a recognition result according to the user gesture.
In this embodiment, the recognition result refers to an action within the game corresponding to the user gesture.
Specifically, the step S150 includes:
when the user gesture is a left-hand inclined lift or a right-hand inclined lift, the recognition result is a defensive action;
when the user gesture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, the recognition result is a continuous attack and riot action;
when the user gesture is a back-and-forth swing, the recognition result is a continuous light effect;
when the user gesture is a combined action, the recognition result is a magic attack.
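The mapping of step S150 can be summarized as a lookup table; the gesture labels below are illustrative identifiers, not names from the disclosure.

```python
# Gesture-to-result mapping of step S150 (labels are hypothetical).
RECOGNITION_RESULTS = {
    "left_inclined_lift": "defensive action",
    "right_inclined_lift": "defensive action",
    "left_vertical_swing": "continuous attack and riot action",
    "right_vertical_swing": "continuous attack and riot action",
    "back_and_forth_swing": "continuous light effect",
    "combined_action": "magic attack",
}

def determine_result(gesture):
    """Map a recognized user gesture to its in-game action."""
    return RECOGNITION_RESULTS.get(gesture, "no action")

print(determine_result("combined_action"))  # magic attack
```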
S160, generating a corresponding game effect according to the recognition result, and sending the game effect to a terminal for display.
Displaying the game effect corresponding to the recognition result on the terminal achieves the purpose of playing the motion-sensing game.
In the motion recognition method for a motion-sensing game described above, the sensor detects the user's motion signals, the rounded-corner area is calculated from the detection data, the user gesture is determined according to the thresholds of the preset action libraries corresponding to different users, and the recognition result is determined from the user gesture, so that the effect corresponding to the recognition result can be displayed on the terminal. Because the gestures in the preset action library are simple and easy to learn, the user's learning period is shortened and the user experience is enhanced, while acquiring the detection signals through the sensor improves recognition accuracy.
Fig. 9 is a schematic block diagram of a motion recognition device 300 for a motion-sensing game according to an embodiment of the present invention. As shown in fig. 9, the present invention also provides a motion recognition device 300 for a motion-sensing game corresponding to the above motion recognition method. The motion recognition device 300 includes units for performing the above motion recognition method and may be configured in a server. Specifically, referring to fig. 9, the motion recognition device 300 for a motion-sensing game includes a configuration data acquisition unit 301, a detection data acquisition unit 302, a calculation unit 303, a gesture determining unit 304, a recognition result determining unit 305, and an effect generating unit 306.
The configuration data acquisition unit 301 is configured to acquire configuration data of a user wearing the intelligent wearable device; the detection data acquisition unit 302 is configured to acquire detection signals of a sensor of the intelligent wearable device worn on the user's hand to obtain detection data; the calculation unit 303 is configured to perform fillet calculation according to the detection data to obtain a calculation result; the gesture determining unit 304 is configured to compare the detection data, the calculation result, and the configuration data with thresholds corresponding to a preset action library to determine the user gesture; the recognition result determining unit 305 is configured to determine a recognition result according to the user gesture; and the effect generating unit 306 is configured to generate a corresponding game effect according to the recognition result and send the game effect to a terminal for display.
In an embodiment, the computing unit 303 includes a section determining subunit, an area computing subunit, a peak computing subunit, and an integrating subunit.
A section determining subunit, configured to determine a fillet section according to the three-dimensional coordinate data; an area calculating subunit, configured to calculate an area of the rounded cross section to obtain an area to be determined; the peak value calculating subunit is used for calculating the wave crest and the wave trough of the three-dimensional coordinate data so as to obtain a peak value; and the integration subunit is used for integrating the area to be determined and the peak value so as to obtain a calculation result.
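The four sub-steps of the fillet calculation can be sketched as below. The shoelace formula for the projected cross-section area and the peak-to-trough amplitude are plausible choices for illustration, not details confirmed by the disclosure.

```python
def fillet_calculation(track):
    """Sketch of the fillet calculation: project the 3-D track onto a
    coordinate plane, compute the enclosed cross-section area
    (shoelace formula), take the peak-to-trough amplitude of the
    samples, and integrate both into one result."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]  # projection onto the X-Y plane (Panel section)
    n = len(track)
    area = 0.5 * abs(sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i]
                         for i in range(n)))
    flat = [coord for point in track for coord in point]
    peak = max(flat) - min(flat)  # wave crest minus wave trough
    return {"area": area, "peak": peak}

# A closed square track of side 1 in the X-Y plane encloses area 1.0.
result = fillet_calculation([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)])
print(result)  # {'area': 1.0, 'peak': 1}
```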
In one embodiment, the gesture determining unit 304 includes a configuration data determining subunit, a first determining subunit, a second determining subunit, a third determining subunit, a fourth determining subunit, and a fifth determining subunit.
The configuration data judging subunit is configured to judge whether the configuration data indicates that the user wears the intelligent wearable device on the left hand. The first judging subunit is configured to, if the configuration data indicates that the user wears the intelligent wearable device on the left hand, judge whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand inclined lift. The second judging subunit is configured to, if the first area to be judged does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, judge whether the third area to be judged gradually increases from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand vertical up-and-down swing. The third judging subunit is configured to, if the third area to be judged does not gradually increase from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library, judge whether the origin coordinate moves toward the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis; if it does, the user gesture is a back-and-forth swing, and if it does not, the user gesture is a combined action. The fourth judging subunit is configured to, if the configuration data does not indicate that the user wears the intelligent wearable device on the left hand, judge whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library; if it does, the user gesture is a right-hand inclined lift. The fifth judging subunit is configured to, if the second area to be judged does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, judge whether the third area to be judged gradually increases from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand vertical up-and-down swing, and if it does not, the judgment of whether the origin coordinate moves toward the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis is executed.
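The judging chain implemented by these subunits amounts to the decision tree sketched below; the boolean arguments stand in for the "area grows steadily from the origin to the library threshold" checks, and all names are illustrative assumptions.

```python
def determine_gesture(left_hand, first_hits, second_hits, third_hits,
                      moves_to_xy_and_xz):
    """Decision tree of the gesture determining unit. Each *_hits flag
    means the corresponding area to be judged grew from the origin to
    the threshold in the preset action library."""
    if left_hand and first_hits:          # first area grows along the X axis
        return "left-hand inclined lift"
    if not left_hand and second_hits:     # second area grows along the Z axis
        return "right-hand inclined lift"
    if third_hits:                        # third area grows in the Z-Y region
        return "vertical up-and-down swing"
    if moves_to_xy_and_xz:                # origin moves toward both sections
        return "back-and-forth swing"
    return "combined action"

print(determine_gesture(True, False, False, True, False))  # vertical up-and-down swing
```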
In an embodiment, the recognition result determining unit 305 is configured to: when the user gesture is a left-hand inclined lift or a right-hand inclined lift, determine that the recognition result is a defensive action; when the user gesture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, determine that the recognition result is a continuous attack and riot action; when the user gesture is a back-and-forth swing, determine that the recognition result is a continuous light effect; and when the user gesture is a combined action, determine that the recognition result is a magic attack.
It should be noted that, as will be clearly understood by those skilled in the art, the specific implementation process of the motion recognition device 300 and each unit may refer to the corresponding description in the foregoing method embodiments, and for convenience and brevity of description, the detailed description is omitted herein.
The above-mentioned terminal may be an electronic device having a communication function such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device.
The motion recognition apparatus 300 may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 10.
Referring to fig. 10, fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, and the server may be a stand-alone server or may be a server cluster formed by a plurality of servers.
With reference to FIG. 10, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform a motion recognition method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503; when executed by the processor 502, the computer program 5032 causes the processor 502 to perform the motion recognition method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of a portion of the architecture in connection with the present application and is not intended to limit the computer device 500 to which the present application is applied, and that a particular computer device 500 may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to implement the steps of:
acquiring configuration data of a user wearing an intelligent wearable device; acquiring detection signals of a sensor of the intelligent wearable device worn on the user's hand to obtain detection data; performing fillet calculation according to the detection data to obtain a calculation result; comparing the detection data, the calculation result, and the configuration data with thresholds corresponding to a preset action library to determine the user gesture; determining a recognition result according to the user gesture; and generating a corresponding game effect according to the recognition result, and sending the game effect to a terminal for display.
The detection data include the three-dimensional coordinate origin and three-dimensional coordinate data obtained from a plurality of continuous, stable sensor detections.
The threshold values corresponding to the preset action library are formed from gesture action data entered in advance and are adjusted in real time based on the user's action data each time.
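One plausible way to realize the real-time adjustment described above is an exponential moving average that blends the stored threshold with each new action sample; the blend rate and the function name are assumptions, not details from the disclosure.

```python
def update_threshold(current, observed, rate=0.2):
    """Blend the stored action-library threshold with the user's latest
    action data (exponential moving average; rate is an assumption)."""
    return (1 - rate) * current + rate * observed

threshold = 10.0
for sample in (12.0, 11.0, 13.0):  # successive actions by the same user
    threshold = update_threshold(threshold, sample)
print(round(threshold, 3))  # 11.016
```

A small rate keeps the threshold stable against outliers while still drifting toward the user's habitual motion amplitude.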
In one embodiment, when the processor 502 performs the step of performing the fillet calculation according to the detection data to obtain the calculation result, the following steps are specifically implemented:
determining a fillet section according to the three-dimensional coordinate data; calculating the area of the fillet section to obtain the area to be judged; calculating the wave crest and the wave trough of the three-dimensional coordinate data to obtain a peak value; and integrating the area to be judged and the peak value to obtain a calculation result.
The area to be judged comprises a first area to be judged of a cross section formed by an X axis and a Y axis, a second area to be judged of a cross section formed by an X axis and a Z axis, and a third area to be judged of a cross section formed by a Z axis and a Y axis.
In an embodiment, when the processor 502 performs the step of comparing the detection data, the calculation result, and the configuration data with the thresholds corresponding to the preset action library to determine the user gesture, the following steps are specifically implemented:
judging whether the configuration data indicates that the user wears the intelligent wearable device on the left hand; if so, judging whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand inclined lift; if it does not, judging whether the third area to be judged gradually increases from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand vertical up-and-down swing; if it does not, judging whether the origin coordinate moves toward the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis; if it does, the user gesture is a back-and-forth swing; if it does not, the user gesture is a combined action; if the configuration data does not indicate that the user wears the intelligent wearable device on the left hand, judging whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library; if it does, the user gesture is a right-hand inclined lift; if it does not, judging whether the third area to be judged gradually increases from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand vertical up-and-down swing; and if it does not, executing the judgment of whether the origin coordinate moves toward the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis.
In one embodiment, when the step of determining the recognition result according to the gesture of the user is implemented by the processor 502, the following steps are specifically implemented:
when the user gesture is a left-hand inclined lift or a right-hand inclined lift, the recognition result is a defensive action; when the user gesture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, the recognition result is a continuous attack and riot action; when the user gesture is a back-and-forth swing, the recognition result is a continuous light effect; and when the user gesture is a combined action, the recognition result is a magic attack.
It should be appreciated that in embodiments of the present application, the processor 502 may be a central processing unit (Central Processing Unit, CPU), and the processor 502 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program comprises program instructions, and the computer program can be stored in a storage medium, which is a computer readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring configuration data of a user wearing an intelligent wearable device; acquiring detection signals of a sensor of the intelligent wearable device worn on the user's hand to obtain detection data; performing fillet calculation according to the detection data to obtain a calculation result; comparing the detection data, the calculation result, and the configuration data with thresholds corresponding to a preset action library to determine the user gesture; determining a recognition result according to the user gesture; and generating a corresponding game effect according to the recognition result, and sending the game effect to a terminal for display.
The detection data include the three-dimensional coordinate origin and three-dimensional coordinate data obtained from a plurality of continuous, stable sensor detections.
The threshold values corresponding to the preset action library are formed from gesture action data entered in advance and are adjusted in real time based on the user's action data each time.
In one embodiment, when the processor executes the computer program to implement the step of performing the fillet calculation according to the detection data to obtain the calculation result, the processor specifically implements the following steps:
determining a fillet section according to the three-dimensional coordinate data; calculating the area of the fillet section to obtain the area to be judged; calculating the wave crest and the wave trough of the three-dimensional coordinate data to obtain a peak value; and integrating the area to be judged and the peak value to obtain a calculation result.
The area to be judged comprises a first area to be judged of a cross section formed by an X axis and a Y axis, a second area to be judged of a cross section formed by an X axis and a Z axis, and a third area to be judged of a cross section formed by a Z axis and a Y axis.
In an embodiment, when the processor executes the computer program to perform the step of comparing the detection data, the calculation result, and the configuration data with thresholds corresponding to a preset action library to determine the user gesture, the following steps are specifically implemented:
judging whether the configuration data indicates that the user wears the intelligent wearable device on the left hand; if so, judging whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand inclined lift; if it does not, judging whether the third area to be judged gradually increases from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand vertical up-and-down swing; if it does not, judging whether the origin coordinate moves toward the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis; if it does, the user gesture is a back-and-forth swing; if it does not, the user gesture is a combined action; if the configuration data does not indicate that the user wears the intelligent wearable device on the left hand, judging whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library; if it does, the user gesture is a right-hand inclined lift; if it does not, judging whether the third area to be judged gradually increases from the origin coordinate into the region formed by the Z axis and the Y axis to the corresponding threshold in the preset action library; if it does, the user gesture is a left-hand vertical up-and-down swing; and if it does not, executing the judgment of whether the origin coordinate moves toward the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis.
In one embodiment, when the processor executes the computer program to implement the step of determining the recognition result according to the gesture of the user, the following steps are specifically implemented:
when the user gesture is a left-hand inclined lift or a right-hand inclined lift, the recognition result is a defensive action; when the user gesture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, the recognition result is a continuous attack and riot action; when the user gesture is a back-and-forth swing, the recognition result is a continuous light effect; and when the user gesture is a combined action, the recognition result is a magic attack.
The storage medium may be a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk, or other various computer-readable storage media that can store program codes.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the elements and steps of the examples have been described above generally in terms of their functions. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be combined, divided and deleted according to actual needs. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (8)

1. A motion recognition method for a motion sensing game is characterized by comprising the following steps:
acquiring configuration data of a user wearing an intelligent wearable device;
acquiring detection signals of a sensor of the intelligent wearable device worn on the user's hand to obtain detection data;
carrying out fillet calculation according to the detection data to obtain a calculation result;
comparing the detection data, the calculation result and the configuration data with a threshold value corresponding to a preset action library to determine the gesture of the user;
determining a recognition result according to the user gesture;
generating a corresponding game effect according to the identification result, and sending the game effect to a terminal for display;
the detection data include the three-dimensional coordinate origin and three-dimensional coordinate data obtained from a plurality of continuous, stable sensor detections;
The calculating the rounded corners according to the detection data to obtain a calculation result includes:
determining a fillet section according to the three-dimensional coordinate data;
calculating the area of the fillet section to obtain the area to be judged;
calculating the wave crest and the wave trough of the three-dimensional coordinate data to obtain a peak value;
and integrating the area to be judged and the peak value to obtain a calculation result.
2. The motion recognition method according to claim 1, wherein the area to be judged comprises a first area to be judged of a cross section formed by the X axis and the Y axis, a second area to be judged of a cross section formed by the X axis and the Z axis, and a third area to be judged of a cross section formed by the Z axis and the Y axis.
3. The motion recognition method according to claim 2, wherein the comparing the detection data, the calculation result, and the configuration data with thresholds corresponding to a preset action library to determine the gesture of the user comprises:
judging whether the configuration data is that a user wears the intelligent wearable device by left hand;
if the configuration data is that the user wears the intelligent wearable device by the left hand, judging whether the first area to be judged is gradually increased from the origin coordinate to a corresponding threshold value in a preset action library along the X axis;
If the first area to be judged is gradually increased to a corresponding threshold value in a preset action library along the X axis from the origin coordinate, the gesture of the user is left-hand inclined lifting;
if the first area to be judged is not gradually increased to the corresponding threshold value in the preset action library along the X axis from the original point coordinate, judging whether the third area to be judged is gradually increased to the corresponding threshold value in the preset action library from the original point coordinate to the area formed by the Z axis and the Y axis;
if the third area to be judged gradually increases to a corresponding threshold value in a preset action library from the origin coordinate to an area formed by the Z axis and the Y axis, the gesture of the user is left-hand vertical up-and-down swing;
if the third area to be judged is not gradually increased from the original point coordinate to the area formed by the Z axis and the Y axis to the corresponding threshold value in the preset action library, judging whether the original point coordinate moves to the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis;
if the origin coordinate moves to a section formed by an X axis and a Y axis and a section formed by an X axis and a Z axis, the gesture of the user swings back and forth;
if the origin coordinates do not move to the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis, the gesture of the user is a combined action;
If the configuration data is not that the intelligent wearable device is worn by the left hand of the user, judging whether the second area to be judged is gradually increased from the origin coordinate to a corresponding threshold value in a preset action library along the Z axis;
if the second area to be determined is gradually increased from the origin coordinate to a corresponding threshold value in a preset action library along the Z axis, the gesture of the user is inclined and lifted by the right hand;
if the second area to be judged is not gradually increased to the corresponding threshold value in the preset action library along the Z axis from the original point coordinate, judging whether the third area to be judged is gradually increased to the corresponding threshold value in the preset action library from the original point coordinate to the area formed by the Z axis and the Y axis;
if the third area to be judged gradually increases to a corresponding threshold value in a preset action library from the origin coordinate to an area formed by the Z axis and the Y axis, the gesture of the user is left-hand vertical up-and-down swing;
and if the third area to be judged is not gradually increased from the original point coordinate to the area formed by the Z axis and the Y axis to the corresponding threshold value in the preset action library, executing the judgment on whether the original point coordinate moves to the section formed by the X axis and the Y axis and the section formed by the X axis and the Z axis.
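The branching in claim 3 amounts to a small decision tree. The sketch below abstracts each threshold comparison into a boolean input, since the actual comparison against the preset action library is not specified; all names are illustrative, not the patent's.

```python
# Illustrative decision tree for claim 3. Each "aN_reaches" flag stands in
# for "area N gradually increases to its threshold in the preset action
# library"; the threshold tests themselves are abstracted away.

def classify_gesture(left_hand, a1_reaches, a2_reaches, a3_reaches,
                     origin_moves_xy_xz):
    """left_hand: device worn on the left hand (from configuration data).
    origin_moves_xy_xz: origin moves across the X-Y and X-Z cross sections."""
    if left_hand:
        if a1_reaches:            # first area grows along the X axis
            return "inclined lift"
    else:
        if a2_reaches:            # second area grows along the Z axis
            return "inclined lift"
    if a3_reaches:                # third area grows toward the Z-Y region
        return "vertical up-and-down swing"
    if origin_moves_xy_xz:        # both cross sections traversed
        return "back-and-forth swing"
    return "combined action"      # fall-through case of claim 3
```

Note that both hands share the tail of the tree (vertical swing, back-and-forth swing, combined action); only the first branch differs by hand.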
4. The motion recognition method according to claim 3, wherein determining the recognition result according to the user's gesture comprises:
when the user's gesture is a left-hand inclined lift or a right-hand inclined lift, the recognition result is a defensive action;
when the user's gesture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, the recognition result is a continuous-attack burst action;
when the user's gesture is a back-and-forth swing, the recognition result is a continuous light effect;
and when the user's gesture is a combined action, the recognition result is a magic attack.
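Claim 4 is a fixed gesture-to-effect mapping, which reduces to a lookup table. The sketch below uses illustrative English labels for the translated gesture and result names; they are not identifiers from the patent.

```python
# Hypothetical lookup table for claim 4's gesture-to-result mapping.
# Keys and values are illustrative translations of the claim language.
RESULT_BY_GESTURE = {
    "inclined lift": "defensive action",
    "vertical up-and-down swing": "continuous-attack burst action",
    "back-and-forth swing": "continuous light effect",
    "combined action": "magic attack",
}

def recognition_result(gesture):
    """Map a determined user gesture to its game recognition result."""
    return RESULT_BY_GESTURE[gesture]
```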
5. The motion recognition method according to claim 1, wherein the thresholds corresponding to the preset action library are formed by pre-entering gesture motion data and are adjusted in real time based on each motion of the user.
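Claim 5 only states that thresholds start from pre-entered gesture data and are then adjusted in real time from each user motion; it does not say how. One plausible (assumed, not disclosed) realization is an exponential moving average that drifts the stored threshold toward the user's observed values.

```python
# Assumed realization of claim 5's real-time threshold adjustment:
# an exponential moving average. The blending factor alpha and the
# function name are illustrative, not from the patent.

def adjust_threshold(current, observed, alpha=0.1):
    """Blend the stored threshold toward the user's latest motion value."""
    return (1 - alpha) * current + alpha * observed

# A pre-entered threshold of 10.0 adapting to a user who peaks near 12.0:
t = 10.0
for observed in [12.0, 12.0, 12.0]:
    t = adjust_threshold(t, observed)
```

A small alpha keeps the library stable against one-off outlier motions while still personalizing to the user over time.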
6. A motion recognition device for a motion-sensing game, comprising:
a configuration data acquisition unit, configured to acquire configuration data describing how a user wears a smart wearable device;
a detection data acquisition unit, configured to acquire detection signals from a sensor of the smart wearable device worn on the user's hand to obtain detection data;
a calculation unit, configured to perform a rounded-corner calculation on the detection data to obtain a calculation result;
a gesture determination unit, configured to compare the detection data, the calculation result, and the configuration data with thresholds corresponding to a preset action library to determine the user's gesture;
a recognition result determination unit, configured to determine a recognition result according to the user's gesture;
an effect generation unit, configured to generate a corresponding game effect according to the recognition result and send it to a terminal for display;
wherein the detection data comprise a three-dimensional coordinate origin and three-dimensional coordinate data detected continuously and stably by the sensor;
the calculation unit comprises a cross-section determination subunit, an area calculation subunit, a peak calculation subunit, and an integration subunit;
the cross-section determination subunit is configured to determine a rounded-corner cross section according to the three-dimensional coordinate data; the area calculation subunit is configured to calculate the area of the rounded-corner cross section to obtain an area to be determined; the peak calculation subunit is configured to calculate the crests and troughs of the three-dimensional coordinate data to obtain a peak value; and the integration subunit is configured to integrate the area to be determined and the peak value to obtain the calculation result.
7. A computer device, characterized by comprising a memory storing a computer program and a processor which, when executing the computer program, implements the method according to any one of claims 1 to 5.
8. A storage medium storing a computer program which, when executed by a processor, performs the method according to any one of claims 1 to 5.
CN202010038130.XA 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game Active CN111228792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010038130.XA CN111228792B (en) 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game


Publications (2)

Publication Number Publication Date
CN111228792A CN111228792A (en) 2020-06-05
CN111228792B true CN111228792B (en) 2023-05-05

Family

ID=70862552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038130.XA Active CN111228792B (en) 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game

Country Status (1)

Country Link
CN (1) CN111228792B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111643887B (en) * 2020-06-08 2023-07-14 歌尔科技有限公司 Headset, data processing method thereof and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010125294A (en) * 2008-12-01 2010-06-10 Copcom Co Ltd Game program, storage medium and computer device
CN108268137A (en) * 2018-01-24 2018-07-10 吉林大学 Taking, movement and action measuring method of letting go in a kind of Virtual assemble
CN109718559A (en) * 2018-12-24 2019-05-07 努比亚技术有限公司 Game control method, mobile terminal and computer readable storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013069224A (en) * 2011-09-26 2013-04-18 Sony Corp Motion recognition apparatus, motion recognition method, operation apparatus, electronic apparatus, and program
CN103386683B (en) * 2013-07-31 2015-04-08 哈尔滨工程大学 Kinect-based motion sensing-control method for manipulator
CN103941866B (en) * 2014-04-08 2017-02-15 河海大学常州校区 Three-dimensional gesture recognizing method based on Kinect depth image
CN105528059B (en) * 2014-09-30 2019-11-19 云南北方奥雷德光电科技股份有限公司 A kind of gesture operation in three-dimensional space method and system
CN104750397B (en) * 2015-04-09 2018-06-15 重庆邮电大学 A kind of Virtual mine natural interactive method based on body-sensing
CN105739106B (en) * 2015-06-12 2019-07-09 南京航空航天大学 A kind of true three-dimensional display apparatus of body-sensing multiple views large scale light field and method
US20170192494A1 (en) * 2016-01-05 2017-07-06 Shobhit NIRANJAN Wearable interactive gaming device
JP2017191426A (en) * 2016-04-13 2017-10-19 キヤノン株式会社 Input device, input control method, computer program, and storage medium
CN106249901B (en) * 2016-08-16 2019-03-26 南京华捷艾米软件科技有限公司 A kind of adaptation method for supporting somatosensory device manipulation with the primary game of Android
CN106445138A (en) * 2016-09-21 2017-02-22 中国农业大学 Human body posture feature extracting method based on 3D joint point coordinates
CN106510719B (en) * 2016-09-30 2023-11-28 歌尔股份有限公司 User gesture monitoring method and wearable device
CN107315473A (en) * 2017-06-19 2017-11-03 南京华捷艾米软件科技有限公司 A kind of method that body-sensing gesture selects Android Mission Objective UI controls
CN107688390A (en) * 2017-08-28 2018-02-13 武汉大学 A kind of gesture recognition controller based on body feeling interaction equipment
CN107894834B (en) * 2017-11-09 2021-04-02 上海交通大学 Control gesture recognition method and system in augmented reality environment
CN108549489B (en) * 2018-04-27 2019-12-13 哈尔滨拓博科技有限公司 gesture control method and system based on hand shape, posture, position and motion characteristics
CN109200576A (en) * 2018-09-05 2019-01-15 深圳市三宝创新智能有限公司 Somatic sensation television game method, apparatus, equipment and the storage medium of robot projection
CN110362197A (en) * 2019-06-13 2019-10-22 缤刻普达(北京)科技有限责任公司 Screen lights method, apparatus, intelligent wearable device and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant