CN111228792A - Motion sensing game action recognition method and device, computer equipment and storage medium


Info

Publication number
CN111228792A
Authority
CN
China
Prior art keywords
axis
area
user
origin coordinate
preset action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010038130.XA
Other languages
Chinese (zh)
Other versions
CN111228792B (en)
Inventor
张书臣
罗晓喆
俞知渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shimi Network Technology Co., Ltd.
Original Assignee
Shenzhen Shimi Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shimi Network Technology Co., Ltd.
Priority to CN202010038130.XA
Publication of CN111228792A
Application granted
Publication of CN111228792B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A63F13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212: Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a motion sensing game action recognition method and device, computer equipment and a storage medium. The method comprises: obtaining configuration data of the intelligent wearable device worn by a user; acquiring detection signals from the sensors of the intelligent wearable device worn on the user's hand to obtain detection data; performing fillet calculation according to the detection data to obtain a calculation result; comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture; determining a recognition result according to the user posture; and generating a corresponding game effect according to the recognition result and sending the game effect to the terminal for display. Because the detection signals are acquired by sensors, recognition accuracy is improved; and because the gestures in the preset action library are simple, the user's learning period for the gestures is shortened and the user experience is enhanced.

Description

Motion sensing game action recognition method and device, computer equipment and storage medium
Technical Field
The invention relates to motion sensing games, and in particular to a motion sensing game action recognition method, a motion sensing game action recognition device, a computer device and a storage medium.
Background
At present, human-computer interaction technology refers to techniques for realizing effective interaction between humans and machines through input and output devices. Existing human-computer interaction usually takes place through external devices such as a mouse, a keyboard, a touch screen or a handheld controller, to which the machine system then responds. For example, when a user needs to operate a game on a terminal device, the user must do so by clicking keys or operating the touch screen.
Currently, user action recognition methods for motion sensing games generally project structured light onto the player and obtain a 3D model of the player at each time point; the 3D models at the successive time points are then analyzed to obtain the player's posture information at each time point. This recognition method fails easily under environmental influences. Other methods capture the user's motion with a camera or special equipment and analyze the direction of movement from the captured result, but such approaches either cannot accurately capture fine motions or can capture the user's motions accurately only after the user has received professional training, so the resulting experience is weak.
Therefore, it is necessary to design a new method that improves recognition accuracy, shortens the user's learning period for the gestures, and enhances the user experience.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a motion sensing game action recognition method and device, computer equipment and a storage medium.
In order to achieve this purpose, the invention adopts the following technical scheme: the motion sensing game action recognition method comprises the following steps:
acquiring configuration data of intelligent wearable equipment worn by a user;
acquiring a detection signal of a sensor of intelligent wearable equipment worn on a hand of a user to obtain detection data;
performing fillet calculation according to the detection data to obtain a calculation result;
comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture;
determining a recognition result according to the user posture;
and generating a corresponding game effect according to the recognition result, and sending the game effect to the terminal for display.
The further technical scheme is as follows: the detection data comprise a three-dimensional coordinate origin and three-dimensional coordinate data detected continuously and stably by a plurality of sensors.
The further technical scheme is as follows: the fillet calculation according to the detection data to obtain a calculation result includes:
determining a fillet section according to the three-dimensional coordinate data;
calculating the area of the section of the fillet to obtain the area to be judged;
calculating the crests and troughs of the three-dimensional coordinate data to obtain a peak value;
and integrating the area to be judged and the peak value to obtain the calculation result.
The further technical scheme is as follows: the area to be judged comprises a first area to be judged of the section formed by the X-axis and the Y-axis, a second area to be judged of the section formed by the X-axis and the Z-axis, and a third area to be judged of the section formed by the Z-axis and the Y-axis.
The further technical scheme is as follows: comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture comprises:
judging whether the configuration data indicate that the user wears the intelligent wearable device on the left hand;
if the configuration data indicate that the user wears the intelligent wearable device on the left hand, judging whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library;
if the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library, the user posture is a left-hand oblique lift;
if the first area to be judged does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library;
if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the left hand;
if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, judging whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis;
if the origin coordinate moves within both cross sections, the user posture is a back-and-forth swing;
if the origin coordinate does not move within both cross sections, the user posture is a combined action;
if the configuration data indicate that the user does not wear the intelligent wearable device on the left hand, judging whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library;
if the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, the user posture is a right-hand oblique lift;
if the second area to be judged does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library;
if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the right hand;
and if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, executing the judgment of whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis.
The further technical scheme is as follows: determining the recognition result according to the user posture comprises the following steps:
when the user posture is a left-hand oblique lift or a right-hand oblique lift, the recognition result is a defensive action;
when the user posture is a vertical up-and-down swing of the left hand or of the right hand, the recognition result is an attack action such as a continuous attack;
when the user posture is a back-and-forth swing, the recognition result is a continuous light effect;
and when the user posture is a combined action, the recognition result is a magic attack.
The further technical scheme is as follows: the thresholds corresponding to the preset action library are formed by adjusting pre-entered gesture action data in real time based on the user's action data from each play.
The invention also provides a motion sensing game action recognition device, which comprises:
the configuration data acquisition unit is used for acquiring configuration data of the intelligent wearable device worn by the user;
the detection data acquisition unit is used for acquiring detection signals of a sensor of the intelligent wearable device worn on the hand of the user to obtain detection data;
the calculation unit is used for performing fillet calculation according to the detection data to obtain a calculation result;
the posture determination unit is used for comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture;
the recognition result determination unit is used for determining a recognition result according to the user posture;
and the effect generation unit is used for generating a corresponding game effect according to the recognition result, and sending the game effect to the terminal for display.
The invention also provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the method described above when executing the computer program.
The invention also provides a storage medium storing a computer program which, when executed by a processor, is operable to carry out the method as described above.
Compared with the prior art, the invention has the following beneficial effects: the user's action signals are detected by sensors, the fillet areas are calculated from the detection data and compared with the thresholds of the preset action library corresponding to each user to determine the user posture, and the recognition result is then determined from the user posture, so that the effect corresponding to the recognition result can be displayed on the terminal.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic view of an application scenario of a motion sensing game motion recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a motion sensing game motion recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a frontal attack gesture provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a defensive posture provided by an embodiment of the invention;
FIG. 5 is a schematic diagram of a side attack gesture provided by an embodiment of the invention;
FIG. 6 is a schematic diagram of a horizontal kill attack according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a hook punch attack provided by an embodiment of the present invention;
FIG. 8 is a diagram illustrating attack effects provided by an embodiment of the present invention;
fig. 9 is a schematic block diagram of a motion sensing game motion recognition apparatus 300 according to an embodiment of the present invention;
FIG. 10 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of the motion sensing game action recognition method according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of that method. The method is applied to a server. The server exchanges data with the terminal and with an intelligent wearable device worn on the user's hand. The intelligent wearable device carries a chip with a Bluetooth module, on which sensors such as a gravity sensor, a gyroscope sensor and a geomagnetic sensor are integrated. These sensors detect the user's action signals in real time, and the server analyzes the data to recognize the action corresponding to the user's posture and presents the corresponding game effect on the terminal, thereby improving recognition accuracy, shortening the user's learning period for the gestures, and enhancing the user experience.
Fig. 2 is a schematic flow chart of a motion sensing game motion recognition method according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S160.
S110, obtaining configuration data of the intelligent wearable device worn by the user.
In this embodiment, the configuration data include the position where the user wears the intelligent wearable device and the user level.
Before the player enters the game, the player is prompted to set whether the device is worn on the left or the right hand, and the set parameter values are written into a configuration table, forming the configuration data used for subsequent recognition; the user's configuration data can also be displayed on the terminal in real time.
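A minimal sketch of this configuration step, assuming illustrative field and function names (PlayerConfig, register_player) that the patent itself does not define:

```python
# Hypothetical configuration table keyed by player; only the handedness flag
# and user level described above are modeled.
from dataclasses import dataclass

@dataclass
class PlayerConfig:
    player_id: str
    left_handed: bool   # True if the intelligent wearable device is worn on the left hand
    level: int = 1      # user level, also part of the configuration data

config_table: dict[str, PlayerConfig] = {}

def register_player(player_id: str, left_handed: bool, level: int = 1) -> None:
    # Written before the player enters the game, read back during recognition.
    config_table[player_id] = PlayerConfig(player_id, left_handed, level)
```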
And S120, acquiring detection signals of the sensor of the intelligent wearable device worn on the hand of the user to obtain detection data.
In this embodiment, the detection data are the signals detected by the sensors of the intelligent wearable device worn on the user's hand, and are the data used to recognize the user's posture.
Specifically, the detection data comprise a three-dimensional coordinate origin and three-dimensional coordinate data detected continuously and stably by a plurality of sensors.
And S130, performing fillet calculation according to the detection data to obtain a calculation result.
In this embodiment, the calculation result consists of the continuous detection data combined, according to the X, Y and Z axes, into three two-dimensional fillet cross-sectional areas, together with the continuous crests and troughs of the detection data.
In one embodiment, the step S130 may include specific steps S131 to S134.
S131, determining the section of the fillet according to the three-dimensional coordinate data.
A fillet curve is the track of continuous points whose values change on two coordinate axes; when these continuous tracks are connected in series, the fillet curve is obtained, and the area of the cross-section rectangle is calculated to represent the area of the fillet cross section and to express the amplitude of its change in direction. The three two-dimensional coordinate sets are the cross section Panel1 formed by the X axis and the Y axis, the cross section Panel2 formed by the X axis and the Z axis, and the cross section Panel3 formed by the Z axis and the Y axis.
And S132, calculating the area of the section of the fillet to obtain the area to be judged.
In this embodiment, the area to be judged refers to the areas of the cross section Panel1 formed by the X axis and the Y axis, the cross section Panel2 formed by the X axis and the Z axis, and the cross section Panel3 formed by the Z axis and the Y axis.
In this embodiment, the area to be determined includes a first area to be determined of the cross section formed by the X-axis and the Y-axis, a second area to be determined of the cross section formed by the X-axis and the Z-axis, and a third area to be determined of the cross section formed by the Z-axis and the Y-axis.
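To make steps S131 and S132 concrete, the sketch below projects a continuous three-dimensional track onto the three coordinate planes and takes the bounding rectangle of each projection as the cross-section area. Treating the cross-section rectangle as a bounding rectangle, and all names used here, are assumptions for illustration:

```python
# Hedged sketch of S131/S132: project the 3-D track onto the three planes
# (Panel1: X-Y, Panel2: X-Z, Panel3: Z-Y) and approximate each fillet
# cross-section area by the rectangle enclosing the projected track.
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) relative to the origin coordinate

def panel_areas(track: List[Point3D]) -> Dict[str, float]:
    def rect_area(us: List[float], vs: List[float]) -> float:
        # Width times height of the rectangle enclosing the projected track.
        return (max(us) - min(us)) * (max(vs) - min(vs))

    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    zs = [p[2] for p in track]
    return {
        "Panel1": rect_area(xs, ys),  # first area to be judged (X-Y)
        "Panel2": rect_area(xs, zs),  # second area to be judged (X-Z)
        "Panel3": rect_area(zs, ys),  # third area to be judged (Z-Y)
    }
```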
And S133, calculating the crests and troughs of the three-dimensional coordinate data to obtain a peak value.
The peak value is determined in order to determine the corresponding direction more accurately.
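The crest-and-trough computation of step S133 can be sketched as a local-extremum scan over one axis of the sampled data; the helper below and its name are illustrative assumptions:

```python
# Hedged sketch of S133: find the largest crest and deepest trough of a
# non-empty 1-D series of readings on one axis.
from typing import List, Tuple

def peaks_and_troughs(values: List[float]) -> Tuple[float, float]:
    triples = list(zip(values, values[1:], values[2:]))
    crests = [m for a, m, b in triples if m > a and m > b]   # local maxima
    troughs = [m for a, m, b in triples if m < a and m < b]  # local minima
    # Fall back to the global extremes when the series is short or monotone.
    return (max(crests, default=max(values)),
            min(troughs, default=min(values)))
```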
And S134, integrating the area to be judged and the peak value to obtain the calculation result.
When the first area to be judged, of the cross section Panel1, continuously increases from the origin coordinate until it reaches the threshold of the rightward action in the preset action library, the direction is identified as right; when the second area to be judged, of the cross section Panel2, continuously increases from the origin coordinate to the threshold of the leftward action, the direction is identified as left; when the third area to be judged, of the cross section Panel3, continuously increases from the origin coordinate to the threshold of the downward action, the direction is identified as down; and when the values continuously increase from the origin coordinate along the X axis to the threshold of the upward action, the direction is identified as up. The coordinate values of the correction parameters in the four directions are leftward (0, 0, Z), rightward (0, Y, 0), upward (X, 0, 0) and downward (0, Z, Y), respectively.
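Under the same assumptions as the earlier sketches, the direction identification just described might look as follows; the threshold keys and the monotone-growth test are illustrative, not prescribed by the patent:

```python
# Hedged sketch of the direction rules above: a direction is recognized when
# the corresponding panel area grows continuously from the origin coordinate
# up to that direction's threshold in the preset action library.
from typing import Dict, List, Optional

def grows_to(series: List[float], threshold: float) -> bool:
    """Monotone growth from the start of the series up to the threshold."""
    return bool(series) and all(a <= b for a, b in zip(series, series[1:])) \
        and series[-1] >= threshold

def identify_direction(areas: Dict[str, List[float]],
                       x_values: List[float],
                       th: Dict[str, float]) -> Optional[str]:
    if grows_to(areas["Panel1"], th["right"]):
        return "right"  # X-Y area grows to the 'right' threshold
    if grows_to(areas["Panel2"], th["left"]):
        return "left"   # X-Z area grows to the 'left' threshold
    if grows_to(areas["Panel3"], th["down"]):
        return "down"   # Z-Y area grows to the 'down' threshold
    if grows_to(x_values, th["up"]):
        return "up"     # growth along the X axis to the 'up' threshold
    return None
```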
S140, comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture.
Specifically, the thresholds corresponding to the preset action library are formed by adjusting pre-entered gesture action data in real time based on the user's action data from each play. The sensors' real-time monitoring and recognition are limited by hardware capability; for example, the gravity sensor, gyroscope sensor and geomagnetic sensor integrated on the chip with the Bluetooth module have limited recognition precision. The recognition algorithm is therefore combined with analysis of the player's data, and the thresholds are retrieved by calling the gesture library so as to determine the posture.
Specifically, the coordinate continuous-track comparison analysis recognizes the player's action data and feeds it back by comparing the data of each action against the action data entered in advance into the action database, and recognition is made accurate through data-comparison training and historical records. The main actions of the action library comprise eight simple, easily remembered gesture actions: forward, backward, turning left, turning right, accelerating, decelerating and knocking. This shortens the player's learning period for the gestures, which are easy to learn; mapping simple actions to the gesture library in this way achieves an excellent somatosensory experience and further improves the user experience.
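One way such per-player threshold adjustment could be realized, purely as an assumption (the description only states that thresholds are tuned in real time from each play's action data), is to blend the library default with a running average of the amplitudes the player actually produces:

```python
# Hypothetical per-player threshold adaptation; nothing here is specified by
# the patent beyond adjusting pre-entered thresholds from play data.
from collections import deque

class AdaptiveThreshold:
    def __init__(self, default: float, history_len: int = 20, blend: float = 0.5):
        self.default = default                  # threshold entered in advance
        self.history = deque(maxlen=history_len)
        self.blend = blend                      # 0 = library only, 1 = history only

    def observe(self, amplitude: float) -> None:
        self.history.append(amplitude)          # record this play's measured amplitude

    @property
    def value(self) -> float:
        if not self.history:
            return self.default
        avg = sum(self.history) / len(self.history)
        return (1 - self.blend) * self.default + self.blend * avg
```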
In this embodiment, the user posture refers to the action posture that the user actually performs while playing the game.
In an embodiment, the step S140 may include the following steps S140a to S140l (a sketch of the resulting decision tree is given after these steps).
S140a, judging whether the configuration data indicate that the user wears the intelligent wearable device on the left hand;
S140b, if the configuration data indicate that the user wears the intelligent wearable device on the left hand, judging whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library;
S140c, if the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library, the user posture is a left-hand oblique lift;
S140d, if the first area to be judged does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library;
S140e, if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the left hand;
S140f, if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, judging whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis;
S140g, if the origin coordinate moves within both cross sections, the user posture is a back-and-forth swing;
S140h, if the origin coordinate does not move within both cross sections, the user posture is a combined action;
S140i, if the configuration data indicate that the user does not wear the intelligent wearable device on the left hand, judging whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library;
S140j, if the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, the user posture is a right-hand oblique lift;
S140k, if the second area to be judged does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library;
S140l, if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the right hand;
and if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, the step S140f is executed.
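As referenced above, the decision tree S140a to S140l can be transcribed directly; the helper grows_to is the monotone-growth test sketched earlier, and the dictionary keys are illustrative assumptions:

```python
# Hedged transcription of steps S140a to S140l. `areas` holds the Panel1/2/3
# area series, `moved_in` records whether the origin coordinate moved within
# the X-Y and X-Z cross sections, and `th` holds the library thresholds.
from typing import Dict, List

def grows_to(series: List[float], threshold: float) -> bool:
    return bool(series) and all(a <= b for a, b in zip(series, series[1:])) \
        and series[-1] >= threshold

def match_posture(left_handed: bool,
                  areas: Dict[str, List[float]],
                  moved_in: Dict[str, bool],
                  th: Dict[str, float]) -> str:
    if left_handed:                                          # S140a
        if grows_to(areas["Panel1"], th["oblique_lift"]):    # S140b/S140c
            return "left-hand oblique lift"
        if grows_to(areas["Panel3"], th["vertical_swing"]):  # S140d/S140e
            return "left-hand vertical up-and-down swing"
    else:
        if grows_to(areas["Panel2"], th["oblique_lift"]):    # S140i/S140j
            return "right-hand oblique lift"
        if grows_to(areas["Panel3"], th["vertical_swing"]):  # S140k/S140l
            return "right-hand vertical up-and-down swing"
    # S140f to S140h: shared tail, reached when no area reaches its threshold.
    if moved_in["Panel1"] and moved_in["Panel2"]:
        return "back-and-forth swing"
    return "combined action"
```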
The left-hand oblique lift means the player wears the device on the left hand and the continuous data track of the first area to be judged, whose fillet cross section is calculated as Panel1, is recognized as a rightward displacement of the left hand; the values change continuously from the origin coordinate (0, 0, 0) to (X, 0, 0), and the combination is recognized as a left-hand oblique lift. The right-hand oblique lift means the player wears the device on the right hand and the continuous data track of the second area to be judged, whose fillet cross section is calculated as Panel2, is recognized as a leftward displacement of the right hand; the values change continuously from the origin coordinate (0, 0, 0) to (0, 0, Z), and the combination is recognized as a right-hand oblique lift. Both user postures are recognized as defense in the algorithm's calculation result; as shown in fig. 4, a continuous dodge-attack effect is added to the game effect together with the recognition result, and the controlled character's posture is displayed.
The vertical up-and-down swing of the left hand means the player wears the device on the left hand and the continuous data track of the third area to be judged, whose fillet cross section is calculated as Panel3, is recognized as a vertical up-and-down displacement of the left hand; the values change continuously from the origin coordinate (0, 0, 0) to (0, Z, Y), and the combination is recognized as a vertical up-and-down swing of the left hand. The vertical up-and-down swing of the right hand means the player wears the device on the right hand and the continuous data track of the third area to be judged, also calculated as Panel3, is recognized as a vertical up-and-down displacement of the right hand; the values likewise change continuously from the origin coordinate (0, 0, 0) to (0, Z, Y), and the combination is recognized as a vertical up-and-down swing of the right hand. Both user postures are recognized as attacks in the algorithm's calculation result, and effects such as continuous attacks and heavy attacks are recognized from the continuous acceleration data.
In addition, combined with the acceleration data acquired by the sensors during the recognition process, it can be determined whether the posture is a downward vertical swing of the left hand or of the right hand, corresponding to the frontal attack posture shown in fig. 3; a downward 45-degree tilted slap of the left hand or of the right hand, recognized as the side attack gesture shown in fig. 5; or an upward vertical swing of the left hand or of the right hand, recognized as the hook punch attack gesture shown in fig. 7.
In addition, the back-and-forth swinging user posture is recognized in the algorithm's calculation result as a continuous light effect that casts magic on the attacked object; the magic is the light-and-shadow expression of the in-game attack. Concretely, when the player swings horizontally back and forth, continuous data-track changes occur from the origin coordinate (0, 0, 0) in the directions of the cross section Panel1, corresponding to the first area to be judged, and the cross section Panel2, corresponding to the second area to be judged, and the light-and-shadow effect of the attack magic is released by triggering its value.
In addition, the user posture corresponding to the combined attack effect is a combination of a horizontal back-and-forth swing and a vertical up-and-down swing of the left or right hand, and is recognized as the magic attack effect preset in the game. Combined with the acceleration data acquired by the sensors during the recognition process, it can be determined whether a horizontal whip of the left hand or of the right hand is recognized as the horizontal kill attack gesture, as shown in fig. 6, or whether a flip of the left hand or of the right hand is recognized as the attack special effect, as shown in fig. 8.
And S150, determining a recognition result according to the user posture.
In this embodiment, the recognition result refers to the in-game action corresponding to the user posture.
Specifically, the step S150 includes the following cases (a lookup sketch is given after the list):
when the user posture is a left-hand oblique lift or a right-hand oblique lift, the recognition result is a defensive action;
when the user posture is a vertical up-and-down swing of the left hand or of the right hand, the recognition result is an attack action such as a continuous attack;
when the user posture is a back-and-forth swing, the recognition result is a continuous light effect;
and when the user posture is a combined action, the recognition result is a magic attack.
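As noted above, step S150 reduces to a lookup; the table form and the posture labels used as dictionary keys are illustrative assumptions:

```python
# Hedged sketch of S150: map the determined user posture to its in-game
# recognition result.
RESULT_BY_POSTURE = {
    "left-hand oblique lift": "defensive action",
    "right-hand oblique lift": "defensive action",
    "left-hand vertical up-and-down swing": "attack action (continuous attack)",
    "right-hand vertical up-and-down swing": "attack action (continuous attack)",
    "back-and-forth swing": "continuous light effect",
    "combined action": "magic attack",
}

def posture_to_result(posture: str) -> str:
    return RESULT_BY_POSTURE[posture]
```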
And S160, generating a corresponding game effect according to the recognition result, and sending the game effect to the terminal for display.
Displaying the game effect corresponding to the recognition result on the terminal achieves the purpose of playing the motion sensing game.
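For step S160, one plausible shape for the message the server could push to the terminal is sketched below; the field names, effect strings and JSON transport are assumptions, as the patent does not define a wire format:

```python
# Hypothetical server-to-terminal effect message for S160.
import json

EFFECT_BY_RESULT = {
    "defensive action": "continuous dodge-attack animation",       # cf. fig. 4
    "attack action (continuous attack)": "combo-attack animation",
    "continuous light effect": "magic light-and-shadow effect",
    "magic attack": "magic attack special effect",                 # cf. fig. 8
}

def effect_message(player_id: str, result: str) -> str:
    return json.dumps({
        "player": player_id,
        "result": result,
        "effect": EFFECT_BY_RESULT.get(result, "none"),
    })
```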
According to the motion sensing game action recognition method, the user's action signals are detected by sensors, the fillet areas are calculated from the detection data and compared with the thresholds of the preset action library corresponding to each user to determine the user posture, and the recognition result is then determined from the user posture so that its corresponding effect can be displayed on the terminal. Because the detection signals are acquired by sensors, recognition accuracy is improved; and because the gestures in the preset action library are simple, the user's learning period for the gestures is shortened and the user experience is enhanced.
Fig. 9 is a schematic block diagram of a motion sensing game motion recognition device 300 according to an embodiment of the present invention. As shown in fig. 9, the present invention also provides a motion sensing game motion recognition device 300 corresponding to the above motion sensing game motion recognition method. The motion sensing game motion recognition device 300 includes means for executing the motion sensing game motion recognition method, and may be disposed in a server. Specifically, referring to fig. 9, the motion sensing game motion recognition apparatus 300 includes a configuration data acquisition unit 301, a detection data acquisition unit 302, a calculation unit 303, a posture determination unit 304, a recognition result determination unit 305, and an effect generation unit 306.
The configuration data acquisition unit 301 is used for obtaining configuration data of the intelligent wearable device worn by the user; the detection data acquisition unit 302 is used for acquiring detection signals from the sensors of the intelligent wearable device worn on the user's hand to obtain detection data; the calculation unit 303 is used for performing fillet calculation according to the detection data to obtain a calculation result; the posture determination unit 304 is used for comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture; the recognition result determination unit 305 is used for determining a recognition result according to the user posture; and the effect generation unit 306 is used for generating a corresponding game effect according to the recognition result, and sending the game effect to the terminal for display.
In one embodiment, the calculation unit 303 includes a section determination subunit, an area calculation subunit, a peak calculation subunit and an integration subunit.
The section determination subunit is used for determining the fillet cross section according to the three-dimensional coordinate data; the area calculation subunit is used for calculating the area of the fillet cross section to obtain the area to be judged; the peak calculation subunit is used for calculating the crests and troughs of the three-dimensional coordinate data to obtain a peak value; and the integration subunit is used for integrating the area to be judged and the peak value to obtain the calculation result.
In an embodiment, the posture determination unit 304 includes a configuration data judging subunit, a first judging subunit, a second judging subunit, a third judging subunit, a fourth judging subunit and a fifth judging subunit.
The configuration data judging subunit is used to judge whether the configuration data indicate that the user wears the intelligent wearable device on the left hand. The first judging subunit is used to judge, if the user wears the intelligent wearable device on the left hand, whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library; if so, the user posture is a left-hand oblique lift. The second judging subunit is used to judge, if the first area to be judged does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library; if so, the user posture is a vertical up-and-down swing of the left hand. The third judging subunit is used to judge, if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis; if so, the user posture is a back-and-forth swing; if not, the user posture is a combined action. The fourth judging subunit is used to judge, if the configuration data indicate that the user does not wear the intelligent wearable device on the left hand, whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library; if so, the user posture is a right-hand oblique lift. The fifth judging subunit is used to judge, if the second area to be judged does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library; if so, the user posture is a vertical up-and-down swing of the right hand; if not, the judgment of whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis is executed.
In an embodiment, the recognition result determination unit 305 is used for determining that the recognition result is a defensive action when the user posture is a left-hand oblique lift or a right-hand oblique lift; an attack action such as a continuous attack when the user posture is a vertical up-and-down swing of the left hand or of the right hand; a continuous light effect when the user posture is a back-and-forth swing; and a magic attack when the user posture is a combined action.
It should be noted that, as can be clearly understood by those skilled in the art, the specific implementation processes of the motion sensing game motion recognition apparatus 300 and each unit may refer to the corresponding descriptions in the foregoing method embodiments, and for convenience and brevity of description, no further description is provided herein.
The above-mentioned terminal may be an electronic device having a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device.
The motion sensing game motion recognition device 300 may be implemented as a computer program that can be run on a computer device as shown in fig. 10.
Referring to fig. 10, fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, which may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 10, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 comprises program instructions that, when executed, cause the processor 502 to perform a motion sensing game motion recognition method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be enabled to execute a motion sensing game motion recognition method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 10 is a block diagram of only the portion relevant to the present solution and does not limit the computer device 500 to which the present solution is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps:
acquiring configuration data of the intelligent wearable device worn by the user; acquiring detection signals from the sensors of the intelligent wearable device worn on the user's hand to obtain detection data; performing fillet calculation according to the detection data to obtain a calculation result; comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture; determining a recognition result according to the user posture; and generating a corresponding game effect according to the recognition result and sending the game effect to the terminal for display.
The detection data comprise a three-dimensional coordinate origin and three-dimensional coordinate data detected continuously and stably by a plurality of sensors.
The thresholds corresponding to the preset action library are formed by adjusting pre-entered gesture action data in real time based on the user's action data from each play.
In an embodiment, when the processor 502 implements the step of performing the fillet calculation according to the detection data to obtain the calculation result, the following steps are specifically implemented:
determining the fillet cross section according to the three-dimensional coordinate data; calculating the area of the fillet cross section to obtain the area to be judged; calculating the crests and troughs of the three-dimensional coordinate data to obtain a peak value; and integrating the area to be judged and the peak value to obtain the calculation result.
The area to be judged comprises a first area to be judged of the section formed by the X-axis and the Y-axis, a second area to be judged of the section formed by the X-axis and the Z-axis, and a third area to be judged of the section formed by the Z-axis and the Y-axis.
In an embodiment, when the processor 502 implements the step of comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to the preset action library to determine the user posture, the following steps are specifically implemented:
judging whether the configuration data indicate that the user wears the intelligent wearable device on the left hand; if the configuration data indicate that the user wears the intelligent wearable device on the left hand, judging whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library; if the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library, the user posture is a left-hand oblique lift; if the first area to be judged does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library; if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the left hand; if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, judging whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis; if the origin coordinate moves within both cross sections, the user posture is a back-and-forth swing; if the origin coordinate does not move within both cross sections, the user posture is a combined action; if the configuration data indicate that the user does not wear the intelligent wearable device on the left hand, judging whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library; if the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, the user posture is a right-hand oblique lift; if the second area to be judged does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library; if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the right hand; and if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, executing the judgment of whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis.
In an embodiment, when the processor 502 implements the step of determining the recognition result according to the user posture, the following steps are specifically implemented:
when the user posture is a left-hand oblique lift or a right-hand oblique lift, the recognition result is a defensive action; when the user posture is a vertical up-and-down swing of the left hand or of the right hand, the recognition result is an attack action such as a continuous attack; when the user posture is a back-and-forth swing, the recognition result is a continuous light effect; and when the user posture is a combined action, the recognition result is a magic attack.
It should be understood that, in the embodiment of the present application, the processor 502 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of:
acquiring configuration data of the intelligent wearable device worn by the user; acquiring detection signals from the sensors of the intelligent wearable device worn on the user's hand to obtain detection data; performing fillet calculation according to the detection data to obtain a calculation result; comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to a preset action library to determine the user posture; determining a recognition result according to the user posture; and generating a corresponding game effect according to the recognition result and sending the game effect to the terminal for display.
The detection data comprise a three-dimensional coordinate origin and three-dimensional coordinate data detected continuously and stably by a plurality of sensors.
The thresholds corresponding to the preset action library are formed by adjusting pre-entered gesture action data in real time based on the user's action data from each play.
In an embodiment, when the processor executes the computer program to implement the step of performing the fillet calculation according to the detection data to obtain the calculation result, the following steps are specifically implemented:
determining the fillet cross section according to the three-dimensional coordinate data; calculating the area of the fillet cross section to obtain the area to be judged; calculating the crests and troughs of the three-dimensional coordinate data to obtain a peak value; and integrating the area to be judged and the peak value to obtain the calculation result.
The area to be judged comprises a first area to be judged of the section formed by the X-axis and the Y-axis, a second area to be judged of the section formed by the X-axis and the Z-axis, and a third area to be judged of the section formed by the Z-axis and the Y-axis.
In an embodiment, when the processor executes the computer program to implement the step of comparing the detection data, the calculation result and the configuration data with the thresholds corresponding to the preset action library to determine the user posture, the following steps are specifically implemented:
judging whether the configuration data indicate that the user wears the intelligent wearable device on the left hand; if the configuration data indicate that the user wears the intelligent wearable device on the left hand, judging whether the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library; if the first area to be judged gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library, the user posture is a left-hand oblique lift; if the first area to be judged does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library; if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the left hand; if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, judging whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis; if the origin coordinate moves within both cross sections, the user posture is a back-and-forth swing; if the origin coordinate does not move within both cross sections, the user posture is a combined action; if the configuration data indicate that the user does not wear the intelligent wearable device on the left hand, judging whether the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library; if the second area to be judged gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, the user posture is a right-hand oblique lift; if the second area to be judged does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, judging whether the third area to be judged, in the region formed by the Z axis and the Y axis, gradually increases from the origin coordinate to the corresponding threshold in the preset action library; if the third area to be judged gradually increases from the origin coordinate to the corresponding threshold in the preset action library, the user posture is a vertical up-and-down swing of the right hand; and if the third area to be judged does not gradually increase from the origin coordinate to the corresponding threshold in the preset action library, executing the judgment of whether the origin coordinate moves within the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis.
In an embodiment, when the processor executes the computer program to implement the step of determining the recognition result according to the user posture, the following steps are specifically implemented:
When the user posture is a left-hand inclined raise or a right-hand inclined raise, the recognition result is a defensive action; when the user posture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, the recognition result is a continuous attack action; when the user posture is a back-and-forth swing, the recognition result is a continuous light effect; and when the user posture is a combined action, the recognition result is a magic attack.
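This step is effectively a table lookup. A minimal sketch, assuming the posture strings produced by the hypothetical `determine_posture` above:

```python
# Assumed posture labels; the mapping itself follows the paragraph above.
POSTURE_TO_RESULT = {
    "left-hand inclined raise": "defensive action",
    "right-hand inclined raise": "defensive action",
    "left-hand vertical up-and-down swing": "continuous attack action",
    "right-hand vertical up-and-down swing": "continuous attack action",
    "back-and-forth swing": "continuous light effect",
    "combined action": "magic attack",
}

def determine_result(posture):
    return POSTURE_TO_RESULT[posture]
```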
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium capable of storing program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed.
The steps in the method of the embodiments of the invention may be reordered, combined, or deleted according to actual needs. The units in the apparatus of the embodiments of the invention may be merged, divided, or deleted according to actual needs. In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto; any equivalent modification or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed by the invention shall fall within the protection scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A motion sensing game action recognition method, characterized by comprising the following steps:
acquiring configuration data of an intelligent wearable device worn by a user;
acquiring a detection signal of a sensor of the intelligent wearable device worn on a hand of the user to obtain detection data;
performing fillet calculation according to the detection data to obtain a calculation result;
comparing the detection data, the calculation result, and the configuration data with a threshold corresponding to a preset action library to determine a user posture;
determining a recognition result according to the user posture;
and generating a corresponding game effect according to the recognition result, so as to send the game effect to a terminal for display.
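Reading the six steps of claim 1 as a pipeline gives the skeleton below. Every call here is a hypothetical stub standing in for one claim step (no real device API is implied), reusing `determine_posture` and `POSTURE_TO_RESULT` from the sketches above and `fillet_calculation` as sketched under claim 3 below.

```python
# Hypothetical end-to-end skeleton of claim 1; all names are placeholders.
def recognize_motion_sensing_action(device, action_library, terminal):
    config = device.configuration()              # step 1: configuration data
    detection = device.sensor_signals()          # step 2: detection data
    calc = fillet_calculation(detection)         # step 3: fillet calculation
    posture = determine_posture(                 # step 4: threshold comparison
        left_hand=config["left_hand"],
        areas=calc["areas"],
        origin_in_xy_and_xz=device.origin_moved_into_xy_and_xz(),
        thresholds=action_library)
    result = POSTURE_TO_RESULT[posture]          # step 5: recognition result
    terminal.display(result)                     # step 6: game effect
```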
2. The motion sensing game action recognition method according to claim 1, wherein the detection data includes a three-dimensional coordinate origin and three-dimensional coordinate data obtained from a plurality of continuous and stable sensor detections.
3. The motion sensing game action recognition method according to claim 2, wherein the performing fillet calculation according to the detection data to obtain a calculation result comprises:
determining a fillet section according to the three-dimensional coordinate data;
calculating the area of the fillet section to obtain an area to be determined;
calculating the wave crest and the wave trough of the three-dimensional coordinate data to obtain a peak value;
and integrating the area to be determined and the peak value to obtain the calculation result.
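One plausible concretization of these four sub-steps, assuming the trajectory arrives as an (N, 3) array of coordinate samples; the shoelace-area reading of "calculating the area of the fillet section", the peak-to-peak definition of "peak value", and all names are assumptions of this sketch:

```python
# Illustrative fillet calculation; numpy usage and all interpretations
# below are assumptions, not the patent's own definitions.
import numpy as np

def fillet_calculation(coords):
    """coords: (N, 3) array of three-dimensional coordinate samples."""
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]

    def shoelace(u, v):
        # Polygon area of the trajectory projected onto one coordinate plane.
        return 0.5 * abs(np.dot(u, np.roll(v, 1)) - np.dot(v, np.roll(u, 1)))

    areas = {
        "xy": shoelace(x, y),  # first area to be determined
        "xz": shoelace(x, z),  # second area to be determined
        "zy": shoelace(z, y),  # third area to be determined
    }
    # Crest-to-trough span per axis as the "peak value".
    peak = {axis: float(c.max() - c.min()) for axis, c in zip("xyz", (x, y, z))}
    # "Integrating" the areas and the peak into one calculation result.
    return {"areas": areas, "peak": peak}
```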
4. The motion sensing game action recognition method according to claim 3, wherein the area to be determined includes a first area to be determined of the cross section formed by the X axis and the Y axis, a second area to be determined of the cross section formed by the X axis and the Z axis, and a third area to be determined of the cross section formed by the Z axis and the Y axis.
5. The motion sensing game action recognition method according to claim 4, wherein the comparing the detection data, the calculation result, and the configuration data with the threshold corresponding to the preset action library to determine the user posture comprises:
judging whether the configuration data indicates that the user wears the intelligent wearable device on the left hand;
if the configuration data indicates that the user wears the intelligent wearable device on the left hand, judging whether the first area to be determined gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library;
if the first area to be determined gradually increases from the origin coordinate along the X axis to the corresponding threshold in the preset action library, the user posture is a left-hand inclined raise;
if the first area to be determined does not gradually increase from the origin coordinate along the X axis to the corresponding threshold in the preset action library, judging whether the third area to be determined gradually increases from the origin coordinate, over the cross section formed by the Z axis and the Y axis, to the corresponding threshold in the preset action library;
if the third area to be determined gradually increases from the origin coordinate, over the cross section formed by the Z axis and the Y axis, to the corresponding threshold in the preset action library, the user posture is a left-hand vertical up-and-down swing;
if the third area to be determined does not gradually increase from the origin coordinate, over the cross section formed by the Z axis and the Y axis, to the corresponding threshold in the preset action library, judging whether the origin coordinate moves into the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis;
if the origin coordinate moves into the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis, the user posture is a back-and-forth swing;
if the origin coordinate does not move into the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis, the user posture is a combined action;
if the configuration data does not indicate that the user wears the intelligent wearable device on the left hand, judging whether the second area to be determined gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library;
if the second area to be determined gradually increases from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, the user posture is a right-hand inclined raise;
if the second area to be determined does not gradually increase from the origin coordinate along the Z axis to the corresponding threshold in the preset action library, judging whether the third area to be determined gradually increases from the origin coordinate, over the cross section formed by the Z axis and the Y axis, to the corresponding threshold in the preset action library;
if the third area to be determined gradually increases from the origin coordinate, over the cross section formed by the Z axis and the Y axis, to the corresponding threshold in the preset action library, the user posture is a right-hand vertical up-and-down swing;
and if the third area to be determined does not gradually increase from the origin coordinate, over the cross section formed by the Z axis and the Y axis, to the corresponding threshold in the preset action library, executing the judging whether the origin coordinate moves into the cross section formed by the X axis and the Y axis and the cross section formed by the X axis and the Z axis.
6. The motion sensing game action recognition method according to claim 5, wherein the determining a recognition result according to the user posture comprises:
when the user posture is a left-hand inclined raise or a right-hand inclined raise, the recognition result is a defensive action;
when the user posture is a left-hand vertical up-and-down swing or a right-hand vertical up-and-down swing, the recognition result is a continuous attack action;
when the user posture is a back-and-forth swing, the recognition result is a continuous light effect;
and when the user posture is a combined action, the recognition result is a magic attack.
7. The motion sensing game action recognition method according to claim 1, wherein the threshold corresponding to the preset action library is formed by pre-entering gesture action data and is adjusted in real time according to each action performed by the user.
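Claim 7 leaves the adjustment rule open; one simple reading is an exponential moving average seeded from the pre-entered gesture data. The update rule and the smoothing factor below are assumptions of this sketch, not the patent's method:

```python
# Hypothetical real-time threshold adjustment (exponential moving average).
def update_threshold(current, observed, alpha=0.1):
    """Nudge the stored threshold toward the value measured for the
    user's latest action; alpha controls how quickly it adapts."""
    return (1.0 - alpha) * current + alpha * observed
```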
8. A motion sensing game action recognition apparatus, characterized by comprising:
the configuration data acquisition unit is used for acquiring configuration data of the intelligent wearable device worn by the user;
the detection data acquisition unit is used for acquiring detection signals of a sensor of the intelligent wearable device worn on the hand of the user to obtain detection data;
the calculation unit is used for performing fillet calculation according to the detection data to obtain a calculation result;
the posture determining unit is used for comparing the detection data, the calculation result, and the configuration data with a threshold corresponding to a preset action library to determine the user posture;
the recognition result determining unit is used for determining a recognition result according to the user posture;
and the effect generating unit is used for generating a corresponding game effect according to the recognition result, so as to send the game effect to a terminal for display.
9. A computer device, characterized in that the computer device comprises a memory, on which a computer program is stored, and a processor, which when executing the computer program implements the method according to any of claims 1 to 7.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202010038130.XA 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game Active CN111228792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010038130.XA CN111228792B (en) 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010038130.XA CN111228792B (en) 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game

Publications (2)

Publication Number Publication Date
CN111228792A true CN111228792A (en) 2020-06-05
CN111228792B CN111228792B (en) 2023-05-05

Family

ID=70862552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038130.XA Active CN111228792B (en) 2020-01-14 2020-01-14 Motion recognition method, device, computer equipment and storage medium for motion recognition game

Country Status (1)

Country Link
CN (1) CN111228792B (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010125294A (en) * 2008-12-01 2010-06-10 Copcom Co Ltd Game program, storage medium and computer device
US20130077831A1 (en) * 2011-09-26 2013-03-28 Sony Corporation Motion recognition apparatus, motion recognition method, operation apparatus, electronic apparatus, and program
CN103116397A (en) * 2011-09-26 2013-05-22 索尼公司 Motion recognition apparatus, motion recognition method, operation apparatus, electronic apparatus, and program
CN103386683A (en) * 2013-07-31 2013-11-13 哈尔滨工程大学 Kinect-based motion sensing-control method for manipulator
CN103941866A (en) * 2014-04-08 2014-07-23 河海大学常州校区 Three-dimensional gesture recognizing method based on Kinect depth image
CN105528059A (en) * 2014-09-30 2016-04-27 奥视电子科技(海南)有限公司 A three-dimensional gesture operation method and system
CN104750397A (en) * 2015-04-09 2015-07-01 重庆邮电大学 Somatosensory-based natural interaction method for virtual mine
CN105739106A (en) * 2015-06-12 2016-07-06 南京航空航天大学 Somatosensory multi-view point large-size light field real three-dimensional display device and method
US20170192494A1 (en) * 2016-01-05 2017-07-06 Shobhit NIRANJAN Wearable interactive gaming device
JP2017191426A (en) * 2016-04-13 2017-10-19 キヤノン株式会社 Input device, input control method, computer program, and storage medium
CN106249901A (en) * 2016-08-16 2016-12-21 南京华捷艾米软件科技有限公司 A kind of adaptation method supporting somatosensory device manipulation with the primary game of Android
CN106445138A (en) * 2016-09-21 2017-02-22 中国农业大学 Human body posture feature extracting method based on 3D joint point coordinates
US20190053738A1 (en) * 2016-09-30 2019-02-21 Goertek Inc. Method for monitoring user gesture of wearable device
CN107315473A (en) * 2017-06-19 2017-11-03 南京华捷艾米软件科技有限公司 A kind of method that body-sensing gesture selects Android Mission Objective UI controls
CN107688390A (en) * 2017-08-28 2018-02-13 武汉大学 A kind of gesture recognition controller based on body feeling interaction equipment
CN107894834A (en) * 2017-11-09 2018-04-10 上海交通大学 Gesture identification method and system are controlled under augmented reality environment
CN108268137A (en) * 2018-01-24 2018-07-10 吉林大学 Taking, movement and action measuring method of letting go in a kind of Virtual assemble
CN108549489A (en) * 2018-04-27 2018-09-18 哈尔滨拓博科技有限公司 A kind of gestural control method and system based on hand form, posture, position and motion feature
CN109200576A (en) * 2018-09-05 2019-01-15 深圳市三宝创新智能有限公司 Somatic sensation television game method, apparatus, equipment and the storage medium of robot projection
CN109718559A (en) * 2018-12-24 2019-05-07 努比亚技术有限公司 Game control method, mobile terminal and computer readable storage medium
CN110362197A (en) * 2019-06-13 2019-10-22 缤刻普达(北京)科技有限责任公司 Screen lights method, apparatus, intelligent wearable device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111643887A (en) * 2020-06-08 2020-09-11 歌尔科技有限公司 Head-mounted device, data processing method thereof, and computer-readable storage medium
CN111643887B (en) * 2020-06-08 2023-07-14 歌尔科技有限公司 Headset, data processing method thereof and computer readable storage medium

Also Published As

Publication number Publication date
CN111228792B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
US8395620B2 (en) Method and system for tracking of a subject
CN106249882B (en) Gesture control method and device applied to VR equipment
CN105229666B (en) Motion analysis in 3D images
US8175374B2 (en) Volume recognition method and system
US9262012B2 (en) Hover angle
CN106155312A (en) Gesture recognition and control method and device
US10901496B2 (en) Image processing apparatus, image processing method, and program
CN109758760B (en) Shooting control method and device in football game, computer equipment and storage medium
WO2017000917A1 (en) Positioning method and apparatus for motion-stimulation button
CN111258422B (en) Terminal game interaction method and device, computer equipment and storage medium
CN104516649A (en) Intelligent cell phone operating technology based on motion-sensing technology
CN111273777A (en) Virtual content control method and device, electronic equipment and storage medium
US10983606B2 (en) Control instruction input methods and control instruction input devices
CN111228792A (en) Motion sensing game action recognition method and device, computer equipment and storage medium
JP2007072569A (en) Program, information recording medium, and freehand drawing similarity judging device
CN105204630A (en) Method and system for garment design through motion sensing
WO2022137450A1 (en) Information processing device, information processing method, and program
JP2016095795A (en) Recognition device, method, and program
CN111103973A (en) Model processing method, model processing device, computer equipment and storage medium
KR101576643B1 (en) Method and Apparatus for Controlling 3 Dimensional Virtual Space on Screen
Ko et al. Modeling One-Dimensional Touch Pointing with Nominal Target Width
CN117687504A (en) Cursor control method and device of wearable mouse and computer equipment
JP2022142624A (en) Detection processing device, detection processing method, and information processing system
CA3229530A1 (en) Electronic apparatus and program
WO2016148142A1 (en) Display data processing device, display device, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant