CN115006840A - Somatosensory online game method, device and computer-readable storage medium - Google Patents
- Publication number
- CN115006840A (application number CN202210697254.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- game
- client
- somatosensory
- player
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/428—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/211—Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/812—Ball games, e.g. soccer or baseball
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/10—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
- A63F2300/105—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals using inertial sensors, e.g. accelerometers, gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8011—Ball
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a somatosensory online game method, a device and a computer-readable storage medium, wherein the method comprises the following steps: after a client starts a preset somatosensory game, acquiring raw pose data of the player from the client; generating a spectrum gray-scale map according to the raw pose data; importing the spectrum gray-scale map into a preset convolutional neural network to recognize the player's action; generating a game operation instruction according to the recognition result of the preset convolutional neural network; and sending the game operation instruction to the client. The method can realize somatosensory online gaming with low computing requirements on the client, high player-action recognition accuracy and good game fluency.
Description
Technical Field
The invention relates to the technical field of motion sensing games, in particular to a motion sensing networking game method, device and computer readable storage medium.
Background
Currently, a motion sensing game generally runs on a local terminal, which is relied on to recognize the user's actions and generate game operation instructions. Differences in terminal computing power affect the accuracy and speed of action recognition. Because different terminals have different computing capacities, if a plurality of terminals are interconnected to play a somatosensory networked game, the pictures displayed by the terminals are not synchronized, which seriously affects the user's game experience.
Therefore, it is desirable to provide a motion sensing online game method so that users can play motion sensing games over a wide area network.
Disclosure of Invention
The embodiment of the application provides a somatosensory networking game method, which aims to realize somatosensory games over a wide area network and expand the game modes available to users.
In order to achieve the above object, an embodiment of the present application provides a motion sensing online game method, including:
after a preset somatosensory game is started by a client, acquiring attitude original data of a player from the client;
generating a frequency spectrum gray scale map according to the attitude original data;
leading the frequency spectrum gray scale map into a preset convolution neural network to identify player actions;
generating a game operation instruction according to the recognition result of the preset convolutional neural network;
and sending the game operation instruction to the client.
In one embodiment, generating a spectral gray-scale map from the pose raw data comprises:
generating a space attitude track of the action of the player according to the original data of the attitude of the player;
generating an oscillogram according to the space attitude track;
performing discrete Fourier transform on the original sampling data in the oscillogram to obtain frequency spectrum data;
merging the frequency spectrum data according to time domain information to obtain a two-dimensional frequency spectrum waterfall graph;
and mapping the element values in the two-dimensional frequency spectrum waterfall graph according to the image gray value to obtain the frequency spectrum gray graph.
In one embodiment, the raw sample data in the waveform map includes sample data for channel I and sample data for channel Q;
prior to discrete fourier transforming the raw sampled data in the waveform map, the method further comprises:
merging the sampling data of the channel I and the sampling data of the channel Q into a channel S according to the following formula:
In the formula, S_I is the sampled data of channel I, S_Q is the sampled data of channel Q, and S is the combined power density.
In one embodiment, discrete fourier transforming the raw sampled data in the waveform map is performed using the following equation:
In the formula, N is the number of sampling points of the discrete Fourier transform, and the value of N is limited to a positive integer power of 2.
In one embodiment, obtaining raw data of a player's pose from a client comprises:
and acquiring the original data of the posture of the player matched with a preset target axis of the current motion sensing game from the client.
In one embodiment, before the spectrum waterfall graph is led into a preset convolutional neural network to identify the action of a player, the method further comprises the following steps:
acquiring attitude data of standard somatosensory actions of a player completing a preset somatosensory game to establish a preset data set;
acquiring a training frequency spectrum waterfall diagram of the preset data set;
determining a preset target axis of the preset somatosensory game according to the oscillogram of the preset data set; and
and training the convolutional neural network by using the training frequency spectrum data graph of the preset target axis.
In one embodiment, the raw pose data obtained from the client is compressed by the Varints algorithm.
In one embodiment, sending the game operation instruction to the client comprises:
when the gateway node information bound by the client is inconsistent with the distribution gateway node of the current server, the game operation instruction is issued to a target gateway node bound by the client;
and sending the game operation instruction to the client through the target gateway node.
In order to achieve the above object, an embodiment of the present application further provides a motion sensing networked game device, which includes a memory, a processor, and a motion sensing networked game program stored in the memory and operable on the processor, where the processor implements the motion sensing networked game method according to any one of the above embodiments when executing the motion sensing networked game program.
In order to achieve the above object, an embodiment of the present application further provides a computer-readable storage medium, where a somatosensory online game program is stored, and when executed by a processor, the somatosensory online game program implements the somatosensory online game method according to any one of the above items.
According to the somatosensory networking game method, the server can convert player posture data provided by the client into the frequency spectrum gray-scale image, the frequency spectrum gray-scale image is classified through the preset convolutional neural network to identify actions of the player, a game operation instruction can be generated according to the actions of the player and is issued to the client, and the client can execute the game operation instruction to enable actions of game characters to be matched with the actions of the player. Therefore, in the whole game process, the client only performs data exchange and operation instruction execution, so that the model of the client is not particularly limited, and clients of different models and types can all perform somatosensory networking games. In addition, action recognition of the player is carried out on the server, so that the computing power requirement on the client is extremely low, action recognition delay caused by inconsistent computing power among different devices is eliminated, and the fluency of the somatosensory networked game is improved. Therefore, the motion sensing online game method has the advantages of being capable of achieving the motion sensing online game, low in requirement for computing power of the client side, high in accuracy of player action recognition and good in game fluency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the structures shown in these drawings without creative effort.
FIG. 1 is a block diagram of an embodiment of a motion sensing networked gaming device of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of a somatosensory online game method according to the invention;
FIG. 3 is a schematic flow chart of another embodiment of the motion-sensing networked game method of the present invention;
FIG. 4 is a schematic flow chart of a somatosensory online game method according to another embodiment of the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of "first", "second", "third", etc. does not denote any order; these words may be interpreted as names.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a server 1 (also called a motion sensing networked game device) in a hardware operating environment according to an embodiment of the present invention.
The server provided by the embodiment of the invention comprises equipment with a display function, such as Internet of things equipment, AR/VR equipment with a networking function, an intelligent sound box, an automatic driving automobile, a PC, a smart phone, a tablet personal computer, an electronic book reader, a portable computer and the like.
As shown in fig. 1, the server 1 includes: memory 11, processor 12, and network interface 13.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the server 1, for example a hard disk of the server 1. The memory 11 may also be an external storage device of the server 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the server 1.
Further, the memory 11 may also include an internal storage unit of the server 1 and also an external storage device. The memory 11 may be used not only to store application software installed in the server 1 and various types of data such as codes of the motion sensing network game program 10, but also to temporarily store data that has been output or is to be output.
The processor 12 may be, in some embodiments, a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip, and is configured to execute program codes stored in the memory 11 or process data, such as executing the somatosensory networked game program 10.
The network interface 13 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used for establishing a communication connection between the server 1 and other electronic devices.
The network may be the internet, a cloud network, a wireless fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), and/or a Metropolitan Area Network (MAN). Various devices in the network environment may be configured to connect to the communication network according to various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, at least one of: transmission control protocol and internet protocol (TCP/IP), User Datagram Protocol (UDP), hypertext transfer protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, optical fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communications, wireless Access Points (APs), device-to-device communications, cellular communication protocol, and/or bluetooth (Blue Tooth) communication protocol, or a combination thereof.
Optionally, the server may further comprise a user interface, which may include a Display (Display), an input unit such as a Keyboard (Keyboard), and an optional user interface may also include a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the server 1 and for displaying a visualized user interface.
While fig. 1 shows only a server 1 with components 11-13 and a motion-sensing networked game program 10, those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the server 1, and may include fewer or more components than those shown, or some components in combination, or a different arrangement of components.
In this embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11, and perform the following operations:
after a preset somatosensory game is started by a client, acquiring attitude original data of a player from the client;
generating a frequency spectrum gray scale map according to the attitude original data;
leading the frequency spectrum gray scale map into a preset convolution neural network to identify actions of a player;
generating a game operation instruction according to the recognition result of the preset convolutional neural network;
and sending the game operation instruction to the client.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
generating a space attitude track of the action of the player according to the original data of the attitude of the player;
generating an oscillogram according to the space attitude track;
performing discrete Fourier transform on the original sampling data in the oscillogram to obtain frequency spectrum data;
merging the frequency spectrum data according to time domain information to obtain a two-dimensional frequency spectrum waterfall graph;
and mapping the element values in the two-dimensional frequency spectrum waterfall graph according to the image gray value to obtain the frequency spectrum gray graph.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
merging the sampling data of the channel I and the sampling data of the channel Q into a channel S according to the following formula:
In the formula, S_I is the sampled data of channel I, S_Q is the sampled data of channel Q, and S is the combined power density.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
performing discrete Fourier transform on the original sampling data in the oscillogram by adopting the following formula:
In the formula, N is the number of sampling points of the discrete Fourier transform, and the value of N is limited to a positive integer power of 2.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
and acquiring the original data of the posture of the player matched with a preset target axis of the current motion sensing game from the client.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
acquiring attitude data of standard somatosensory actions of a player completing a preset somatosensory game to establish a preset data set;
acquiring a training frequency spectrum waterfall diagram of the preset data set;
determining a preset target axis of the preset somatosensory game according to the oscillogram of the preset data set; and
and training the convolutional neural network by using the training frequency spectrum data graph of the preset target axis.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
and compressing the attitude raw data acquired from the client by using the Varints algorithm.
In one embodiment, the processor 12 may be configured to call the somatosensory networked game program stored in the memory 11 and perform the following operations:
when the gateway node information bound by the client is inconsistent with the distribution gateway node of the current server, the game operation instruction is issued to a target gateway node bound by the client;
and sending the game operation instruction to the client through the target gateway node.
Based on the hardware framework of the motion sensing online game device, the embodiment of the motion sensing online game method is provided. The invention discloses a motion sensing networking game method, which aims to realize motion sensing games on a wide area network and expand the game modes of users.
Referring to fig. 2, fig. 2 is an embodiment of the motion sensing online game method of the present invention, and the motion sensing online game method includes the following steps:
and S10, after the client starts the preset motion sensing game, acquiring the posture original data of the player from the client.
Here, the client refers to a terminal running a game, which includes, but is not limited to, a personal PC, a game console, a portable game console, and a mobile client. The mobile client includes, but is not limited to, a smart phone, a tablet computer, and a portable computer.
Further, the player's pose data may be detected by a six-axis IMU sensor, which includes a three-axis accelerometer and a three-axis gyroscope and can detect three-axis acceleration data and three-axis angular velocity data while the player is in motion, ultimately generating the player's pose data. Typically, the player pose data is provided by a motion sensing device connected to the client, where the motion sensing device includes, but is not limited to, a bracelet, a glove, a watch, a headband, a hat, a vest, a fitness ring or a gamepad. If the client itself has a six-axis IMU sensor, the player pose data may be provided directly by the client.
Specifically, after the game is started, the client establishes a communication connection with the server, and at the time of the game, the client can transmit the player posture data to the server based on a specific communication protocol (generally, TCP/IP protocol).
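Purely as an illustrative framing of this exchange (the patent does not specify a wire format; the host name, port and field layout below are assumptions), the client side could stream six-axis IMU samples over a TCP socket roughly as follows:

```python
import socket
import struct

def send_pose_samples(sock: socket.socket, samples) -> None:
    """Stream six-axis IMU samples (ax, ay, az, gx, gy, gz) as little-endian floats."""
    for ax, ay, az, gx, gy, gz in samples:
        sock.sendall(struct.pack("<6f", ax, ay, az, gx, gy, gz))

# Usage sketch: connect to a hypothetical game server and send one batch of samples.
if __name__ == "__main__":
    with socket.create_connection(("game-server.example", 9000)) as sock:
        send_pose_samples(sock, [(0.01, -0.02, 0.98, 0.1, 0.0, -0.3)])
```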
And S20, generating a frequency spectrum gray scale map according to the attitude original data.
Specifically, the spectral gray-scale map is a gray-scale representation of the time-domain and frequency-domain spectra of the pose data, where the two axes of the spectral gray-scale map are frequency and time, respectively. The action posture of the player can be visually and really displayed through the frequency spectrum gray-scale map.
It should be noted that, when generating the spectrum gray scale map, the server may generate a corresponding number of spectrum gray scale maps according to the number of axes in the acquired data. That is, each time data of one axis is acquired, the server generates a spectrum gray scale map of the data of the axis. For example, if the player posture data acquired by the server from the client includes X-axis data and Y-axis data of the accelerometer, the server generates two spectrum gray-scale maps, i.e., spectrum waterfall maps of the X-axis and the Y-axis of the accelerometer. Among these, the player posture data provided by the client to the server typically includes data for three axes of the accelerometer and data for three axes of the gyroscope, i.e., there are six axes of data in common. Therefore, when the server generates the spectrum gray-scale map, six spectrum gray-scale maps are generated at the same time.
And S30, leading the frequency spectrum gray scale map into a preset convolutional neural network to identify the action of the player.
Specifically, the convolutional neural network has excellent image recognition capability, and the spectral gray-scale map can graphically show the action posture of the player. Therefore, the convolutional neural network is trained in a targeted mode, so that the convolutional neural network can identify the somatosensory motion of the player by identifying the frequency spectrum gray scale map. Thus, the characteristics of the convolutional neural network can be utilized to realize the quick recognition of the action of the player. In addition, the speed of recognizing the actions of the player by the convolutional neural network can be improved through training, and the accuracy of recognizing the actions of the player can be improved.
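The patent does not disclose a concrete network architecture. The following PyTorch sketch only illustrates the kind of convolutional classifier that could map a single-channel spectrum gray-scale map to action scores; the layer sizes and class count are assumptions:

```python
import torch
import torch.nn as nn

class ActionCNN(nn.Module):
    """Minimal CNN that classifies a 1-channel spectrum gray-scale map into action classes."""
    def __init__(self, num_actions: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Usage sketch: one 64x64 gray-scale map in, one score per action class out.
scores = ActionCNN()(torch.randn(1, 1, 64, 64))
```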
And S40, generating a game operation instruction according to the recognition result of the preset convolutional neural network.
Specifically, the output result of the convolutional neural network is the recognized player action. Different player actions correspond to different game operation commands in different motion-sensing games, and the game operation command is mainly used to control the player's game character to perform a game action matching the player's actual somatosensory action.
And S50, sending the game operation instruction to a corresponding client.
Specifically, after a game operation command matching the actual action of the player is obtained, the server transmits the game operation command to the client through the communication protocol, and after the client receives the game operation command, the client executes the game operation command, so that the game character on the client can execute the game action matching the actual action of the player.
For example, when the motion sensing game run by the client is a tennis motion sensing game and the player performs a forehand swing, the client can send the player's pose data during the game to the server, and the server can recognize, through the preset convolutional neural network, that the player is currently performing a forehand swing. The server can then obtain a game operation command for the forehand swing according to that action and send it to the client, and the client executes the game operation command after receiving it, so that the player character in the somatosensory game also performs a forehand swing. In addition, the client only carries out data exchange and execution of operation instructions in the whole game process, so the model of the client is not specially limited, and clients of different models and types can all take part in somatosensory networking games. Furthermore, action recognition of the player is carried out on the server, so the computing power requirement on the client is extremely low, action recognition delay caused by inconsistent computing power among different devices is eliminated, and the fluency of the somatosensory networked game is improved.
According to the somatosensory networking game method, the server can convert player posture data provided by the client into a frequency spectrum gray-scale image, the frequency spectrum gray-scale image is classified through a preset convolutional neural network to identify actions of a player, a game operation instruction can be generated according to the actions of the player and is issued to the client, and the client can execute the game operation instruction to enable actions of game characters to be matched with the actions of the player. Therefore, in the whole game process, the client only performs data exchange and operation instruction execution, so that the model of the client is not particularly limited, and clients of different models and types can all perform somatosensory networking games. In addition, action recognition of the player is carried out on the server, so that the computing power requirement on the client is extremely low, action recognition delay caused by inconsistent computing power among different devices is eliminated, and the fluency of the somatosensory networked game is improved. Therefore, the motion sensing online game method has the advantages of being capable of achieving the motion sensing online game, low in computing power requirement on the client side, high in action recognition accuracy of the player and good in game fluency.
As shown in fig. 3, in some embodiments, generating a spectral gray scale map from the pose raw data comprises:
and S21, generating a space posture track of the action of the player according to the original data of the posture of the player.
Specifically, after the raw pose data of the player is obtained, it may be resolved by Euler-angle computation to obtain the spatial posture trajectory of the player.
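The patent does not detail its Euler-angle solution. As an illustrative sketch only, a common way to obtain a roll/pitch trajectory from six-axis IMU data is a complementary filter; the array shapes, units and blending factor below are assumptions:

```python
import numpy as np

def complementary_filter(acc: np.ndarray, gyro: np.ndarray, dt: float, alpha: float = 0.98):
    """Estimate roll/pitch trajectories by blending gyro integration with accelerometer tilt.

    acc, gyro: arrays of shape (n, 3) with accelerometer (g) and gyroscope (rad/s) samples.
    """
    n = len(acc)
    roll = np.zeros(n)
    pitch = np.zeros(n)
    for k in range(1, n):
        # Tilt angles implied by the gravity direction measured by the accelerometer.
        acc_roll = np.arctan2(acc[k, 1], acc[k, 2])
        acc_pitch = np.arctan2(-acc[k, 0], np.hypot(acc[k, 1], acc[k, 2]))
        # Integrate angular rate, then correct the drift with the accelerometer estimate.
        roll[k] = alpha * (roll[k - 1] + gyro[k, 0] * dt) + (1 - alpha) * acc_roll
        pitch[k] = alpha * (pitch[k - 1] + gyro[k, 1] * dt) + (1 - alpha) * acc_pitch
    return roll, pitch
```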
And S22, generating an oscillogram according to the space attitude trajectory.
Specifically, the player posture trajectory can be decomposed into wave diagrams of six axes including three acceleration axes and three gyroscope axes, and six-dimensional data can be obtained. Through the operation, the data dimension reduction can be carried out on the player posture data, so that the subsequent processing can be conveniently carried out on the player posture data.
And S23, performing discrete Fourier transform on the original sampling data in the oscillogram to obtain frequency spectrum data.
Here, the original sampling data in the waveform diagram includes sampling data of a channel I and sampling data of a channel Q, i.e., I/Q signals in digital communication.
Specifically, a discrete Fourier transform can be performed on the original sampling data in the oscillogram; the discrete Fourier transform formula is as follows:
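The formula referred to here appears only as an image in the original publication and is not reproduced in the extracted text; the surrounding description is consistent with the standard N-point discrete Fourier transform:

$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \dots, N-1$$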
In the formula, N is the number of sampling points of the discrete Fourier transform.
After discrete Fourier transform is carried out on the original sampling data in the oscillogram, the frequency spectrum data of the original sampling data can be obtained. The spectrum data indicates a spectrum at a certain time point.
It is worth noting that the conversion from raw sampled data to spectrum data by the discrete Fourier transform also reduces the amount of data to be handled in later steps.
And S24, merging the frequency spectrum data according to the time domain information to obtain a two-dimensional frequency spectrum waterfall graph.
Here, the time domain information refers to time information that each of the spectrum data has.
Specifically, according to the time information of each spectrum data, the sequence of each spectrum data can be determined, and then a plurality of spectrum data belonging to the same axis are combined based on the sequence to obtain a two-dimensional spectrum waterfall graph, wherein two coordinate axes are a time axis and a frequency axis respectively.
And S25, mapping the element values in the two-dimensional frequency spectrum waterfall graph according to image gray values to obtain the frequency spectrum gray graph.
Specifically, the element values in the spectrum data map are the result values obtained by performing discrete fourier transform.
Further, after the two-dimensional spectrum waterfall graph is obtained, the element values in it are mapped according to image gray values, so that a gray-scale map can be obtained; this is the spectrum gray-scale map that is finally required.
It can be understood that, because the spectrum gray-scale map is subjected to feature recognition through the convolutional neural network, the two-dimensional spectrum waterfall map is converted into the gray-scale map, and the image recognition characteristics of the convolutional neural network can be matched to accurately recognize the action of the player.
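As a minimal sketch of steps S22 to S25 (the frame length, hop size and 8-bit gray mapping are assumptions rather than values taken from the patent), a single-axis waveform could be turned into a spectrum gray-scale map roughly as follows:

```python
import numpy as np

def spectrum_gray_map(waveform: np.ndarray, n: int = 64, hop: int = 32) -> np.ndarray:
    """Frame a 1-D waveform, take an N-point DFT per frame, stack the frames over time
    into a 2-D waterfall, then map the magnitudes to 8-bit gray values."""
    frames = [waveform[i:i + n] for i in range(0, len(waveform) - n + 1, hop)]
    spectra = [np.abs(np.fft.rfft(f, n)) for f in frames]   # spectrum for each time slice
    waterfall = np.stack(spectra, axis=0)                   # axes: time x frequency
    lo, hi = waterfall.min(), waterfall.max()
    gray = (255 * (waterfall - lo) / (hi - lo + 1e-12)).astype(np.uint8)
    return gray

# Usage sketch: one gray-scale map is produced per IMU axis.
gray_x = spectrum_gray_map(np.random.randn(1024))
```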
In some embodiments, when Fourier transforming the raw sampled data in the waveform map, we define the value of N in the discrete Fourier transform formula to be a positive integer power of 2, i.e., N = 2^k, k = 1, 2, 3, …
Through the arrangement, the operation speed of discrete Fourier transform can be increased, and the recognition speed of user actions is further improved.
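A small illustration of that constraint, assuming the frame is simply zero-padded up to the next power of two before the transform:

```python
import numpy as np

def next_pow2(n: int) -> int:
    """Smallest power of two >= n (for n >= 1)."""
    return 1 << (n - 1).bit_length()

def dft_pow2(x: np.ndarray) -> np.ndarray:
    """Zero-pad to a power-of-two length so the FFT runs on N = 2^k points."""
    return np.fft.fft(x, n=next_pow2(len(x)))
```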
In some embodiments, prior to discrete fourier transforming the raw sampled data in the waveform map, the method further comprises:
merging the sampling data of the channel I and the sampling data of the channel Q into a channel S according to the following formula:
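The formula itself is printed as an image in the original publication. Since S is described as the combined power density of the two channels, the intended relation is presumably the instantaneous power of the I/Q pair, i.e. (as an assumption):

$$S = S_I^2 + S_Q^2$$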
In the formula, S_I is the sampled data of channel I, S_Q is the sampled data of channel Q, and S is the combined power density.
Specifically, the data amount of the original sample data can be reduced by the above formula, so that the server can perform discrete fourier transform, and the speed of recognizing the player's motion can be increased.
In some embodiments, obtaining raw pose data for a player from a client includes:
and acquiring the original data of the posture of the player matched with a preset target axis of the current motion sensing game from the client.
The preset target axis refers to the axis whose signal changes most strongly when the player performs a standard action of the motion sensing game; in other words, the signal of the preset target axis best reflects the action of the player. It is noted that the preset target axis may be one or more of the six axes, i.e., the three axes of the accelerometer and the three axes of the gyroscope.
Specifically, we can obtain preset target axes corresponding to different motion sensing games by analyzing the oscillogram of the known data set.
Specifically, different motion sensing games can best reflect different shaft types and numbers of actions of players, so that posture data of a preset target shaft of the current motion sensing game are obtained from a client and are used for player action recognition, the player action recognition accuracy can be improved, the terminal calculation amount can be reduced, and the player action recognition speed is increased.
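The patent determines the target axis by inspecting which waveform changes most strongly during a standard action. One simple way to automate that inspection, offered purely as an illustrative heuristic rather than the patent's procedure, is to rank the six axes by signal variance over the recorded dataset:

```python
import numpy as np

def pick_target_axes(samples: np.ndarray, top_k: int = 2) -> list[int]:
    """samples: array of shape (num_samples, 6) holding accelerometer + gyroscope data.
    Returns the indices of the axes whose signals vary the most."""
    variances = samples.var(axis=0)
    return list(np.argsort(variances)[::-1][:top_k])

# Usage sketch: axes 0-2 = accelerometer XYZ, axes 3-5 = gyroscope XYZ.
axes = pick_target_axes(np.random.randn(5000, 6))
```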
As shown in fig. 4, in some embodiments, before the spectrum waterfall graph is led into the preset convolutional neural network to identify the action of the player, the method further includes:
s110, acquiring gesture data of standard somatosensory actions of a player completing a preset somatosensory game to establish a preset data set.
Specifically, the motion sensing devices may be provided on the players of a set number of players, the set motion sensing actions of the preset motion sensing game may be repeatedly performed by the players, and the posture data of the players completing the set motion sensing actions may be collected to establish a desired preset data set. It should be noted that, when collecting and training the preset data set, a label may be added to the data in the preset data set, so as to facilitate the subsequent training of the convolutional neural network.
And S120, obtaining a training frequency spectrum waterfall diagram of the preset data set.
Specifically, a training spectrum gray scale map may be created for the preset data set based on the manner from step S21 to step S25.
And S130, determining a preset target axis of the preset motion sensing game according to the oscillogram of the preset data set.
Specifically, the preset target axes corresponding to different preset motion sensing games can be determined by observing the amplitude changes of the waveform diagrams based on the corresponding waveform diagrams generated in the preset data set. Specifically, we can determine the required preset target axis according to the intensity of the waveform change in the waveform diagram corresponding to a complete motion.
And S140, training the convolutional neural network by using the training frequency spectrum gray scale map of the preset target axis.
Specifically, after a preset target axis of the preset motion sensing game is determined, a frequency spectrum gray scale map can be generated according to data of the preset axis or axes, and then the frequency spectrum gray scale map is led into the convolutional neural network to train the neural network until the recognition accuracy of the convolutional neural network reaches a set accuracy.
It can be understood that the convolutional neural network can be trained in a targeted manner by the above method, so as to improve the sensitivity of the convolutional neural network to the spectrum gray-scale map, and further improve the recognition accuracy and recognition speed of the convolutional neural network to the actions of the player.
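A minimal training-loop sketch under the same assumptions as the model sketch above (the dataset format, class count and hyperparameters are illustrative only):

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> None:
    """loader yields (gray_map, label) batches: float tensors (B, 1, H, W) and int labels."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for gray_map, label in loader:
            optimizer.zero_grad()
            loss = criterion(model(gray_map), label)
            loss.backward()
            optimizer.step()
```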
In some embodiments, the pose raw data obtained from the client is compressed by the Varints algorithm.
In particular, the Varints algorithm is a typical data compression algorithm, which can greatly reduce the data volume on the basis of ensuring the data information volume. It can be understood that the player posture data is compressed by the Varints algorithm and then transmitted to the server, so that packet loss from the client to the server can be reduced, and transmission delay is reduced, so as to improve the game experience of the player.
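For reference, a minimal sketch of varint encoding in its usual protobuf-style form (7 payload bits per byte plus a continuation flag); sensor values would have to be quantized to integers first, which is an assumption of this example rather than a step stated in the patent:

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a protobuf-style varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def zigzag(n: int) -> int:
    """Map signed integers to unsigned so small magnitudes stay short after varint coding."""
    return (n << 1) ^ (n >> 63)

# Usage sketch: a small reading such as -3 encodes to a single byte.
packet = encode_varint(zigzag(-3))
```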
In some embodiments, sending the game play instructions to the client comprises:
s210, when the gateway node information bound by the client is inconsistent with the distribution gateway node of the current server, the game operation instruction is issued to the target gateway node bound by the client.
It is worth mentioning that the server in the technical scheme of the application comprises a plurality of distribution gateways, when the client accesses the server for the first time, the connection request of the client passes through a certain gateway node of the server, and after the client and the server successfully establish connection, the server binds the client and the gateway node.
Specifically, after generating the game operation instruction, the server sends the game operation instruction to the client through any gateway node. If the distribution gateway node of the current game operation instruction distributed by the server is consistent with the gateway node information bound by the client, the current gateway can directly send the game operation instruction to the corresponding client. And if the current distribution gateway node of the server is inconsistent with the gateway node information bound by the client, the server can issue the game operation instruction to the target gateway node bound by the client.
S230, the game operation instruction is sent to the client through the target gateway node.
Specifically, when a game operation is issued to a target gateway node, the server directly transmits the game operation command to the client via the target gateway node.
It can be understood that, in this way, the high-concurrency and long-connection performance of the service can be guaranteed, which in turn improves the stability and reduces the latency of the somatosensory networked game.
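An illustrative sketch of that dispatch logic; the gateway registry, node identifiers and the send helper are assumptions made for the example:

```python
def dispatch_instruction(instruction: bytes, client_id: str,
                         current_node: str, bindings: dict, gateways: dict) -> None:
    """Send a game operation instruction to a client through the gateway node it is bound to.

    bindings: client_id -> gateway node id recorded when the client first connected.
    gateways: gateway node id -> object with a send(client_id, payload) method.
    """
    target_node = bindings[client_id]
    if target_node == current_node:
        # The distributing node is already the client's bound gateway: send directly.
        gateways[current_node].send(client_id, instruction)
    else:
        # Forward the instruction to the bound gateway, which relays it to the client.
        gateways[target_node].send(client_id, instruction)
```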
In addition, the embodiment of the present invention further provides a computer-readable storage medium, which may be any one of or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, and the like. The computer readable storage medium includes a motion sensing online game program 10, and the specific implementation of the computer readable storage medium of the present invention is substantially the same as the above-described motion sensing online game method and the specific implementation of the server 1, and will not be described herein again.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A motion sensing networking game method is characterized by comprising the following steps:
after a client starts a preset somatosensory game, acquiring attitude original data of a player from the client;
generating a frequency spectrum gray scale map according to the attitude original data;
leading the frequency spectrum gray scale map into a preset convolution neural network to identify actions of a player;
generating a game operation instruction according to the recognition result of the preset convolutional neural network;
and sending the game operation instruction to the client.
2. The somatosensory networked gaming method of claim 1, wherein generating a spectral gray-scale map from the pose raw data comprises:
generating a space attitude track of the action of the player according to the original data of the attitude of the player;
generating an oscillogram according to the space attitude track;
performing discrete Fourier transform on the original sampling data in the oscillogram to obtain frequency spectrum data;
merging the frequency spectrum data according to time domain information to obtain a two-dimensional frequency spectrum waterfall graph;
and mapping the element values in the two-dimensional frequency spectrum waterfall graph according to the image gray value to obtain the frequency spectrum gray graph.
3. The somatosensory networked game method according to claim 2, wherein the raw sample data in the oscillogram comprises sample data of channel I and sample data of channel Q;
prior to discrete fourier transforming the raw sampled data in the waveform map, the method further comprises:
merging the sampling data of the channel I and the sampling data of the channel Q into a channel S according to the following formula:
In the formula, S_I is the sampled data of channel I, S_Q is the sampled data of channel Q, and S is the combined power density.
4. The motion-sensing networked game method according to claim 2, wherein discrete fourier transform of the raw sampled data in the oscillogram is performed using the following equation:
In the formula, N is the number of sampling points of the discrete Fourier transform, and the value of N is limited to a positive integer power of 2.
5. The somatosensory networked gaming method of claim 1, wherein obtaining raw data of the player's pose from a client comprises:
and acquiring the original data of the posture of the player matched with a preset target axis of the current motion sensing game from the client.
6. The somatosensory networked game method according to claim 5, wherein before importing the spectrum waterfall graph into a preset convolutional neural network to identify player actions, the method further comprises:
acquiring attitude data of standard somatosensory actions of a player completing a preset somatosensory game to establish a preset data set;
acquiring a training frequency spectrum waterfall diagram of the preset data set;
determining a preset target axis of the preset somatosensory game according to the oscillogram of the preset data set; and
and training the convolutional neural network by using the training frequency spectrum data graph of the preset target axis.
7. The somatosensory networked gaming method of claim 1, wherein the pose raw data obtained from the client is compressed by a Varints algorithm.
8. The somatosensory networked game method of claim 1, wherein sending the game operation instruction to the client comprises:
when the gateway node information bound by the client is inconsistent with the distribution gateway node of the current server, the game operation instruction is issued to a target gateway node bound by the client;
and sending the game operation instruction to the client through the target gateway node.
9. A motion-sensing networked game device, comprising a memory, a processor and a motion-sensing networked game program stored on the memory and executable on the processor, wherein the processor implements the motion-sensing networked game method according to any one of claims 1 to 8 when executing the motion-sensing networked game program.
10. A computer-readable storage medium, having a somatosensory networked game program stored thereon, which when executed by a processor implements the somatosensory networked game method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210697254.8A CN115006840A (en) | 2022-06-20 | 2022-06-20 | Somatosensory online game method, device and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210697254.8A CN115006840A (en) | 2022-06-20 | 2022-06-20 | Somatosensory online game method, device and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115006840A true CN115006840A (en) | 2022-09-06 |
Family
ID=83075332
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210697254.8A Pending CN115006840A (en) | 2022-06-20 | 2022-06-20 | Somatosensory online game method, device and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115006840A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108519812A (en) * | 2018-03-21 | 2018-09-11 | 电子科技大学 | A kind of three-dimensional micro-doppler gesture identification method based on convolutional neural networks |
CN108764013A (en) * | 2018-03-28 | 2018-11-06 | 中国科学院软件研究所 | A kind of automatic Communication Signals Recognition based on end-to-end convolutional neural networks |
JP2018205292A (en) * | 2017-06-05 | 2018-12-27 | 瀏陽 宋 | State identification method by characteristic analysis of histogram in time region and frequency region |
CN111176465A (en) * | 2019-12-25 | 2020-05-19 | Oppo广东移动通信有限公司 | Use state identification method and device, storage medium and electronic equipment |
CN111318009A (en) * | 2020-01-19 | 2020-06-23 | 张衡 | Somatosensory health entertainment system based on wearable inertial sensing and working method thereof |
- 2022-06-20: Application CN202210697254.8A filed in China; published as CN115006840A, status pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018205292A (en) * | 2017-06-05 | 2018-12-27 | 瀏陽 宋 | State identification method by characteristic analysis of histogram in time region and frequency region |
CN108519812A (en) * | 2018-03-21 | 2018-09-11 | 电子科技大学 | A kind of three-dimensional micro-doppler gesture identification method based on convolutional neural networks |
CN108764013A (en) * | 2018-03-28 | 2018-11-06 | 中国科学院软件研究所 | A kind of automatic Communication Signals Recognition based on end-to-end convolutional neural networks |
CN111176465A (en) * | 2019-12-25 | 2020-05-19 | Oppo广东移动通信有限公司 | Use state identification method and device, storage medium and electronic equipment |
CN111318009A (en) * | 2020-01-19 | 2020-06-23 | 张衡 | Somatosensory health entertainment system based on wearable inertial sensing and working method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116251343A (en) | Somatosensory game method based on throwing action | |
US20150273321A1 (en) | Interactive Module | |
CN115006840A (en) | Somatosensory online game method, device and computer-readable storage medium | |
CN116196611A (en) | Somatosensory game method based on waving action | |
CN115068938A (en) | Motion sensing game method based on jumping motion | |
CN115414669B (en) | Motion sensing game method, device and computer readable storage medium based on running gesture | |
CN115282589B (en) | Somatosensory game method based on rope skipping action | |
US20240226733A9 (en) | Method for operating running-type somatosensory game | |
CN115414669A (en) | Motion sensing game method and device based on running posture and computer readable storage medium | |
CN116271785A (en) | Somatosensory game method based on slapping action | |
CN114949839A (en) | Swimming posture-based motion sensing game method | |
CN115337627B (en) | Boxing body feeling game method, boxing body feeling game device and computer readable storage medium | |
CN115253270A (en) | Table tennis ball feeling game method | |
CN115869611A (en) | Somatosensory game method based on climbing action | |
CN115845356A (en) | Motion sensing game method based on swiping action | |
CN115282589A (en) | Somatosensory game method based on rope skipping action | |
CN115282596B (en) | Control method, device and equipment of somatosensory equipment and computer readable storage medium | |
CN117018594A (en) | Somatosensory game method based on apple watch | |
CN114904258A (en) | Motion sensing game method based on translation motion | |
CN114926908A (en) | Motion sensing game method based on baton swinging gesture | |
CN116889729A (en) | Motion sensing boxing game method based on function fitting | |
CN115414664B (en) | Data transmission method of somatosensory equipment | |
CN115531866A (en) | Operation method of shooting somatosensory game | |
CN116271783A (en) | Somatosensory tug-of-war game method, device, equipment and computer readable storage medium | |
CN116474355A (en) | Motion sensing game method based on hammering action |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||