CN116764220A - Method, apparatus and computer storage medium for providing game somatosensory data

Info

Publication number: CN116764220A
Application number: CN202310847066.3A
Authority: CN
Other languages: Chinese (zh)
Inventor: 戎思佳
Current assignee: Shanghai Zhuling Technology Co., Ltd.
Original assignee: Shanghai Zhuling Technology Co., Ltd.
Legal status: Pending
Prior art keywords: game, game player, data, motion sensing, image
Application CN202310847066.3A filed by Shanghai Zhuling Technology Co., Ltd.; published as CN116764220A.


Abstract

The application relates to the field of electronic game development, and provides a method, an apparatus and a computer storage medium for providing game somatosensory data. The method comprises the following steps: acquiring images of a game player through an image acquisition device to obtain a color image and a depth image of the game player; identifying human body key point information of the game player from the color image and the depth image; obtaining somatosensory data of the game player during play of the current somatosensory game according to the human body key point information; translating the somatosensory data into data conforming to a universal human interface device protocol; and outputting the data conforming to the universal human interface device protocol to a game computing platform. The technical scheme of the application enables game developers to develop somatosensory games efficiently and easily.

Description

Method, apparatus and computer storage medium for providing game somatosensory data
Technical Field
The present application relates to the field of electronic game development, and in particular to a method, an apparatus and a computer storage medium for providing game somatosensory data.
Background
A somatosensory (motion-sensing) game is an interactive game based on human motion and perception: through specially designed sensors and devices, players control in-game characters or operations with physical movement. Somatosensory games let players interact with a game in an entirely new way, enhancing the immersion and realism of the experience. Beyond entertainment, they can also serve as a form of physical exercise.
Most somatosensory game devices currently on the market provide private, proprietary development interfaces, and a small portion provide no external development interface at all. This means that, on one hand, game developers must learn these specialized proprietary protocol interfaces when developing games for such devices, leading to longer development cycles and higher development costs; on the other hand, players who purchase these devices are forced to choose from a thin and fixed catalog of available somatosensory games. In summary, the somatosensory game devices currently on the market meet neither the low-cost development needs of game developers nor the diversified gaming needs of game players.
Disclosure of Invention
The present application provides a method, an apparatus and a computer storage medium for providing game somatosensory data, which enable a game developer to develop somatosensory games efficiently and easily.
In one aspect, the present application provides a method of providing game somatosensory data, the method comprising:
acquiring images of a game player through an image acquisition device to obtain a color image and a depth image of the game player;
identifying human body key point information of the game player according to the color image and the depth image;
obtaining somatosensory data of the game player during play of a current somatosensory game according to the human body key point information of the game player;
translating the somatosensory data of the game player during play of the current somatosensory game into data conforming to a universal human interface device protocol;
and outputting the data conforming to the universal human interface device protocol to a game computing platform.
In another aspect, the present application provides an apparatus for providing game somatosensory data, the apparatus comprising:
an image acquisition module, configured to acquire images of the game player through an image acquisition device to obtain a color image and a depth image of the game player;
an identification module, configured to identify human body key point information of the game player according to the color image and the depth image;
an acquisition module, configured to obtain somatosensory data of the game player during play of the current somatosensory game according to the human body key point information of the game player;
a translation module, configured to translate the somatosensory data of the game player during play of the current somatosensory game into data conforming to a universal human interface device protocol;
and an output module, configured to output the data conforming to the universal human interface device protocol to a game computing platform.
In a third aspect, the present application provides an apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method for providing game somatosensory data when executing the computer program.
In a fourth aspect, the present application provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the above method for providing game somatosensory data.
According to the technical scheme provided by the application, after the image acquisition device captures images of the game player to obtain a color image and a depth image, the somatosensory data of the game player during play of the current somatosensory game are obtained, translated into data conforming to the universal human interface device protocol, and output to the game computing platform. Because what the game computing platform receives is data conforming to the universal human interface device protocol rather than the raw somatosensory data of the game player, a game developer does not need to drive game characters with complex, proprietary somatosensory data; the developer only needs to develop games on the game computing platform using data conforming to the universal human interface device protocol, a protocol with which game developers are far more familiar. Further, thanks to the uniformity of the protocol, a game developer does not need to distinguish between a device implementing the method provided by the application and a universal human interface device such as a keyboard, a mouse or a gamepad, and can simply follow the ordinary development flow for a keyboard, mouse or gamepad. Game development therefore becomes both more efficient and less costly. Meanwhile, the technical scheme of the application is compatible with all games that support universal human interface devices such as keyboards, mice or gamepads.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the following briefly introduces the drawings used in the description of the embodiments or the prior art. Obviously, the drawings described below show only some embodiments of the application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for providing game somatosensory data provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of key points of a human body according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an apparatus for providing game motion sensing data according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the embodiments of the present application with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the protection scope of the application.
In this specification, adjectives such as "first" and "second" are used only to distinguish one element or action from another, and do not necessarily require or imply any actual relationship or order between them. Where the context permits, a reference to an element, component or step should not be construed as limited to a single element, component or step; there may be one or more of them.
In the present specification, for convenience of description, the parts shown in the drawings are not drawn to scale.
In general, a somatosensory game device, for example a fitness ring, is adapted to only one somatosensory game, which means that after purchasing the device a game player can only play the one somatosensory game the device is adapted to; this does not satisfy users' diversified gaming needs. During somatosensory game development, a game developer needs to obtain so-called somatosensory data, such as the actions and/or postures of game players, and these data are generally obtained through a proprietary interface provided by the somatosensory game device. A somatosensory game developer naturally wants to develop more somatosensory games easily and efficiently on top of a single somatosensory game device. However, current devices simply expose the raw somatosensory data through their proprietary interfaces. The motion of a virtual character in a somatosensory game often differs in time and space from the real motion of the game player: in real life, a player's jump typically takes 0.5 s (seconds) from takeoff to landing, but in the game the virtual character's jump may take less, or more, than 0.5 s. In this case, the game developer has to perform special processing according to the game's action design. It should also be noted that, with the somatosensory game devices currently on the market, developers must drive game characters from somatosensory data that follows a different standard for each manufacturer, and such data differs greatly, in its temporal and spatial characteristics, from the data produced by universal human interface devices such as keyboards, mice or gamepads. Evidently, for most game developers, building games on the somatosensory data provided by the somatosensory game devices currently on the market remains difficult, or at least costly.
In view of this, the present application provides a method for providing game somatosensory data; its flowchart is shown in fig. 1, and it mainly includes steps S101 to S105, described in detail as follows:
step S101: and carrying out image acquisition on the game player through the image acquisition equipment to obtain a color image and a depth image of the game player.
In the embodiment of the application, the image acquisition device includes an ordinary camera, a lidar and a depth camera: an ordinary camera captures the color image of the game player, while a depth camera or lidar captures the depth image. Note that the color image of the game player is typically a two-dimensional image that captures color information such as the player's appearance and texture during play but contains no depth or three-dimensional structure, whereas the depth image contains depth information, i.e. the distance between each pixel and the camera.
Step S102: identify human body key point information of the game player according to the color image and the depth image.
Depth information allows the human body region in the depth image to be separated from the background and converted into coordinate positions in three-dimensional space. Identifying the human body key point information of the game player from both the color image and the depth image therefore yields a better estimate of the player's posture. In other words, while the color image captures the appearance and texture of the game player during play, the depth information in the depth image supplies important cues about the player's position and posture in three-dimensional space, enabling precise detection or identification of human body key points. Human body key points generally refer to specific locations or feature points on the body, such as the skeletal points of the elbows, knees, wrists, head, shoulders, etc., as shown in fig. 2. In the embodiment of the application, the human body key point information of the game player may therefore be the player's skeletal point information, including the position, direction, angle and similar attributes of the player's joints in three-dimensional space.
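The application does not fix a concrete representation for key point information; the following is a minimal sketch of one possible per-joint record, assuming fields for 3D position, bone direction, joint angle and a detection confidence (all field names are hypothetical):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class KeyPoint:
    """One skeletal point of the game player in camera space (assumed layout)."""
    position: Tuple[float, float, float]   # (x, y, z) in meters, camera coordinates
    direction: Tuple[float, float, float]  # unit vector of the bone leaving this joint
    angle: float                           # joint flexion angle in degrees
    confidence: float                      # detection confidence in [0, 1]

# A full skeleton maps a joint name ("elbow_left", "knee_right", ...) to its KeyPoint.
Skeleton = Dict[str, KeyPoint]
```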
As one embodiment of the present application, identifying human body key point information of a game player from a color image and a depth image can be achieved by the following steps S1021 to S1023:
Step S1021: align the color image and the depth image to obtain an aligned color image and depth image.
An ordinary camera acquires the color image in its own coordinate system, and the depth camera or lidar likewise acquires the depth image in its own. To effectively match and fuse the depth information of the depth image with the color information of the color image, and thereby obtain a more accurate identification of the game player's human body key point information, in the embodiment of the application the color image and the depth image may be aligned, that is, placed in the same spatial coordinate system so that they correspond consistently in space. As an embodiment of the present application, aligning the color image and the depth image may proceed as follows: obtain a calibration result between the depth image acquisition device and the color camera; compute the transformation matrix between the two from that calibration result; and transform the coordinates of each pixel of the depth image into the coordinate system of the color image according to the calibration result and the transformation matrix. Here the depth image acquisition device may be the depth camera or lidar mentioned in the foregoing embodiment, and the color camera is the device capturing the color image, i.e. an ordinary camera. Note that the calibration result between the depth image acquisition device and the color camera may be obtained directly from the factory parameters of each device, or by capturing a specific calibration pattern or using a calibration board. Specifically, obtaining the calibration result with a calibration board may proceed as follows: fix the depth image acquisition device and the color camera on their respective mounts, keeping their relative position fixed; using both devices simultaneously, shoot several groups of calibration pictures of the calibration board in different poses; convert the calibration pictures from both devices to the same size and discard pictures with large errors, obtaining a preliminary calibration result; apply distortion correction to the preliminary result, derive the intrinsic matrices of the two devices and the extrinsic matrix between them, and transpose to obtain the transformation matrix from the color camera coordinate system to the depth device coordinate system, or from the depth device coordinate system to the color camera coordinate system, as the final calibration result.
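A minimal sketch of the per-pixel transformation described above, assuming pinhole intrinsic matrices K_d and K_c and an extrinsic rotation and translation (R, t) from the depth device's coordinate system to the color camera's (all parameters are assumed inputs produced by the calibration step):

```python
import numpy as np

def align_depth_to_color(depth, K_d, K_c, R, t):
    """Reproject each depth pixel into the color camera's image plane.

    depth: HxW float array of depth values in meters (0 = no measurement).
    K_d, K_c: 3x3 intrinsic matrices of the depth device and the color camera.
    R, t: rotation (3x3) and translation (3,) from depth to color coordinates.
    Returns an HxW depth map registered to the color image grid.
    """
    h, w = depth.shape
    aligned = np.zeros_like(depth)
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T  # 3xN
    z = depth.reshape(-1)
    valid = z > 0
    # Back-project valid depth pixels to 3D and move them into the color frame.
    pts_d = (np.linalg.inv(K_d) @ pix[:, valid]) * z[valid]
    pts_c = R @ pts_d + t[:, None]
    # Project into the color image and keep points that land inside it.
    proj = K_c @ pts_c
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    aligned[v[inside], u[inside]] = pts_c[2][inside]
    return aligned
```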
Step S1022: fuse the aligned color image and depth image to obtain a fused image.
Since the subsequent key point identification step operates on a single image, the aligned color image and depth image need to be fused into one fused image. Specifically, fusing them may proceed as follows: preprocess the depth image so that its resolution matches the color image and its range of depth values is suitable for visualization; convert the preprocessed depth image into a grayscale image; normalize the grayscale image over the range of depth values, so that the minimum depth value maps to black and the maximum depth value maps to white; then superpose the color image and the grayscale image in one of the following two modes to obtain the fused image:
Mode one: use the grayscale image as a transparency (alpha) channel of the color image, adjusting the superposition effect by mapping gray values to the transparency range;
Mode two: linearly combine the color channels of the color image with the grayscale image under chosen weights, producing a new color image as the fused image.
Note that the depth image is brought to the same resolution as the color image during preprocessing because the two are captured by different devices and may differ in resolution; to guarantee correspondence between them, the depth image's resolution is adjusted to match that of the color image, for example by interpolation or scaling. As for making the range of depth values suitable for visualization: the depth values produced by the depth image acquisition device are usually in a raw format whose range may exceed what can be displayed. Depth may be expressed in millimeters, for instance, but for viewing by the human eye it is better mapped to a suitable range such as 0-255. Concretely, this can be done by range scaling or by color mapping: range scaling maps the depth values to the desired range by a linear transformation, e.g. scaling with the minimum and maximum depth values; color mapping maps the depth values to a color space via grayscale mapping, pseudo-color mapping and the like, using different colors to represent the magnitude of the depth values.
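A minimal sketch of the two superposition modes, assuming an already aligned uint8 color image and a float depth map; the mode-two weight is an illustrative choice, not a value fixed by the method:

```python
import numpy as np

def fuse(color, depth, mode=1, w=0.7):
    """Fuse an aligned color image (HxWx3 uint8) with a depth map (HxW, meters)."""
    # Range-scale depth to 0..255 so minimum depth is black, maximum is white.
    d_min, d_max = depth.min(), depth.max()
    gray = ((depth - d_min) / max(d_max - d_min, 1e-6) * 255).astype(np.uint8)
    if mode == 1:
        # Mode one: gray acts as a transparency channel appended to the color image.
        return np.dstack([color, gray])          # HxWx4 RGBA-style result
    # Mode two: linear combination of the color channels with the gray image.
    fused = w * color + (1 - w) * gray[..., None]
    return fused.astype(np.uint8)
```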
Step S1023: estimate the human body key point information of the game player with a preset algorithm, based on the depth information and color information in the fused image.
Here, the preset algorithm includes a model-matching method and a model-training method. Estimating the game player's human body key point information by model matching, based on the depth and color information in the fused image, specifically includes: constructing a bone model representing the human skeletal structure from sample data or a 3D model library; extracting matching features, such as edges, textures or shape descriptors, from the fused image; and matching the extracted features against the bone model to find the best match, which yields the human body key point information of the game player. Estimating the key point information by model training may proceed as follows: train a model on a training set containing human body key points to obtain a trained model; extract matching features from the fused image, feed them into the trained model, and take the model's prediction as the human body key point information of the game player. In the model-training approach, the training itself is the key. A concrete way to train the model with a training set containing human body key points is: determine the body parts to be identified, and process the training set to obtain a target data set containing those parts; determine the supervision type of the human body key points based on the key point information, and correct the initial training labels containing the key points accordingly, so that the training labels of the target data set are determined from the corrected initial labels; then perform supervised training of the model on the target data set and training labels to obtain the trained model. As an embodiment of the present application, this supervised training may be carried out as follows: process the target data set through the model under training to obtain the position information of each part to be identified; based on the position information of a first part and a second part, two associated parts among those to be identified, and on the training labels, determine the value of a target loss function that constrains the positional difference between the first and second parts; and adjust the model parameters according to that loss value, continuing training with the adjusted parameters.
The foregoing training is supervised. The embodiment of the application further provides a method for training the model, specifically: acquire target images captured by several cameras with different viewing angles, the target images containing at least one of a color image and a depth map of the current scene, each camera having a unique identifier; perform human posture estimation on the target image from each camera to obtain a first set of data of the human key point data set; take the camera's unique identifier as the field-of-view tag of that first set of data; add each first set of data and its field-of-view tag to a first sequence data set; using a feature extraction network with initialized first feature parameters, extract a feature vector for each first set of data; using a first prediction-type output layer with initialized second feature parameters, process the feature vector of each first set of data to obtain a type likelihood vector; compute a loss value with a KL divergence loss function from the type likelihood vector and the field-of-view tag of each first set of data; and when the KL divergence loss function is judged to have converged from that loss value, take the current first feature parameters and second feature parameters as the trained parameters.
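A minimal sketch of the KL-divergence loss computation described above, assuming the field-of-view tag is one-hot over the camera identifiers and the output layer yields a normalized type likelihood vector; the shapes and the batching are assumptions, not fixed by the method:

```python
import numpy as np

def kl_loss(likelihood, fov_tag, eps=1e-9):
    """KL(tag || likelihood) averaged over a batch of first-set data.

    likelihood: NxC type likelihood vectors (rows sum to 1).
    fov_tag:    NxC one-hot field-of-view tags, one column per camera id.
    """
    p = np.clip(fov_tag, eps, 1.0)
    q = np.clip(likelihood, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

# Training stops (parameters are frozen) once this loss stops decreasing,
# i.e. when the KL divergence loss function is judged to have converged.
```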
Step S103: obtain the somatosensory data of the game player during play of the current somatosensory game according to the player's human body key point information.
In the embodiment of the application, the somatosensory data of the game player during play of the current somatosensory game comprise both the player's actions during play and the player's postures during play: actions include "jump", "slap", "walk" and the like, and postures include "aim", "stoop", "bend the knees" and the like. Specifically, as an embodiment of the present application, obtaining the somatosensory data from the player's human body key point information may proceed as follows: perform action recognition on the key point information to obtain an action recognition result, and perform posture recognition on the same information to obtain a posture recognition result; then synchronize the action recognition result and the posture recognition result to obtain the somatosensory data of the game player during play of the current somatosensory game. For posture recognition, the player's human body key points can be connected according to the key point information, such as joint positions, directions and angles, by defining the connection relations between body parts, e.g. shoulder to wrist or hip to foot, so as to extract the player's skeleton or outline; based on that skeleton or outline, a posture estimation algorithm infers the player's specific body posture, combining techniques such as joint connecting lines, angle computation and human model matching, yielding the posture recognition result.
Recognizing the player's actions can be more complex than recognizing postures, since a posture can be treated as static behavior whereas an action is dynamic behavior, or behavior with a longer duration. Because of this difference, in order to recognize actions more accurately, in embodiments of the present application the player's motion may first be serialized, i.e. subdivided into different action sequences. Serializing the action not only allows accurate recognition; the serialized sequences can also be recombined into new action sequences to fit different game scenes or interaction requirements. In a batting game, for example, various batting actions can be generated by combining different parts of sequences such as take-off, swing and landing. Serialization also lets developers smooth and complete the player's motion, eliminating abrupt or inconsistent transitions, and so on. After serialization, these action sequences (each element of which can be regarded as a frame image, or image frame) need to be buffered in a queue; when the queue is full, the sequence at the head of the queue is evicted. During recognition, the action sequence is fetched from the queue frame by frame. For example, actions such as "jump" or "swing" typically take a game player 0.5 seconds; at a system frame rate of 30 frames/second this requires a 15-frame buffer queue, while recognizing a roughly 2-second T-pose (feet together, both arms raised) and a roughly 2-second "charging" action requires a 60-frame buffer queue. The two buffer queues may be maintained independently in separate memory areas, or share the same memory. Specifically, when a 15-frame queue and a 60-frame queue share one memory area, the most recent 15 frames of the 60-frame queue serve as the 15-frame buffer, and only when the memory is about to exceed 60 frames is the frame at the head of the queue, i.e. the first or earliest frame, evicted. As for action recognition itself, the player's posture can first be estimated from the human body key point information, and the action then recognized from the change of body posture using a preset algorithm such as a classifier, pattern matching or time-series analysis; for example, a classifier or network model can be trained with a convolutional neural network or other deep learning model, or methods such as edge detection, corner detection or texture features can be used.
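A minimal sketch of the shared buffering just described, assuming a 60-frame queue whose most recent 15 frames double as the 15-frame buffer (class and method names are hypothetical):

```python
from collections import deque

class FrameBuffer:
    """One memory area serving both the 60-frame and the 15-frame queue."""

    def __init__(self, long_len=60, short_len=15):
        self.frames = deque(maxlen=long_len)  # head (index 0) is the earliest frame
        self.short_len = short_len

    def push(self, frame):
        # deque(maxlen=60) evicts the head frame automatically past 60 frames.
        self.frames.append(frame)

    def short_window(self):
        """The 15 most recent frames -- the 15-frame buffer's view of the data."""
        return list(self.frames)[-self.short_len:]

    def long_window(self):
        """All buffered frames, up to 60 -- enough for ~2 s actions at 30 fps."""
        return list(self.frames)
```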
A simpler way to recognize a player's action is to compute the similarity between the target action feature information (i.e. the features of the action to be recognized) and several items of preset feature information (each describing a preset dynamic action); when the largest of the obtained similarities, a first similarity, exceeds a first preset threshold, and the difference between the first similarity and a second similarity exceeds a second preset threshold, the first action corresponding to the first similarity is determined to be the preset dynamic action, where the second similarity is the next largest among the obtained similarities.
In addition, considering that posture and action are recognized by two different modules, that a player's posture and action may be very closely related, and that action recognition may even depend on posture recognition, the action recognition result and the posture recognition result obtained from the human body key point information need to be synchronized and then combined, finally yielding the somatosensory data of the game player during play of the current somatosensory game. Specifically, synchronizing and combining the two results may proceed as follows: obtain timestamp information for the action recognition result and the posture recognition result; align the two results by their timestamps to ensure they fall within the same time window; and merge the aligned results into the somatosensory data, for example by representing the two kinds of result as different data fields and combining them in a defined format, or by merging them into one data structure.
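A minimal sketch of the timestamp alignment and merging step, assuming a 33.33 ms window at 30 frames/second and an assumed field layout for the merged somatosensory datum:

```python
from dataclasses import dataclass
from typing import Optional

WINDOW_MS = 33.33  # one frame at 30 fps; an illustrative window size

@dataclass
class Recognition:
    timestamp_ms: float
    label: str  # e.g. "jump" for an action, "aim" for a posture

def merge(action: Recognition, posture: Recognition) -> Optional[dict]:
    """Combine an action and a posture result if their timestamps share a window."""
    if abs(action.timestamp_ms - posture.timestamp_ms) > WINDOW_MS:
        return None  # not in the same time window -- wait for matching results
    return {
        "timestamp_ms": max(action.timestamp_ms, posture.timestamp_ms),
        "action": action.label,    # separate data fields for the two result kinds
        "posture": posture.label,
    }
```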
Considering that, to suit the basic characteristics of human interface devices, the action recognition results must be divided into actions that require continuous triggering and actions that do not, as an embodiment of the present application the action recognition described in the foregoing embodiment may be implemented through steps S1 to S5, described below:
Step S1: recognize the player's action with a preset algorithm according to the human body key point information, obtain a preliminary recognition result, and buffer it.
For recognizing the player's action with a preset algorithm from the human body key point information, refer to the description in the foregoing embodiments, which is not repeated here.
Step S2: judge whether the action type of the game player is an action requiring continuous triggering.
An action requiring continuous triggering corresponds to an operation that the game player must trigger continuously (for example, holding down a keyboard key) or a state that must be maintained (for example, holding a joystick pushed upward). For somatosensory actions such as "walking" or "running", the corresponding operation in the universal human interface device protocol is generally to keep triggering one or more keyboard keys or to maintain a joystick state.
Step S3: if the preliminary recognition result indicates that the player's action type requires continuous triggering, output that preliminary recognition result as the action recognition result.
In this case no further processing is needed; the preliminary recognition result whose action type requires continuous triggering is output directly as the action recognition result.
Step S4: if the preliminary recognition result indicates that the player's action type does not require continuous triggering, judge whether a specific frame in the buffer unit holding that preliminary recognition result is a key frame.
In contrast to actions that require continuous triggering, an action not requiring continuous triggering corresponds to an operation in the universal human interface device protocol in which the game player does not have to maintain a state continuously: a "jump" action, for example, corresponds to a single or repeated press of a keyboard key rather than holding the key down. When the preliminary recognition result indicates such an action type, the result cannot be output directly as the action recognition result; it must first be judged whether a specific frame in the buffer unit holding the preliminary result is a key frame.
As described above, after the player's action is recognized with the preset algorithm, the preliminary recognition result, a series of image frames, is buffered in some memory or buffer unit; the preliminary result of an action not requiring continuous triggering is likewise buffered there. A specific frame may then be taken from the buffer unit: for example, when the buffer unit holds 15 image frames, the 7th or 8th frame may be taken as the specific frame and judged for being a key frame. Since a key frame is a frame of the action sequence that is representative of the player's action, in the embodiment of the application whether the specific frame is a key frame may be judged by one of the following modes, or a combination of them:
Mode one: judge by a speed or acceleration threshold. By setting a threshold, a specific frame is marked as a key frame when the speed or acceleration of the player's motion at that frame exceeds the limit; this captures fast changes or high-energy portions of the player's action.
Mode two: judge by posture difference. Key frames are selected from the posture difference between adjacent image frames, e.g. the frames adjacent to the specific frame: if the difference exceeds a set threshold, the specific frame or its neighbors are marked as key frames. This is very effective for capturing important transition points or shape changes within the action.
Mode three: judge by energy or amplitude. Compute the motion energy or amplitude between the specific frame and its adjacent frames, then select the frame with the maximum or minimum value as the key frame. For a "jump" action, for example, if the player's foot skeletal point is the human body key point, the image frame containing that point is taken from the buffer unit (with a 15-frame buffer, the 7th or 8th frame may serve as the specific frame); when the foot skeletal point in the specific frame is at the highest position along the y axis of the two-dimensional coordinate system among all frames buffered in the unit, the specific frame is the key frame (see the sketch after mode four).
Mode four: judge by machine learning. By training a model with feature extraction and classification, the model can predict which specific frames are key frames from the input action data; although this requires a large amount of labeled data and algorithm tuning, it enables more accurate and adaptive key frame extraction.
Step S5: if the specific frame is a key frame, output the preliminary recognition result whose action type does not require continuous triggering as the action recognition result.
Step S104: translate the somatosensory data of the game player during play of the current somatosensory game into data conforming to the universal human interface device protocol.
In the embodiment of the present application, the universal human interface device (Human Interface Device, HID) protocol refers to a protocol supported by human interface devices and familiar to ordinary game developers, such as the USB HID protocol defined by the USB Implementers Forum or the Bluetooth HID protocol defined by the Bluetooth SIG; a universal human interface device is a human interface device supported by such a universal HID protocol, and typically a gamepad, and the keyboard and/or mouse of a personal computer, are universal human interface devices. Since ordinary game developers are familiar with the protocols behind gamepads, keyboards and/or mice, such as the aforementioned USB or Bluetooth HID protocols, developing games on top of those protocols is not difficult for them. As one embodiment of the present application, translating the somatosensory data of the game player during play of the current somatosensory game into data conforming to the universal HID protocol may proceed as follows: query a preset mapping table; and map the somatosensory data to data conforming to the universal HID protocol according to the mapping relation of the table. In the embodiment of the application, the mapping table maintains the correspondence between somatosensory data and data conforming to the universal HID protocol, so the translation can be performed simply by looking up the preset table. Taking a keyboard and/or mouse as the universal human interface device: if the action in the somatosensory data is "jump", the mapping table maps it to the keyboard's space key (whose ASCII code is 0x20); if the action is "walk", it can be mapped to the keyboard's letter "w" (ASCII code 0x77); if the action is "aim", it can be mapped to the mouse's left-button-pressed state (bit value "1"); and so on. These somatosensory data, "jump", "walk" and "aim", can then generate, within a defined time window, e.g. 33.33 ms (milliseconds), a data report conforming to the universal HID protocol containing 0x20, 0x77 and the bit value "1".
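A minimal sketch of the table lookup itself (the temporal reshaping of steps S1041 to S1044 below is omitted); the table layout and report fields are illustrative assumptions, and a real report would follow the HID report descriptor of the emulated device:

```python
# Somatosensory datum -> (target device, protocol value), per the examples above.
MAPPING_TABLE = {
    "jump": ("keyboard", 0x20),   # space key
    "walk": ("keyboard", 0x77),   # letter "w"
    "aim":  ("mouse",    0b1),    # left-button bit set
}

def translate(somato_data):
    """Map recognized somatosensory data to protocol values for one time window."""
    report = {"keyboard": [], "mouse_buttons": 0}
    for item in somato_data:
        device, value = MAPPING_TABLE[item]
        if device == "keyboard":
            report["keyboard"].append(value)
        else:
            report["mouse_buttons"] |= value
    return report

print(translate(["jump", "walk", "aim"]))
# {'keyboard': [32, 119], 'mouse_buttons': 1}
```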
It should be noted that the mapping is not a simple one-to-one substitution: during mapping, the temporal distribution of the data must be reshaped according to the relevant characteristics of the human interface device protocol, as in steps S1041 to S1044 below.
In an actual game scene, the somatosensory data of the game player during play of the current somatosensory game may be of a single-trigger type (e.g. the "jump", "walk" or "aim" data mentioned above) or of a multi-trigger type (e.g. a special skill in some games, such as a "combo punch", which requires the player to trigger the keyboard letter "w", the space key, a left mouse click, etc. in a certain order). Different types of somatosensory data call for different processing; therefore, as an embodiment of the present application, mapping the somatosensory data to data conforming to the universal HID protocol according to the mapping relation of the mapping table may be implemented through the following steps S1041 to S1044:
Step S1041: judge whether the somatosensory data of the game player during play of the current somatosensory game is of the single-trigger type.
Step S1042: if it is of the single-trigger type, map it to data conforming to the universal HID protocol according to the mapping relation of the mapping table and output it directly.
Step S1043: if it is of the multi-trigger type, map it to data conforming to the universal HID protocol according to the mapping relation of the mapping table, form the data into a queue, and buffer the queue in a buffer unit.
In the embodiment of the application, the buffer unit may hold one or more queues; a queue consists of several queue units, each queue unit corresponds to one time window, and a queue unit may contain zero, one or more items of data conforming to the universal HID protocol.
Step S1044: combine all the data in the buffer unit that conforms to the universal HID protocol and satisfies the time-window constraint, and output the combined data.
It should be noted that, in the embodiment of the present application, the time-window constraint is that the window is larger than the data polling interval of interrupt/control transfers in the universal HID protocol, but smaller than an ordinary game player's input interval; the size of one time window may be the interval between two image frames, e.g. 33.33 ms (milliseconds) at a frame rate of 30 frames/second.
As described above, after the multi-trigger somatosensory data is mapped to data conforming to the universal HID protocol, it is buffered in the buffer unit as a queue, and access to the queue is first-in first-out: data is always fetched from the head of the queue. Within one time window, therefore, the data in the head unit of every queue in the buffer is fetched (the next unit becoming the new head), and all the fetched data is combined with the other protocol-conformant data mapped within that window and output. Note that the data produced by steps S1041 to S1044 is identical, in its spatio-temporal characteristics, to HID data as defined by the universal HID protocol, and the two are considered equally conformant; here the spatio-temporal characteristics of data include its distribution in time and space, such as transmission intervals and the time required to generate it.
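A minimal sketch of the per-window merge of step S1044, with each multi-trigger queue holding one list of protocol values per queue unit (all names are assumptions):

```python
from collections import deque

def emit_window(multi_trigger_queues, single_trigger_items):
    """Merge queue heads with single-trigger data for one 33.33 ms window.

    multi_trigger_queues: list of deques; each holds one multi-trigger
    sequence already mapped to protocol values, one queue unit per window.
    single_trigger_items: protocol values mapped within this window.
    """
    out = list(single_trigger_items)
    for q in multi_trigger_queues:
        if q:                      # FIFO: take this window's unit from the head
            out.extend(q.popleft())
    return out

# Usage sketch: a "combo punch" spread over three windows plus a "jump".
combo = deque([[0x77], [0x20], [0b1]])   # 'w', space, left click, in order
print(emit_window([combo], [0x20]))      # window 1 -> [32, 119]
```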
The mapping table in the above embodiment may be the default mapping table, or a mapping table customized by the user, i.e. obtained when the game player modifies the default table. In other words, the user may map a specific combination of actions and/or postures, that is, a somatosensory data combination, to a standard protocol value of a universal human interface device; a customized mapping table can thus satisfy players' diversified gaming needs. For example, by modifying the default table, the combination of the "walk" action and the "facing forward" posture may be mapped to the ASCII code of the keyboard letter key "w", or to the value of a joystick pushed up (x=0, y=1); when the game player performs that action and posture, the system translates them into a universal HID protocol report containing that ASCII code or joystick value. Note that when the game player customizes the mapping table, the system checks whether the customized (modified) table contains conflicts, e.g. different somatosensory data mapped to the same standard protocol value. If there is a conflict, the player is prompted to modify the table again, or the modification is simply rejected.
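A minimal sketch of the conflict check, under the assumption that a conflict means two different somatosensory data combinations mapped to the same standard protocol value:

```python
def find_conflicts(mapping_table):
    """Return protocol values claimed by more than one somatosensory combination.

    mapping_table: dict of somatosensory combination -> (device, protocol value).
    """
    seen, conflicts = {}, {}
    for combo, target in mapping_table.items():
        if target in seen:
            conflicts.setdefault(target, [seen[target]]).append(combo)
        else:
            seen[target] = combo
    return conflicts

custom = {("walk", "facing forward"): ("keyboard", 0x77),
          ("run",): ("keyboard", 0x77)}          # conflict: both map to 'w'
if find_conflicts(custom):
    print("conflict detected: ask the player to modify the mapping table again")
```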
Step S105: output the data conforming to the universal human interface device protocol to the game computing platform.
In an embodiment of the application, the game computing platform may be a computing device equipped with universal human interface devices such as a keyboard and a mouse, for example a personal computer. The data conforming to the universal human interface device protocol may be output to the game computing platform over Universal Serial Bus (USB) or Bluetooth.
As can be seen from the method for providing game somatosensory data illustrated in fig. 1, after the image acquisition device captures images of the game player to obtain a color image and a depth image, the somatosensory data of the game player during play of the current somatosensory game are obtained, translated into data conforming to the universal human interface device protocol, and output to the game computing platform. Because what the game computing platform receives is data conforming to the universal human interface device protocol rather than the raw somatosensory data, a game developer does not need to drive game characters with complex, proprietary somatosensory data; the developer only needs to develop games on the platform using protocol-conformant data, with which developers are far more familiar. Further, thanks to the uniformity of the protocol, a developer does not need to distinguish between a device implementing the method provided by the application and a universal human interface device such as a keyboard, mouse or gamepad, and can simply follow the ordinary development flow for those devices. Development is therefore both more efficient and less costly, and the technical scheme of the application is compatible with all games that support universal human interface devices such as keyboards, mice or gamepads.
Referring to fig. 3, an apparatus for providing game somatosensory data according to an embodiment of the present application may include an image acquisition module 301, an identification module 302, an acquisition module 303, a translation module 304 and an output module 305, described in detail below:
the image acquisition module 301 is configured to acquire images of the game player through the image acquisition device to obtain a color image and a depth image of the game player;
the identification module 302 is configured to identify human body key point information of the game player according to the color image and the depth image;
the acquisition module 303 is configured to obtain somatosensory data of the game player during play of the current somatosensory game according to the human body key point information of the game player;
the translation module 304 is configured to translate the somatosensory data of the game player during play of the current somatosensory game into data conforming to the universal human interface device protocol;
the output module 305 is configured to output the data conforming to the universal human interface device protocol to the game computing platform.
As can be seen from the apparatus for providing game somatosensory data illustrated in fig. 3, after the image acquisition device captures images of the game player to obtain a color image and a depth image, the somatosensory data of the game player during play of the current somatosensory game are obtained, translated into data conforming to the universal human interface device protocol, and output to the game computing platform. Because what the game computing platform receives is protocol-conformant data rather than the raw somatosensory data, a game developer does not need to drive game characters with complex, proprietary somatosensory data and only needs to develop with data conforming to the universal human interface device protocol, with which developers are far more familiar. Thanks to the uniformity of the protocol, the developer also does not need to distinguish between a device implementing the method provided by the application and a universal human interface device such as a keyboard, mouse or gamepad, and can follow the ordinary development flow for those devices; development is therefore more efficient and less costly, and the technical scheme is compatible with all games that support such universal human interface devices.
Fig. 4 is a schematic structural diagram of a device according to an embodiment of the present application. As shown in fig. 4, the device 4 of this embodiment mainly includes: a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40, such as a program implementing the method of providing game motion sensing data. The processor 40, when executing the computer program 42, implements the steps in the above embodiments of the method of providing game motion sensing data, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 40, when executing the computer program 42, implements the functions of the modules/units in the above apparatus embodiments, for example the functions of the image acquisition module 301, the identification module 302, the acquisition module 303, the translation module 304, and the output module 305 shown in fig. 3.
Illustratively, the computer program 42 implementing the method of providing game motion sensing data generally comprises: capturing images of the game player through the image acquisition device to obtain a color image and a depth image of the game player; identifying human body key point information of the game player according to the color image and the depth image; acquiring motion sensing data of the game player during play of the current motion sensing game according to the human body key point information of the game player; translating the motion sensing data of the game player during play of the current motion sensing game into data conforming to a universal human-machine interface device protocol; and outputting the data conforming to the universal human-machine interface device protocol to the game computing platform. The computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to carry out the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 42 in the device 4. For example, the computer program 42 may be divided into the image acquisition module 301, the identification module 302, the acquisition module 303, the translation module 304, and the output module 305 (modules in a virtual device), whose specific functions are as described above in connection with fig. 3.
The device 4 may include, but is not limited to, the processor 40 and the memory 41. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the device 4 and does not limit the device 4, which may include more or fewer components than shown, combine certain components, or use different components; for example, the device may further include input and output devices, network access devices, buses, and the like.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the device 4, such as a hard disk or a memory of the device 4. The memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk provided on the device 4, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card). Further, the memory 41 may include both an internal storage unit of the device 4 and an external storage device. The memory 41 is used to store the computer program and other programs and data required by the device. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that the above division into functional units and modules is illustrated merely for convenience and brevity of description. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative. For instance, the division into modules or units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a non-transitory computer-readable storage medium. Based on such an understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program of the method of providing game motion sensing data may be stored in a computer storage medium, and when executed by a processor, the computer program may implement the steps of the above method embodiments, namely: capturing images of the game player through the image acquisition device to obtain a color image and a depth image of the game player; identifying human body key point information of the game player according to the color image and the depth image; acquiring motion sensing data of the game player during play of the current motion sensing game according to the human body key point information of the game player; translating the motion sensing data of the game player during play of the current motion sensing game into data conforming to a universal human-machine interface device protocol; and outputting the data conforming to the universal human-machine interface device protocol to the game computing platform.

The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The non-transitory computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the non-transitory computer-readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-transitory computer-readable medium does not include electrical carrier signals and telecommunication signals.

The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. A method of providing game motion sensing data, the method comprising:
capturing images of a game player through an image acquisition device to obtain a color image and a depth image of the game player;
identifying human body key point information of the game player according to the color image and the depth image;
acquiring motion sensing data of the game player during play of a current motion sensing game according to the human body key point information of the game player;
translating motion sensing data of the game player during play of the current motion sensing game into data conforming to a universal human-machine interface device protocol;
and outputting the data conforming to the universal human-machine interface device protocol to a game computing platform.
2. The method of providing game motion sensing data as recited in claim 1, wherein the identifying human body key point information of the game player according to the color image and the depth image comprises:
aligning the color image and the depth image to obtain an aligned color image and depth image;
fusing the aligned color image and depth image to obtain a fused image;
and estimating human body key point information of the game player by using a preset algorithm based on the depth information and the color information in the fused image.
3. The method of providing game motion sensing data of claim 2, wherein the aligning the color image and the depth image to obtain the aligned color image and depth image comprises:
obtaining a calibration result between a depth camera and a color camera, wherein the depth camera is the image acquisition device that acquires the depth image, and the color camera is the image acquisition device that acquires the color image;
calculating a transformation matrix between the depth camera and the color camera according to a calibration result between the depth camera and the color camera;
and transforming the coordinates of each pixel point in the depth image into the coordinate system of the color image according to the calibration result and the transformation matrix.
4. The method of providing game motion sensing data as set forth in claim 1, wherein the acquiring motion sensing data of the game player during play of the current motion sensing game according to the human body key point information of the game player comprises:
performing action recognition on the game player according to the human body key point information of the game player to obtain an action recognition result, and performing gesture recognition on the game player according to the human body key point information of the game player to obtain a gesture recognition result;
and synchronizing the action recognition result and the gesture recognition result to obtain the motion sensing data of the game player during play of the current motion sensing game.
5. The method of providing game motion sensing data of claim 4, wherein the performing action recognition on the game player according to the human body key point information of the game player to obtain the action recognition result comprises:
recognizing the action of the game player by using a preset algorithm according to the human body key point information of the game player, obtaining a preliminary recognition result, and caching the preliminary recognition result;
judging whether the action type of the game player is an action that requires continuous triggering;
if the preliminary recognition result indicates that the action type of the game player is an action that requires continuous triggering, outputting the preliminary recognition result whose action type is the action that requires continuous triggering as the action recognition result;
if the preliminary recognition result indicates that the action type of the game player is an action that does not require continuous triggering, judging whether a specific frame in a cache unit caching the preliminary recognition result whose action type is the action that does not require continuous triggering is a key frame;
and if the specific frame is a key frame, outputting the preliminary recognition result whose action type is the action that does not require continuous triggering as the action recognition result.
6. The method of providing game motion sensing data of claim 1, wherein the translating motion sensing data of the game player during play of the current motion sensing game into data conforming to the universal human-machine interface device protocol comprises:
querying a preset mapping table;
and mapping, according to the mapping relation of the mapping table, the motion sensing data of the game player during play of the current motion sensing game into data conforming to the universal human-machine interface device protocol.
7. The method of providing game motion sensing data of claim 6, wherein the mapping, according to the mapping relation of the mapping table, the motion sensing data of the game player during play of the current motion sensing game into data conforming to the universal human-machine interface device protocol comprises:
judging whether the motion sensing data of the game player during play of the current motion sensing game is single-trigger type motion sensing data;
if the motion sensing data of the game player during play of the current motion sensing game is single-trigger type motion sensing data, mapping the single-trigger type motion sensing data into data conforming to the universal human-machine interface device protocol according to the mapping relation of the mapping table, and then outputting the data directly;
if the motion sensing data of the game player during play of the current motion sensing game is multiple-trigger type motion sensing data, mapping the multiple-trigger type motion sensing data into data conforming to the universal human-machine interface device protocol according to the mapping relation of the mapping table, and caching the data in a cache unit;
and combining all the data conforming to the universal human-machine interface device protocol in the cache unit that meets a time window constraint, and outputting the combined data conforming to the universal human-machine interface device protocol.
8. An apparatus for providing game motion sensing data, the apparatus comprising:
the image acquisition module is used for capturing images of a game player through an image acquisition device to obtain a color image and a depth image of the game player;
the identification module is used for identifying human body key point information of the game player according to the color image and the depth image;
the acquisition module is used for acquiring motion sensing data of the game player during play of a current motion sensing game according to the human body key point information of the game player;
the translation module is used for translating the motion sensing data of the game player during play of the current motion sensing game into data conforming to a universal human-machine interface device protocol;
and the output module is used for outputting the data conforming to the universal human-machine interface device protocol to a game computing platform.
9. A device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
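To make the geometry recited in claim 3 concrete, the following Python (NumPy) sketch reprojects a single depth pixel into the coordinate system of the color image using camera intrinsics and the rigid transformation obtained from calibration. All numeric parameters are invented placeholders, not calibration values from the application:

import numpy as np

# Minimal sketch of the reprojection recited in claim 3. All camera
# parameters below are invented placeholders, not calibration values
# from the present application.
K_depth = np.array([[580.0, 0.0, 320.0],    # depth camera intrinsics
                    [0.0, 580.0, 240.0],
                    [0.0, 0.0, 1.0]])
K_color = np.array([[600.0, 0.0, 640.0],    # color camera intrinsics
                    [0.0, 600.0, 360.0],
                    [0.0, 0.0, 1.0]])
R = np.eye(3)                               # rotation between the cameras
t = np.array([0.025, 0.0, 0.0])             # translation in meters

def depth_pixel_to_color_pixel(u, v, z):
    """Reproject a depth pixel (u, v) with depth z (meters) into the
    coordinate system of the color image."""
    # Back-project the pixel to a 3D point in the depth camera's frame.
    p_depth = z * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Apply the rigid transformation obtained from calibration.
    p_color = R @ p_depth + t
    # Project the 3D point onto the color image plane.
    uvw = K_color @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

print(depth_pixel_to_color_pixel(320, 240, 1.5))

Applying this transformation to every pixel of the depth image yields the aligned depth image that claim 2 then fuses with the color image.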
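Claims 5 and 7 distinguish between data that should be passed through immediately and data that must be gated or accumulated: claim 5 gates one-shot actions on key frames, while claim 7 outputs single-trigger data directly and caches multiple-trigger data until the entries can be combined within a time window. The following Python sketch illustrates one plausible reading of the claim 7 buffering logic; the window length, event representation, and flush policy are assumptions rather than details from the application:

import time

# One plausible reading of the buffering behaviour recited in claim 7.
# The window length, the event representation, and the flush policy are
# assumptions made for illustration only.
WINDOW_SECONDS = 0.2
_cache = []  # cache unit for multiple-trigger data: (timestamp, event)

def emit(event):
    print("output to game computing platform:", event)

def handle(event, multiple_trigger, now=None):
    now = time.monotonic() if now is None else now
    if not multiple_trigger:
        emit(event)                # single-trigger data is output directly
        return
    _cache.append((now, event))    # multiple-trigger data is cached first
    # Keep only cached entries that still satisfy the time window.
    _cache[:] = [(ts, e) for ts, e in _cache if now - ts <= WINDOW_SECONDS]
    if len(_cache) > 1:            # enough data arrived within the window
        emit(tuple(e for _, e in _cache))  # output the combined data
        _cache.clear()

handle({"code": "KEY_SPACE"}, multiple_trigger=False)  # emitted at once
handle({"code": "BTN_A"}, multiple_trigger=True)       # cached
handle({"code": "BTN_B"}, multiple_trigger=True)       # combined with BTN_A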
CN202310847066.3A 2023-07-11 2023-07-11 Method, apparatus and computer storage medium for providing game somatosensory data Pending CN116764220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310847066.3A CN116764220A (en) 2023-07-11 2023-07-11 Method, apparatus and computer storage medium for providing game somatosensory data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310847066.3A CN116764220A (en) 2023-07-11 2023-07-11 Method, apparatus and computer storage medium for providing game somatosensory data

Publications (1)

Publication Number Publication Date
CN116764220A true CN116764220A (en) 2023-09-19

Family

ID=88011449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310847066.3A Pending CN116764220A (en) 2023-07-11 2023-07-11 Method, apparatus and computer storage medium for providing game somatosensory data

Country Status (1)

Country Link
CN (1) CN116764220A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination