CN117939196A - Game experience method and device

Publication number: CN117939196A
Application number: CN202311865163.1A
Authority: CN (China)
Original assignee: Beijing Zhonghe Ultra Hd Collaborative Technology Center Co ltd
Inventors: 王付生, 武云霞
Filing date: 2023-12-29
Publication date: 2024-04-26
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: video, picture, key object, experience, frame
Landscapes: Processing Or Creating Images (AREA)
Abstract

The application provides a game experience method and device, wherein the method comprises the following steps: reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in a multi-view game video; determining information of each key object in each frame of picture based on the multi-view game video; calculating the variation of each key object in the next frame of picture relative to the previous frame of picture based on the information of each key object in each frame of picture, the variation of each key object comprising the change of its position information and the change of its action information; and performing video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video. The application can render the scene video from the viewing angle of the role selected by the user, so that the user can experience the game as if present in person.

Description

Game experience method and device
Technical Field
The application relates to the technical field of video rendering, and in particular to a game experience method and device.
Background
In the prior art, when people watch a game in front of a screen, the positions and angles of the cameras are fixed at shooting time, so a viewer can only watch the game from the viewing angles captured by the cameras. The viewer cannot see what each player sees in real time, does not know how an event looks through a player's eyes, and therefore cannot experience the game as if on the scene.
Disclosure of Invention
The application aims to provide a game experience method and device, which can render the scene video from the viewing angle of a role selected by a user, so that the user can experience the game in an immersive manner.
In a first aspect, the present application provides a game experience method, the method comprising: reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in a multi-view game video; determining information of each key object in each frame of picture based on the multi-view game video; calculating the variation of each key object in the next frame of picture relative to the previous frame of picture based on the information of each key object in each frame of picture, the variation of each key object comprising the change of its position information and the change of its action information; and performing video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video.
Further, the method further comprises the steps of: erecting a plurality of cameras around the playing field, and time-synchronizing the cameras; and shooting, without dead angles, the game occurring on the playing field with the cameras, to obtain the multi-view game video.
Further, the method further comprises reconstructing a three-dimensional virtual scene of the playing field based on the basic information of the playing field. The step of reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in the multi-view game video comprises: obtaining the basic information of each key object in the first moment picture based on the multi-view game video; and adding the basic information of each key object in the first moment picture into the three-dimensional virtual scene of the playing field, thereby reconstructing the initial three-dimensional virtual scene corresponding to the first moment picture.
Further, the method further comprises dividing the multi-view game video into a plurality of video segments according to preset time nodes, wherein the picture of the first frame in each video segment is the first moment picture of the corresponding video segment. The step of reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in the multi-view game video comprises: reconstructing an initial three-dimensional virtual scene corresponding to the first frame of picture in each video segment based on the multi-view game video. The step of performing video rendering to obtain the experience video comprises: performing video rendering according to the initial three-dimensional virtual scene corresponding to the first frame of picture in each video segment and the variations of the key objects in each video segment, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain a multi-segment experience video.
Further, the step of performing video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected character among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video includes: determining a sight origin based on the head position information of the selected character in each frame of picture; determining a sight direction based on the face orientation information of the selected character in each frame of picture; determining a sight angle based on the head torsion information of the selected character in each frame of picture; and performing video rendering based on the sight origin, the sight direction, the sight angle, the initial three-dimensional virtual scene and the variations of the key objects to obtain the experience video.
Further, the step of performing video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected character among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video includes: determining a sight origin based on the head position information of the selected character in each frame of picture; and performing full-view video rendering based on the sight origin, the initial three-dimensional virtual scene and the variations of the key objects to obtain the experience video.
Further, the method further comprises the steps of: setting one or more third-view positions above the playing field; and, taking a third-view position as the origin, performing VR video rendering according to the initial three-dimensional virtual scene and the variations between two adjacent frames of pictures to obtain an experience video.
Further, the method further comprises the steps of: dividing the playing field into a plurality of matrix areas; setting audio pick-up devices to collect the sound transmitted to each matrix area of the playing field; determining the corresponding audio pick-up device based on the matrix area in which the head position of the selected character lies in the multi-view game video; forming the playing audio based on the audio picked up by the corresponding audio pick-up device; and synthesizing the experience video and the playing audio into the experience video and audio.
Further, the method further comprises the steps of: responding to a game experience request of a user, the request carrying a target experience mode, the target experience mode comprising at least one of the following: a target role substitution mode, a full view mode of target role substitution, and a third view mode; and generating the experience video and audio based on the target experience mode.
In a second aspect, the present application also provides a game experience device, the device comprising: a scene reconstruction module, used for reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in a multi-view game video; an information determining module, used for determining information of each key object in each frame of picture based on the multi-view game video; a variation calculation module, used for calculating the variation of each key object in the next frame of picture relative to the previous frame of picture based on the information of each key object in each frame of picture, the variation of each key object comprising the change of its position information and the change of its action information; and a video rendering module, used for performing video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video.
In the game experience method and device provided by the application, an initial three-dimensional virtual scene corresponding to a first moment picture in a multi-view game video is first reconstructed; then, information of each key object in each frame of picture is determined based on the multi-view game video; the variation of each key object in the next frame of picture relative to the previous frame is calculated based on that information, the variation of each key object comprising the change of its position information and the change of its action information; finally, video rendering is performed according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video. Based on the pre-reconstructed three-dimensional virtual scene of the first moment picture of the game video and the variations of the key objects across the frames of the multi-view video, the application renders the scene video from the viewing angle of the role selected by the user, enabling the user to experience the game in an immersive manner.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a game experience method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of camera placement positions according to an embodiment of the present application;
Fig. 3 is a schematic diagram of audio collection by microphones according to an embodiment of the present application;
Fig. 4 is a block diagram of a game experience device according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application will be clearly and completely described in connection with the embodiments, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In view of the fact that in the prior art a user can only experience a game video scene from a fixed shooting angle and position, and cannot experience the scene according to personal wishes, the embodiment of the application provides a game experience method and device: based on the three-dimensional virtual scene reconstructed in advance for the first moment picture of the game video and the variations of the key objects across the frames of the multi-view video, scene video rendering is performed from the viewing angle of the role selected by the user, so that the user experiences the game in person.
For the convenience of understanding the present embodiment, a game experience method disclosed in the embodiment of the present application will be described in detail first.
Fig. 1 is a flowchart of a game experience method provided by an embodiment of the present application, where the method specifically includes the following steps:
Step S102, reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in a multi-view game video;
The above multi-view game video is obtained by setting up a plurality of cameras around the playing field. When the video is collected, the cameras are first time-synchronized, and the plurality of cameras then shoot the game occurring on the playing field without dead angles.
The first moment picture is the first frame of picture of a time period: if the game video is divided into a plurality of video segments according to time nodes, the first moment picture is the first frame of picture of each video segment; if the video is not divided, the first moment picture is the first frame of the entire video.
The initial three-dimensional virtual scene comprises the three-dimensional virtual scene of the field and the basic information of the key objects added into it. The basic information includes: head position, face orientation, head twist, joint point positions, body orientation, pose, etc. The position information can be understood as the head position, the joint point positions and the like, and the action information as the twist, the orientation and similar information.
Step S104, determining information of each key object in each frame of picture based on the multi-view game video;
For ball game video, the key objects usually refer to the athletes, the referees, the ball, etc. The information of a key object specifically comprises: head position, face orientation, head twist, joint point positions, body orientation, pose, etc.; the position information can be understood as the head position, the joint point positions and the like, and the action information as the twist, the orientation and similar information.
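For illustration only, the per-frame information of a key object can be organized as a simple record. A minimal Python sketch follows (the field names and types are assumptions of this illustration, not limitations of the application):

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class KeyObjectInfo:
        """Per-frame information of one key object (athlete, referee, ball)."""
        head_position: Vec3                       # position information
        face_orientation: Vec3                    # unit vector of the face direction
        head_twist: float                         # head torsion angle in radians
        joint_positions: Dict[str, Vec3] = field(default_factory=dict)
        body_orientation: Vec3 = (0.0, 0.0, 1.0)
        pose: str = "standing"                    # coarse gesture label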
Step S106, calculating the variation of the next frame picture relative to each key object in the previous frame picture based on the information of each key object in each frame picture; the change amount of each key object includes a change in position information and a change in motion information of each key object.
The variation of each key object in the next frame of picture relative to the previous frame can be determined by comparing the information of the key objects in any two adjacent frames.
The information and variation of each key object can be computed in two ways. One is to compute, on each frame of picture of the real scene, information such as the head position, face orientation, head twist, joint point positions, body orientation and pose, together with the variation between two adjacent frames, map them into the three-dimensional virtual scene to form the information and variation of each key object there, and then compute further on that basis. The other is to compute the three-dimensional virtual scene corresponding to each frame of picture from the frames of the multi-view game video, compute the information and variation of each key object between two adjacent frames' virtual scenes, and then compute further.
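Continuing the sketch above, the variation between two adjacent frames can be expressed as simple differences (a hypothetical illustration; orientation changes could equally be expressed as rotations):

    def object_delta(prev: KeyObjectInfo, curr: KeyObjectInfo) -> dict:
        """Change of one key object in the next frame relative to the previous frame."""
        return {
            "d_head_position": tuple(c - p for p, c in
                                     zip(prev.head_position, curr.head_position)),
            "d_head_twist": curr.head_twist - prev.head_twist,
            "d_joints": {
                name: tuple(c - p for p, c in zip(prev.joint_positions[name], pos))
                for name, pos in curr.joint_positions.items()
                if name in prev.joint_positions
            },
        }

    def frame_deltas(prev_frame: Dict[str, KeyObjectInfo],
                     curr_frame: Dict[str, KeyObjectInfo]) -> Dict[str, dict]:
        """Deltas for every key object present in both adjacent frames."""
        return {oid: object_delta(prev_frame[oid], curr_frame[oid])
                for oid in prev_frame.keys() & curr_frame.keys()}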
Step S108, performing video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain the experience video.
Where the cameras are of sufficiently high definition, the above head information may also include eye information, which allows the video within the line of sight to be calculated more accurately; the eye information may include the rotation of the eyeballs.
Generally, the cameras mounted above the four corners of the field can hardly capture eyeball rotation. To capture such details more clearly, tracking cameras can be arranged around the playing field to follow designated athletes or referees and shoot their facial expressions and eyeball rotation more clearly. The athlete or referee being tracked can be changed as positions change: for example, if the person a tracking camera originally follows moves far away from it, that camera can hardly serve its original purpose any more; another tracking camera can then take over the tracking, and the original camera can be used to track someone else.
The tracking cameras can be fixed around the field, or carried by a person who follows and shoots; the arrangement is not limited here.
The selected role is the key object chosen by the user for the game experience, such as a certain athlete or referee. Video rendering is then performed according to the initial three-dimensional virtual scene and the variations of the key objects between two adjacent frames of pictures, taking the head information of the selected role in each frame of picture as the basis for the line of sight, to obtain the experience video from the viewing angle of the selected role.
According to the game experience method provided by the embodiment of the application, scene video rendering is performed from the viewing angle of the role selected by the user, based on the three-dimensional virtual scene reconstructed in advance for the first moment picture of the game video and the variations of the key objects across the frames of the multi-view video, so that the user can experience the game in an immersive manner and the user's experience of the game scene is improved. Furthermore, rendering from variations can greatly reduce the amount of data to be transmitted and stored.
The embodiment of the application also provides another game experience method, implemented on the basis of the above embodiment; this embodiment focuses on the multi-view video acquisition process, the audio acquisition process, the three-dimensional scene reconstruction process and the video rendering process.
In the embodiment of the application, a plurality of cameras are erected around the playing field and time-synchronized, so as to shoot the game occurring on the playing field without dead angles and obtain the multi-view game video. In general, to capture more detail, ultra-high-definition cameras are needed; a resolution of 4K or above is preferred, and the higher the resolution, the clearer the resulting video.
Referring to Fig. 2, four cameras may be installed above the four corners of the playing field; the height of each camera is approximately the distance from outside its corner to the center of the field, and the camera is angled at about 45°. Parameters such as the shooting angle and focal length of each camera need to be adjusted at shooting time so that the cameras capture the game video clearly.
In practice, the cameras may also be disposed above the four sides of the playing field, or elsewhere; the number of cameras is not limited to 4, and parameters such as the shooting angle, focal length and aperture of each camera need to be adjusted at shooting time to obtain the game video without dead angles.
The following describes the reconstruction process of the initial three-dimensional virtual scene of the first moment picture in detail, and specifically includes the following steps:
(1) Reconstructing a three-dimensional virtual scene of the playing field based on the basic information of the playing field. The basic information of the playing field includes field-structure information such as the field size, color, brightness, side lines and key positions, which generally do not move during the game; if necessary, it can also include the positions of the audience, the surrounding buildings and the like. The basic information of the playing field can be obtained in advance or calculated from the multi-view video pictures. The key positions can be the goals of a football game or the baskets of a basketball game; the key-position information can be the goal position, goal orientation, goal size and the like, or the basket position, basket size, basket height and the like.
(2) Obtaining the basic information of each key object in the first moment picture based on the multi-view game video, specifically including the head position, face orientation, head twist, joint point positions, body orientation, pose, etc.; the position information can be understood as the head position, the joint point positions and the like, and the action information as the twist, the orientation and similar information. The key objects generally refer to the objects that move on the field during the game.
In implementation, the accuracy of the scene can be chosen as needed. Specifically, if the requirement is low, each key object can be represented simply from its position and joint point information: the head and joints as points, the trunk and limbs as lines. For more demanding cases, each athlete can be modeled with his or her body data to represent each key object more clearly.
(3) Adding the basic information of each key object in the first moment picture into the three-dimensional virtual scene of the playing field, thereby reconstructing the initial three-dimensional virtual scene corresponding to the first moment picture.
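Putting steps (1) to (3) together, a hypothetical sketch of assembling the initial three-dimensional virtual scene (continuing the record types above; the structure of the field model is an assumption of this illustration):

    @dataclass
    class VirtualScene:
        """Three-dimensional virtual scene: static field plus key objects."""
        field_model: dict                          # size, lines, goals, baskets, ...
        key_objects: Dict[str, KeyObjectInfo] = field(default_factory=dict)

    def build_initial_scene(field_info: dict,
                            first_frame_objects: Dict[str, KeyObjectInfo]) -> VirtualScene:
        """Add the first-moment base information of each key object to the field scene."""
        scene = VirtualScene(field_model=field_info)
        scene.key_objects.update(first_frame_objects)
        return scene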
In this method, the first frame of picture serves as the initial three-dimensional virtual scene of a video period, and each subsequent frame is rendered from its variation relative to the previous frame. Consequently, when a user enters at an arbitrary moment in the middle of the video, all the variations from the initial three-dimensional virtual scene up to the entry moment must be computed before the video at the entry point can be obtained.
To enable a user to start watching from any time, the complete multi-view game video can be divided into a plurality of video segments according to preset time nodes; the first frame of each video segment is the initial frame of that segment, an initial three-dimensional virtual scene is constructed for each initial frame, and the variation of each subsequent frame relative to the previous frame within each segment is then calculated frame by frame. That is, the method in the embodiment of the application can be realized by the following steps:
(1) Dividing the multi-view game video into a plurality of video segments according to preset time nodes; the first frame of each video segment is the first moment picture of the corresponding video segment, and may also be called a key frame;
the preset time node may be half a second, one second or two seconds; of these, one second is optimal.
(2) Reconstructing an initial three-dimensional virtual scene corresponding to a first frame of picture in each video segment based on the multi-view game video;
(3) Performing video rendering according to the initial three-dimensional virtual scene corresponding to the first frame of picture in each video segment and the variations of the key objects in each video segment, taking the head information of the selected role in each frame of picture (including eye information if the camera definition suffices, which allows the in-sight video to be calculated more accurately) as the basis for the line of sight, to obtain a multi-segment experience video.
The above manner actually segments the video and then renders each segment separately. The method does not construct a three-dimensional virtual scene for every frame of picture, but one per time period, which has two advantages: 1. when viewing starts from the middle of the video, playback can start from the key frame of the selected time point, without computing from the initial three-dimensional virtual scene and the variation of the second frame relative to the first; 2. the result of the variation-based calculation can be corrected at each key frame, making the video rendering more realistic.
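A hypothetical sketch of this segment bookkeeping (one-second segments at an assumed frame rate), showing why mid-video entry only needs the deltas inside the enclosing segment:

    def split_into_segments(frames, fps=25, seconds=1.0):
        """Split the frame sequence into segments; the first frame of each
        segment is its key (first-moment) frame."""
        step = max(1, int(fps * seconds))
        return [frames[i:i + step] for i in range(0, len(frames), step)]

    def seek(frame_index, fps=25, seconds=1.0):
        """Mid-video entry: rebuild the scene from the key frame of the
        enclosing segment, then apply only `offset` per-frame deltas."""
        step = max(1, int(fps * seconds))
        return frame_index // step, frame_index % step   # (segment, offset)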
In the embodiment of the application, three user experience modes are provided: a target role substitution mode, a full view mode of target role substitution, and a third view mode. After a user initiates a game experience request through a client, different experience videos can be rendered according to the experience mode carried in the request, and combined with the playing-audio acquisition process below to obtain the final experience video and audio. Finally, the experience video and audio, i.e. the VR video and audio, are played for the user, giving the user the feeling of being personally on the scene.
That is, the game experience method provided by the embodiment of the application comprises the following steps (see the dispatch sketch after these steps):
(1) Responding to a game experience request of a user; the request carries the target experience mode; the target experience mode comprises at least one of the following: a target role substitution mode, a full view mode of target role substitution, and a third view mode;
(2) Generating the experience video and audio based on the target experience mode.
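As a hypothetical sketch, the request handling can be a simple dispatch on the carried mode (mode names and return strings are placeholders for the rendering paths described below):

    from enum import Enum
    from typing import Optional

    class ExperienceMode(Enum):
        ROLE_SUBSTITUTION = "role_substitution"   # target role substitution mode
        FULL_VIEW = "full_view"                   # full view mode of target role
        THIRD_VIEW = "third_view"                 # third view mode

    def generate_experience(mode: ExperienceMode,
                            selected_role: Optional[str] = None) -> str:
        """Dispatch a game experience request to the matching rendering path."""
        if mode is ExperienceMode.ROLE_SUBSTITUTION:
            return f"render line-of-sight video for {selected_role}"
        if mode is ExperienceMode.FULL_VIEW:
            return f"render full-view video along the track of {selected_role}"
        return "render VR video from the third-view position"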
The specific process of generating the experience video and audio is as follows:
1) Rendering the corresponding experience video in the target experience mode, with the multi-view game video processed either without segmentation or with segmentation as above;
2) Dividing the playing field into a plurality of matrix areas; setting audio pick-up devices to collect the sound transmitted to each matrix area of the playing field; determining the corresponding audio pick-up device based on the matrix area in which the head position of the selected character lies in the multi-view game video; and forming the playing audio based on the audio picked up by the corresponding audio pick-up device (a selection sketch follows step 3);
In implementation, a plurality of high-directivity microphones are arranged at the edge of the field to collect the sound at each position on the field as an array; "array" here means dividing the field into a matrix, with each microphone collecting the sound of one area. In the example of Fig. 3, one microphone collects the sound of one cell; this is only schematic, and in a practical arrangement the cell layout will most likely differ with the field size. The microphones in the front row (the row nearest the field) collect near sound, while the microphones in the second row collect far sound (positions farther toward the middle of the field) and are placed higher, because in general, the higher the directivity of a microphone, the longer its pickup axis.
The playing audio is formed from the audio collected by the high-directivity microphone corresponding to the matrix area in which the head position lies.
3) The experience video and the playing audio are synthesized into the experience video and audio, i.e. the VR video and audio.
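A hypothetical sketch of the pick-up selection in step 2): map the selected character's head position to its matrix cell and look up the microphone assigned to that cell (grid origin and cell size are assumptions of this illustration):

    def matrix_area(head_xy, field_origin=(0.0, 0.0), cell_size=5.0):
        """Map a head position (field coordinates, metres) to its matrix cell."""
        col = int((head_xy[0] - field_origin[0]) // cell_size)
        row = int((head_xy[1] - field_origin[1]) // cell_size)
        return row, col

    def pick_audio_device(head_xy, microphones):
        """microphones: {(row, col): device} assigning one device per matrix area."""
        return microphones.get(matrix_area(head_xy))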
The following details are respectively set forth for experience video generation processes corresponding to the three experience modes:
First, for the case that the target experience mode of the user is the target role substitution mode, the following steps are adopted in the embodiment of the application to obtain the experience video:
(1) Determining a sight origin based on head position information of the selected character in each frame of picture;
(2) Determining a line of sight direction based on the face orientation information of the selected character on each frame of picture;
(3) Determining a sight angle based on head torsion information of the selected character in each frame of picture;
(4) Performing video rendering based on the sight origin, the sight direction, the sight angle, the initial three-dimensional virtual scene and the variations of the key objects to obtain the experience video. When rendering, the display range of the experience video can be determined according to the visual field range of normal human eyes.
The sight origin, sight direction and sight angle of the selected character can be determined from the selected character's action information, such as the head position information, face orientation information and head torsion information; video rendering is then performed by combining the initial three-dimensional virtual scene and the variations of the key objects to obtain the experience video, which actually displays the content within the selected character's line of sight in each frame of the scene. The selected character may be determined by a rendering parameter carried in the user's experience request; that is, in the target role substitution mode, the target role is the selected character.
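For this mode, steps (1) to (3) amount to deriving a camera pose from the selected character's head information. A minimal numpy sketch (coordinate conventions are assumptions; the degenerate case where the face points straight up is ignored):

    import numpy as np

    def sight_camera(head_position, face_orientation, head_twist):
        """Line-of-sight camera pose of the selected character:
        origin from head position, direction from face orientation,
        roll (sight angle) from head twist."""
        origin = np.asarray(head_position, dtype=float)
        forward = np.asarray(face_orientation, dtype=float)
        forward /= np.linalg.norm(forward)
        world_up = np.array([0.0, 0.0, 1.0])
        right = np.cross(forward, world_up)
        right /= np.linalg.norm(right)
        up = np.cross(right, forward)
        # roll the up vector around the forward axis by the head twist
        c, s = np.cos(head_twist), np.sin(head_twist)
        up = c * up + s * np.cross(forward, up)
        return origin, forward, up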
Through the above process, the user can be immersed in the scene, brought into the character he or she wants to experience, and watch from the viewing angle of the selected character. Taking a football match as an example, the user can experience how a player dribbles from that player's angle, or see from a referee's angle the basis on which a call was made.
Second: for the case that the target experience mode of the user is the full view mode of target role substitution, the embodiment of the application obtains the experience video by the following steps:
(1) Determining a sight origin based on head position information of the selected character in each frame of picture;
(2) And rendering the full-view video based on the sight line origin, the initial three-dimensional virtual scene and the variable quantity of each key object to obtain the experience video.
Compared with the first mode, full-view video rendering lets the user watch full-view video content from the track point of the selected character in each frame of picture. The full-view experience video lets the user survey the whole field, clearly understand the situation of the entire game, see why a pass went to a certain player, and so on. The rendering parameters are likewise transmitted from the client and are chosen by the user when selecting the key object (character) to be brought into; standing at the player's or referee's position, the user can watch the whole game without omission. Here "full view" means that, at a given track point, the game can be watched up, down, left and right without dead angles, rather than only the content within the key object's line of sight.
Third: for the case that the target experience mode of the user is the third view mode, the embodiment of the application obtains the experience video by the following steps:
(1) Setting one or more positions of a third visual angle above the playing field;
(2) And taking the position of the third view angle as an origin, and performing VR video rendering according to the initial three-dimensional virtual scene and the variable quantity of two adjacent frames of pictures to obtain an experience video.
In practical application, one or more third-view positions can be set high above the field, VR video and audio are rendered, and the video is watched from the third viewing angle. The third view need not be a fixed position; it may also track the ball or hover above some key object. The third viewing angle is positioned in the air above the playing field so that most of the field can be seen, and its position can be adjusted by means of the mouse, keyboard and the like.
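As a hypothetical illustration, adjusting the third-view position with the keyboard can be as simple as the following (the key bindings are assumptions; a mouse-look handler would work the same way):

    def move_third_view(position, key, step=1.0):
        """Shift the third-view origin above the field with simple key bindings."""
        dx, dy, dz = {
            "w": (0.0,  step, 0.0),   # forward
            "s": (0.0, -step, 0.0),   # back
            "a": (-step, 0.0, 0.0),   # left
            "d": ( step, 0.0, 0.0),   # right
            "q": (0.0, 0.0,  step),   # up
            "e": (0.0, 0.0, -step),   # down
        }.get(key, (0.0, 0.0, 0.0))
        x, y, z = position
        return (x + dx, y + dy, z + dz)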
The third view mode can also be combined with the first two experience modes for viewing; specifically, a shortcut key can be provided, and pressing it switches quickly to the third viewing angle.
The rendering parameters include the specified key object (i.e. the selected character) and the experience mode (such as the three experience modes above), and may further include information such as experience time and definition. The target experience mode may be any one, or any combination, of the three.
As for the frame rate of the experience video, the number of frames per second can be determined as needed; for key highlight pictures the frame rate can be increased to achieve a more realistic effect.
In addition, the experience video can be generated in real time or in advance. For real-time generation, the following manner can be adopted: the client sends the requirement to the server, the server generates the VR experience video and audio in real time upon receiving the signal, and sends them to the client for playing while generating; the advantage is reduced memory occupation, the disadvantage high bandwidth occupation. Alternatively, each client is equipped with a VR rendering device and completes the video rendering itself: the client stores the basic three-dimensional virtual scene, the variations and the playing audio, and generates the VR video and audio after receiving the user's operation; the advantage is that no transmission is needed, at the cost of higher requirements on the client device.
For the pre-generation mode, the following manner can be adopted: the server generates the corresponding VR video and audio in advance for the viewing angle of each key object, and may also generate them for the third viewing angle. The advantage is that no video needs to be generated on the fly and the computing-power requirement is low; the disadvantage is high memory occupation.
According to the game experience method provided by the embodiment of the application, after the three-dimensional virtual scene is formed, the user selects a character to substitute into and, through VR viewing, experiences the whole game immersively along the movement track and viewing angle of the selected character, gaining a stronger feeling of being on the scene and a better experience. In addition, after the stereoscopic virtual scene is formed, the user can choose to watch in the third view mode combined with character substitution, experiencing the whole game along the movement track of the selected character while switching to the third viewing angle at will, so as to grasp the whole course of the game better.
Based on the above method embodiment, the embodiment of the present application further provides a game experience device, as shown in Fig. 4, where the device includes: a scene reconstruction module 42, configured to reconstruct an initial three-dimensional virtual scene corresponding to a first moment picture in the multi-view game video; an information determining module 44, configured to determine information of each key object in each frame of picture based on the multi-view game video; a variation calculation module 46, configured to calculate, based on the information of each key object in each frame of picture, the variation of each key object in the next frame relative to the previous frame, the variation of each key object comprising the change of its position information and the change of its action information; and a video rendering module 48, configured to perform video rendering according to the initial three-dimensional virtual scene and the variations of the key objects, taking the head information of the selected character among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video.
Further, the apparatus further includes: a video acquisition module, configured to erect a plurality of cameras around the playing field and time-synchronize the cameras, so as to shoot the game occurring on the playing field without dead angles and obtain the multi-view game video.
Further, the scene reconstruction module 42 is configured to reconstruct a three-dimensional virtual scene of the playing field based on the basic information of the playing field; obtain the basic information of each key object in the first moment picture based on the multi-view game video; and add the basic information of each key object in the first moment picture into the three-dimensional virtual scene of the playing field, thereby reconstructing the initial three-dimensional virtual scene corresponding to the first moment picture.
Further, the apparatus further includes: a video segmentation module, configured to divide the multi-view game video into a plurality of video segments according to preset time nodes, wherein the picture of the first frame in each video segment is the first moment picture of the corresponding video segment. The scene reconstruction module 42 is configured to reconstruct an initial three-dimensional virtual scene corresponding to the first frame of picture in each video segment based on the multi-view game video. The video rendering module 48 is configured to perform video rendering according to the initial three-dimensional virtual scene corresponding to the first frame of picture in each video segment and the variations of the key objects in each video segment, taking the head information of the selected character among the key objects in each frame of picture as the basis for the line of sight, to obtain a multi-segment experience video.
Further, the video rendering module 48 is configured to: determining a sight origin based on head position information of the selected character in each frame of picture; determining a line of sight direction based on the face orientation information of the selected character on each frame of picture; determining a sight angle based on head torsion information of the selected character in each frame of picture; and performing video rendering based on the sight line origin, the sight line direction, the sight line angle, the initial three-dimensional virtual scene and the variation of each key object to obtain an experience video.
Further, the video rendering module 48 is configured to: determining a sight origin based on head position information of the selected character in each frame of picture; and rendering the full-view video based on the sight line origin, the initial three-dimensional virtual scene and the variable quantity of each key object to obtain the experience video.
Further, the video rendering module 48 is configured to set one or more positions of a third viewing angle above the playing field; and taking the position of the third view angle as an origin, and performing VR video rendering according to the initial three-dimensional virtual scene and the variable quantity of two adjacent frames of pictures to obtain an experience video.
Further, the apparatus further includes: an audio acquisition module, configured to divide the playing field into a plurality of matrix areas; set audio pick-up devices to collect the sound transmitted to each matrix area of the playing field; determine the corresponding audio pick-up device based on the matrix area in which the head position of the selected character lies in the multi-view game video; and form the playing audio based on the audio picked up by the corresponding audio pick-up device; and a video-audio synthesis module, configured to synthesize the experience video and the playing audio into the experience video and audio.
Further, the apparatus further includes: a request response module, configured to respond to a game experience request of a user, the request carrying the target experience mode, the target experience mode comprising at least one of the following: a target role substitution mode, a full view mode of target role substitution, and a third view mode; the video-audio synthesis module is configured to generate the experience video and audio based on the target experience mode.
The device provided by the embodiment of the present application has the same implementation principle and technical effects as those of the foregoing method embodiment, and for the sake of brief description, reference may be made to the corresponding content in the foregoing method embodiment where the device embodiment is not mentioned.
An embodiment of the present application further provides an electronic device, as shown in fig. 5, which is a schematic structural diagram of the electronic device, where the electronic device includes a processor 51 and a memory 50, where the memory 50 stores computer executable instructions that can be executed by the processor 51, and the processor 51 executes the computer executable instructions to implement the above method.
In the embodiment shown in fig. 5, the electronic device further comprises a bus 52 and a communication interface 53, wherein the processor 51, the communication interface 53 and the memory 50 are connected by the bus 52.
The memory 50 may include a high-speed Random Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the system network element and at least one other network element is achieved via at least one communication interface 53 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, etc. may be used. The bus 52 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 52 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bi-directional arrow is shown in Fig. 5, but this does not mean there is only one bus or one type of bus.
The processor 51 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor 51 or by instructions in the form of software. The processor 51 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor 51 reads the information in the memory and, in combination with its hardware, performs the steps of the method of the previous embodiments.
The embodiment of the application also provides a computer readable storage medium, which stores computer executable instructions that, when being called and executed by a processor, cause the processor to implement the above method, and the specific implementation can refer to the foregoing method embodiment and will not be described herein.
The computer program product of the method, apparatus and electronic device provided by the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments and will not be repeated here.
The relative steps, numerical expressions and numerical values of the components and steps set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present application, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some of the technical features within the technical scope disclosed by the present application; such modifications, changes or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A game experience method, the method comprising:
Reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in the multi-view game video;
determining information of each key object in each frame of picture based on the multi-view game video;
Calculating the variation of each key object in the next frame of picture relative to the previous frame of picture based on the information of each key object in each frame of picture; the variation of each key object comprises position information change and action information change of each key object;
and performing video rendering according to the initial three-dimensional virtual scene and the variation of each key object, taking the head information of the selected role among the key objects in each frame of picture as the basis for the line of sight, to obtain an experience video.
2. A game experience method as claimed in claim 1, further comprising:
Erecting a plurality of cameras around a playing field, and performing time synchronization on each camera;
and shooting, without dead angles, the game occurring on the playing field with the cameras, so as to obtain the multi-view game video.
3. The game experience method according to claim 1, further comprising reconstructing a three-dimensional virtual scene of the playing field based on the basic information of the playing field;
The step of reconstructing the initial three-dimensional virtual scene corresponding to the first moment picture in the multi-view game video comprises the following steps:
Obtaining basic information of each key object in the first moment picture based on the multi-view game video;
adding the basic information of each key object in the first moment picture into the three-dimensional virtual scene of the playing field, thereby reconstructing the initial three-dimensional virtual scene corresponding to the first moment picture.
4. The method of claim 1, further comprising dividing the multi-view video of the game into a plurality of video segments according to a predetermined time node; wherein, the picture of the first frame in each video segment is the first moment picture of the corresponding video segment;
The step of reconstructing the initial three-dimensional virtual scene corresponding to the first moment picture in the multi-view game video comprises the following steps: reconstructing an initial three-dimensional virtual scene corresponding to a first frame of picture in each video segment based on the multi-view game video;
The step of performing video rendering based on the initial three-dimensional virtual scene and the variation of each key object by taking the head information of the selected role in each key object in each frame of picture as a sight line basis, and obtaining experience video comprises the following steps:
And performing video rendering by taking the head information of the selected role in each key object in each frame of picture as a sight line basis according to the initial three-dimensional virtual scene corresponding to the first frame of picture in each video segment and the variation of each key object in each video segment to obtain a multi-segment experience video.
5. The method according to any one of claims 1 to 4, wherein the step of performing video rendering based on the initial three-dimensional virtual scene and the variation of each key object on the basis of the head information of the selected character in each frame of picture, to obtain the experience video includes:
determining a sight origin based on head position information of the selected character in each frame of picture;
determining a sight line direction based on the face orientation information of the selected character in each frame of picture;
determining a sight angle based on head torsion information of the selected character in each frame of picture;
And performing video rendering based on the sight origin, the sight direction, the sight angle, the initial three-dimensional virtual scene and the variation of each key object to obtain an experience video.
6. The method according to any one of claims 1 to 4, wherein the step of performing video rendering based on the initial three-dimensional virtual scene and the variation of each key object on the basis of the head information of the selected character in each frame of picture, to obtain the experience video includes:
determining a sight origin based on head position information of the selected character in each frame of picture;
and performing full-view video rendering based on the sight origin, the initial three-dimensional virtual scene and the variable quantity of each key object to obtain an experience video.
7. A game experience method according to any one of claims 1 to 4, wherein the method further comprises:
Setting one or more positions of a third visual angle above the playing field;
And taking the position of the third view angle as an origin, and performing VR video rendering according to the initial three-dimensional virtual scene and the variable quantity of two adjacent frames of pictures to obtain an experience video.
8. A game experience method according to any one of claims 1 to 7, wherein the method further comprises:
dividing a playing field into a plurality of matrix areas;
setting an audio pick-up device to collect sounds transmitted to each matrix area in the competition area;
determining a corresponding audio pick-up device based on a matrix area in which the head position of the selected character is located in the multi-view game video;
Forming play audio based on the audio picked up by the corresponding audio pick-up device;
and synthesizing the experience video and the playing audio into the experience video and audio.
9. A game experience method according to any one of claims 1 to 8, wherein the method further comprises:
Responding to a game experience request of a user; the request carries a target experience mode; the target experience mode at least comprises one of the following: a target role substitution mode, a full view mode of target role substitution, and a third view mode;
And generating experience video and audio based on the target experience mode.
10. A game experience device, the device comprising:
The scene reconstruction module is used for reconstructing an initial three-dimensional virtual scene corresponding to a first moment picture in the multi-view game video;
the information determining module is used for determining information of each key object in each frame of picture based on the multi-view game video;
The change amount calculation module is used for calculating the change amount of each key object in the next frame picture relative to the previous frame picture based on the information of each key object in each frame picture; the change amount of each key object comprises position information change and action information change of each key object;
and the video rendering module is used for performing video rendering according to the initial three-dimensional virtual scene and the variation of each key object by taking the head information of the selected role in each key object in each frame of picture as a sight line basis to obtain an experience video.