WO2023029900A1 - Video frame rendering method, apparatus, device, and storage medium - Google Patents
Video frame rendering method, apparatus, device, and storage medium
- Publication number: WO2023029900A1 (PCT/CN2022/111061)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video frame
- virtual object
- server
- terminal
- target
Classifications
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/355—Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
- A63F13/5255—Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
- A63F13/5258—Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
- A63F13/533—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, for prompting the player, e.g. by displaying a game menu
- A63F13/537—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen
- A63F13/5372—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
- A63F13/5375—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
- A63F13/837—Shooting of targets
- A63F13/86—Watching games played by other players
- G06T15/02—Non-photorealistic rendering
- G06T19/00—Manipulating 3D models or images for computer graphics
- A63F2300/308—Details of the user interface
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- The present application relates to the field of computer technology, and in particular to a video frame rendering method, apparatus, device, and storage medium.
- With cloud computing, users can accomplish tasks that are difficult for a terminal to complete on its own.
- For example, users can use cloud computing technology to play games that their terminal cannot run smoothly.
- In this case, the background processing related to the game is completed by the cloud game server; the terminal only needs to send control information to the cloud game server, where the control information is used to control the game objects in the game scene.
- The cloud game server performs background processing based on the control information to obtain a video frame and sends the video frame to the terminal, which only needs to display it.
- In the related art, the display effect of a game object is configured in advance by technicians when designing the game.
- For example, the colors and styles that make up a car's display effect are configured in advance by technicians.
- Embodiments of the present application provide a video frame rendering method, apparatus, device, and storage medium, providing the ability to re-render a virtual object in a cloud application. The technical solution is as follows:
- In one aspect, a method for rendering a video frame is provided, comprising:
- the server acquires a first video frame corresponding to a first terminal, the first video frame being a video frame obtained by rendering a target virtual scene from the perspective of a controlled virtual object in the target virtual scene, and the controlled virtual object being a virtual object controlled by the first terminal;
- the server renders a target virtual object in the first video frame based on a first parameter of the first terminal to obtain a second video frame, where the target virtual object is a virtual object that the first terminal has set to be re-rendered;
- the server sends the second video frame to the first terminal, and the first terminal is configured to display the second video frame.
- In one aspect, a video frame rendering system is provided, comprising:
- a first terminal, a first server, and a second server, the first terminal, the first server, and the second server being communicatively connected;
- the first server is configured to render the target virtual scene from the perspective of the controlled virtual object in the target virtual scene, obtain a first video frame corresponding to the first terminal, and send the first video frame to the second server, the controlled virtual object being a virtual object controlled by the first terminal;
- the second server is configured to receive the first video frame
- the second server is further configured to, in the case that the first video frame displays the target virtual object of the first terminal, render the target virtual object based on the first parameter of the first terminal to obtain a second video frame;
- the second server is further configured to send the second video frame to the first terminal
- the first terminal is configured to display the second video frame in response to receiving the second video frame.
- In one aspect, a video frame rendering apparatus is provided, comprising:
- a first video frame acquisition module, configured to acquire the first video frame corresponding to the first terminal, the first video frame being a video frame obtained by rendering the target virtual scene from the perspective of the controlled virtual object in the target virtual scene, and the controlled virtual object being a virtual object controlled by the first terminal;
- a rendering module, configured to, in the case where the first video frame displays the target virtual object of the first terminal, render the target virtual object in the first video frame based on the first parameter of the first terminal to obtain a second video frame, the target virtual object being a virtual object that the first terminal has set to be re-rendered;
- a sending module, configured to send the second video frame to the first terminal, the first terminal being configured to display the second video frame.
- In one aspect, a server is provided, including one or more processors and one or more memories, the one or more memories storing at least one computer program that is loaded and executed by the one or more processors to implement the above video frame rendering method.
- In one aspect, a computer-readable storage medium is provided, in which at least one computer program is stored; the computer program is loaded and executed by a processor to implement the above video frame rendering method.
- In one aspect, a computer program product is provided, including program code; when the program code is executed by a processor, the above video frame rendering method is implemented.
- With this solution, the display effect of the virtual object displayed in the second video frame is the display effect configured by the first terminal for that virtual object.
- FIG. 1 is a schematic diagram of an implementation environment of a video frame rendering method provided by an embodiment of the present application
- FIG. 2 is a flow chart of a method for rendering a video frame provided in an embodiment of the present application
- FIG. 3 is a flow chart of a method for rendering a video frame provided in an embodiment of the present application
- FIG. 4 is a schematic diagram of an interface provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of an interface provided by an embodiment of the present application.
- FIG. 6 is a flow chart of a method for rendering a video frame provided by an embodiment of the present application.
- FIG. 7 is a schematic diagram of an interface provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of a video frame rendering device provided by an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of a server provided by an embodiment of the present application.
- The terms "first" and "second" are used to distinguish identical or similar items having substantially the same function. It should be understood that there are no logical or timing dependencies among "first", "second", and "nth", and no restriction on quantity or order of execution.
- the term "at least one" refers to one or more, and the meaning of “multiple” refers to two or more.
- multiple reference face images refer to two or more reference faces. image.
- Cloud application: a new type of application that turns the traditional "local installation, local computing" model into an "out-of-the-box" service, connecting to and controlling remote server clusters through the Internet or a LAN to complete business logic or computing tasks.
- The cloud application runs in the remote server cluster, but its interface is displayed on the terminal, which reduces the operating cost of the terminal and greatly improves work efficiency.
- Cloud gaming: also known as gaming on demand. Cloud gaming technology enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games.
- In a cloud gaming scenario, the game does not run on the player's game terminal but on a cloud server; the cloud game server renders the game scene into a video and audio stream, which is transmitted to the player's game terminal over the network.
- The player's game terminal does not need powerful graphics computing and data processing capabilities; it only needs basic streaming media playback capability and the ability to obtain the player's input instructions and send them to the cloud game server.
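- As a loose illustration of this division of labor, the sketch below shows a hypothetical thin-client loop: the terminal only receives and plays encoded frames and forwards input events. The endpoint, the length-prefixed framing, and the display_frame/poll_input callbacks are assumptions for illustration, not part of the patent.

```python
import socket
import struct

SERVER = ("cloud-game.example.com", 9000)  # hypothetical cloud game endpoint

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def thin_client_loop(display_frame, poll_input) -> None:
    """display_frame and poll_input are supplied by the platform layer."""
    with socket.create_connection(SERVER) as sock:
        while True:
            # Length-prefixed encoded video frame pushed by the cloud server.
            (size,) = struct.unpack("!I", recv_exact(sock, 4))
            display_frame(recv_exact(sock, size))        # streaming playback
            for event in poll_input():                   # player input instructions
                sock.sendall(struct.pack("!I", len(event)) + event)
```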
- Virtual scene: the virtual scene displayed (or provided) when the application is running.
- the virtual scene can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual environment, or a purely fictional virtual environment.
- the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimensions of the virtual scene.
- the virtual scene may include sky, land, ocean, etc.
- the land may include environmental elements such as deserts and cities, and the user may control virtual objects to move in the virtual scene.
- Virtual object: a movable object in the virtual scene.
- the movable object may be a virtual character, a virtual animal, an animation character, etc., such as: a character, an animal, a plant, an oil drum, a wall, a stone, etc. displayed in a virtual scene.
- the virtual object may be a virtual avatar representing the user in the virtual scene.
- the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
- The virtual object is a user character controlled through operations on the client, an artificial intelligence (AI) configured in the virtual scene battle through training, or a non-player character (NPC) set in the virtual scene.
- the virtual object is a virtual character competing in a virtual scene.
- the number of virtual objects participating in the interaction in the virtual scene is preset, or dynamically determined according to the number of clients participating in the interaction.
- The user can control the virtual object to free-fall in the sky of the virtual scene, glide, or open a parachute to descend; to run, jump, crawl, or bend forward on land; and to swim, float, or dive in the ocean.
- the user can also control the virtual object to move in the virtual scene on a virtual vehicle.
- the virtual vehicle can be a virtual car, a virtual aircraft, a virtual yacht, etc.
- the above-mentioned scenario is used as an example for illustration, and this embodiment of the present application does not specifically limit it.
- Users can also control virtual objects to fight and otherwise interact with other virtual objects through interactive props.
- The interactive props can be throwing props such as virtual grenades, virtual cluster grenades, and virtual sticky grenades ("virtual sticky mines"), or shooting props such as virtual machine guns, virtual pistols, and virtual rifles.
- This application does not specifically limit the types of interactive props.
- Android container: Android is packaged into a container image and then released as a standard container image.
- The carrier can be any container shell that supports the OCI (Open Container Initiative) standard, so it can easily be operated and maintained through K8s (Kubernetes, a container orchestration engine). With the help of K8s's powerful operation and maintenance tools, clusters of thousands of servers can be easily deployed in the cloud.
- FIG. 1 is a schematic diagram of an implementation environment of a video frame rendering method provided in an embodiment of the present application.
- the implementation environment may include a first terminal 110, a client server 120, a first server 130, a game server 140, and a second server 150.
- In some embodiments, the first terminal 110, the client server 120, the first server 130, the game server 140, and the second server 150 are nodes in a blockchain system, and the data transmitted among them is also stored on the blockchain.
- the first terminal 110 is connected to the client server 120 through a wireless network or a wired network.
- the first terminal 110 is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart watch, etc., but is not limited thereto.
- the first terminal 110 is installed and runs a client supporting virtual scene display.
- the client server 120 is a server that provides services for the client.
- The client server 120 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN), and big data and artificial intelligence platforms.
- The first terminal 110 logs in to the client server 120 through the running client, and the client server 120 provides services related to the user account, such as verifying the user account, determining the cloud game duration corresponding to the user account, or storing the personalized settings corresponding to the user account, which is not limited in this embodiment of the present application. In other words, the client server 120 is an intermediary connecting the client running on the first terminal 110 and the first server 130.
- The first server 130 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks, and big data and artificial intelligence platforms.
- The first server 130 provides the background services related to displaying the virtual scene.
- the first terminal 110 is connected to the first server 130 through a wireless network or a wired network.
- The first terminal 110 can send control information to the first server 130; the control information is used to control the virtual objects in the virtual scene.
- The first server 130 renders the virtual scene based on the control information.
- the first server 130 is connected to the client server 120 through a wireless network or a wired network.
- the first server 130 can obtain relevant information of the user account from the client server 120, and perform related operations such as game initialization based on the relevant information.
- the first server 130 is a cloud server.
- The game server 140 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks, and big data and artificial intelligence platforms.
- The game server 140 is connected to the first server 130 through a wireless network or a wired network.
- Information related to the game role is stored in the game server 140, such as the game role's friends, address book, level, and name.
- The first server 130 can obtain information related to the game role from the game server 140. One user account may correspond to multiple game roles; the client server 120 stores information related to the user account, while the game server 140 stores information related to game roles.
- The second server 150 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks, and big data and artificial intelligence platforms.
- The second server 150 provides services related to secondary rendering of the virtual scene. The second server 150 is connected to the first server 130 through a wireless network or a wired network; the first server 130 can send the video frame obtained by rendering the virtual scene to the second server 150, and the second server 150 performs secondary rendering on the video frame to realize the user's personalized processing.
- the second server 150 can send the second-rendered video frame to the first terminal 110, and the first terminal 110 displays the second-rendered video frame.
- The second server 150 is connected to the client server 120 through a wireless network or a wired network. The second server 150 can obtain the personalized settings corresponding to the user account from the client server 120 and, based on those settings, perform secondary rendering on the video frame sent by the first server 130 to obtain the secondarily rendered video frame.
- the second server 150 is integrated in the first server 130 , and the functions of the second server 150 are implemented by the system running on the first server 130 .
- the second server 150 may be a scene management server.
- There may be more or fewer first terminals than described above.
- the embodiment of the present application does not limit the number and device types of the first terminals.
- In the following, the first terminal is the first terminal 110 in the above implementation environment;
- the client server is the client server 120 in the above implementation environment;
- the first server is the first server 130 in the above implementation environment;
- the game server is the game server 140 in the above implementation environment;
- the second server is the second server 150 in the above implementation environment.
- The video frame rendering method provided by the embodiments of the present application can be applied in various cloud game scenarios, such as first-person shooting (FPS) games, third-person shooting (TPS) games, multiplayer online battle arena (MOBA) games, war chess games, or auto chess games, which is not limited in the embodiments of the present application.
- The user starts the cloud game client on the first terminal and logs in to a user account in the cloud game client; that is, the user enters the user account and the corresponding password in the cloud game client and clicks the login control to log in.
- the first terminal sends a login request to the client server, where the login request carries a user account and a corresponding password.
- The client server obtains the user account and the corresponding password from the login request and verifies them. After the verification passes, the client server sends login-success information to the first terminal.
- After receiving the login-success information, the first terminal sends a cloud game acquisition request carrying the user account to the client server.
- After obtaining the cloud game acquisition request, the client server performs a query based on the user account carried in the request, obtains multiple cloud games corresponding to the user account, and sends the identifiers of these cloud games to the first terminal, which presents them in the cloud game client.
- The user selects, from the identifiers of the cloud games displayed in the cloud game client, the identifier of the FPS game they want to play; that is, the user selects the FPS game they want to play.
- After the user selects the FPS game in the cloud game client, the first terminal sends a game start command to the client server. The game start command carries the user account, the FPS game ID, and the hardware information of the first terminal, where the hardware information includes the screen resolution of the first terminal, the model of the first terminal, and the like, which is not limited in this embodiment of the present application.
- After receiving the game start instruction, the client server forwards it to the first server, and the first server obtains the user account, the FPS game ID, and the hardware information of the first terminal from the instruction.
- The first server initializes the FPS game based on the hardware information of the first terminal so that the rendered game screen matches the first terminal, and sends the user account to the game server corresponding to the FPS game.
- After receiving the user account, the game server sends the information corresponding to the game account to the first server, and the first server starts the FPS game based on that information.
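- For illustration only, the sketch below shows what the two messages in this flow might look like; every field name here is an assumption, not taken from the patent.

```python
# Hypothetical message payloads for the login and game-start steps above.
login_request = {
    "type": "login",
    "user_account": "player_001",
    "password": "********",
}

game_start_command = {
    "type": "game_start",
    "user_account": "player_001",
    "game_id": "fps_game_42",                  # identifier of the selected FPS game
    "hardware_info": {
        "screen_resolution": (1920, 1080),     # used to match the rendered screen
        "terminal_model": "example-model",
    },
}
```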
- After the game starts, the user can control the controlled virtual object in the FPS game through the first terminal; that is, the first terminal sends control information on the controlled virtual object to the first server, and the first server renders the FPS virtual scene based on the control information to obtain the first video frame.
- the first server sends the first video frame to the second server.
- The second server can perform image recognition on the first video frame to determine whether the first video frame includes a vehicle. If the second server determines that a vehicle is displayed in the first video frame, it obtains the first parameter of the vehicle from the client server; the first parameter is determined by the client server based on the parameters the user set for the vehicle. The second server renders the vehicle in the first video frame based on the first parameter to obtain a second video frame and sends the second video frame to the first terminal, where the user can view it. If the technician configured the vehicle's color in the FPS game as blue, the above steps can adjust the vehicle's color to the red set by the user, realizing personalized configuration of the vehicle.
- The above description takes the processing of the first video frame as an example. While the FPS game runs there are multiple consecutive video frames, and each of them can be processed with the above method before being displayed by the first terminal.
- In another example, the user can set the trees in a MOBA game to be displayed as peach-blossom trees through the first terminal.
- The second server can identify the trees in the first video frame, perform secondary rendering to turn them into peach-blossom trees, obtain the second video frame, and send the second video frame to the first terminal for display.
- Besides the above-mentioned FPS games, MOBA games, war chess games, and auto chess games, the video frame rendering method provided by the embodiments of the present application can also be applied to other types of cloud games; the embodiments are not limited to these examples.
- FIG. 2 is a flow chart of a method for rendering a video frame provided in an embodiment of the present application. Referring to FIG. 2 and taking the method executed by the second server as an example, the method includes:
- 201: The second server obtains the first video frame corresponding to the first terminal.
- The first video frame is a video frame obtained by rendering the target virtual scene from the perspective of the controlled virtual object in the target virtual scene, and the controlled virtual object is a virtual object controlled by the first terminal.
- rendering the target virtual scene from the perspective of the controlled virtual object in the target virtual scene refers to: rendering a picture observed by the controlled virtual object in the target virtual scene. Since the controlled virtual object is a virtual object controlled by the first terminal, the picture observed by the controlled virtual object in the target virtual scene can simulate the picture seen by the user of the first terminal in the target virtual scene.
- Different first terminals have different controlled virtual objects, and in the same target virtual scene the pictures observed by different controlled virtual objects differ, so the screens displayed by different first terminals should also differ. Therefore, the target virtual scene needs to be rendered from the perspective of the controlled virtual object of each first terminal, and the obtained video frames correspond to that first terminal.
- After the first video frame is rendered by the first server, it is sent to the second server.
- 202: When the first video frame displays the target virtual object of the first terminal, the second server renders the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame.
- The target virtual object is a virtual object set by the first terminal, that is, a virtual object that the first terminal requires to be re-rendered.
- the target virtual object is a virtual object selected by the user through the first terminal, or a virtual object selected by the first terminal according to the position of the controlled virtual object, or a virtual object set in other ways.
- The first parameter belongs to the first terminal; for example, it is determined by the user through the first terminal, or set by default on the first terminal. The first parameter is used to re-render the target virtual object in the first video frame to obtain the second video frame.
- In this way, personalized configuration of the virtual scene is realized.
- When the user wants to change the display effect of a virtual object in the virtual scene, it can be configured directly through the first terminal; there is no need to ask technical personnel to change the underlying code and files, which is more efficient.
- 203: The second server sends the second video frame to the first terminal, and the first terminal is used to display the second video frame.
- When the video frame rendering method provided by the embodiments of the present application is applied in a cloud game scene, both the rendering of the virtual scene and the re-rendering of the video frame are implemented in the cloud, where "the cloud" refers collectively to the second server (such as a scene management server), the first server (such as a cloud server), and other related servers.
- After the second server performs secondary rendering on the first video frame, it sends the obtained second video frame to the first terminal; the first terminal does not need to perform any background processing and can directly display the second video frame, which is more efficient.
- the display effect of the virtual object displayed in the second video frame is also the display effect configured by the first terminal for the virtual object.
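- A hedged sketch of steps 201-203 is given below; the helper callables (shows_target, load_first_parameter, rerender_target, send_to_terminal) are hypothetical stand-ins for the recognition, parameter lookup, rendering, and transmission described above.

```python
from typing import Callable

def handle_first_video_frame(
    frame: bytes,
    terminal_id: str,
    shows_target: Callable[[bytes], bool],
    load_first_parameter: Callable[[str], dict],
    rerender_target: Callable[[bytes, dict], bytes],
    send_to_terminal: Callable[[str, bytes], None],
) -> None:
    # Step 202: re-render only when the target virtual object is displayed.
    if shows_target(frame):
        first_parameter = load_first_parameter(terminal_id)   # per-terminal setting
        frame = rerender_target(frame, first_parameter)
    # Step 203: the first terminal displays the frame without background processing.
    send_to_terminal(terminal_id, frame)
```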
- The above steps 201-203 are a brief introduction to the embodiment of the present application. The following describes the provided technical solution more clearly with some examples. Taking the method executed by the second server as an example, and referring to FIG. 3, the method includes:
- 301: The second server acquires the first video frame corresponding to the first terminal.
- The first video frame is a video frame obtained by rendering the target virtual scene from the perspective of the controlled virtual object in the target virtual scene, and the controlled virtual object is a virtual object controlled by the first terminal.
- The target virtual scene is the game scene of the cloud game, and the angle of view of the controlled virtual object is the angle of view of the virtual camera of the controlled virtual object.
- In some embodiments, the virtual camera is located at the head of the controlled virtual object; in other embodiments, the virtual camera is located above the controlled virtual object.
- When the controlled virtual object moves, the virtual camera follows it, and the picture captured by the virtual camera is the picture observed from above the controlled virtual object.
- the images captured by the virtual camera are rendered by the cloud game server. Since there are multiple sequential frames during the game, the multiple frames are also called video frames.
- the first server may also use the virtual camera to shoot the target virtual scene to obtain a picture, and the captured picture is also called a video frame.
- the first server is a cloud server.
- the first terminal sends control information on the controlled virtual object to the first server, and after receiving the control information, the first server determines the control information of the controlled virtual object in the target virtual scene based on the control information. perspective. Based on the perspective of the controlled virtual object, the first server renders the target virtual scene to obtain a first video frame, which is also a video frame corresponding to the first terminal. The first server sends the first video frame to the second server, and the second server obtains the first video frame.
- the control information on the controlled virtual object is used to change the position, orientation and action of the controlled virtual object in the target scene.
- For example, the control information can control the controlled virtual object to move forward, backward, left, and right in the target virtual scene; to rotate left or right in the target virtual scene; or to perform actions such as squatting, crawling, and using virtual props in the target virtual scene.
- The first server controls the controlled virtual object to move or perform actions in the target virtual scene based on the control information, and the virtual camera bound to the controlled virtual object moves along with it.
- The movement or actions of the controlled virtual object change the angle of view from which it observes the target virtual scene, and the virtual camera bound to the controlled virtual object records this change.
- In some embodiments, the second server is a scene management server.
- control information on the controlled virtual object can be sent to the first server through the first terminal, and the first server can determine the angle of view of the controlled virtual object in the target virtual scene based on the control information, based on The angle of view of the controlled virtual object in the target virtual scene is rendered to the target virtual scene to obtain the first video frame, without the need for the first terminal to render, which improves the rendering efficiency.
- The second server then only needs to perform subsequent processing directly on the first video frame.
- the first terminal displays the target virtual scene, and the controlled virtual object is displayed in the target virtual scene.
- a long connection is established between the first terminal and the first server.
- In response to the user's operation on the controlled virtual object, the first terminal sends control information corresponding to the operation to the first server. Over this long connection, multiple data packets can be sent continuously; while the connection is being maintained, if no data packets are sent, both parties need to send link detection packets to keep the link alive.
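- The following is a rough sketch of such link detection; the probe format and the 30-second idle interval are assumptions for illustration, not values from the patent.

```python
import socket
import threading
import time

LINK_DETECTION_PACKET = b"\x00"   # assumed probe format
IDLE_INTERVAL = 30.0              # assumed idle threshold, in seconds

def start_keepalive(sock: socket.socket, last_activity: dict) -> threading.Thread:
    """last_activity["t"] is refreshed whenever a real data packet is sent."""
    def probe() -> None:
        while True:
            time.sleep(IDLE_INTERVAL)
            if time.time() - last_activity["t"] >= IDLE_INTERVAL:
                sock.sendall(LINK_DETECTION_PACKET)   # keep the long connection alive
                last_activity["t"] = time.time()
    thread = threading.Thread(target=probe, daemon=True)
    thread.start()
    return thread
```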
- the operation on the controlled virtual object includes a click operation and a drag operation, wherein the click operation is a click operation on an operation control, such as a click operation on a fire control in an FPS game.
- the drag operation refers to a drag operation on an operation control or a controlled virtual object, such as a drag operation on a skill control in a MOBA game.
- the first server determines the position, posture and action of the controlled virtual object in the target virtual scene based on the control information.
- the first server determines the angle of view of the controlled virtual object based on the position, posture and action of the controlled virtual object in the target virtual scene.
- The first server renders the target virtual scene based on the perspective of the controlled virtual object to obtain the first video frame corresponding to the first terminal; that is, the first video frame is a video frame rendered based on the control information sent by the first terminal.
- For example, if the control information sent by the first terminal to the first server is used to control the controlled virtual object to squat, the first server controls the controlled virtual object to squat in the target virtual scene and determines the angle of view of the controlled virtual object after squatting. Based on the perspective of the controlled virtual object after squatting, the target virtual scene is rendered to obtain the first video frame.
- the first server sends the first video frame to the second server, and the second server obtains the first video frame.
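- These first-server steps could be organized roughly as in the sketch below; the state fields and helper callables are hypothetical, standing in for the engine's actual update, camera, and rendering logic.

```python
from dataclasses import dataclass

@dataclass
class ControlledObjectState:
    position: tuple   # (x, y, z) in the target virtual scene
    posture: str      # e.g. "standing" or "squatting"
    action: str       # e.g. "idle" or "using_prop"

def produce_first_video_frame(control_info: dict,
                              state: ControlledObjectState,
                              derive_view, render_scene, send_to_second_server) -> None:
    # Apply the control information to the controlled virtual object.
    state.position = control_info.get("position", state.position)
    state.posture = control_info.get("posture", state.posture)
    state.action = control_info.get("action", state.action)
    view = derive_view(state)            # angle of view after the update
    first_frame = render_scene(view)     # render the target virtual scene
    send_to_second_server(first_frame)   # hand off for secondary rendering
```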
- 302: The second server performs image recognition on the first video frame to determine the type of the first video frame.
- In some embodiments, the second server uses a template image of the target virtual object to perform detection in the first video frame.
- In response to detecting a region matching the template image, the second server determines the first video frame to be of a first type, where the first type indicates that the first video frame displays the target virtual object.
- In response to detecting no matching region, the second server determines the first video frame to be of a second type, where the second type indicates that the first video frame does not display the target virtual object.
- The identification of the target virtual object is sent by the client server to the second server at the beginning, and the second server determines the target virtual object to be recognized based on this identification; for example, the second server queries the template image stored in correspondence with the identification, that is, the template image of the target virtual object, and then uses this template image for detection each time a video frame corresponding to the first terminal is acquired.
- the target virtual object is a virtual object set by the first terminal, and the target virtual object is any virtual object in the target virtual scene.
- the target virtual object is a virtual car in the target virtual scene, or the target virtual object is a virtual tree in the target virtual scene, or the target virtual object is a virtual house in the virtual scene, or the target virtual object is a virtual firearm.
- This embodiment of the present application does not limit this.
- The second server can determine the type of the first video frame through template matching, that is, determine whether the target virtual object is displayed in the first video frame. Since template matching is relatively fast, determining the type of the first video frame in this way is highly efficient.
- In some embodiments, the second server uses the template image of the target virtual object to obtain the similarity between the template image and multiple regions in the first video frame. In response to there being a region among the multiple regions whose similarity with the template image meets the target similarity condition, the second server determines that the region matches the template image and determines the first video frame to be of the first type. In response to none of the similarities between the multiple regions and the template image meeting the target similarity condition, the second server determines that no region in the first video frame matches the template image and determines the first video frame to be of the second type.
- For example, the second server slides the template image of the target virtual object over the first video frame and obtains the similarity between the template image and multiple regions of the first video frame; the multiple regions are the areas covered by the template image as it slides over the first video frame.
- When the second server determines the similarity between the template image and the multiple regions, it can use color-value similarity or gray-value similarity, which is not limited in this embodiment of the present application.
- In response to a region matching the template image of the target virtual object, the second server determines the first video frame to be of the first type, that is, the first video frame displays the target virtual object.
- In response to no region in the first video frame matching the template image of the target virtual object, the second server determines the first video frame to be of the second type, that is, the first video frame does not display the target virtual object.
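- A minimal sketch of such a check using OpenCV is shown below; cv2.matchTemplate stands in for the sliding-window similarity computation, and the threshold value is an assumed "target similarity condition", not a value from the patent.

```python
import cv2
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed "target similarity condition"

def frame_shows_target(frame_bgr: np.ndarray, template_bgr: np.ndarray) -> bool:
    # Slide the template over the frame; each position yields one similarity score.
    scores = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    return float(scores.max()) >= SIMILARITY_THRESHOLD  # first type if any region matches
```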
- In some embodiments, the second server scales the template image of the target virtual object to obtain template images of multiple sizes of the target virtual object.
- Based on the template images of multiple sizes, the second server obtains the similarities between the template images and multiple regions of the first video frame, where the multiple regions include regions of different sizes.
- In response to a region matching a template image of any size, the second server determines the first video frame to be of the first type.
- In response to no region in the first video frame matching any of the template images, the second server determines the first video frame to be of the second type.
- Since the distance between the controlled virtual object and the target virtual object may differ across video frames, the size of the target virtual object may also differ across video frames.
- Performing template matching with the template image scaled to multiple sizes can therefore improve the accuracy of the template matching.
- For example, the second server scales the template image of the target virtual object to obtain template images of multiple sizes of the target virtual object.
- The second server slides the template images of each size over the first video frame and obtains the similarities between them and multiple regions of the first video frame; the multiple regions are the areas covered by the template images as they slide over the first video frame.
- When the second server determines the similarities between the template images of multiple sizes and the multiple regions, it can use color-value similarity or gray-value similarity, which is not limited in this embodiment of the present application.
- In response to a region matching one of the template images, the second server determines the first video frame to be of the first type, that is, the first video frame displays the target virtual object; in response to no region matching any of the template images, the second server determines the first video frame to be of the second type, that is, the first video frame does not display the target virtual object.
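- The multi-size variant can be sketched as below; the scale set and threshold are assumptions, and skipping scales where the template exceeds the frame is a practical detail not stated in the patent.

```python
import cv2
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # same assumed threshold as above

def frame_shows_target_multiscale(frame_bgr: np.ndarray,
                                  template_bgr: np.ndarray,
                                  scales=(0.5, 0.75, 1.0, 1.25, 1.5)) -> bool:
    h, w = template_bgr.shape[:2]
    for s in scales:
        resized = cv2.resize(template_bgr, (max(1, int(w * s)), max(1, int(h * s))))
        # Skip scales at which the template no longer fits inside the frame.
        if resized.shape[0] > frame_bgr.shape[0] or resized.shape[1] > frame_bgr.shape[1]:
            continue
        scores = cv2.matchTemplate(frame_bgr, resized, cv2.TM_CCOEFF_NORMED)
        if float(scores.max()) >= SIMILARITY_THRESHOLD:
            return True   # a region matched at this size: first type
    return False          # no match at any size: second type
```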
- In other embodiments, the second server inputs the first video frame into an image recognition model.
- The second server performs feature extraction and classification on the first video frame through the image recognition model, which outputs the type of the first video frame.
- The image recognition model is trained based on a sample video frame set.
- The sample video frame set includes positive sample video frames and negative sample video frames; the positive sample video frames are video frames showing the target virtual object, and the negative sample video frames are video frames not showing the target virtual object.
- An image recognition model trained on this sample video frame set is able to judge whether the target virtual object is displayed in a video frame.
- The image recognition model is trained by the second server in advance, and during the game the model can be used directly to determine the type of each video frame.
- In some embodiments, the second server inputs the first video frame into the image recognition model.
- The second server performs feature extraction and full-connection processing on the first video frame through the image recognition model to obtain a probability corresponding to the first video frame, the probability being the probability that the first video frame displays the target virtual object.
- In response to the probability meeting the probability condition, the second server determines the first video frame to be of the first type, where the first type indicates that the first video frame displays the target virtual object.
- In response to the probability not meeting the probability condition, the second server determines the first video frame to be of the second type, where the second type indicates that the first video frame does not display the target virtual object.
- the second server inputs the first video frame into the image recognition model, and performs convolution processing on the first video frame through the image recognition model to obtain a feature map of the first video frame.
- the second server uses the image recognition model to perform full connection processing and normalization processing on the feature map of the first video frame to obtain the probability distribution column corresponding to the first video frame, wherein the normalization processing can use S-shaped growth Curve (Sigmoid) or soft maximization (Softmax) function, which is not limited in this embodiment of the present application.
- in response to the probability of displaying the target virtual object being greater than a probability threshold, the second server determines the first video frame as the first type, that is, the first video frame displays the target virtual object.
- in response to the probability being not greater than the probability threshold, the second server determines the first video frame as the second type, that is, the first video frame does not display the target virtual object.
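- the following is a minimal sketch of such a classifier (convolution to obtain a feature map, full connection, then sigmoid normalization to a probability), assuming PyTorch and an illustrative architecture and 0.5 threshold, not the model actually trained by the second server:

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Convolution -> feature map -> full connection -> sigmoid probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # produces the feature map
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(              # full connection + sigmoid
            nn.Flatten(), nn.Linear(32 * 4 * 4, 1), nn.Sigmoid(),
        )

    def forward(self, frames):                  # frames: (N, 3, H, W) in [0, 1]
        return self.head(self.features(frames)).squeeze(1)

model = FrameClassifier().eval()
with torch.no_grad():
    prob = model(torch.rand(1, 3, 360, 640))    # stand-in first video frame
# first type if the probability exceeds the threshold, otherwise second type
frame_type = "first" if prob.item() > 0.5 else "second"
```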
- the second server divides the first video frame into multiple image blocks, and inputs the multiple image blocks into the image recognition model.
- the second server uses the image recognition model to perform feature extraction and full connection processing on the multiple image blocks to obtain multiple probabilities corresponding to the multiple image blocks, where each probability is the probability that the corresponding image block includes the target virtual object.
- in response to any probability among the multiple probabilities being greater than a probability threshold, the second server determines the first video frame as a first type, where the first type indicates that the first video frame displays the target virtual object.
- in response to none of the multiple probabilities being greater than the probability threshold, the second server determines the first video frame as a second type, where the second type indicates that the first video frame does not display the target virtual object.
- the second server divides the first video frame into multiple image blocks of the same size, inputs the multiple image blocks into the image recognition model, and performs convolution processing on the multiple image blocks through the image recognition model to obtain feature maps of the multiple image blocks.
- the second server performs full-connection processing and normalization processing on the feature maps of multiple image blocks to obtain multiple probabilities corresponding to the multiple image blocks.
- each probability is the probability that the corresponding image block displays the target virtual object, where the normalization processing can be performed by using a sigmoid or softmax function, which is not limited in this embodiment of the present application.
- in response to any probability among the multiple probabilities being greater than a probability threshold, the second server determines the first video frame as the first type, that is, the first video frame displays the target virtual object.
- in response to none of the multiple probabilities being greater than the probability threshold, the second server determines the first video frame as the second type, that is, the first video frame does not display the target virtual object.
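- a minimal sketch of this image-block variant, assuming an equal-size block grid and any block-level classifier (for example, the one sketched above) that maps a batch of blocks to per-block probabilities:

```python
import torch

def classify_by_blocks(frame, block_probs, rows=4, cols=4, threshold=0.5):
    """frame: (3, H, W) tensor; block_probs maps (N, 3, bh, bw) -> (N,) of
    per-block probabilities that the block displays the target object."""
    _, h, w = frame.shape
    bh, bw = h // rows, w // cols
    blocks = torch.stack([frame[:, r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                          for r in range(rows) for c in range(cols)])
    probs = block_probs(blocks)            # one probability per image block
    # first type if any block's probability exceeds the threshold
    return bool((probs > threshold).any())

# usage with a stand-in block classifier:
shows = classify_by_blocks(torch.rand(3, 360, 640),
                           lambda b: torch.rand(b.shape[0]))
```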
- after the above step 302, if the type indicates that the first video frame displays the target virtual object, the second server executes the following step 303; if the type indicates that the first video frame does not display the target virtual object, the second server executes the following step 304.
- in step 303, the second server renders the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame.
- when the first video frame displays the target virtual object of the first terminal, the second server acquires a first rendering parameter, where the first rendering parameter is the rendering parameter in the first parameter corresponding to a first angle and a first distance, the first angle is the angle between the controlled virtual object and the target virtual object, and the first distance is the distance between the controlled virtual object and the target virtual object.
- the second server renders the target virtual object in the first video frame based on the first rendering parameter to obtain the second video frame.
- the second server can perform secondary rendering on the target virtual object based on the first rendering parameters to change the display effect of the target virtual object.
- the distance and angle between the controlled virtual object and the target virtual object may change at any time, and different distances and angles correspond to different rendering parameters.
- since the first rendering parameter is obtained directly from the first parameter, there is no need to perform an additional calculation based on the first angle and the first distance to obtain the first rendering parameter, which is more efficient.
- the second server acquires the first angle and the first distance between the controlled virtual object and the target virtual object from the first server .
- the second server sends the first angle, the first distance and the identifier of the target virtual object to the client server.
- the client server determines the first parameter based on the identifier of the target virtual object.
- the client server searches the first parameter based on the first angle and the first distance to obtain the first rendering parameter, and sends the first rendering parameter to the second server.
- the second server determines target pixel values of multiple target pixel points of the target virtual object in the first video frame based on the first rendering parameter.
- the second server uses the target pixel value to update the pixel values of the plurality of target pixel points in the first video frame to obtain the second video frame.
- the target pixel point refers to a pixel point of the target virtual object in the first video frame; the target pixel point has an original pixel value in the first video frame, and the target pixel value is the pixel value to which the target pixel point needs to be updated, so the original pixel value of the target pixel point is updated with the target pixel value.
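- a minimal sketch of this pixel-update step, assuming the target pixel points are given as a boolean mask and the target pixel values (texture data) have already been computed from the first rendering parameter:

```python
import numpy as np

def apply_secondary_render(first_frame, object_mask, target_pixel_values):
    """first_frame: (H, W, 3) uint8; object_mask: (H, W) bool marking the
    target virtual object's pixel points; target_pixel_values: (H, W, 3)
    uint8 texture data derived from the first rendering parameter."""
    second_frame = first_frame.copy()
    # only the target pixel points are updated; every other pixel keeps the
    # first server's default rendering
    second_frame[object_mask] = target_pixel_values[object_mask]
    return second_frame
```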
- the angle between the controlled virtual object and the target virtual object may be the angle between the orientation of the controlled virtual object and the orientation of the target virtual object.
- for example, if the controlled virtual object holds a virtual gun, the orientation of the controlled virtual object is the aiming direction of the virtual gun; if the target virtual object is a virtual vehicle, the orientation of the target virtual object is directly in front of the virtual vehicle.
- the distance between the controlled virtual object and the target virtual object is the distance between the center point of the controlled virtual object and the center point of the target virtual object.
- the second server acquires the first rendering parameter and the second rendering parameter from the first parameter.
- the second server uses the first rendering parameter and the second rendering parameter to respectively render the target virtual object and the controlled virtual object to obtain the second video frame.
- the first parameter is a parameter of the first terminal, and it carries not only the first rendering parameter for rendering the target virtual object but also the second rendering parameter for rendering the controlled virtual object, which means that the user can perform secondary rendering not only on the target virtual object in the virtual scene but also on the controlled virtual object controlled by the first terminal, improving personalization and playability.
- the second server acquires the first angle and the first distance between the controlled virtual object and the target virtual object from the first server .
- the second server sends the first angle, the first distance and the identifier of the target virtual object to the client server.
- the client server determines the first parameter based on the identification of the target virtual object.
- the client server searches the first parameter based on the first angle and the first distance, obtains the first rendering parameter and the second rendering parameter, and sends the first rendering parameter and the second rendering parameter to the second server.
- after the second server receives the first rendering parameter and the second rendering parameter, in the first video frame it uses the first rendering parameter to render the target virtual object and the second rendering parameter to render the controlled virtual object, obtaining the second video frame.
- the above implementation can fulfill the following scenario: taking the target virtual object as a virtual vehicle as an example, if the user wants to set the color of the virtual vehicle to red and, at the same time, wants the clothes of the game character (controlled virtual object) controlled by the first terminal to be displayed in blue when the virtual vehicle appears in the target virtual scene, the first rendering parameter and the second rendering parameter can be configured in the first parameter through the above implementation manner.
- the second server can also use the second rendering parameter to perform secondary rendering on the controlled virtual object while using the first rendering parameter to perform secondary rendering on the target virtual object, which improves personalization.
- when the first video frame displays the target virtual object of the first terminal, the second server determines the position of the controlled virtual object in the target virtual scene.
- if the controlled virtual object is located in the target sub-scene of the target virtual scene, the second server renders the target virtual object in the first video frame based on the first parameter to obtain the second video frame.
- if the controlled virtual object is not located in the target sub-scene of the target virtual scene, the second server determines the first video frame as the second video frame.
- the target virtual scene includes multiple sub-scenes, and the multiple sub-scenes together constitute the target virtual scene.
- in some embodiments, the multiple sub-scenes are divided into virtual social scenes and virtual combat scenes, wherein the virtual social scenes include virtual fishing scenes, virtual chat rooms, virtual dance studios, virtual chess and card rooms, and the like; in a virtual social scene, a user communicates with other users by controlling a virtual object.
- the virtual combat scene is a scene in which the user controls a virtual object to fight.
- the target sub-scene is determined by the first terminal, and may be a virtual social scene or a virtual battle scene, which is not limited in this embodiment of the present application.
- the user can not only determine the first parameter through the first terminal, but also determine the target sub-scene for rendering the target virtual object, which greatly improves the degree of personalization and improves user stickiness.
- in some embodiments, the second server acquires the position of the controlled virtual object in the target virtual scene from the first server; that is, the second server sends a location acquisition request to the first server, and the location acquisition request carries the timestamp of the first video frame and the identifier of the controlled virtual object.
- after the first server receives the location acquisition request, it obtains the timestamp of the first video frame and the identifier of the controlled virtual object from the request, performs a query based on the two, and obtains the position of the controlled virtual object in the target virtual scene at the time point indicated by the timestamp of the first video frame.
- the first server sends the position of the controlled virtual object in the target virtual scene to the second server, where the position indicates the sub-scene in which the controlled virtual object is located in the target virtual scene.
- while sending the location acquisition request to the first server, the second server also sends a parameter acquisition request to the client server, where the parameter acquisition request carries the identifier of the target virtual object.
- after the client server receives the parameter acquisition request, it obtains the identifier of the target virtual object from the request.
- the client server performs a query based on the identifier of the target virtual object, obtains the first parameter corresponding to the target virtual object and the identifier of the target sub-scene, and sends the first parameter and the identifier of the target sub-scene to the second server.
- the second server receives the position sent by the first server as well as the first parameter and the identifier of the target sub-scene sent by the client server. If the sub-scene indicated by the position is the target sub-scene, the second server uses the first parameter to render the target virtual object in the first video frame to obtain the second video frame. If the sub-scene indicated by the position is not the target sub-scene, the second server does not render the first video frame and directly determines the first video frame as the second video frame; that is, the second server can directly send the first video frame to the first terminal, and the first terminal displays the first video frame.
- for example, suppose the target sub-scene is a virtual fishing scene and the target virtual object is a virtual vehicle; the first parameter is used to perform secondary rendering on the virtual vehicle to change its display effect. If the second server determines that the virtual vehicle is displayed in the first video frame but the controlled virtual object is not located in the virtual fishing scene, the second server does not use the first rendering parameter to perform secondary rendering on the first video frame, and directly determines the first video frame as the second video frame.
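- a minimal sketch of this sub-scene gate; the parameter container and render callback are hypothetical stand-ins for the client server data and the secondary-rendering routine:

```python
from dataclasses import dataclass

@dataclass
class FirstParameter:          # hypothetical container for the first parameter
    target_sub_scene: str
    rendering_params: dict

def choose_second_frame(first_frame, shows_target, current_sub_scene,
                        params, render_fn):
    """Render only when the frame shows the target object AND the controlled
    virtual object is inside the target sub-scene; otherwise pass through."""
    if shows_target and current_sub_scene == params.target_sub_scene:
        return render_fn(first_frame, params.rendering_params)
    return first_frame         # forwarded unchanged as the second video frame

# fishing-scene example: the vehicle is shown, but the player is elsewhere,
# so the first video frame is returned unmodified
params = FirstParameter("virtual_fishing", {"color": "red"})
frame = object()               # placeholder for an actual video frame
assert choose_second_frame(frame, True, "virtual_combat",
                           params, lambda f, p: f) is frame
```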
- in some embodiments, in addition to determining the type of the first video frame, the second server can also determine the position of the target virtual object in the first video frame when the first video frame is of the first type. If the second server uses the template matching method to determine the type of the first video frame, the second server can determine the region that matches the template image of the target virtual object as the position of the target virtual object in the first video frame. If the second server uses the image recognition model to determine the type of the first video frame, the image recognition model can output a detection box in the first video frame, and the detection box is the position of the target virtual object in the first video frame. After the second server determines the position of the target virtual object in the first video frame, it can perform secondary rendering at that position based on the first rendering parameter to obtain the second video frame.
- the second server sends a relative position acquisition request to the first server, where the relative position acquisition request is used to acquire the first angle and the first distance between the controlled virtual object and the target virtual object, and the relative position acquisition request carries the identifier of the controlled virtual object and the identifier of the target virtual object.
- after the first server receives the relative position acquisition request, it acquires the identifier of the controlled virtual object and the identifier of the target virtual object from the request, and determines the first angle and the first distance between the controlled virtual object and the target virtual object based on the two identifiers.
- the first server sends the first angle and the first distance between the controlled virtual object and the target virtual object to the second server.
- after the second server receives the first angle and the first distance between the controlled virtual object and the target virtual object, it sends a rendering parameter acquisition request to the client server, where the rendering parameter acquisition request carries the first angle, the first distance, and the identifier of the target virtual object.
- after receiving the rendering parameter acquisition request, the client server acquires the first angle, the first distance, and the identifier of the target virtual object from the request, and determines the first parameter based on the identifier of the target virtual object.
- the client server searches the first parameter based on the first angle and the first distance to obtain the first rendering parameter, and sends the first rendering parameter to the second server.
- the second server determines target pixel values of multiple target pixel points of the target virtual object in the first video frame based on the first rendering parameter.
- the second server uses the target pixel values to update the pixel values of the plurality of target pixel points in the first video frame to obtain the second video frame; that is, it replaces the pixel values of the plurality of target pixel points with the target pixel values.
- the target pixel values are also referred to as texture data.
- the first server renders the target virtual scene from the perspective of the controlled virtual object in the target virtual scene to obtain a first video frame 401; the first video frame 401 includes a virtual vehicle 402, and the virtual vehicle 402 is the target virtual object.
- the second server uses the first rendering parameters to perform secondary rendering on the virtual vehicle 402 to obtain a second video frame 403 . It can be seen that the virtual vehicle 404 in the second video frame 403 has different display effects from the first video frame 401 .
- in some embodiments, an Android container runs inside the first server, and rendering adopts the technique of rendering inside the Android container.
- the principle is that the GPU (Graphics Processing Unit) can be accessed directly from inside the Android container.
- the GPU has high burst computing power, so rendering can be completed directly in the Android container, which improves rendering performance and greatly reduces the delay introduced by the rendering process.
- an Android container is a complete Android system with full compatibility that can perform game rendering.
- after the rendering is completed, the first server can send a rendering-completed instruction to the client server, and the client server can record the game duration information of the current user account in real time.
- the client server can notify the client through signaling to perform processing (for example, when the remaining duration is insufficient, the client pops up a box prompting that the duration is insufficient; when the cloud game is abnormal, the client pops up a prompt informing the user of the abnormality), and so on.
- the following describes the method for the client server to generate the first parameter of the target virtual object.
- the first terminal displays a configuration interface of the target virtual object, and the configuration interface is used to obtain configuration information of the target virtual object.
- in response to an operation on the configuration interface, the first terminal sends the configuration information of the target virtual object to the client server.
- the client server receives configuration information of the target virtual object, creates a 3D model of the target virtual object based on the configuration information of the target virtual object, and acquires first parameters corresponding to the 3D model.
- in some embodiments, the client server can bind and store the user account (User ID) logged in on the first terminal, the identifier of the target virtual scene (Game ID), the identifier of the target virtual object (Object ID), and the first parameter, so that in subsequent calls the first parameter can be determined based on these identifiers, which is more efficient.
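- a minimal sketch of such a binding, assuming a simple in-memory mapping keyed by (User ID, Game ID, Object ID); the real client server would presumably use persistent storage:

```python
parameter_store = {}   # (User ID, Game ID, Object ID) -> first parameter

def bind_first_parameter(user_id, game_id, object_id, first_parameter):
    parameter_store[(user_id, game_id, object_id)] = first_parameter

def lookup_first_parameter(user_id, game_id, object_id):
    # later calls resolve the first parameter from the identifiers alone
    return parameter_store.get((user_id, game_id, object_id))

bind_first_parameter("user-1", "game-7", "vehicle-42", {"color": "red"})
assert lookup_first_parameter("user-1", "game-7", "vehicle-42") == {"color": "red"}
```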
- the first terminal displays the configuration interface of the target cloud game, where the configuration interface displays a plurality of virtual objects in the target cloud game, and the target cloud game is the cloud game selected by the user.
- the first terminal displays a configuration interface of the target virtual object, and multiple configuration options are displayed in the configuration interface.
- the first terminal generates configuration information of the target virtual object based on the selected option among the plurality of configuration options, and sends the configuration information of the target virtual object to the client server.
- after receiving the configuration information of the target virtual object, the client server renders the initial 3D model of the target virtual object based on the configuration information to obtain the 3D model of the target virtual object.
- the client server acquires rendering parameters at different angles and different distances between the virtual camera and the 3D model of the target virtual object, and the rendering parameters at different angles and different distances constitute the first parameter corresponding to the 3D model.
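- a minimal sketch of building the first parameter as a lookup table over discrete camera angles and distances, with a hypothetical render_view callback standing in for rendering the 3D model from the virtual camera; at game time the queried (first angle, first distance) pair is snapped to the nearest grid cell instead of being recomputed:

```python
def build_first_parameter(render_view, angles=range(0, 360, 30),
                          distances=(2.0, 4.0, 8.0)):
    """render_view(angle, distance) -> rendering parameters; a stand-in for
    placing the virtual camera around the 3D model at each grid point."""
    return {(a, d): render_view(a, d) for a in angles for d in distances}

def lookup_rendering_params(first_parameter, angle, distance):
    """Snap the queried (first angle, first distance) to the nearest grid
    cell so no per-frame recomputation is needed."""
    key = min(first_parameter,
              key=lambda k: (min(abs(k[0] - angle), 360 - abs(k[0] - angle)),
                             abs(k[1] - distance)))
    return first_parameter[key]

table = build_first_parameter(lambda a, d: {"angle": a, "distance": d})
assert lookup_rendering_params(table, 44.0, 3.1) == {"angle": 30, "distance": 4.0}
```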
- the first terminal starts the cloud game client, displays the running interface of the cloud game client, and displays multiple cloud games to be selected on the running interface.
- the first terminal displays the configuration interface of the target cloud game, where the configuration interface displays a plurality of virtual objects available for personalized processing in the target cloud game; for example, the plurality of virtual objects include virtual vehicles, virtual trees, virtual houses, and the like.
- the first-level configuration options are used to select the type of the virtual vehicle; in some embodiments, the first terminal uses the brand of the vehicle to indicate the type of the virtual vehicle.
- in response to the selection of any one of the multiple first-level configuration options, the first terminal displays multiple second-level configuration options 503 corresponding to that first-level configuration option, where the second-level configuration options are used to select the style of the virtual vehicle; in some embodiments, the first terminal uses the model of the vehicle to indicate the style of the virtual vehicle. In response to the selection of any second-level configuration option among the multiple second-level configuration options, the first terminal displays multiple third-level configuration options 504 corresponding to that second-level configuration option, where the third-level configuration options are used to select the color of the virtual vehicle, such as red, black, blue, and so on.
- in response to the selection of any third-level configuration option among the multiple third-level configuration options, the first terminal displays multiple fourth-level configuration options 505, where the fourth-level configuration options are used to select the scope of application of the configuration information of the target virtual object, including "visible only to myself", "visible to everyone", "permanent use", "single use", and the like. "Visible only to myself" indicates that, during the game, the second server performs secondary rendering only on video frames corresponding to the first terminal and not on video frames corresponding to other terminals; "visible to everyone" instructs the second server to perform secondary rendering on the video frames corresponding to all terminals during the game.
- for example, suppose the virtual vehicle in the target virtual scene is displayed as type A-style B-red by default, and the user of the first terminal sets the virtual vehicle to type C-style D-white through the configuration interface of the virtual vehicle. If the user selects "visible only to myself", then during the game the virtual vehicle is displayed as type C-style D-white in the target virtual scene displayed on the first terminal, while for other terminals displaying the target virtual scene the virtual vehicle is still displayed as type A-style B-red. If the user of the first terminal selects "visible to everyone", then during the game all terminals displaying the target virtual scene display the virtual vehicle as type C-style D-white.
- for the option "permanent use", the second server determines the rendering parameters according to the configuration information determined this time in every subsequent game; for the option "single use", the second server determines the rendering parameters according to the configuration information determined this time only during the current game.
- in some embodiments, the first terminal determines the setting parameters of the target virtual object and sends them to the client server, and the client server creates the 3D model of the target virtual object based on the setting parameters of the target virtual object with the help of the Open Graphics Library (OpenGL).
- the client server acquires rendering parameters at different angles and different distances between the virtual camera and the 3D model of the target virtual object, and the rendering parameters at different angles and different distances constitute the first parameter corresponding to the 3D model.
- in some embodiments, technicians can create a large number of 3D models of virtual vehicles in advance through the client server based on the basic information of the vehicle (brand CarType, style CarStyle, color CarColor, etc.), so that the corresponding 3D model can be called directly during the game, which is more efficient.
- the following steps can also be performed:
- the second server acquires the second animation and the second audio corresponding to the target virtual object.
- the second server adds a second animation corresponding to the target virtual object to the second video frame to obtain a seventh video frame.
- the second server sends the seventh video frame and the second audio to the first terminal, and the first terminal plays the second audio while displaying the seventh video frame.
- the position at which the second animation is added in the second video frame is determined by the user or a technician; for example, the second animation is added at a position adjacent to the target virtual object, such as above or beside the target virtual object, which is not limited in this embodiment of the present application.
- the second server performs secondary rendering on the first video frame based on the first parameter to obtain the second video frame, after which further rendering can be performed on the basis of the second video frame; that is, a second animation corresponding to the target virtual object is added to the second video frame to obtain a seventh video frame.
- the second audio corresponding to the target virtual object is also obtained.
- the first terminal can play the second audio while displaying the seventh video frame. Since the first parameter, the second animation, and the second audio are all determined by the user through the first terminal, the same first video frame may be rendered by the second server into a different seventh video frame with different second audio under different settings, which enhances personalization.
- the second server acquires the second animation and the second audio corresponding to the target virtual object from the client server, and both the second animation and the second audio are determined by the first terminal. That is, the second server sends a first animation acquisition request to the client server, and the first animation acquisition request carries the identifier of the target virtual object.
- after the client server receives the first animation acquisition request, it acquires the identifier of the target virtual object from the request, performs a query based on the identifier, acquires the second animation and the second audio corresponding to the target virtual object, and sends the second animation and the second audio to the second server.
- after receiving the second animation and the second audio, the second server adds the second animation to the second video frame; for example, it adds the first frame of the second animation to the second video frame to obtain the seventh video frame.
- the second server sends the seventh video frame and the second audio to the first terminal, and the first terminal receives the seventh video frame and the second audio, and plays the second audio while displaying the seventh video frame.
- for example, suppose the target virtual object is a virtual vehicle, the second animation is a gradually appearing thumbs-up, and the first parameter adjusts the virtual vehicle to red: in the seventh video frame displayed by the first terminal, the color of the virtual vehicle is red, the thumbs-up is displayed beside the virtual vehicle, and the second audio is played at the same time.
- in step 304, the second server sends the first video frame to the first terminal, and the first terminal is used to display the first video frame.
- that is, when the first video frame does not display the target virtual object, the second server does not need to perform secondary rendering on the first video frame, and directly forwards the first video frame to the first terminal for the first terminal to display.
- in step 305, the second server sends the second video frame to the first terminal, and the first terminal is used to display the second video frame.
- the second server performs secondary rendering on the first video frame based on the first rendering parameter to obtain the second video frame and sends the second video frame to the first terminal; the target virtual object in the second video frame displayed on the first terminal carries the display effect configured by the first terminal, thereby realizing quick adjustment of the display effect of the target virtual object.
- any of the following steps can be performed:
- the second server aggregates the multiple second video frames into a first video frame set and sends the first video frame set to the first terminal, and the first terminal is used to share the first video frame set with other terminals.
- since the second video frames are obtained through secondary rendering, their display effect is personalized; the second server can aggregate the second video frames into the first video frame set and send it to the first terminal, and the first terminal can share the first video frame set with other terminals.
- in this way, the user's personalized settings for the cloud application encourage other users to play the cloud application, so that the cloud application spreads more widely.
- in some embodiments, after acquiring the first video frame set, the first terminal can share the first video frame set through a social network, so that the first video frame set is disseminated more widely.
- the second server splices the second video frame and the first video frame to obtain a spliced video frame.
- the second server aggregates the multiple spliced video frames into a second video frame set, and sends the second video frame set to the first terminal, and the first terminal is used to share the second video frame set with other terminals.
- the second video frame is rendered by the second server based on the user's settings, while the first video frame is rendered by the first server by default; splicing the second video frame with the first video frame shows the two display effects side by side. The second server can aggregate the spliced video frames into a second video frame set and send it to the first terminal, and the first terminal can share the second video frame set with other terminals; that is, users share their personalized settings for the cloud application with other users, encouraging other users to play the cloud application, so that the cloud application spreads more widely.
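- a minimal sketch of the splicing step, assuming same-size frames stored as numpy arrays; the default first frame and the personalized second frame are placed side by side:

```python
import numpy as np

def splice(first_frame, second_frame):
    """Place the default render and the personalized render side by side.
    Both frames: (H, W, 3) uint8 arrays of the same size."""
    return np.hstack([first_frame, second_frame])

def build_second_video_frame_set(first_frames, second_frames):
    # one spliced frame per (default, personalized) pair
    return [splice(f, s) for f, s in zip(first_frames, second_frames)]
```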
- the above description takes the case in which the second server processes the first video frame to obtain the second video frame as an example.
- in some embodiments, the first server performs real-time rendering to obtain a series of video frames, the series of video frames are all video frames corresponding to the first terminal, and the first video frame is one video frame in the series.
- the second server can perform the above-mentioned steps 301-305.
- in some embodiments, the second server can also perform the following steps 306-308, so that the second server can perform secondary rendering on the video frames corresponding to other terminals, so as to change the display effect of the target virtual object in the virtual scene.
- the second server acquires a fifth video frame corresponding to the second terminal, where the fifth video frame is a video frame obtained by rendering the target virtual scene from the perspective of the second virtual object.
- the method for the second server to obtain the fifth video frame corresponding to the second terminal belongs to the same inventive concept as the method for the second server to obtain the first video frame corresponding to the first terminal; for the implementation process, refer to the relevant description of step 301 above, which is not repeated here.
- the second server performs target processing on the fifth video frame to obtain a sixth video frame.
- the method for the second server to determine whether the target virtual object is displayed in the fifth video frame belongs to the same inventive concept as the method for the second server to determine whether the first video frame displays the target virtual object; for the implementation process, refer to the relevant description of step 302 above, which is not repeated here.
- in some embodiments, when the fifth video frame displays the target virtual object, the second server renders the target virtual object in the fifth video frame based on the second parameter of the second terminal to obtain the sixth video frame.
- the second parameter is determined by the second terminal.
- the second server can use the second parameter to render the target virtual object in the fifth video frame to obtain the sixth video frame, and then send the sixth video frame to the second terminal for display by the second terminal. That is, the second server uses the first parameter to perform secondary rendering on the video frames that include the target virtual object and sends them to the first terminal, and uses the second parameter to perform secondary rendering on the video frames that include the target virtual object and sends them to the second terminal.
- in this way, the target virtual object may have different display effects in the video frames displayed on the first terminal and the second terminal; for example, if the first parameter indicates that the virtual vehicle is rendered red and the second parameter indicates that the virtual vehicle is rendered blue, the virtual vehicle is displayed in red on the first terminal and in blue on the second terminal.
- in some embodiments, the second server obtains the second angle and the second distance between the second virtual object and the target virtual object from the first server. The second server sends the second angle, the second distance, and the identifier of the target virtual object to the client server. After receiving the second angle, the second distance, and the identifier of the target virtual object, the client server determines the second parameter based on the identifier of the target virtual object. The client server searches the second parameter based on the second angle and the second distance to obtain the third rendering parameter, and sends the third rendering parameter to the second server. After receiving the third rendering parameter, the second server uses the third rendering parameter to render the target virtual object in the fifth video frame to obtain the sixth video frame.
- in other embodiments, when the fifth video frame displays the target virtual object, the second server renders the target virtual object in the fifth video frame based on the first parameter to obtain the sixth video frame.
- the second server can use the first parameter to render the target virtual object in the fifth video frame to obtain the sixth video frame, and then send the sixth video frame to the second terminal for display by the second terminal. That is, in the video frames displayed on the first terminal and the second terminal, the target virtual object has the same display effect.
- in some embodiments, the second server obtains the second angle and the second distance between the second virtual object and the target virtual object from the first server.
- the second server sends the second angle, the second distance, and the identifier of the target virtual object to the client server.
- the client server determines the first parameter based on the identification of the target virtual object.
- the client server searches the first parameter based on the second angle and the second distance to obtain a third rendering parameter, and sends the third rendering parameter to the second server.
- the second server uses the third rendering parameter to render the target virtual object in the fifth video frame to obtain a sixth video frame.
- in still other embodiments, when the fifth video frame displays the target virtual object, the second server adds the second animation corresponding to the target virtual object to the fifth video frame to obtain the sixth video frame.
- the second server can add the second animation corresponding to the target virtual object in the fifth video frame, thereby improving personalization.
- in some embodiments, when the controlled virtual object and the second virtual object are simultaneously in the target sub-scene of the target virtual scene and the fifth video frame displays the target virtual object, the second server compares the virtual levels of the controlled virtual object and the second virtual object. If the virtual level of the controlled virtual object is higher than that of the second virtual object, the second server renders the target virtual object in the fifth video frame based on the first parameter to obtain the sixth video frame. If the virtual level of the controlled virtual object is lower than that of the second virtual object, the second server renders the target virtual object in the fifth video frame based on the second parameter of the second terminal to obtain the sixth video frame. If the virtual levels of the controlled virtual object and the second virtual object are the same, the second server renders the target virtual object in the fifth video frame based on a third parameter to obtain the sixth video frame.
- the third parameter is set by the first server.
- the division of the target virtual scene into multiple sub-scenes and the determination of the target sub-scene are as described above and are not repeated here.
- in some embodiments, the virtual level is the level of the virtual object in the target virtual scene; the higher the level, the stronger the combat capability of the virtual object.
- in other embodiments, the virtual level is the membership level of the game account corresponding to the virtual object; the higher the membership level, the more services can be enjoyed in the cloud application.
- the second server can determine, based on the virtual levels of the controlled virtual object and the second virtual object, whether to use the first parameter set by the first terminal or the second parameter set by the second terminal to perform secondary rendering on the target virtual object, so as to encourage users to raise the virtual level of their virtual objects and improve their enthusiasm for the game.
- in some embodiments, the second server sends a virtual level acquisition request to the client server, where the virtual level acquisition request carries the identifier of the controlled virtual object and the identifier of the second virtual object.
- after the client server receives the virtual level acquisition request, it acquires the identifier of the controlled virtual object and the identifier of the second virtual object from the request, performs a query based on the two identifiers, and obtains the first virtual level of the controlled virtual object and the second virtual level of the second virtual object.
- the client server sends the first virtual level of the controlled virtual object and the second virtual level of the second virtual object to the second server.
- after the second server receives the first virtual level of the controlled virtual object and the second virtual level of the second virtual object, if the first virtual level is greater than the second virtual level, the second server obtains the first parameter from the client server and uses the first parameter to render the target virtual object in the fifth video frame to obtain the sixth video frame.
- if the first virtual level is less than the second virtual level, the second server obtains the second parameter from the client server and uses the second parameter to render the target virtual object in the fifth video frame to obtain the sixth video frame.
- if the first virtual level is equal to the second virtual level, the second server obtains the third parameter from the client server and uses the third parameter to render the target virtual object in the fifth video frame to obtain the sixth video frame; in some embodiments, the third parameter is determined based on the first virtual level.
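- a minimal sketch of this level-based selection; the three parameter fetchers are hypothetical stand-ins for the client-server queries described above:

```python
def select_parameter(first_level, second_level, get_first, get_second, get_third):
    """Pick which terminal's parameter drives the secondary rendering of
    the shared target virtual object."""
    if first_level > second_level:
        return get_first()         # first terminal's settings win
    if first_level < second_level:
        return get_second()        # second terminal's settings win
    return get_third()             # equal levels: fall back to the third parameter

assert select_parameter(10, 7,
                        lambda: {"color": "red"},
                        lambda: {"color": "blue"},
                        lambda: {"color": "default"}) == {"color": "red"}
```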
- the second server sends the sixth video frame to the second terminal, where the second terminal is used to display the sixth video frame.
- in this way, the second server performs secondary rendering on the video frame based on the first parameter to obtain the second video frame, and the display effect of the virtual object displayed in the second video frame is the display effect configured for the virtual object by the first terminal.
- the embodiment of the present application also provides another video frame rendering method, which is suitable for the situation where a target event occurs in the target virtual scene; referring to FIG. 6, the method includes the following steps:
- the second server acquires a third video frame corresponding to the first terminal, where the third video frame is obtained by rendering the target virtual scene from the perspective of the controlled virtual object after a target event occurs in the target virtual scene , the target event is an event associated with the controlled virtual object.
- in some embodiments, in response to a target event occurring in the target virtual scene, the first server sends the third video frame corresponding to the first terminal to the second server, and while sending the third video frame, the first server also sends prompt information bound to the third video frame, where the prompt information carries the identifier of the target event.
- after the second server obtains the third video frame, it can determine based on the prompt information that a target event has occurred in the target virtual scene, so as to trigger subsequent rendering of the third video frame.
- the target event is that the controlled virtual object defeats the first virtual object in the target virtual scene
- the defeat means that the actions of the controlled virtual object in the target virtual scene reduce the life value of the first virtual object to 0; for example, the controlled virtual object attacks the first virtual object with a virtual gun, a virtual dagger, a virtual grenade, or the like, reducing the life value of the first virtual object to 0, where the first virtual object is a virtual object on a different team from the controlled virtual object, or a virtual object hostile to the controlled virtual object.
- the first server sends a third video frame to the second server in real time, and the third video frame is the first frame after the first virtual object is defeated.
- while sending the third video frame to the second server, the first server also sends the prompt information bound to the third video frame.
- in some embodiments, the prompt information also carries the position at which the first virtual object was defeated.
- in this way, the second server can subsequently determine the area in the third video frame where the first virtual object was defeated.
- in some embodiments, in response to the occurrence of the target event in the target virtual scene, the first server can also use a hook function to hook the target event and notify the second server of the target event, that is, inform the second server by means of the prompt information that a target event has occurred in the target virtual scene.
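- a minimal sketch of such a hook, wrapping a hypothetical defeat handler on the first server so that each target event also sends the prompt information to the second server; the handler signature and notify call are illustrative assumptions:

```python
import functools

def hook_target_event(handler, notify_second_server):
    """Wrap the first server's defeat handler so every target event also
    sends prompt information to the second server."""
    @functools.wraps(handler)
    def wrapped(controlled_id, defeated_id, position, frame_ts):
        result = handler(controlled_id, defeated_id, position, frame_ts)
        notify_second_server({          # prompt information bound to the frame
            "event": "defeat",
            "defeated_position": position,
            "frame_ts": frame_ts,
        })
        return result
    return wrapped

sent = []
on_defeat = hook_target_event(lambda *args: "ok", sent.append)
on_defeat("player-1", "enemy-9", (320, 180), 123456)
assert sent[0]["event"] == "defeat"
```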
- the target event is the controlled virtual object defeating the first virtual object in the target virtual scene.
- the target event can also be the controlled virtual object picking up a target virtual prop in the target virtual scene, the target virtual vehicle being activated for the controlled virtual object in the target virtual scene, the controlled virtual object continuously defeating multiple virtual objects in the target virtual scene, and so on; target events are set by technicians according to actual conditions, which is not limited in this embodiment of the present application.
- the second server adds the first animation corresponding to the target event to the third video frame to obtain a fourth video frame.
- the target event is that the controlled virtual object defeats the first virtual object in the target virtual scene
- the second server determines an area where the first virtual object is defeated in the third video frame.
- the second server adds the first animation corresponding to the target event in the area to obtain the fourth video frame.
- the corresponding relationship between the target event and the first animation is set through the first terminal; for example, the target event is selected through the first terminal and the animation to be bound to the target event is selected from multiple animations provided by the client server, or the first animation corresponding to the target event is uploaded to the client server through the first terminal, and the client server binds the animation to the target event after receiving it.
- in some embodiments, while sending the third video frame to the second server, the first server also sends the prompt information bound to the third video frame.
- the second server acquires the identifier of the target event and the location where the first virtual object was defeated from the prompt information.
- the second server determines the first animation corresponding to the target event based on the identification of the target event, and determines the defeated area of the first virtual object based on the defeated position of the first virtual object.
- the second server adds the first animation corresponding to the target event in the area to obtain the fourth video frame.
- in other embodiments, when the first server sends the third video frame to the second server, it also sends the prompt information bound to the third video frame.
- the second server acquires the identifier of the target event and the location where the first virtual object was defeated from the prompt information.
- the second server sends a second animation acquisition request to the client server, and the second animation acquisition request carries the identifier of the target event.
- after receiving the second animation acquisition request, the client server obtains the identifier of the target event from the request, performs a query based on the identifier of the target event, obtains the first animation corresponding to the target event, and sends the first animation corresponding to the target event to the second server.
- after receiving the first animation corresponding to the target event, the second server adds the animation to the area where the first virtual object was defeated to obtain the fourth video frame.
- the second server can add the first frame of the animation to the third video frame to obtain the fourth video frame.
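- a minimal sketch of compositing the animation's first frame onto the third video frame at the defeated area, assuming an RGBA animation frame that fits within the frame bounds:

```python
import numpy as np

def add_animation_frame(third_frame, anim_rgba, top_left):
    """third_frame: (H, W, 3) uint8; anim_rgba: (h, w, 4) uint8 first frame
    of the animation; top_left: (y, x) of the defeated area."""
    fourth = third_frame.copy()
    y, x = top_left
    h, w = anim_rgba.shape[:2]
    # alpha-blend the animation over the defeated area only
    alpha = anim_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = fourth[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * anim_rgba[:, :, :3] + (1.0 - alpha) * region
    fourth[y:y + h, x:x + w] = blended.astype(np.uint8)
    return fourth
```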
- the second server sends the fourth video frame to the first terminal, where the first terminal is used to display the fourth video frame.
- the second server can also acquire the first audio corresponding to the target event, where the first audio is the audio of the first terminal.
- the second server sends the first audio to the first terminal, and the first terminal is used to play the first audio while displaying the fourth video frame.
- the corresponding relationship between the target event and the first audio is set through the first terminal; for example, the target event is selected through the first terminal and the first audio to be bound to the target event is selected from multiple audios provided by the client server, or the first audio corresponding to the target event is uploaded to the client server through the first terminal, and the client server binds the first audio to the target event after receiving it.
- while sending the third video frame to the second server, the first server also sends the prompt information bound to the third video frame.
- the second server acquires the identifier of the target event from the prompt information.
- the second server sends animation and audio acquisition requests to the client server, where the animation and audio acquisition requests carry the identifier of the target event.
- after receiving the animation and audio acquisition request, the client server obtains the identifier of the target event from the request, performs a query based on the identifier of the target event, obtains the first animation and the first audio corresponding to the target event, and sends the first animation and the first audio corresponding to the target event to the second server.
- after receiving the first animation and the first audio corresponding to the target event, the second server adds the animation to the third video frame to obtain the fourth video frame; while sending the fourth video frame to the first terminal, the second server also sends the first audio to the first terminal. After receiving the fourth video frame and the first audio, the first terminal plays the first audio while displaying the fourth video frame.
- in a complete example, in response to the controlled virtual object defeating the first virtual object in the target virtual scene, the first server sends the third video frame and the prompt information bound to the third video frame to the second server, where the prompt information carries the identifier of the target event and the position at which the first virtual object was defeated.
- the second server acquires the identifier of the target event and the location where the first virtual object was defeated from the prompt information.
- the second server sends animation and audio acquisition requests to the client server, where the animation and audio acquisition requests carry the identifier of the target event.
- after the client server receives the animation and audio acquisition request, it obtains the identifier of the target event from the request, performs a query based on the identifier of the target event, obtains the first animation and the first audio corresponding to the target event, and sends the first animation and the first audio corresponding to the target event to the second server.
- after receiving the first animation and the first audio corresponding to the target event, the second server adds the animation at the position where the first virtual object was defeated in the third video frame to obtain the fourth video frame; while sending the fourth video frame to the first terminal, the second server also sends the first audio to the first terminal. After receiving the fourth video frame and the first audio, the first terminal plays the first audio while displaying the fourth video frame.
- for example, if the animation is an animation of a dancing cartoon figure, the animation is played at the position where the first virtual object was defeated while the first audio is played, and the user can use the animation and the first audio to motivate himself to perform better in the game.
- the second server adds animation 703 to the position 702 where the first virtual object is defeated in the fourth video frame 701 .
- in the embodiments of the present application, if the user wants a specified animation to be played when a target event occurs in the target virtual scene, the user can select a target animation for the target event through the first terminal.
- when the target event occurs, the third video frame is secondarily rendered by the second server to obtain the fourth video frame, and the animation set through the first terminal is displayed in the fourth video frame.
- in this way, a secondary-rendering animation function is provided in the cloud application, and the user can quickly and efficiently adjust the display effect of the virtual scene, which expands the functional scope of the cloud application, improves its personalization, and makes it spread more widely.
- the first terminal, the client server, the first server, the second server and the game server constitute a video frame rendering system.
- the implementation environment of the video frame rendering method was introduced above with reference to FIG. 1, which briefly described the functions of each component; the video frame rendering system provided by the embodiment of the present application is introduced below in conjunction with the above method embodiments. Referring to FIG. 1, the system includes a first terminal, a first server, and a second server, where the first terminal, the first server, and the second server are communicatively connected.
- the first server is a cloud server.
- the second server is a scene management server.
- the first server is used to render the target virtual scene from the perspective of the controlled virtual object in the target virtual scene, obtain the first video frame corresponding to the first terminal, and send the first video frame to the second server,
- the controlled virtual object is a virtual object controlled by the first terminal.
- the second server is used for receiving the first video frame.
- the second server is further configured to render the target virtual object in the first video frame based on the first parameter of the first terminal when the first video frame displays the target virtual object of the first terminal , to get the second video frame.
- the second server is also used to send the second video frame to the first terminal.
- the first terminal is used for displaying the second video frame in response to receiving the second video frame.
- the first terminal is further configured to send control information to the first server, where the control information is used to control actions of the controlled virtual object in the target virtual scene.
- the first server is further configured to determine the angle of view of the controlled virtual object in the target virtual scene based on the control information, and render the target virtual scene based on the angle of view of the controlled virtual object in the target virtual scene to obtain the The first video frame.
- the system further includes a client server, and the client server is respectively connected in communication with the first terminal, the first server, and the second server.
- the first terminal is also used to display a configuration interface of the target virtual object, and the configuration interface is used to acquire configuration information of the target virtual object.
- the first terminal is further configured to send configuration information of the target virtual object to the client server in response to an operation on the configuration interface.
- the client server is used to receive configuration information of the target virtual object, create a 3D model of the target virtual object based on the configuration information of the target virtual object, and obtain the first parameter corresponding to the 3D model.
- the second server is further configured to acquire a first rendering parameter when the first video frame displays the target virtual object of the first terminal, where the first rendering parameter is the rendering parameter in the first parameter corresponding to a first angle and a first distance, the first angle is the angle between the controlled virtual object and the target virtual object, and the first distance is the distance between the controlled virtual object and the target virtual object; and to render the target virtual object in the first video frame based on the first rendering parameter to obtain the second video frame.
- the second server is further configured to determine target pixel values of multiple target pixel points of the target virtual object in the first video frame based on the first rendering parameter. Using the target pixel value, update the pixel values of the plurality of target pixel points in the first video frame to obtain the second video frame.
- the second server is further configured to obtain a first rendering parameter and a second rendering parameter from the first parameter when the first video frame displays the target virtual object of the first terminal. In the first video frame, the first rendering parameter and the second rendering parameter are used to render the target virtual object and the controlled virtual object respectively to obtain the second video frame.
- the second server is further configured to determine the position of the controlled virtual object in the target virtual scene when the first video frame displays the target virtual object of the first terminal.
- if the controlled virtual object is in the target sub-scene of the target virtual scene, the target virtual object in the first video frame is rendered based on the first parameter to obtain the second video frame. If the controlled virtual object is not in the target sub-scene of the target virtual scene, the first video frame is determined as the second video frame.
- the second server is further configured to perform image recognition on the first video frame to determine the type of the first video frame. If the type indicates that the target virtual object is displayed in the first video frame, the target virtual object is rendered based on the first parameter to obtain the second video frame.
- the second server is further configured to use the template image of the target virtual object to perform detection in the first video frame. If the template image is detected in the first video frame, the first video frame is determined as a first type; otherwise, it is determined as a second type.
- the first type indicates that the first video frame displays the target virtual object.
- the second type indicates that the first video frame does not display the target virtual object.
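- one plausible realization of this template-based detection, sketched with OpenCV's template matching (the normalized-correlation method and the 0.8 threshold are assumptions; the disclosure fixes neither):

```python
import cv2

def detect_target_by_template(frame_bgr, template_bgr, threshold=0.8):
    """Slide the target virtual object's template image over the first
    video frame; a high enough match score means the frame is of the
    first type, otherwise of the second type."""
    scores = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(scores)
    return ("first", max_loc) if max_score >= threshold else ("second", None)
```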
- the second server is further configured to input the first video frame into an image recognition model. Through the image recognition model, feature extraction and classification are performed on the first video frame, and the type of the first video frame is output.
- the second server is further configured to divide the first video frame into multiple image blocks, and input the multiple image blocks into the image recognition model.
- performing feature extraction and classification on the first video frame and outputting the type of the first video frame includes: using the image recognition model, performing feature extraction and fully connected processing on the multiple image blocks to obtain a plurality of probabilities corresponding to the plurality of image blocks respectively, where each probability is the probability that the corresponding image block includes the target virtual object.
- if at least one of the probabilities exceeds a recognition threshold, the first video frame is determined as a first type, where the first type indicates that the first video frame displays the target virtual object.
- otherwise, the first video frame is determined as a second type, and the second type indicates that the first video frame does not display the target virtual object.
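- a minimal sketch of this block-wise classification; `predict_block` stands in for the trained image recognition model's feature extraction and fully connected layers, and the block size and decision threshold are assumptions:

```python
def classify_frame_type(frame, predict_block, block_size=64, threshold=0.5):
    """Divide the frame into image blocks, let the recognition model score
    each block, and type the frame by the highest block probability."""
    h, w = frame.shape[:2]
    probabilities = [
        predict_block(frame[y:y + block_size, x:x + block_size])
        for y in range(0, h - block_size + 1, block_size)
        for x in range(0, w - block_size + 1, block_size)
    ]
    # First type: some block probably contains the target virtual object.
    return "first" if max(probabilities) >= threshold else "second"
```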
- the second server is further configured to acquire the first parameter from the client server, where the client server is configured to create a 3D model of the target virtual object based on the setting parameters uploaded by the first terminal for the target virtual object, and the first parameter includes a plurality of rendering parameters of the 3D model.
- the first server is further configured to, in response to a target event occurring in the target virtual scene, render the target virtual scene from the perspective of the controlled virtual object to obtain a third video frame corresponding to the first terminal, and send the third video frame to the second server, where the target event is that the controlled virtual object defeats the first virtual object in the target virtual scene.
- the second server is further configured to acquire a first animation corresponding to the target event and a first audio corresponding to the target event, where the first audio is the audio of the first terminal.
- the second server is also used to determine the area where the first virtual object is defeated in the third video frame. The first animation corresponding to the target event is added in this area to obtain the fourth video frame. The fourth video frame and the first audio are sent to the first terminal.
- the first terminal is further configured to play the first audio while displaying the fourth video frame in response to receiving the fourth video frame and the first audio.
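- for illustration, the compositing step described above might look like the following sketch; simple alpha blending over an RGBA animation frame is an assumption, since the disclosure only specifies adding the first animation in the determined area:

```python
import numpy as np

def add_defeat_animation(third_frame, animation_rgba, area):
    """Composite one frame of the target-event animation into the region of
    the third video frame where the first virtual object was defeated.
    `animation_rgba` is assumed to be already resized to the area (x, y, w, h)."""
    x, y, w, h = area
    fourth_frame = third_frame.copy()
    rgb = animation_rgba[..., :3].astype(np.float32)
    alpha = animation_rgba[..., 3:].astype(np.float32) / 255.0
    region = fourth_frame[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * rgb + (1.0 - alpha) * region
    fourth_frame[y:y + h, x:x + w] = blended.astype(np.uint8)
    return fourth_frame
```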
- the system further includes a second terminal, the second terminal, the first server, and the second server are communicatively connected, and the second terminal is different from the first terminal.
- the first server is also used to render the target virtual scene from the perspective of the second virtual object in the target virtual scene, obtain the fifth video frame corresponding to the second terminal, and send the fifth video frame to the second server, where the second virtual object is a virtual object controlled by the second terminal.
- the second server is further configured to perform target processing on the fifth video frame to obtain a sixth video frame when the fifth video frame displays the target virtual object. The sixth video frame is sent to the second terminal.
- the second terminal is used for displaying the sixth video frame in response to receiving the sixth video frame.
- the second server is also configured to perform any of the following:
- the target virtual object in the fifth video frame is rendered based on the second parameter of the second terminal to obtain the sixth video frame.
- the target virtual object in the fifth video frame is rendered based on the first parameter to obtain the sixth video frame.
- a second animation corresponding to the target virtual object is added to the fifth video frame to obtain the sixth video frame.
- the second server is further configured to compare the virtual level of the controlled virtual object with that of the second virtual object when the fifth video frame displays the target virtual object. If the virtual level of the controlled virtual object is higher than that of the second virtual object, the target virtual object in the fifth video frame is rendered based on the first parameter to obtain the sixth video frame. If the virtual level of the controlled virtual object is lower than that of the second virtual object, the target virtual object in the fifth video frame is rendered based on the second parameter of the second terminal to obtain the sixth video frame. If the virtual level of the controlled virtual object is the same as that of the second virtual object, the target virtual object in the fifth video frame is rendered based on the third parameter to obtain the sixth video frame.
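- this level comparison reduces to a small selector; a sketch in which the function and argument names are purely illustrative:

```python
def choose_parameter_by_level(level_controlled, level_second,
                              first_param, second_param, third_param):
    """Pick which parameter set drives the re-render of the target virtual
    object in the fifth video frame, per the level comparison above."""
    if level_controlled > level_second:
        return first_param    # higher level: the first terminal's configuration
    if level_controlled < level_second:
        return second_param   # lower level: the second terminal's configuration
    return third_param        # equal levels: fall back to the third parameter
```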
- the second server is further configured to obtain a second animation and a second audio corresponding to the target virtual object.
- a second animation corresponding to the target virtual object is added to the second video frame to obtain a seventh video frame.
- the seventh video frame and the second audio are sent to the first terminal, and the first terminal is used to play the second audio while displaying the seventh video frame.
- the second server is also configured to perform any of the following:
- the multiple second video frames are aggregated into a first video frame set, and the first video frame set is sent to the first terminal, and the first terminal is used to share the first video frame set with other terminals.
- the second video frame and the first video frame are spliced to obtain a spliced video frame; the multiple spliced video frames are aggregated into a second video frame set, and the second video frame set is sent to the first terminal, where the first terminal is used to share the second video frame set with other terminals.
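- a toy sketch of the two sharing options above; the side-by-side layout produced by np.hstack is one assumed way to splice the frames, since the disclosure does not fix the layout:

```python
import numpy as np

def build_share_sets(first_frames, second_frames):
    """Sketch of the two options: aggregate the re-rendered frames into a
    first set, or splice each first/second pair and aggregate the splices."""
    first_set = list(second_frames)            # option 1: re-rendered frames only
    second_set = [np.hstack((before, after))   # option 2: original | re-rendered
                  for before, after in zip(first_frames, second_frames)]
    return first_set, second_set
```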
- the second server is further configured to acquire the first parameter from the client server, where the client server is configured to create a 3D model of the target virtual object based on the setting parameters uploaded by the first terminal for the target virtual object, and the first parameter includes a plurality of rendering parameters of the 3D model.
- the first terminal displays a configuration interface of the target virtual object, where the configuration interface is used to obtain configuration information of the target virtual object.
- the configuration information of the target virtual object is sent to the client server.
- the client server receives configuration information of the target virtual object, creates a 3D model of the target virtual object based on the configuration information of the target virtual object, and acquires first parameters corresponding to the 3D model.
- the first terminal starts the target cloud game, maintains a persistent connection with the client server, and sends a game start command to the client server.
- the game start command carries the user account, the target cloud game ID, and the hardware information of the first terminal.
- after the client server receives the game start instruction, it sends the game start instruction to the first server; after the first server receives the game start instruction, it obtains the user account, the target cloud game ID, and the hardware information of the first terminal from the game start instruction.
- the first server initializes the target cloud game based on the hardware information of the first terminal to match the rendered game screen with the first terminal, and sends the user account to the game server corresponding to the target cloud game.
- after receiving the user account, the game server sends the information corresponding to the game account to the first server, and the first server starts the target cloud game based on that information.
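- for illustration only, the game start command described in the preceding steps could be modeled as follows; every field name and value here is hypothetical, not the actual wire format:

```python
from dataclasses import dataclass

@dataclass
class GameStartCommand:
    """Illustrative shape of the game start command sent over the
    persistent connection to the client server."""
    user_account: str
    target_cloud_game_id: str
    terminal_hardware: dict  # e.g. resolution and decoder capabilities

command = GameStartCommand(
    user_account="player-001",
    target_cloud_game_id="game-42",
    terminal_hardware={"resolution": (1920, 1080), "codec": "h264"},
)
```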
- the target virtual scene of the target cloud game is displayed, and the controlled virtual object is displayed in the target virtual scene.
- the first terminal sends control information on the controlled virtual object to the first server, and after receiving the control information, the first server determines an angle of view of the controlled virtual object in the target virtual scene based on the control information. Based on the perspective of the controlled virtual object, the first server renders the target virtual scene to obtain a first video frame, which is also a video frame corresponding to the first terminal.
- the first server sends the first video frame to the second server.
- the second server can perform image recognition on the first video frame to determine whether the first video frame includes a vehicle. If the second server determines that a vehicle is displayed in the first video frame, it obtains the first parameter of the vehicle from the client server, where the first parameter is determined by the client server based on the parameters set by the user for the vehicle. The second server renders the vehicle in the first video frame based on the first parameter to obtain a second video frame, and sends the second video frame to the first terminal, so the user can view the second video frame through the first terminal. If the technician has configured the vehicle's color in the target cloud game to be blue, then through the above steps the vehicle's color can be adjusted to the red set by the user, realizing personalized configuration of the vehicle.
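- as a toy illustration of this blue-to-red adjustment (a boolean pixel mask for the vehicle is an assumed simplification; in the disclosure the first parameter carries full rendering parameters for the 3D model):

```python
def recolor_vehicle(first_frame, vehicle_mask, user_color=(0, 0, 255)):
    """Toy version of the second server's secondary rendering: wherever the
    recognized vehicle appears (per the boolean mask), replace the default
    color with the user-configured one (BGR red here), leaving the rest of
    the cloud-rendered frame untouched."""
    second_frame = first_frame.copy()
    second_frame[vehicle_mask] = user_color
    return second_frame
```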
- the server performs secondary rendering on the video frame based on the first parameter to obtain the second video frame.
- the display effect of the virtual object displayed in the second video frame is also the display effect configured by the first terminal for the virtual object.
- FIG. 8 is a schematic structural diagram of a video frame rendering device provided by an embodiment of the present application.
- the device includes: a first video frame acquisition module 801 , a rendering module 802 and a sending module 803 .
- the first video frame acquisition module 801 is configured to acquire the first video frame corresponding to the first terminal, the first video frame is a video frame obtained by rendering the target virtual scene from the perspective of the controlled virtual object in the target virtual scene , the controlled virtual object is a virtual object controlled by the first terminal.
- a rendering module 802 configured to render the target virtual object in the first video frame based on the first parameter of the first terminal when the first video frame displays the target virtual object of the first terminal , to get the second video frame.
- the sending module 803 is configured to send the second video frame to the first terminal, and the first terminal is used to display the second video frame.
- the rendering module 802 is configured to obtain a first rendering parameter when the first video frame displays the target virtual object of the first terminal, where the first rendering parameter is a rendering parameter in the first parameter corresponding to a first angle and a first distance, the first angle is the angle between the controlled virtual object and the target virtual object, and the first distance is the distance between the controlled virtual object and the target virtual object. The target virtual object in the first video frame is rendered based on the first rendering parameter to obtain the second video frame.
- the rendering module 802 is configured to determine target pixel values of multiple target pixel points of the target virtual object in the first video frame based on the first rendering parameter. Using the target pixel values, the pixel values of the plurality of target pixel points in the first video frame are updated to obtain the second video frame.
- the rendering module 802 is configured to obtain a first rendering parameter and a second rendering parameter from the first parameter when the first video frame displays the target virtual object of the first terminal. In the first video frame, the target virtual object and the controlled virtual object are respectively rendered by using the first rendering parameter and the second rendering parameter to obtain the second video frame.
- the rendering module 802 is configured to determine the position of the controlled virtual object in the target virtual scene when the first video frame displays the target virtual object of the first terminal .
- when the controlled virtual object is in the target sub-scene of the target virtual scene, the target virtual object in the first video frame is rendered based on the first parameter to obtain the second video frame.
- the rendering module 802 is further configured to determine the first video frame as the second video frame when the controlled virtual object is not in the target sub-scene of the target virtual scene .
- the device also includes:
- the image recognition module is configured to perform image recognition on the first video frame to determine the type of the first video frame.
- the rendering module 802 is configured to render the target virtual object based on the first parameter to obtain the second video frame when the type indicates that the first video frame displays the target virtual object.
- the image recognition module is configured to use the template image of the target virtual object to perform detection in the first video frame. If the template image is detected, the first video frame is determined as a first type; otherwise, it is determined as a second type.
- the first type indicates that the first video frame displays the target virtual object.
- the second type indicates that the first video frame does not display the target virtual object.
- the image recognition module is configured to input the first video frame into an image recognition model, perform feature extraction and classification on the first video frame through the image recognition model, and output the first video frame The type of frame.
- the image recognition module is configured to divide the first video frame into a plurality of image blocks, and input the plurality of image blocks into the image recognition model. Through the image recognition model, feature extraction and fully connected processing are performed on the multiple image blocks to obtain multiple probabilities respectively corresponding to the multiple image blocks, where each probability is the probability that the corresponding image block includes the target virtual object.
- if at least one of the probabilities exceeds a recognition threshold, the first video frame is determined as a first type, where the first type indicates that the first video frame displays the target virtual object.
- otherwise, the first video frame is determined as a second type, and the second type indicates that the first video frame does not display the target virtual object.
- the device also includes:
- the third video frame acquisition module is used to acquire the third video frame corresponding to the first terminal, where the third video frame is a video frame obtained by rendering the target virtual scene from the perspective of the controlled virtual object after the target event occurs in the target virtual scene, and the target event is that the controlled virtual object defeats the first virtual object in the target virtual scene.
- the rendering module 802 is further configured to acquire a first animation corresponding to the target event and a first audio corresponding to the target event, where the first audio is the audio of the first terminal. The region where the first virtual object was defeated is determined in the third video frame, and the first animation corresponding to the target event is added in this region to obtain the fourth video frame.
- the sending module 803 is further configured to send the fourth video frame and the first audio to the first terminal, and the first terminal is configured to play the first audio while displaying the fourth video frame.
- the target virtual scene further includes a second virtual object, the second virtual object is a virtual object controlled by a second terminal, and the second terminal is a different terminal from the first terminal. The device further includes:
- the fifth video frame acquisition module is configured to acquire a fifth video frame corresponding to the second terminal, where the fifth video frame is obtained by rendering the target virtual scene from the perspective of the second virtual object.
- the rendering module 802 is further configured to perform target processing on the fifth video frame to obtain a sixth video frame when the fifth video frame displays the target virtual object.
- the sending module 803 is further configured to send the sixth video frame to the second terminal, and the second terminal is configured to display the sixth video frame.
- the rendering module 802 is also configured to perform any of the following:
- the target virtual object in the fifth video frame is rendered based on the second parameter of the second terminal to obtain the sixth video frame.
- the target virtual object in the fifth video frame is rendered based on the first parameter to obtain the sixth video frame.
- a second animation corresponding to the target virtual object is added to the fifth video frame to obtain the sixth video frame.
- the controlled virtual object and the second virtual object are in the target sub-scene of the target virtual scene at the same time, and the rendering module 802 is further configured to, when the fifth video frame displays the target virtual object, compare the virtual level of the controlled virtual object with that of the second virtual object. If the virtual level of the controlled virtual object is higher than that of the second virtual object, the target virtual object in the fifth video frame is rendered based on the first parameter to obtain the sixth video frame. If the virtual level of the controlled virtual object is lower than that of the second virtual object, the target virtual object in the fifth video frame is rendered based on the second parameter of the second terminal to obtain the sixth video frame. If the virtual level of the controlled virtual object is the same as that of the second virtual object, the target virtual object in the fifth video frame is rendered based on the third parameter to obtain the sixth video frame.
- the rendering module 802 is further configured to obtain a second animation and a second audio corresponding to the target virtual object.
- a second animation corresponding to the target virtual object is added to the second video frame to obtain a seventh video frame.
- the sending module 803 is further configured to send the seventh video frame and the second audio to the first terminal, and the first terminal is configured to play the second audio while displaying the seventh video frame.
- the device further includes a video frame set generation module, and the video frame set generation module is configured to perform any of the following:
- the multiple second video frames are aggregated into a first video frame set, and the first video frame set is sent to the first terminal, and the first terminal is used to share the first video frame set with other terminals.
- the second video frame and the first video frame are spliced to obtain a spliced video frame; the multiple spliced video frames are aggregated into a second video frame set, and the second video frame set is sent to the first terminal, where the first terminal is used to share the second video frame set with other terminals.
- the device also includes:
- a parameter acquisition module configured to acquire the first parameter from a client server, the client server is configured to create a three-dimensional model of the target virtual object based on the setting parameters uploaded by the first terminal for the target virtual object, and the first The parameters include a plurality of rendering parameters of the three-dimensional model.
- when the video frame rendering device provided in the above embodiments performs secondary rendering on video frames, the division of the above functional modules is used only as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as required; that is, the internal structure of the server is divided into different functional modules to complete all or part of the functions described above.
- the video frame rendering device and the video frame rendering method embodiment provided by the above embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment, and will not be repeated here.
- the server performs secondary rendering on the video frame based on the first parameter to obtain the second video frame.
- the display effect of the virtual object displayed in the second video frame is also the display effect configured by the first terminal for the virtual object.
- the structure of the server is introduced as follows:
- FIG. 9 is a schematic structural diagram of a server provided by an embodiment of the present application.
- the server 900 may vary greatly in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 901 and one or more memories 902, where at least one computer program is stored in the one or more memories 902, and the at least one computer program is loaded and executed by the one or more processors 901 to implement the methods provided by the above method embodiments.
- the server 900 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, and the server 900 may also include other components for implementing device functions, which will not be repeated here.
- a computer-readable storage medium, such as a memory including a computer program, is also provided, and the above computer program can be executed by a processor to complete the video frame rendering method in the above embodiments.
- the computer-readable storage medium can be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a read-only optical disc (Compact Disc Read-Only Memory, CD-ROM), Magnetic tapes, floppy disks, and optical data storage devices, etc.
- a computer program product or computer program is also provided. The computer program product or computer program includes program code stored in a computer-readable storage medium; the processor of the computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above video frame rendering method. That is, when the program code is executed by the processor, the above video frame rendering method is implemented.
- the computer programs involved in the embodiments of the present application can be deployed and executed on one computer device, or executed on multiple computer devices at one location, or executed on multiple computer devices distributed in multiple locations and interconnected through a communication network; the multiple computer devices distributed in multiple locations and interconnected through a communication network can form a blockchain system.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Computer Security & Cryptography (AREA)
- General Business, Economics & Management (AREA)
- Business, Economics & Management (AREA)
- Processing Or Creating Images (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
Claims (17)
- A method for rendering a video frame, the method comprising: acquiring, by a server, a first video frame corresponding to a first terminal, the first video frame being a video frame obtained by rendering a target virtual scene from the perspective of a controlled virtual object in the target virtual scene, the controlled virtual object being a virtual object controlled by the first terminal; in a case that the first video frame displays a target virtual object of the first terminal, rendering, by the server, the target virtual object in the first video frame based on a first parameter of the first terminal to obtain a second video frame, the target virtual object being a virtual object that the first terminal needs to re-render; and sending, by the server, the second video frame to the first terminal, the first terminal being configured to display the second video frame.
- The method according to claim 1, wherein, in the case that the first video frame displays the target virtual object of the first terminal, the rendering, by the server, the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame comprises: in the case that the first video frame displays the target virtual object of the first terminal, acquiring, by the server, a first rendering parameter, the first rendering parameter being a rendering parameter in the first parameter corresponding to a first angle and a first distance, the first angle being the angle between the controlled virtual object and the target virtual object, and the first distance being the distance between the controlled virtual object and the target virtual object; and rendering, by the server, the target virtual object in the first video frame based on the first rendering parameter to obtain the second video frame.
- The method according to claim 2, wherein the rendering, by the server, the target virtual object in the first video frame based on the first rendering parameter to obtain the second video frame comprises: determining, by the server based on the first rendering parameter, target pixel values of a plurality of target pixel points of the target virtual object in the first video frame; and updating, by the server with the target pixel values, the pixel values of the plurality of target pixel points in the first video frame to obtain the second video frame.
- The method according to claim 1, wherein, in the case that the first video frame displays the target virtual object of the first terminal, the rendering, by the server, the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame comprises: in the case that the first video frame displays the target virtual object of the first terminal, acquiring, by the server, a first rendering parameter and a second rendering parameter from the first parameter; and rendering, by the server in the first video frame, the target virtual object and the controlled virtual object with the first rendering parameter and the second rendering parameter respectively to obtain the second video frame.
- The method according to claim 1, wherein, in the case that the first video frame displays the target virtual object of the first terminal, the rendering, by the server, the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame comprises: in the case that the first video frame displays the target virtual object of the first terminal, determining, by the server, the position of the controlled virtual object in the target virtual scene; and in a case that the controlled virtual object is in a target sub-scene of the target virtual scene, rendering, by the server, the target virtual object in the first video frame based on the first parameter to obtain the second video frame.
- The method according to claim 5, wherein the method further comprises: in a case that the controlled virtual object is not in the target sub-scene of the target virtual scene, determining, by the server, the first video frame as the second video frame.
- The method according to claim 1, wherein, before the rendering, by the server in the case that the first video frame displays the target virtual object of the first terminal, the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame, the method further comprises: performing, by the server, image recognition on the first video frame to determine a type of the first video frame; and the rendering, by the server in the case that the first video frame displays the target virtual object of the first terminal, the target virtual object in the first video frame based on the first parameter to obtain the second video frame comprises: in a case that the type indicates that the first video frame displays the target virtual object, rendering, by the server, the target virtual object based on the first parameter to obtain the second video frame.
- The method according to claim 1, wherein the method further comprises: acquiring, by the server, a third video frame corresponding to the first terminal, the third video frame being a video frame obtained by rendering the target virtual scene from the perspective of the controlled virtual object after a target event occurs in the target virtual scene, the target event being that the controlled virtual object defeats a first virtual object in the target virtual scene; acquiring, by the server, a first animation corresponding to the target event and a first audio corresponding to the target event, the first audio being the audio of the first terminal; determining, by the server in the third video frame, the area in which the first virtual object is defeated; adding, by the server in the area, the first animation corresponding to the target event to obtain a fourth video frame; and sending, by the server, the fourth video frame and the first audio to the first terminal, the first terminal being configured to play the first audio while displaying the fourth video frame.
- The method according to claim 1, wherein the target virtual scene further includes a second virtual object, the second virtual object being a virtual object controlled by a second terminal, the second terminal being a terminal different from the first terminal, and the method further comprises: acquiring, by the server, a fifth video frame corresponding to the second terminal, the fifth video frame being a video frame obtained by rendering the target virtual scene from the perspective of the second virtual object; in a case that the fifth video frame displays the target virtual object, performing, by the server, target processing on the fifth video frame to obtain a sixth video frame; and sending, by the server, the sixth video frame to the second terminal, the second terminal being configured to display the sixth video frame.
- The method according to claim 9, wherein, in the case that the fifth video frame displays the target virtual object, the performing, by the server, target processing on the fifth video frame to obtain the sixth video frame includes any one of the following: in the case that the fifth video frame displays the target virtual object, rendering, by the server, the target virtual object in the fifth video frame based on a second parameter of the second terminal to obtain the sixth video frame; in the case that the fifth video frame displays the target virtual object, rendering, by the server, the target virtual object in the fifth video frame based on the first parameter to obtain the sixth video frame; or, in the case that the fifth video frame displays the target virtual object, adding, by the server in the fifth video frame, a second animation corresponding to the target virtual object to obtain the sixth video frame.
- The method according to claim 9, wherein the controlled virtual object and the second virtual object are both in a target sub-scene of the target virtual scene, and, in the case that the fifth video frame displays the target virtual object, the performing, by the server, target processing on the fifth video frame to obtain the sixth video frame comprises: in the case that the fifth video frame displays the target virtual object, comparing, by the server, the virtual level of the controlled virtual object with the virtual level of the second virtual object; in a case that the virtual level of the controlled virtual object is higher than the virtual level of the second virtual object, rendering, by the server, the target virtual object in the fifth video frame based on the first parameter to obtain the sixth video frame; in a case that the virtual level of the controlled virtual object is lower than the virtual level of the second virtual object, rendering, by the server, the target virtual object in the fifth video frame based on the second parameter of the second terminal to obtain the sixth video frame; and in a case that the virtual levels of the controlled virtual object and the second virtual object are the same, rendering, by the server, the target virtual object in the fifth video frame based on a third parameter to obtain the sixth video frame.
- The method according to claim 1, wherein, after the rendering, by the server in the case that the first video frame displays the target virtual object of the first terminal, the target virtual object in the first video frame based on the first parameter of the first terminal to obtain the second video frame, the method further comprises: acquiring, by the server, a second animation and a second audio corresponding to the target virtual object; adding, by the server in the second video frame, the second animation corresponding to the target virtual object to obtain a seventh video frame; and sending, by the server, the seventh video frame and the second audio to the first terminal, the first terminal being configured to play the second audio while displaying the seventh video frame.
- The method according to any one of claims 1 to 12, wherein, after the sending the second video frame to the first terminal, the first terminal being configured to display the second video frame, the method further comprises any one of the following: aggregating, by the server, a plurality of the second video frames into a first video frame set, and sending the first video frame set to the first terminal, the first terminal being configured to share the first video frame set with other terminals; or splicing, by the server, the second video frame and the first video frame to obtain a spliced video frame, aggregating a plurality of the spliced video frames into a second video frame set, and sending the second video frame set to the first terminal, the first terminal being configured to share the second video frame set with other terminals.
- A system for rendering a video frame, the system comprising a first terminal, a first server, and a second server, the first terminal, the first server, and the second server being communicatively connected; the first server being configured to render a target virtual scene from the perspective of a controlled virtual object in the target virtual scene to obtain a first video frame corresponding to the first terminal, and send the first video frame to the second server, the controlled virtual object being a virtual object controlled by the first terminal; the second server being configured to receive the first video frame; the second server being further configured to render, in a case that the first video frame displays a target virtual object of the first terminal, the target virtual object in the first video frame based on a first parameter of the first terminal to obtain a second video frame; the second server being further configured to send the second video frame to the first terminal; and the first terminal being configured to display the second video frame in response to receiving the second video frame.
- A server, comprising one or more processors and one or more memories, the one or more memories storing at least one computer program, the computer program being loaded and executed by the one or more processors to implement the method for rendering a video frame according to any one of claims 1 to 12.
- A computer-readable storage medium, storing at least one computer program, the computer program being loaded and executed by a processor to implement the method for rendering a video frame according to any one of claims 1 to 12.
- A computer program product, comprising program code, wherein the program code, when executed by a processor, implements the method for rendering a video frame according to any one of claims 1 to 12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237030586A KR20230142597A (ko) | 2021-08-31 | 2022-08-09 | Video frame rendering method and apparatus, device and storage medium |
US18/318,780 US20230285857A1 (en) | 2021-08-31 | 2023-05-17 | Video frame rendering method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111010058.0A CN113633971B (zh) | 2021-08-31 | 2021-08-31 | Video frame rendering method, apparatus, device and storage medium |
CN202111010058.0 | 2021-08-31 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/318,780 Continuation US20230285857A1 (en) | 2021-08-31 | 2023-05-17 | Video frame rendering method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023029900A1 true WO2023029900A1 (zh) | 2023-03-09 |
Family
ID=78424633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/111061 WO2023029900A1 (zh) | 2021-08-31 | 2022-08-09 | Video frame rendering method, apparatus, device and storage medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230285857A1 (zh) |
KR (1) | KR20230142597A (zh) |
CN (1) | CN113633971B (zh) |
WO (1) | WO2023029900A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116188637A (zh) * | 2023-04-23 | 2023-05-30 | 世优(北京)科技有限公司 | Data synchronization method and device |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113633971B (zh) * | 2021-08-31 | 2023-10-20 | 腾讯科技(深圳)有限公司 | Video frame rendering method, apparatus, device and storage medium |
CN114363666B (zh) * | 2021-12-22 | 2023-11-10 | 咪咕互动娱乐有限公司 | Video processing method and apparatus, and electronic device |
CN114866802B (zh) * | 2022-04-14 | 2024-04-19 | 青岛海尔科技有限公司 | Video stream sending method and apparatus, storage medium and electronic apparatus |
CN116440501B (zh) * | 2023-06-16 | 2023-08-29 | 瀚博半导体(上海)有限公司 | Adaptive cloud game video picture rendering method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111214828A (zh) * | 2020-01-03 | 2020-06-02 | 北京小米移动软件有限公司 | Game running method, apparatus, device, medium and cloud game platform |
US20210158597A1 (en) * | 2019-11-22 | 2021-05-27 | Sony Interactive Entertainment Inc. | Systems and methods for adjusting one or more parameters of a gpu |
CN112891936A (zh) * | 2021-02-10 | 2021-06-04 | 广州虎牙科技有限公司 | Virtual object rendering method and apparatus, mobile terminal and storage medium |
CN113230651A (zh) * | 2021-04-20 | 2021-08-10 | 网易(杭州)网络有限公司 | Game scene display method and apparatus, electronic device and storage medium |
CN113633971A (zh) * | 2021-08-31 | 2021-11-12 | 腾讯科技(深圳)有限公司 | Video frame rendering method, apparatus, device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3018631A4 (en) * | 2013-07-05 | 2016-12-14 | Square Enix Co Ltd | SCREEN PROCESSING DEVICE, SCREEN PROCESSING SYSTEM, CONTROL METHOD, PROGRAM AND RECORDING MEDIUM |
US10953334B2 (en) * | 2019-03-27 | 2021-03-23 | Electronic Arts Inc. | Virtual character generation from image or video data |
CN112316424B (zh) * | 2021-01-06 | 2021-03-26 | 腾讯科技(深圳)有限公司 | Game data processing method, apparatus and storage medium |
CN113244614B (zh) * | 2021-06-07 | 2021-10-26 | 腾讯科技(深圳)有限公司 | Image picture display method, apparatus, device and storage medium |
- 2021-08-31: CN CN202111010058.0A patent/CN113633971B/zh active Active
- 2022-08-09: KR KR1020237030586A patent/KR20230142597A/ko
- 2022-08-09: WO PCT/CN2022/111061 patent/WO2023029900A1/zh active Application Filing
- 2023-05-17: US US18/318,780 patent/US20230285857A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210158597A1 (en) * | 2019-11-22 | 2021-05-27 | Sony Interactive Entertainment Inc. | Systems and methods for adjusting one or more parameters of a gpu |
CN111214828A (zh) * | 2020-01-03 | 2020-06-02 | 北京小米移动软件有限公司 | Game running method, apparatus, device, medium and cloud game platform |
CN112891936A (zh) * | 2021-02-10 | 2021-06-04 | 广州虎牙科技有限公司 | Virtual object rendering method and apparatus, mobile terminal and storage medium |
CN113230651A (zh) * | 2021-04-20 | 2021-08-10 | 网易(杭州)网络有限公司 | Game scene display method and apparatus, electronic device and storage medium |
CN113633971A (zh) * | 2021-08-31 | 2021-11-12 | 腾讯科技(深圳)有限公司 | Video frame rendering method, apparatus, device and storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116188637A (zh) * | 2023-04-23 | 2023-05-30 | 世优(北京)科技有限公司 | Data synchronization method and device |
CN116188637B (zh) * | 2023-04-23 | 2023-08-15 | 世优(北京)科技有限公司 | Data synchronization method and device |
Also Published As
Publication number | Publication date |
---|---|
US20230285857A1 (en) | 2023-09-14 |
KR20230142597A (ko) | 2023-10-11 |
CN113633971A (zh) | 2021-11-12 |
CN113633971B (zh) | 2023-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023029900A1 (zh) | Video frame rendering method, apparatus, device and storage medium | |
US10888778B2 (en) | Augmented reality (AR) system for providing AR in video games | |
US11620800B2 (en) | Three dimensional reconstruction of objects based on geolocation and image data | |
WO2019153840A1 (zh) | Sound reproduction method and apparatus, storage medium and electronic apparatus | |
CN111744202A (zh) | Method and apparatus for loading a virtual game, storage medium and electronic apparatus | |
WO2009101153A2 (en) | Live-action image capture | |
WO2022068452A1 (zh) | Interactive processing method and apparatus for virtual props, electronic device and readable storage medium | |
WO2022227958A1 (zh) | Virtual vehicle display method, apparatus, device and storage medium | |
WO2022267512A1 (zh) | Information sending method, information sending apparatus, computer-readable medium and device | |
WO2022242021A1 (zh) | Message sending method and apparatus in multiplayer online battle program, terminal and medium | |
CN112822556A (zh) | Game picture shooting method, apparatus, device and storage medium | |
CN114288639B (zh) | Picture display method, providing method, apparatus, device and storage medium | |
CN114225402A (zh) | Editing method and editing apparatus for virtual object video in a game | |
CN112138379B (zh) | Interaction method and apparatus between different application modes, and storage medium | |
CN112642150B (zh) | Game picture shooting method, apparatus, device and storage medium | |
CN115591237A (zh) | Weather-effect-based interaction method and apparatus in virtual match, and product | |
CN115645916A (zh) | Control method and apparatus for virtual object group in virtual scene, and product | |
US11471779B2 (en) | Spectating support apparatus, spectating support method, and spectating support program | |
US10668384B2 (en) | System using rule based techniques for handling gameplay restrictions | |
CN113599829B (zh) | Virtual object selection method, apparatus, terminal and storage medium | |
WO2024067168A1 (zh) | Social-scene-based message display method, apparatus, device, medium and product | |
CN117599419A (zh) | Display method, apparatus and system for virtual walls in a virtual scene, and program product | |
CN115671722A (zh) | Display method, apparatus and device for object actions in a virtual scene, and program product | |
CN116983638A (zh) | Virtual object interaction method, apparatus, device, storage medium and program product | |
CN117753004A (zh) | Message display method, apparatus, device, medium and program product |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22863041; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 20237030586; Country of ref document: KR; Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 1020237030586; Country of ref document: KR
| WWE | Wipo information: entry into national phase | Ref document number: 11202306476W; Country of ref document: SG
| WWE | Wipo information: entry into national phase | Ref document number: 2023571123; Country of ref document: JP
| NENP | Non-entry into the national phase | Ref country code: DE