CN113318444B - Character rendering method and apparatus, electronic device, and storage medium


Info

Publication number
CN113318444B
Authority
CN
China
Prior art keywords
target
scene
viewport
user interface
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110636702.9A
Other languages
Chinese (zh)
Other versions
CN113318444A (en)
Inventor
郑国锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Yake Interactive Technology Co., Ltd.
Original Assignee
Tianjin Yake Interactive Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Yake Interactive Technology Co., Ltd.
Priority to CN202110636702.9A
Publication of CN113318444A
Application granted
Publication of CN113318444B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The application provides a character rendering method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first target scene, where the first target scene is a main scene to be rendered on a target user interface; acquiring a second target scene through a target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene; and rendering the second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, so as to render the target virtual character on the target user interface. The method and apparatus solve the problem in the related art that character rendering quality is poor because the rendered character is easily affected by the illumination, shadows, and the like of the scene in which it is placed.

Description

Character rendering method and apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of the Internet, and in particular to a character rendering method and apparatus, an electronic device, and a storage medium.
Background
Currently, in a game engine such as UE4 (Unreal Engine 4), in order to render a separate character on a UI (User Interface), for example to display a character bust on the UI, a character model may be placed in a scene and photographed by a dedicated camera, producing an image containing only the character, which can then be rendered on the UI.
However, with this rendering method, the light and shadow effects of the character are affected by the illumination, shadows, and the like of the scene in which the model is placed. The character rendering approach in the related art therefore suffers from poor rendering quality because it is easily influenced by the illumination, shadows, and the like of the surrounding scene.
Disclosure of Invention
The application provides a character rendering method and apparatus, an electronic device, and a storage medium, to at least solve the problem in the related art that character rendering quality is poor because the rendered character is easily influenced by the illumination, shadows, and the like of the scene in which it is placed.
According to an aspect of an embodiment of the present application, a character rendering method is provided, including: acquiring a first target scene, where the first target scene is a main scene to be rendered on a target user interface; acquiring a second target scene through a target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene; and rendering the second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, so as to render the target virtual character on the target user interface.
Optionally, before rendering the second target scene in the target viewport on the target user interface, the method further includes: adjusting a second character attribute of the target virtual character in real time according to a first character attribute of a target reference character, where the target reference character is a virtual character in the first target scene that matches the target virtual character.
Optionally, the acquiring the second target scene through the target viewport includes: obtaining a first scene parameter and a second scene parameter, wherein the first scene parameter is used for indicating a target viewport size of the target viewport, and the second scene parameter is used for indicating a target scene map packet corresponding to the second target scene; creating the second target scene in the target viewport by loading the target scene map packet indicated by the second scene parameter, wherein a viewport size of the target viewport is the target viewport size indicated by the first scene parameter.
Optionally, creating the second target scene in the target viewport by loading the target scene map packet indicated by the second scene parameter comprises: loading the target scene map packet indicated by the second scene parameter in the target viewport, wherein the target scene map packet does not contain atmospheric fog components; and performing copy operation on the target scene map packet to obtain the second target scene, wherein the second target scene is created based on the copy of the target scene map packet.
Optionally, the sky sphere of the target viewport is configured to be hidden while running; rendering the second target scene in the target viewport on the target user interface includes: copying, through a target channel, an alpha channel of the target viewport before a tone mapping operation in the process of performing a post-processing operation on the target viewport, where the post-processing operation includes the tone mapping operation; performing fast approximate anti-aliasing on the post-processed target viewport to obtain the anti-aliased target viewport, where the alpha channel of the post-processed target viewport is the target channel; performing scene capture on the anti-aliased target viewport through a target scene capture component, where the target scene capture component is used to capture a scene with a transparent background; and controlling display of the second target scene captured by the target scene capture component on the target user interface.
Optionally, before the second target scene is acquired through the target viewport, the method further includes: obtaining first configuration information of an initial viewport corresponding to the target viewport, wherein the first configuration information is used for indicating at least one of the following: closing or reducing cascaded shadow maps corresponding to the scene within the initial viewport, not creating a particle system for the scene within the initial viewport, not creating a physical scene within the initial viewport; and configuring the initial view port according to the first configuration information to obtain the target view port.
Optionally, the target viewport contains a plurality of target views, each of the target views corresponding to at least one of the target user interfaces, different ones of the target views corresponding to different view parameters; rendering the second target scene in the target viewport on the target user interface comprises: and using a target scene renderer to render the second target scene to the target user interface corresponding to each target view according to the view parameters of each target view.
According to another aspect of the embodiments of the present application, a character rendering apparatus is also provided, including: a first obtaining unit, configured to acquire a first target scene, where the first target scene is a main scene to be rendered on a target user interface; a second obtaining unit, configured to acquire a second target scene through a target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene; and a rendering unit, configured to render the second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, so as to render the target virtual character on the target user interface.
Optionally, the apparatus further includes: an adjusting unit, configured to adjust, in real time, a second character attribute of the target virtual character according to a first character attribute of a target reference character before the second target scene in the target viewport is rendered on the target user interface, where the target reference character is a virtual character in the first target scene that matches the target virtual character.
Optionally, the second obtaining unit includes: an obtaining module, configured to obtain a first scene parameter and a second scene parameter, where the first scene parameter is used to indicate a target viewport size of the target viewport, and the second scene parameter is used to indicate a target scene map packet corresponding to the second target scene; a loading module, configured to create the second target scene in the target viewport by loading the target scene map packet indicated by the second scene parameter, where a viewport size of the target viewport is the target viewport size indicated by the first scene parameter.
Optionally, the loading module includes: a loading submodule, configured to load the target scene map package indicated by the second scene parameter in the target viewport, where the target scene map package does not include an atmospheric fog component; and the copying submodule is used for executing copying operation on the target scene map packet to obtain the second target scene, wherein the second target scene is created on the basis of the copy of the target scene map packet.
Optionally, the sky sphere of the target viewport is configured to be hidden while running; the rendering unit includes: a copy module, configured to copy, through a target channel, an alpha channel of the target viewport before a tone mapping operation in the process of performing a post-processing operation on the target viewport, where the post-processing operation includes the tone mapping operation; an execution module, configured to perform fast approximate anti-aliasing on the post-processed target viewport to obtain the anti-aliased target viewport, where the alpha channel of the post-processed target viewport is the target channel; a capture module, configured to perform scene capture on the anti-aliased target viewport through a target scene capture component, where the target scene capture component is configured to capture a scene with a transparent background; and a control module, configured to control display of the second target scene captured by the target scene capture component on the target user interface.
Optionally, the apparatus further comprises: a third obtaining unit, configured to obtain, before the target viewport obtains the second target scene, first configuration information of an initial viewport corresponding to the target viewport, where the first configuration information is used to indicate at least one of: closing or reducing cascaded shadow maps corresponding to the scene within the initial viewport, not creating a particle system for the scene within the initial viewport, not creating a physical scene within the initial viewport; and the configuration unit is used for configuring the initial view port according to the first configuration information to obtain the target view port.
Optionally, the target viewport contains a plurality of target views, each of the target views corresponding to at least one of the target user interfaces, different ones of the target views corresponding to different view parameters; the rendering unit includes: and the rendering module is used for rendering the second target scene to the target user interface corresponding to each target view according to the view parameters of each target view by using a target scene renderer.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; wherein the memory is used for storing the computer program; a processor for performing the method steps in any of the above embodiments by running the computer program stored on the memory.
According to a further aspect of the embodiments of the present application, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method steps of any of the above embodiments when the computer program is executed.
In the embodiments of the application, a scene independent of the main scene is set up for the virtual character to be rendered separately. A first target scene is acquired, where the first target scene is the main scene to be rendered on the target user interface; a second target scene is acquired through the target viewport, where the second target scene is a separate level loaded through the target viewport and the target virtual character is placed in it; and the second target scene in the target viewport is rendered on the target user interface while the first target scene is rendered on the target user interface, so that the target virtual character is rendered on the target user interface. Because a level independent of the main scene is loaded through a viewport control, that level can serve as an independent scene in which the target virtual character (i.e., the virtual character to be rendered) is placed. The character's light and shadow effects are therefore not influenced by the lighting, shadows, and the like of the main scene, which achieves the technical effect of improving character rendering quality and solves the problem in the related art that character rendering is poor because the rendering is affected by the lighting, shadows, and the like of the surrounding scene.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a diagram of a hardware environment for an alternative character rendering method according to an embodiment of the present application;
FIG. 2 is a flowchart of an alternative character rendering method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative character rendering method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another alternative character rendering method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of yet another alternative character rendering method according to an embodiment of the present application;
FIG. 6 is a flowchart of an alternative character rendering method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of yet another alternative character rendering method according to an embodiment of the present application;
FIG. 8 is a block diagram of an alternative character rendering apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present application, a character rendering method is provided. Optionally, in this embodiment, the character rendering method may be applied to a hardware environment formed by the terminal 102 and the server 104 as shown in FIG. 1. As shown in FIG. 1, the server 104 is connected to the terminal 102 through a network and may be configured to provide services (e.g., game services, application services) for the terminal or for a client installed on the terminal; a database may be provided on the server, or independently of the server, to provide data storage services for the server 104.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: WiFi (Wireless Fidelity), Bluetooth. The terminal 102 may be, but is not limited to, a PC, a mobile phone, a tablet computer, and the like.
The character rendering method in the embodiment of the present application may be executed by the server 104, by the terminal 102, or by both the server 104 and the terminal 102 together. Where the terminal 102 executes the character rendering method of the embodiment of the present application, the method may also be executed by a client installed on the terminal.
Taking the character rendering method in this embodiment being executed by the server 104 as an example, FIG. 2 is a flowchart of an alternative character rendering method according to an embodiment of the present application. As shown in FIG. 2, the method may include the following steps:
step S202, a first target scene is obtained, wherein the first target scene is a main scene to be rendered on a target user interface.
The character rendering method in this embodiment may be applied to scenarios in which an individual virtual character is rendered within a virtual scene; the virtual scene may be a game scene or another scene in which an individual object needs to be rendered within a scene. The game scene may be created by a game engine such as UE4, and correspondingly the virtual character may be one or more player characters. In this embodiment, a UE4-based game scene is taken as an example; for game scenes based on other engines, or for other virtual scenes, the character rendering method in this embodiment applies equally without contradiction.
The game scene may be the game scene (e.g., a three-dimensional game scene) of a target game. The target game may be an AR (Augmented Reality) game, a VR (Virtual Reality) game, or another type of game; it may be a single-player or multiplayer game; it may be a battle game, such as an MMO (Massive Multiplayer Online) game, or a non-battle game; and it may be a PC (client) game or a mobile game. The game type of the target game is not limited in this embodiment.
A target user (a target player, corresponding to a target object) may run a target client on a terminal device; the target client may be the client of the target game's game application. The target client can communicate with a target server, which is the background server of the target game. The target user may log in to the target client using an account and password, a dynamic password, an associated (third-party) application, and the like, and enter the game scene of the target game by operating the target client.
The target game may have one or more game scenes; the game scene in which the virtual character operated by the target user is currently located is the first target scene. During scene rendering, the target server may acquire the first target scene, which is the main scene to be rendered on the target user interface, i.e., the user interface of the target client. The first target scene may be acquired in various ways: for example, it may be loaded from the saved game scenes according to a scene identifier, or the scene data of the loaded first target scene may be acquired. This is not limited in this embodiment.
Step S204, a second target scene is acquired through the target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene.
In order to display an individual scene on the UI without the objects in it being affected by the lighting of the main scene, the individual scene may be placed far away from the main scene, and a scene capture component (scene capturer) may be used to capture the scene for display on the UI. This function may be implemented by the UViewport class (i.e., a viewport control). A UViewport may be added using a widget blueprint and used to load a specified scene.
Optionally, in this embodiment, during scene editing a target viewport may be added in the target widget blueprint; the target viewport may be used to load a second target scene in which the target virtual character is placed. Through the target viewport, the target server can acquire the second target scene. The second target scene is independent of the first target scene: it may be a small scene separate from the first target scene, the two having different Worlds, and it is a separate Level loaded through the target control (i.e., the target viewport).
It should be noted that when dynamically creating an Actor (an entity object that can be placed in a Level, equivalent to a container of components), care must be taken to select the correct World: an Actor created in the World of the main scene but attached to an Actor in the World of the small scene will not be displayed, as illustrated below.
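As a minimal illustration of this point (a sketch under assumptions: "SmallSceneViewport" is a hypothetical UViewport* holding the separate level, and "PropClass" is an illustrative Actor class; UViewport::GetViewportWorld and UWorld::SpawnActor are standard UE4 calls):

// Sketch only: spawn a dynamically created Actor into the small scene's own World.
UWorld* SmallWorld = SmallSceneViewport->GetViewportWorld();
if (SmallWorld != nullptr)
{
    FActorSpawnParameters SpawnParams;
    // Spawning through the main scene's World and attaching the result to an
    // Actor in the small scene's World would leave the Actor undisplayed.
    AActor* Prop = SmallWorld->SpawnActor<AActor>(
        PropClass, FVector::ZeroVector, FRotator::ZeroRotator, SpawnParams);
}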
The target virtual character may be the virtual character corresponding to the target client, that is, the character operated by the target user through the target client, or another virtual character that needs to be displayed separately, for example, a Non-Player Character (NPC) or a character controlled by another user through a client. Optionally, in this embodiment, the target virtual character may be a 3D (3 Dimensions) character, and there may be one or more target virtual characters.
Step S206, while rendering the first target scene on the target user interface, rendering the second target scene in the target viewport on the target user interface, so as to render the target virtual character on the target user interface.
The target server may control rendering of the first target scene on the target user interface, and a manner of rendering the scene on the UI may refer to related technologies, which is not described herein in this embodiment.
The target server may control rendering of a second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, thereby rendering a target avatar on the target user interface. The target viewport may have a viewport size and a display position, i.e., a viewport having a viewport size, which may correspond to a target window on a target user interface, displayed at a particular position of the target user interface.
On the user's terminal device, the user interface of the target client is the target user interface: the first target scene may be displayed on the target user interface, the second target scene may be displayed in a certain area of it (for example, a target window), and the target virtual character (for example, a bust of the target virtual character) may be displayed in the second target scene. The light and shadow effects in the second target scene are independent of those in the first target scene, as shown in FIG. 3.
Through steps S202 to S206, a first target scene is acquired, where the first target scene is the main scene to be rendered on the target user interface; a second target scene is acquired through the target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in it; and the second target scene in the target viewport is rendered on the target user interface while the first target scene is rendered on the target user interface, so that the target virtual character is rendered on the target user interface. This solves the problem in the related art that character rendering quality is poor because the rendering is easily affected by the illumination, shadows, and the like of the surrounding scene, and improves character rendering quality.
As an optional embodiment, before rendering the second target scene in the target viewport on the target user interface, the method further includes:
s11, adjusting the second role attribute of the target virtual role in real time according to the first role attribute of the target reference role, wherein the target reference role is a virtual role matched with the target virtual role in the first target scene.
The target virtual character may be different from any character in the first target scene, or it may correspond to one or more characters in the first target scene; a virtual character in the first target scene that matches the target virtual character is referred to as the target reference character.
Information such as the pose of the target virtual character can be independent of, or associated with, the target reference character. Optionally, in this embodiment, the character attribute of the target virtual character may be adjusted according to the character attribute of the target reference character, where the character attribute may include, but is not limited to, a pose attribute.
The target server may obtain the character attribute of the target reference character (i.e., the first character attribute) in real time and adjust the character attribute of the target virtual character (i.e., the second character attribute) in real time accordingly, as sketched below.
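As an illustrative sketch only (the component class and the CopyPoseAttributes helper below are hypothetical names introduced for exposition, not APIs defined by this application), the real-time adjustment can be driven from a per-frame tick:

// Hypothetical sketch: mirror the reference character's attributes every frame.
void UMirrorAttributeComponent::TickComponent(float DeltaTime, ELevelTick TickType,
    FActorComponentTickFunction* ThisTickFunction)
{
    Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
    if (ReferenceCharacter != nullptr && TargetCharacter != nullptr)
    {
        // The first character attribute (e.g., pose) of the reference character
        // in the main scene drives the second character attribute of the
        // character displayed in the separate scene.
        TargetCharacter->SetActorRotation(ReferenceCharacter->GetActorRotation());
        CopyPoseAttributes(ReferenceCharacter, TargetCharacter); // hypothetical helper
    }
}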
Through this embodiment, the character attributes of the corresponding virtual character in the independent scene are adjusted in real time according to the character attributes of the virtual character in the main scene, enriching the information displayed in the independent scene and improving the user's visual experience.
As an alternative embodiment, acquiring the second target scene through the target viewport includes:
s21, acquiring a first scene parameter and a second scene parameter, wherein the first scene parameter is used for indicating the size of a target viewport of the target viewport, and the second scene parameter is used for indicating a target scene map packet corresponding to a second target scene;
and S22, creating a second target scene in the target viewport by loading a target scene map packet indicated by second scene parameters, wherein the viewport size of the target viewport is the target viewport size indicated by the first scene parameters.
In order to simplify the configuration operation of the scene, a Level (corresponding to the second target scene) may be loaded by configuring the scene parameters of the individual scene in advance. The scene parameters may include, but are not limited to: the first scene parameter indicating a target viewport size of the target viewport, the second scene parameter indicating a target scene map packet corresponding to a second target scene, may further include: a third scene parameter indicating a target viewport location of the target viewport.
When editing the second target scene, adding an individual scene would normally require manually adding various Actors, loading resources, and setting various parameters. For convenience, a Level can be loaded directly, which simplifies the scene-adding operation: the target editing device may first load a specified Level in the constructor of an FPreviewScene (i.e., a preview scene) and then modify the PreviewWorld. This may include the following steps:
step 1, adding two parameters to SAutoRefreshViewport (an automatically refreshing viewport), namely ViewportSize (an example of the first scene parameter) and MapPackageName (the map package name, an example of the second scene parameter), to set the size of the viewport and the name of the map to load, and then loading the specified map in the Construct function (i.e., the constructor);
step 2, deriving a subclass of SAutoRefreshViewport whose constructor does not call the original FPreviewScene constructor to create the World; instead, the World is created by loading a map in the parent class's Construct function;
step 3, deriving a UViewport subclass, overriding the RebuildWidget method, and setting the ViewportSize and MapPackageName parameters;
step 4, overloading the constructor of FPreviewScene to load a specified map;
step 5, adding a LoadWorldFromMap function to load the map Package;
and step 6, creating an object of the UViewport subclass and adding the Widget returned by TakeWidget to the GameViewport. The UViewport also has a Spawn (i.e., generation) function that can dynamically create a required Actor; note that the Spawn function must be called after TakeWidget, because the World has not been created before that point.
For UE4, the above UViewport may be added using a widget blueprint: remove the Experimental tag from the UCLASS(Experimental) macro in front of UViewport, then add the UViewport to the blueprint in the control panel; initialize the viewport location (i.e., viewport position) and viewport rotation; and create and initialize the widget blueprint. The widget blueprint displayed in the editor may be as shown in FIG. 4.
Through the above steps, the packaged map can be loaded using the two parameters ViewportSize and MapPackageName, thereby creating the required World, i.e., the World of the second target scene; a compressed code sketch of these steps is given below. The above is only an example of editing the second target scene; for other engines and other editing approaches, the same applies without contradiction.
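A compressed, hedged sketch of steps 1 to 6 (apart from UViewport, FPreviewScene, and SAutoRefreshViewport, the names below are illustrative reconstructions, not the application's verbatim source):

// Steps 1-2: derived Slate viewport carrying the two scene parameters and
// creating its World by loading a map instead of using FPreviewScene's default.
class SMapPreviewViewport : public SAutoRefreshViewport
{
public:
    SLATE_BEGIN_ARGS(SMapPreviewViewport) {}
        SLATE_ARGUMENT(FVector2D, ViewportSize)   // first scene parameter
        SLATE_ARGUMENT(FString, MapPackageName)   // second scene parameter
    SLATE_END_ARGS()

    void Construct(const FArguments& InArgs)
    {
        // Step 5: load the map Package and create the World from it.
        LoadWorldFromMap(InArgs._MapPackageName);
    }
};

// Step 3: UViewport subclass that overrides RebuildWidget and forwards the
// ViewportSize and MapPackageName parameters to the Slate widget.
UCLASS()
class UMapPreviewViewport : public UViewport
{
    GENERATED_BODY()
protected:
    virtual TSharedRef<SWidget> RebuildWidget() override
    {
        return SNew(SMapPreviewViewport)
            .ViewportSize(ViewportSize)
            .MapPackageName(MapPackageName);
    }
};

// Step 6: create the object and add the Widget returned by TakeWidget to the
// GameViewport; only then may Spawn be called, since the World exists only now.
UMapPreviewViewport* Preview = NewObject<UMapPreviewViewport>();
GameViewport->AddViewportWidgetContent(Preview->TakeWidget());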
For the target viewport, according to the first scene parameter and the second scene parameter, the target server may create a second target scene (or World of the second target scene) in the target viewport by loading a target scene map packet indicated by the second scene parameter, where a viewport size of the target viewport is the target viewport size indicated by the first scene parameter. Optionally, the viewport location of the target viewport may be the target viewport location indicated by the third scene parameter.
By the embodiment, the small scene is created through the scene parameter indicating the size of the viewport and the scene parameter indicating the scene map packet, so that the convenience of scene creation can be improved.
As an alternative embodiment, creating the second target scene in the target viewport by loading the target scene map indicated by the second scene parameter includes:
s31, loading a target scene map packet indicated by the second scene parameters in the target viewport, wherein the target scene map packet does not contain an atmospheric fog component;
and S32, performing a copy operation on the target scene map packet to obtain the second target scene, where the second target scene is created based on the copy of the target scene map packet.
The initialization flow of the World differs between PIE (Play In Editor) and a packaged build. As long as the Package is not unloaded, the World obtained by loading the map is the same each time. To allow the World corresponding to each PreviewScene to dynamically create different objects without mutual interference, the loaded World can be duplicated. After packaging, however, the atmospheric fog component produces serialization errors when the World is copied, so when using this method the atmospheric fog must be deleted from the map.
In order to obtain the second target scene, the target server may load a target scene map packet indicated by the second scene parameter in the target viewport, where the loaded target scene map packet does not include the atmospheric fog component, so as to avoid a serialization error of the atmospheric fog component when copying.
The target server may then perform a copy operation on the target scene map package, such that a second target scene may be derived based on the copied target scene map package, that is, the second target scene is created based on the copy of the target scene map package.
For example, for UE4, after the atmospheric fog component is deleted from the map, ViewFamily.EngineShowFlags.Atmosphere = false may also be set (see the transparent-background embodiment below). A hedged sketch of the load-and-duplicate step follows.
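The sketch below uses UE4's standard object utilities; the duplication outer and flags are illustrative choices, not prescribed by this application:

// Sketch: load the packaged map once, then duplicate its World so that each
// preview scene owns an independent copy (the map must not contain an
// atmospheric fog component, which fails to serialize during duplication).
UPackage* MapPackage = LoadPackage(nullptr, *MapPackageName, LOAD_None);
UWorld* LoadedWorld = MapPackage ? UWorld::FindWorldInPackage(MapPackage) : nullptr;
if (LoadedWorld != nullptr)
{
    UWorld* PreviewWorld = CastChecked<UWorld>(
        StaticDuplicateObject(LoadedWorld, GetTransientPackage(), NAME_None));
    PreviewWorld->InitWorld(); // initialize the duplicated World before use
}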
It should be noted that, for UE4, if bDuplicateWorld is not checked, the World obtained by loading the same Map is identical each time. If an object is dynamically created in this World, it needs to be destroyed manually at the right moment; otherwise, the previously created object will still be visible the next time the UI is closed and reopened. Also, the map's icon in the editor will be shown in a modified state, and opening the map prompts whether it should be saved; if it is not saved, the modified asterisk will not be shown when the map is opened again. Of course, this World can also be destroyed manually when the UI is closed:
PreviewWorld->CleanupWorld();
PreviewWorld->MarkObjectsPendingKill();
Thus, the next time the UI is opened, a new, pristine World is obtained. However, this means the World is reloaded and reinitialized every time the UI is opened, which has some impact on performance.
According to this embodiment, the atmospheric fog is deleted from the map during packaging and the loaded scene map packet is duplicated into a copy, so that the accuracy of map loading can be improved.
As an alternative embodiment, the sky sphere of the target viewport is configured to be hidden while running, so that the target viewport has a transparent background. A transparent UViewport background can be conveniently implemented using SceneCapture. When editing the UViewport, the Actor Hidden In Game option of the sky sphere can be checked so that the sky sphere is hidden during play. When editing the UViewport, ViewFamily.EngineShowFlags.Atmosphere = false can also be set.
Optionally, the bEnableBlending of SViewport can be set to true and bPreMultipliedAlpha to false, and DrawEffects = ESlateDrawEffect::InvertAlpha can be set in the OnPaint function; the bEnableGammaCorrection of SViewport is set to false. In FUMGViewportClient::Draw, some code is added to fix blurry character textures caused by Texture Streaming not being updated.
Further, r.DefaultBackBufferPixelFormat defaults to 4 (i.e., A2B10G10R10), which provides insufficient alpha precision; it may be changed to 0 (B8G8R8A8) or 3 (FloatRGBA), and the Frame Buffer Pixel Format may be set to 8-bit RGBA or Float RGBA in the project settings. A sketch of these settings follows.
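These settings may be summarized as follows (the Slate argument and enum names are standard UE4; the widget wiring around them is illustrative):

// Sketch of the transparent-background configuration described above.
TSharedRef<SViewport> ViewportWidget = SNew(SViewport)
    .EnableBlending(true)            // bEnableBlending = true
    .PreMultipliedAlpha(false)       // bPreMultipliedAlpha = false
    .EnableGammaCorrection(false);   // bEnableGammaCorrection = false

// Inside OnPaint, draw with inverted alpha so the UI receives a usable mask:
// const ESlateDrawEffect DrawEffects = ESlateDrawEffect::InvertAlpha;

// DefaultEngine.ini / project settings: raise back-buffer alpha precision.
// r.DefaultBackBufferPixelFormat=0   (B8G8R8A8; or 3 for FloatRGBA)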
Optionally, in this embodiment, rendering the second target scene in the target viewport on the target user interface includes:
s41, in the process of carrying out post-processing operation on the target viewport, copying an alpha channel of the target viewport before tone mapping operation through the target channel, wherein the post-processing operation comprises the tone mapping operation;
s42, performing fast approximate anti-aliasing processing on the post-processed target viewport to obtain the anti-aliasing processed target viewport, wherein an alpha channel of the post-processed target viewport comprises a target channel;
s43, performing scene capture on the anti-aliasing processed target viewport through a target scene capture component, wherein the target scene capture component is used for capturing a scene with a transparent background;
and S44, controlling the second target scene captured by the target scene capturing component from the target viewport to be displayed on the target user interface.
Considering that SceneCapture does not support post-processing, i.e., it cannot support effects such as SeparateTranslucency (i.e., separate translucency), FXAA (Fast Approximate Anti-Aliasing), or Tonemapping, while UViewport does support post-processing but some post-processing steps modify the Alpha channel, in this embodiment one Pass (i.e., channel) may be added to copy the Alpha channel before post-processing; the UViewport is then post-processed, and the post-processed UViewport scene is captured using SceneCapture.
When the second target scene is captured, the added channel (i.e., the target channel) may be used directly to copy the Alpha channel of the target viewport; the post-processing operation is then performed on the target viewport, and the Alpha channel of the post-processed target viewport is configured as the target channel.
The post-processing operations on the target viewport may include a variety of operations, including but not limited to the tone mapping operation (i.e., Tonemap). In this embodiment, in the process of performing the post-processing operation on the target viewport, the target server may copy, using the target channel, the Alpha channel of the target viewport before the tone mapping operation and then perform the post-processing operation on the target viewport to obtain the post-processed target viewport; the viewport after post-processing may or may not retain its own Alpha channel. After the post-processed target viewport is obtained, the target channel may be used as its Alpha channel, yielding the updated post-processed target viewport.
The target server may also perform anti-aliasing on the post-processed target viewport. Various anti-aliasing operations may be employed, including but not limited to one of the following: FXAA processing, TAA (Temporal Anti-Aliasing) processing.
If FXAA is adopted, the Alpha channel also needs FXAA processing in order to avoid obvious aliasing at the edges of the displayed image. To this end, the target server may perform fast approximate anti-aliasing on the post-processed target viewport to obtain the anti-aliased target viewport, where the Alpha channel of the post-processed target viewport is the target channel.
For example, a Pass is added in post-processing to copy the Alpha channel before Tonemap; if FXAA is used, the Alpha channel also needs FXAA processing, otherwise the edges of the image show obvious aliasing. A rough sketch of the extra Pass is given below.
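The following is a rough illustration only: UE4's post-processing internals vary across engine versions, the render-graph call below assumes a newer 4.x engine, and the texture handles are hypothetical names.

// Rough sketch of the added Pass: stash scene color (including its alpha)
// before Tonemap so the alpha survives tone mapping; FXAA must then also be
// run over this alpha, otherwise image edges show obvious aliasing.
AddCopyTexturePass(GraphBuilder, SceneColorBeforeTonemap, SavedAlphaTexture, FRHICopyTextureInfo());
// ... Tonemap and FXAA run here; afterwards SavedAlphaTexture is used as the
// Alpha channel of the post-processed viewport (the "target channel") ...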
Alternatively, if TAA is to be used, the configuration file ConsoleVariables.ini may be modified accordingly (setting the corresponding r.* console variable). However, this configuration degrades the FXAA effect, so the modification is removed when FXAA is used. With this configuration the Alpha channel is preserved, and no new Pass needs to be added to copy it.
Through this embodiment, the Alpha channel is copied before Tonemap by the added Pass, and the Alpha channel also receives FXAA processing when FXAA is used. Obvious aliasing at the image edges can thus be avoided, the quality of the displayed picture is improved, and the user's visual experience is enhanced.
As an optional embodiment, before the acquiring the second target scene through the target viewport, the method further includes:
s51, obtaining first configuration information of an initial viewport corresponding to a target viewport, where the first configuration information is used to indicate at least one of: closing or reducing cascade shadow maps corresponding to the scene in the initial viewport, not creating a particle system of the scene in the initial viewport, and not creating a physical scene in the initial viewport;
and S52, configuring the initial view port according to the first configuration information to obtain the target view port.
In order to reduce the memory usage of the target viewport, when editing the target viewport an initial viewport may first be added in the target editing device and some of its parameters set, which may include, but are not limited to, at least one of the following:
turning off CSM (Cascaded Shadow Maps) shadows or reducing the number of CSM cascades, to reduce the memory occupied by the ShadowMap;
turning off the bCreateFXSystem option so that no particle system is created, since a particle system adds considerable video memory usage;
turning off the bCreatePhysicsScene option so that no physics scene is created; if the character then falls, gravity can be set to 0.
In addition, options such as Audio, HitProxies (i.e., hit proxies), Navigation, and AISystem (Artificial Intelligence System) may be turned off by default.
The target editing device may obtain first configuration information of the initial viewport, the first configuration information indicating at least one of: turning off or reducing the cascaded shadow maps corresponding to the scene within the initial viewport, not creating a particle system for the scene within the initial viewport, and not creating a physical scene within the initial viewport; it may also indicate at least one of: turning off audio, turning off hit proxies, turning off navigation, and turning off the AI system. Based on the acquired first configuration information, the target editing device may configure the initial viewport to obtain the target viewport. In UE4 terms these switches correspond to the World initialization values, as sketched below.
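A hedged sketch of this configuration (UWorld::InitializationValues is UE4's standard switch set; the CSM note uses a console variable; "PreviewWorld" is the World of the separate scene):

// Sketch: initialization values for the small scene's World, matching the
// first configuration information above.
UWorld::InitializationValues IVS = UWorld::InitializationValues()
    .CreateFXSystem(false)        // do not create a particle system
    .CreatePhysicsScene(false)    // do not create a physics scene
    .AllowAudioPlayback(false)    // audio off by default
    .RequiresHitProxies(false)    // hit proxies off by default
    .CreateNavigation(false)      // navigation off by default
    .CreateAISystem(false);       // AI system off by default
PreviewWorld->InitWorld(IVS);
// CSM shadows can be turned off or reduced, e.g. via r.Shadow.MaxCascades.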
By this embodiment, turning off the CSM shadows of the scene in the control or reducing the number of CSM cascades, not creating a particle system, not creating a physics scene, and the like can reduce the scene's occupation of memory resources and improve the utilization of memory resources.
As an alternative embodiment, in some cases multiple different 3D characters may need to be displayed on multiple UIs, e.g., a team interface. If each character uses its own UViewport, then although one World and Scene can be shared, each UViewport runs a complete rendering pass, which is somewhat wasteful in terms of performance.
Optionally, one SceneRenderer may be used to render multiple Views; each View may set its corresponding view parameters, which may include, but are not limited to, at least one of the following: view position (i.e., ViewLocation), view rotation (i.e., ViewRotation).
For example, as shown in FIG. 5, two views are rendered using one SceneRenderer: View 0 on the left and View 1 on the right, with different ViewLocation and ViewRotation values.
For a target viewport, the target viewport may contain a plurality of target views, each corresponding to at least one target user interface (i.e., each target view is displayed on at least one user's UI), and different target views corresponding to different view parameters, which may include, but are not limited to, at least one of: view position, view rotation.
Correspondingly, rendering the second target scene in the target viewport on the target user interface includes:
and S61, using the target scene renderer to render the second target scene to the target user interface corresponding to each target view according to the view parameters of each target view.
The target server may render the plurality of target views using one target scene renderer, so as to render the second target scene to the target user interface corresponding to each target view according to that view's parameters, as sketched below.
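A hedged sketch of sharing one renderer across views (standard UE4 FSceneViewFamily usage; FPerUIView and the surrounding plumbing are hypothetical):

// Sketch: one view family, hence one scene render, holding several views,
// each with its own view location/rotation, targeted at different UI areas.
FSceneViewFamilyContext ViewFamily(
    FSceneViewFamily::ConstructionValues(RenderTarget, Scene, EngineShowFlags)
        .SetRealtimeUpdate(true));
for (const FPerUIView& UIView : UIViews)
{
    FSceneViewInitOptions InitOptions;
    InitOptions.ViewFamily = &ViewFamily;
    InitOptions.SetViewRectangle(UIView.ScreenRect);
    InitOptions.ViewOrigin = UIView.ViewLocation;                                 // per-view position
    InitOptions.ViewRotationMatrix = FInverseRotationMatrix(UIView.ViewRotation); // per-view rotation
    InitOptions.ProjectionMatrix = UIView.ProjectionMatrix;
    ViewFamily.Views.Add(new FSceneView(InitOptions));
}
GetRendererModule().BeginRenderingViewFamily(Canvas, &ViewFamily);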
Through this embodiment, rendering multiple views with one scene renderer can reduce the resources occupied by scene rendering, improve the rationality of resource utilization, and improve system performance.
The following explains a rendering method of a character in the embodiment of the present application with reference to an optional example. In this example, the first target scene is a game scene (i.e., a game master scene), the second target scene is a small scene in the game scene, the target virtual character is a 3D character, and the target server is a game server.
As shown in fig. 6, the flow of the rendering method of the character in this alternative example may include the following steps:
step S602, the game server loads a game main scene and loads a small scene through UViewport, and 3D characters needing to be displayed independently are placed in the small scene.
Step S604, the game server captures the scene picture through the scene capture component (SceneCapture) for display on the user's UI.
As shown in FIG. 7, compared with using SceneCapture directly, loading a separate small scene with a UViewport has the advantage that the light and shadow effects of the small scene are not affected by the lighting of the main scene, giving a better rendering effect.
Through this example, loading a separate small scene using a UViewport can reduce the influence of the main scene's lighting on the small scene's light and shadow effects, improve the rendering of the picture on the UI, and improve the user's visual experience.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, an optical disk) and includes several instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, a character rendering apparatus for implementing the above character rendering method is also provided. FIG. 8 is a block diagram of an alternative character rendering apparatus according to an embodiment of the present application; as shown in FIG. 8, the apparatus may include:
a first obtaining unit 802, configured to obtain a first target scene, where the first target scene is a main scene to be rendered on a target user interface;
a second obtaining unit 804, connected to the first obtaining unit 802 and configured to acquire a second target scene through the target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene;
and a rendering unit 806, connected to the second obtaining unit 804 and configured to render the second target scene in the target viewport on the target user interface while the first target scene is rendered on the target user interface, so as to render the target virtual character on the target user interface.
It should be noted that the first obtaining unit 802 in this embodiment may be configured to perform step S202, the second obtaining unit 804 may be configured to perform step S204, and the rendering unit 806 may be configured to perform step S206.
Through the above modules, a first target scene is acquired, where the first target scene is the main scene to be rendered on the target user interface; a second target scene is acquired through the target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in it; and the second target scene in the target viewport is rendered on the target user interface while the first target scene is rendered, so that the target virtual character is rendered on the target user interface. This solves the problem in the related art that character rendering quality is poor because the rendering is easily affected by the illumination, shadows, and the like of the surrounding scene, and improves character rendering quality.
As an alternative embodiment, the apparatus further comprises:
an adjusting unit, configured to adjust, in real time, the second character attribute of the target virtual character according to the first character attribute of the target reference character before the second target scene in the target viewport is rendered on the target user interface, where the target reference character is a virtual character in the first target scene that matches the target virtual character.
As an alternative embodiment, the second obtaining unit includes:
an obtaining module, configured to obtain a first scene parameter and a second scene parameter, where the first scene parameter is used to indicate a target viewport size of a target viewport, and the second scene parameter is used to indicate a target scene map packet corresponding to a second target scene;
and the loading module is used for creating a second target scene in the target viewport by loading the target scene map packet indicated by the second scene parameters, wherein the viewport size of the target viewport is the target viewport size indicated by the first scene parameters.
As an alternative embodiment, the loading module includes:
the loading submodule is used for loading a target scene map packet indicated by the second scene parameters in the target viewport, wherein the target scene map packet does not contain an atmospheric fog component;
and the copying submodule is used for executing copying operation on the target scene map packet to obtain a second target scene, wherein the second target scene is created based on the copy of the target scene map packet.
As an alternative embodiment, the sky sphere of the target viewport is configured to be hidden while running. Optionally, the rendering unit includes:
the copying module is used for copying an alpha channel of the target viewport before the tone mapping operation through the target channel in the process of carrying out post-processing operation on the target viewport, wherein the post-processing operation comprises the tone mapping operation;
the execution module is used for executing rapid approximate anti-aliasing processing on the post-processed target viewport to obtain the anti-aliasing processed target viewport, wherein an alpha channel of the post-processed target viewport is a target channel;
the capturing module is used for performing scene capturing on the anti-aliasing processed target viewport through a target scene capturing component, wherein the target scene capturing component is used for capturing a scene of a transparent background;
and the control module is used for controlling the second target scene captured by the target scene capturing component to be displayed on the target user interface.
As an alternative embodiment, the apparatus further comprises:
a third obtaining unit, configured to obtain first configuration information of an initial viewport corresponding to the target viewport before the second target scene is acquired through the target viewport, where the first configuration information is used to indicate at least one of the following: disabling or reducing the cascaded shadow maps corresponding to the scene in the initial viewport, not creating a particle system for the scene in the initial viewport, and not creating a physics scene in the initial viewport;
a configuration unit, configured to configure the initial viewport according to the first configuration information to obtain the target viewport.
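For illustration only, a hedged C++ sketch of this configuration, assuming UE4; note that the cascaded-shadow-map console variable shown is a global setting rather than a true per-viewport one, and skipping the physics scene would be a world-creation option (e.g. UWorld::InitializationValues().CreatePhysicsScene(false)), engine version permitting:

    #include "Components/SceneCaptureComponent2D.h"
    #include "HAL/IConsoleManager.h"

    void ConfigureInitialViewport(USceneCaptureComponent2D* Capture)
    {
        // Do not draw particle systems in the captured scene.
        Capture->ShowFlags.SetParticles(false);

        // Disable (0) or reduce the number of cascaded shadow map splits.
        if (IConsoleVariable* CSM = IConsoleManager::Get()
                .FindConsoleVariable(TEXT("r.Shadow.CSM.MaxCascades")))
        {
            CSM->Set(0);
        }
    }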
As an alternative embodiment, the target viewport contains a plurality of target views, each target view corresponds to at least one target user interface, and different target views correspond to different view parameters. Optionally, the rendering unit includes:
a rendering module, configured to use a target scene renderer to render the second target scene to the target user interface corresponding to each target view according to the view parameters of that target view.
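For illustration only, a minimal C++ sketch of per-view rendering; the FTargetView structure and its fields are hypothetical stand-ins for the view parameters described above:

    #include "Components/SceneCaptureComponent2D.h"
    #include "Engine/TextureRenderTarget2D.h"

    // Each target view carries its own camera pose, field of view, and the
    // render target bound to its own target user interface.
    struct FTargetView
    {
        FTransform              CameraTransform;
        float                   FOVDegrees = 45.f;
        UTextureRenderTarget2D* UITarget   = nullptr;
    };

    // Render the second target scene once per view, each to its own UI target.
    void RenderViews(USceneCaptureComponent2D* Capture,
                     const TArray<FTargetView>& Views)
    {
        for (const FTargetView& View : Views)
        {
            Capture->SetWorldTransform(View.CameraTransform);
            Capture->FOVAngle      = View.FOVDegrees;
            Capture->TextureTarget = View.UITarget;
            Capture->CaptureScene();
        }
    }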
It should be noted that the above modules implement the same examples and application scenarios as their corresponding steps, but are not limited to what is disclosed in the foregoing embodiments. The above modules, as part of the apparatus, may run in a hardware environment as shown in Fig. 1, and may be implemented in software or in hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above character rendering method, where the electronic device may be a server, a terminal, or a combination thereof.
Fig. 9 is a block diagram of an alternative electronic device according to an embodiment of the present application. As shown in Fig. 9, the electronic device includes a processor 902, a communication interface 904, a memory 906, and a communication bus 908, where the processor 902, the communication interface 904, and the memory 906 communicate with each other via the communication bus 908.
a memory 906 for storing a computer program;
the processor 902, when executing the computer program stored in the memory 906, implements the following steps:
S1. Acquire a first target scene, where the first target scene is a main scene to be rendered on a target user interface;
S2. Acquire a second target scene through the target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene;
S3. Render the second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, so as to render the target virtual character on the target user interface.
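For illustration only, the three steps can be tied together as in the following hedged C++ sketch, reusing the hypothetical helpers from the sketches above; the sizes and names are illustrative, not part of the disclosure:

    void RenderCharacterOnUI(const UObject* WorldContext,
                             USceneCaptureComponent2D* Capture,
                             UTextureRenderTarget2D* RenderTarget)
    {
        // S1: the main scene is rendered to the target user interface by the
        //     normal game viewport; no extra work is needed for this step.

        // S2: load the separate character level into the target viewport.
        LoadCharacterViewportScene(WorldContext, 512, 1024, RenderTarget);

        // S3: capture the character over a transparent background so the UI
        //     can composite it on top of the main scene.
        CaptureTransparentScene(Capture, RenderTarget);
    }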
Alternatively, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 9, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the electronic device and other devices.
The memory may include RAM, and may also include non-volatile memory, such as at least one disk storage device. Alternatively, the memory may be at least one storage device located remotely from the processor.
As an example, the memory 906 may include, but is not limited to, the first obtaining unit 802, the second obtaining unit 804, and the rendering unit 806 of the above character rendering apparatus. In addition, the memory may further include, but is not limited to, the other module units of the above character rendering apparatus, which are not described again in this example.
The processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In addition, the electronic device further includes: a display to display a target user interface.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in Fig. 9 is only illustrative. The device implementing the character rendering method may be a terminal device such as a smart phone (e.g., an Android or iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 9 does not limit the structure of the electronic device; for example, the electronic device may include more or fewer components (e.g., a network interface or a display device) than shown in Fig. 9, or have a different configuration.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include a flash drive, a ROM, a RAM, a magnetic disk, an optical disk, and the like.
According to still another aspect of the embodiments of the present application, there is also provided a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for performing the character rendering method of any of the embodiments of the present application.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S1. Acquire a first target scene, where the first target scene is a main scene to be rendered on a target user interface;
S2. Acquire a second target scene through the target viewport, where the second target scene is a separate level loaded through the target viewport and a target virtual character is placed in the second target scene;
S3. Render the second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, so as to render the target virtual character on the target user interface.
Optionally, for a specific example in this embodiment, reference may be made to the example described in the foregoing embodiment, and details of this are not described again in this embodiment.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of units is only a division by logical function, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, and may also be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution provided in this embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also fall within the protection scope of the present application.

Claims (9)

1. A method for rendering a character, comprising:
acquiring a first target scene, wherein the first target scene is a main scene to be rendered on a target user interface;
acquiring a second target scene through a target viewport, wherein the second target scene is a separate level loaded through the target viewport, and a target virtual character is placed in the second target scene;
rendering the second target scene in the target viewport on the target user interface while rendering the first target scene on the target user interface, so as to render the target virtual character on the target user interface;
wherein, prior to acquiring the second target scene through the target viewport, the method further comprises: obtaining first configuration information of an initial viewport corresponding to the target viewport, wherein the first configuration information is used for indicating at least one of the following: disabling or reducing cascaded shadow maps corresponding to the scene within the initial viewport, not creating a particle system for the scene within the initial viewport, and not creating a physics scene within the initial viewport; and configuring the initial viewport according to the first configuration information to obtain the target viewport.
2. The method of claim 1, wherein prior to rendering the second target scene in the target viewport on the target user interface, the method further comprises:
adjusting a second character attribute of the target virtual character in real time according to a first character attribute of a target reference character, wherein the target reference character is a virtual character matched with the target virtual character in the first target scene.
3. The method of claim 1, wherein acquiring the second target scene through the target viewport comprises:
obtaining a first scene parameter and a second scene parameter, wherein the first scene parameter is used for indicating a target viewport size of the target viewport, and the second scene parameter is used for indicating a target scene map package corresponding to the second target scene;
creating the second target scene in the target viewport by loading the target scene map package indicated by the second scene parameter, wherein a viewport size of the target viewport is the target viewport size indicated by the first scene parameter.
4. The method of claim 3, wherein creating the second target scene in the target viewport by loading the target scene map package indicated by the second scene parameter comprises:
loading the target scene map package indicated by the second scene parameter in the target viewport, wherein the target scene map package does not contain an atmospheric fog component;
and performing a copy operation on the target scene map package to obtain the second target scene, wherein the second target scene is created based on the copy of the target scene map package.
5. The method of claim 1, wherein the sky sphere of the target viewport is configured to be in a hidden state during operation;
rendering the second target scene in the target viewport on the target user interface comprises:
copying, into a target channel, an alpha channel of the target viewport as it is before a tone mapping operation, in the process of performing a post-processing operation on the target viewport, wherein the post-processing operation comprises the tone mapping operation;
performing fast approximate anti-aliasing processing on the post-processed target viewport to obtain the anti-aliased target viewport, wherein an alpha channel of the post-processed target viewport is the target channel;
performing scene capture on the antialiased target viewport through a target scene capture component, wherein the target scene capture component is used for capturing a scene with a transparent background;
controlling display of the second target scene captured by the target scene capture component on the target user interface.
6. The method of any of claims 1 to 5, wherein the target viewport contains a plurality of target views, each corresponding to at least one of the target user interfaces, different ones of the target views corresponding to different view parameters;
rendering the second target scene in the target viewport on the target user interface comprises:
and using a target scene renderer to render the second target scene to the target user interface corresponding to each target view according to the view parameters of each target view.
7. An apparatus for rendering a character, comprising:
a first obtaining unit, configured to acquire a first target scene, wherein the first target scene is a main scene to be rendered on a target user interface;
a second obtaining unit, configured to acquire a second target scene through a target viewport, wherein the second target scene is a separate level loaded through the target viewport, and a target virtual character is placed in the second target scene;
a rendering unit, configured to render the second target scene in the target viewport on the target user interface while the first target scene is rendered on the target user interface, so as to render the target virtual character on the target user interface;
wherein the apparatus further comprises: a third obtaining unit, configured to obtain first configuration information of an initial viewport corresponding to the target viewport before the second target scene is acquired through the target viewport, wherein the first configuration information is used for indicating at least one of the following: disabling or reducing cascaded shadow maps corresponding to the scene within the initial viewport, not creating a particle system for the scene within the initial viewport, and not creating a physics scene within the initial viewport; and a configuration unit, configured to configure the initial viewport according to the first configuration information to obtain the target viewport.
8. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein said processor, said communication interface and said memory communicate with each other via said communication bus,
the memory for storing a computer program;
the processor for performing the method steps of any one of claims 1 to 6 by running the computer program stored on the memory.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method steps of any one of claims 1 to 6 when executed.
CN202110636702.9A 2021-06-08 2021-06-08 Role rendering method and device, electronic equipment and storage medium Active CN113318444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110636702.9A CN113318444B (en) 2021-06-08 2021-06-08 Role rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110636702.9A CN113318444B (en) 2021-06-08 2021-06-08 Role rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113318444A CN113318444A (en) 2021-08-31
CN113318444B true CN113318444B (en) 2023-01-10

Family

ID=77421135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110636702.9A Active CN113318444B (en) 2021-06-08 2021-06-08 Role rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113318444B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800130A (en) * 2012-07-04 2012-11-28 哈尔滨工程大学 Water level-close aircraft maneuvering flight visual scene simulation method
CN112233217A (en) * 2020-12-18 2021-01-15 完美世界(北京)软件科技发展有限公司 Rendering method and device of virtual scene

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257876B (en) * 2013-04-28 2016-04-13 福建天晴数码有限公司 The method of C3 map dynamic load
CN105159687B (en) * 2015-09-29 2018-04-17 腾讯科技(深圳)有限公司 A kind of information processing method, terminal and computer-readable storage medium
CN105335064B (en) * 2015-09-29 2017-08-15 腾讯科技(深圳)有限公司 A kind of information processing method and terminal
WO2019164510A1 (en) * 2018-02-23 2019-08-29 Rovi Guides, Inc. Systems and methods for creating a non-curated viewing perspective in a video game platform based on a curated viewing perspective
CN110478902A (en) * 2019-08-20 2019-11-22 网易(杭州)网络有限公司 Game operation method and device
CN112337091B (en) * 2020-11-27 2022-06-07 腾讯科技(深圳)有限公司 Man-machine interaction method and device and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800130A (en) * 2012-07-04 2012-11-28 哈尔滨工程大学 Water level-close aircraft maneuvering flight visual scene simulation method
CN112233217A (en) * 2020-12-18 2021-01-15 完美世界(北京)软件科技发展有限公司 Rendering method and device of virtual scene

Also Published As

Publication number Publication date
CN113318444A (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN109377546B (en) Virtual reality model rendering method and device
US11711563B2 (en) Methods and systems for graphics rendering assistance by a multi-access server
CN106658145A (en) Live data processing method and device
CN110599396A (en) Information processing method and device
CN110675466A (en) Rendering system, rendering method, rendering device, electronic equipment and storage medium
US11615575B2 (en) Methods and systems for constructing a shader
CN106713968A (en) Live broadcast data display method and device
CN111738935B (en) Ghost rendering method and device, storage medium and electronic device
CN113470092B (en) Terrain rendering method and device, electronic equipment and storage medium
WO2022127275A1 (en) Method and device for model switching, electronic device, and storage medium
US20130050190A1 (en) Dressing simulation system and method
CN112150602A (en) Model image rendering method and device, storage medium and electronic equipment
CN113318444B (en) Role rendering method and device, electronic equipment and storage medium
CN104994920A (en) Presenting digital content item with tiered functionality
CN116206038A (en) Rendering method, rendering device, electronic equipment and storage medium
CN113192173B (en) Image processing method and device of three-dimensional scene and electronic equipment
EP4231243A1 (en) Data storage management method, object rendering method, and device
CN117065357A (en) Media data processing method, device, computer equipment and storage medium
CN114307158A (en) Three-dimensional virtual scene data generation method and device, storage medium and terminal
CN114299202A (en) Processing method and device for virtual scene creation, storage medium and terminal
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
CN108846897B (en) Three-dimensional model surface material simulation method and device, storage medium and electronic equipment
CN111292392A (en) Unity-based image display method, apparatus, device and medium
CN111145358A (en) Image processing method, device and hardware device
CN113542846B (en) AR barrage display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant