CN113140028A - Virtual object rendering method and device and electronic equipment - Google Patents

Virtual object rendering method and device and electronic equipment

Info

Publication number
CN113140028A
Authority
CN
China
Prior art keywords
rendered
rendering
map
maps
game scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110380010.2A
Other languages
Chinese (zh)
Other versions
CN113140028B (en)
Inventor
刘舟
袁尧
沈琳焘
施坤省
黎煌达
张志稳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Sanqi Mutual Entertainment Technology Co ltd
Original Assignee
Guangzhou Sanqi Mutual Entertainment Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Sanqi Mutual Entertainment Technology Co ltd
Priority to CN202110380010.2A
Publication of CN113140028A
Application granted
Publication of CN113140028B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual object rendering method, a virtual object rendering device and electronic equipment, wherein the method comprises the following steps: acquiring a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene, wherein the to-be-selected maps are preset according to the objects to be rendered and associated areas which can be connected with the objects to be rendered; matching corresponding target maps from the multiple to-be-selected maps according to the associated areas connected with the to-be-rendered objects in the game scene; and rendering the object to be rendered according to the target map.

Description

Virtual object rendering method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for rendering a virtual object, and an electronic device.
Background
In order to enrich the game scene, a plurality of different virtual objects, such as trees, grasslands, deserts, rivers, houses, etc., are usually set in the game scene. The virtual objects are rendered through different rendering parameters to simulate the state of the virtual objects in a real environment, so that the reality of a game scene is improved.
In the traditional rendering of different virtual objects in a game scene, a corresponding parameter is bound to each virtual object, and then the virtual object is rendered according to the bound parameter. However, since the same virtual object may appear in different scenes, for example, trees may grow on grass or in a desert, the same rendering parameters are completely adopted for different scenes to render the virtual object, and a situation that the virtual object is linked with the scenes in an unnatural manner may occur in some scenes, resulting in a poor display effect of the game scenes.
Disclosure of Invention
The application aims to solve at least one of the technical problems in the prior art, and provides a virtual object rendering method, a virtual object rendering device and electronic equipment, which can perform differentiated rendering on virtual objects in different scenes and improve the display effect of game scenes.
The embodiment of the application provides a virtual object rendering method, which comprises the following steps:
acquiring a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene, wherein the to-be-selected maps are preset according to the objects to be rendered and associated areas which can be connected with the objects to be rendered;
matching corresponding target maps from the multiple to-be-selected maps according to the associated areas connected with the to-be-rendered objects in the game scene;
and rendering the object to be rendered according to the target map.
In this embodiment, by obtaining a plurality of preset to-be-selected maps corresponding to an object to be rendered, matching a target map from a correlation area connected to the object to be rendered, and rendering the object to be rendered according to the target map, the virtual objects in different scenes can be rendered in a differentiated manner, so that the virtual objects are prevented from being linked with the scenes unnaturally when the virtual objects are rendered by using the same rendering parameters, and the display effects of a game scene and the virtual objects can be improved.
In one embodiment, acquiring a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene comprises:
acquiring an image identifier of the object to be rendered;
and acquiring a plurality of to-be-selected maps corresponding to the to-be-rendered objects according to the image identifiers.
In this embodiment, the to-be-selected map corresponding to the to-be-rendered object is obtained through the identifier of the to-be-rendered object for selection, so that the to-be-rendered object is rendered, the to-be-selected map corresponding to the to-be-rendered object can be quickly obtained, the efficiency of the rendering process is improved, and the operation amount is reduced.
In one embodiment, matching a corresponding target map from the multiple candidate maps according to the associated area connected with the object to be rendered in the game scene includes:
acquiring the association area corresponding to the position information from the game scene according to the position information of the object to be rendered;
and matching corresponding target maps from the multiple to-be-selected maps according to the associated areas.
In the embodiment, the associated area corresponding to the position information is determined according to the position information of the object to be rendered, the target map is matched from the map to be selected according to the associated area, the corresponding target map can be obtained only by matching according to the parameter data instead of the image data, and the operation amount in the rendering process is reduced.
In one embodiment, acquiring the association area corresponding to the position information from the game scene according to the position information of the object to be rendered includes:
according to the position information of the object to be rendered, acquiring a ground identifier corresponding to the position information from the game scene;
and acquiring the associated area according to the ground identification.
In this embodiment, the association area is obtained through the ground identifier corresponding to the position information, and the association area can be obtained according to the correspondence between the position information and the ground identifier, so that the correspondence between the association area and the object to be rendered is clearer, and the calculation amount can be effectively reduced.
In one embodiment, matching a corresponding target map from the multiple candidate maps according to the associated area includes:
acquiring rendering parameters of corresponding positions in the associated area according to the position information;
and traversing each to-be-selected map according to the rendering parameters, and matching a corresponding target map from the to-be-selected maps, wherein a target area of the target map comprises the rendering parameters, and the target area corresponds to a connecting part of the to-be-rendered object and the associated area.
In this embodiment, the rendering parameters of the corresponding positions in the associated areas are obtained according to the position information, the to-be-selected maps are traversed according to the rendering parameters to match the target maps, the target maps can be directly matched according to the rendering parameters, the target maps can be quickly matched from the pre-stored to-be-selected maps, the efficiency of the rendering process is improved, and the conditions of the rendering parameters of the target areas in the target maps can be visually displayed.
In one embodiment, after matching the corresponding target map from the multiple candidate maps, the method further includes:
when the target map is not matched, rendering the object to be rendered according to a basic map to obtain an initial object;
and rendering the connection part of the initial object and the associated area according to the rendering parameters.
In this embodiment, when the target map is not matched according to the rendering parameters, the object to be rendered is rendered according to the basic map, and then the connection part between the rendered object and the associated region is rendered according to the rendering parameters, so that the rendering effect of the connection part between the associated region which is not matched with the target map can be ensured, the connection between the virtual object and the connection part between the associated regions which are not matched with the target map is more natural, and the display effect of the game scene is improved.
In one embodiment, rendering the connection part of the initial object with the associated area according to the rendering parameters comprises:
and according to the rendering parameters, performing parameter superposition on the connecting part of the initial object and the associated area.
In the embodiment, when the target map is not matched according to the rendering parameters, the complexity of the rendering process can be reduced by performing parameter superposition on the connecting part of the initial object and the associated area, the natural connection of the connecting part of the initial object and the associated area can be ensured, and the display effect of the game scene is improved.
In one embodiment, there is also provided a virtual object rendering apparatus including:
the system comprises a map obtaining module, a map selecting module and a map processing module, wherein the map obtaining module is used for obtaining a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene, and the to-be-selected maps are preset according to the objects to be rendered and associated areas which can be connected with the objects to be rendered;
the map matching module is used for matching corresponding target maps from the multiple maps to be selected according to the associated areas connected with the objects to be rendered in the game scene;
and the object rendering module is used for rendering the object to be rendered according to the target map.
Further, an embodiment of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the virtual object rendering method as described in the above embodiments when executing the program.
Further, the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the virtual object rendering method according to the above embodiment.
Drawings
The present application is further described with reference to the following figures and examples:
FIG. 1 is a diagram of an application environment of a virtual object rendering method according to an embodiment;
FIG. 2 is a flowchart illustrating a method for rendering virtual objects according to one embodiment;
FIG. 3 is a diagram illustrating object positions and associated regions to be rendered in one embodiment;
FIG. 4 is a block diagram of a virtual object rendering apparatus according to an embodiment;
FIG. 5 is a block diagram of a computer device in one embodiment.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, preferred embodiments of which are illustrated in the accompanying drawings. The drawings supplement the written description so as to enable a person skilled in the art to intuitively understand each feature and technical solution of the present application, but they do not limit the scope of the present application.
In order to enrich the game scene, a plurality of different virtual objects, such as trees, grasslands, deserts, rivers, houses, etc., are usually set in the game scene. The virtual objects are rendered through different rendering parameters to simulate the state of the virtual objects in a real environment, so that the reality of a game scene is improved.
In the traditional rendering of different virtual objects in a game scene, a corresponding parameter is bound to each virtual object, and then the virtual object is rendered according to the bound parameter. However, since the same virtual object may appear in different scenes, for example, trees may grow on grass or in a desert, the same rendering parameters are completely adopted for different scenes to render the virtual object, and a situation that the virtual object is linked with the scenes in an unnatural manner may occur in some scenes, resulting in a poor display effect of the game scenes. The game scene refers to the environment, building, machinery, props, etc. in the game, and various virtual objects in the game are usually restored according to certain requirements. Rendering is an operation of conforming images constituting a game scene to the game scene. The rendering parameters and the corresponding parameters bound to the virtual object are related settings and data according to which the virtual object is rendered, such as an area to be rendered, an aspect ratio or a pixel aspect ratio of an image output after rendering, whether an atmospheric effect is considered, rendering a hidden geometric object, and the like.
To solve the above technical problem, fig. 1 shows an application environment diagram of a virtual object rendering method in an embodiment. In this application environment, the user terminal 110 is connected to the server 120 through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may be one of a mobile phone, a tablet computer, a notebook computer, a wearable device, and the like. The server 120 may be implemented by an independent server or a server cluster composed of a plurality of servers, and may also be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communication, middleware services, domain name services, security services, a CDN, and big data and artificial intelligence platforms. In this embodiment, the user terminal 110 may serve as the front end of a game client running a game, and the server 120 may serve as the background of the game client, so that after the server 120 obtains a data call request sent by a game user in the user terminal 110 through the game client, a long connection can be quickly established with the user terminal 110.
Hereinafter, the virtual object rendering method provided by the embodiments of the present application will be described and explained in detail through several specific embodiments.
As shown in FIG. 2, in one embodiment, a virtual object rendering method is provided. The embodiment is mainly illustrated by applying the method to computer equipment. The computer device may specifically be the user terminal 110 in fig. 1 described above.
Referring to fig. 2, the virtual object rendering method specifically includes the following steps:
s11, obtaining a plurality of to-be-selected maps corresponding to the to-be-rendered objects in the game scene, wherein the to-be-selected maps are preset according to the to-be-rendered objects and associated areas capable of being connected with the to-be-rendered objects.
In this embodiment, the user terminal obtains a plurality of to-be-selected maps corresponding to an object to be rendered in a game scene, where the to-be-selected maps are preset according to the object to be rendered and the associated areas that can be connected with it; that is, a to-be-selected map is a preset image or file that can serve as a basis for rendering the virtual object. The to-be-selected maps may be stored on a server: the user terminal sends a related instruction or request to the server, and after receiving it the server sends a plurality of to-be-selected maps back to the user terminal. The instruction or request sent by the user terminal includes related information of the object to be rendered in the game scene, so that the server can select the to-be-selected maps corresponding to that object and return them. For example, if the object to be rendered is bird number 1 with the identifier bird1, the instruction or request may include the identifier bird1, and the server retrieves the to-be-selected maps corresponding to bird1. In addition, the to-be-selected maps are preset according to the object to be rendered and the associated areas that can be connected with it; specifically, maps may be prepared for the associated areas that the object connects with in different game scenes. For example, when the game scene of the object to be rendered is a desert and a bird stands on a dead tree in the desert, the associated area connected with the bird is the dead tree, and the to-be-selected map for this scene is a map in which the bird stands on the dead tree with the desert as background.
In one embodiment, acquiring a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene comprises:
acquiring an image identifier of an object to be rendered;
and acquiring a plurality of to-be-selected maps corresponding to the to-be-rendered objects according to the image identifiers.
In this embodiment, the object to be rendered includes an image identifier, and the user terminal acquires a plurality of to-be-selected maps corresponding to the object to be rendered according to the image identifier after acquiring the image identifier of the object to be rendered, that is, the image identifier included in the object to be rendered can correspond to the plurality of to-be-selected maps, for example, the object to be rendered is a bird, the image identifier of the bird is bird, and when the to-be-selected map includes a virtual object bird, the to-be-selected map also has an identifier that is the same as or corresponds to the image identifier bird, so the user terminal can acquire the plurality of to-be-selected maps including the object to be rendered according to the image identifier of the object to be rendered.
In this embodiment, the to-be-selected map corresponding to the to-be-rendered object is obtained through the identifier of the to-be-rendered object for selection, so that the to-be-rendered object is rendered, the to-be-selected map corresponding to the to-be-rendered object can be quickly obtained, the efficiency of the rendering process is improved, and the operation amount is reduced.
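To make the identifier-based lookup concrete, the following Python sketch shows one possible retrieval, assuming an in-memory table of preset to-be-selected maps keyed by image identifier; the names CANDIDATE_MAPS and get_candidate_maps, and the example entries, are illustrative only and are not defined in this application.

```python
# Illustrative sketch: to-be-selected maps registered in advance under the
# image identifier of the virtual object they depict (e.g. "bird1").
CANDIDATE_MAPS = {
    "bird1": [
        {"map_id": "bird1_on_dead_tree_desert", "area_id": "desettree"},
        {"map_id": "bird1_on_grassland", "area_id": "grass"},
    ],
}

def get_candidate_maps(image_id):
    """Return every preset to-be-selected map registered for this image identifier."""
    return CANDIDATE_MAPS.get(image_id, [])

print(get_candidate_maps("bird1"))  # both preset maps for the object "bird1"
```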
And S12, matching corresponding target maps from the multiple maps to be selected according to the associated areas connected with the objects to be rendered in the game scene.
In this embodiment, the user terminal matches a corresponding target map according to the associated area connected with the object to be rendered in the game scene. Specifically, the target map corresponding to an identifier can be matched from the plurality of to-be-selected maps by determining the identifier of the associated area connected with the object to be rendered in the game scene. For example, suppose the identifier of the dead tree in the associated area connected with the bird to be rendered in a desert game scene is desettree; when the user terminal or the server obtains a map in which the bird is connected with the associated area (standing on the dead tree) in the desert game scene, it adds an identifier to that map, which may be the same as the identifier of the associated area (desettree) or may be another identifier that is bound to the identifier of the associated area after being added. The user terminal then matches the corresponding target map according to the associated area connected with the object to be rendered in the game scene; specifically, the plurality of to-be-selected maps can be checked one by one to determine whether the associated area connected with the object to be rendered exists in each map, and the map so identified is the target map.
In one embodiment, matching a corresponding target map from a plurality of candidate maps according to an associated region connected with an object to be rendered in a game scene includes:
acquiring a relevant area corresponding to the position information from a game scene according to the position information of the object to be rendered;
and matching the corresponding target map from the multiple candidate maps according to the associated area.
In this embodiment, the user terminal obtains the associated area corresponding to the position information according to the position information of the object to be rendered. Specifically, the user terminal requests the map information of the game scene from the server; the map information includes the position information of each point constituting the game scene, and the position information may be coordinates in the game scene, so the user terminal can obtain the position information of the object to be rendered in the game scene, such as the coordinates (60,120), from the map information. The user terminal then obtains the associated area corresponding to that position information from the game scene. The associated area may specifically be an area formed by a plurality of points connected with the object to be rendered in the game scene; for example, if the position information is the coordinates (60,120), the associated area may include the coordinates (59,119), (60,119), (61,119), (59,120), (61,120), (59,121), (60,121), (61,121), and so on. The size of the associated area may be set by a user or an administrator, and the associated area is the area within a certain range connected with the object to be rendered. As shown in fig. 3, the solid point is the position of the object to be rendered, and the hollow points form the associated area corresponding to its position information.
The user terminal matches a corresponding target map from the multiple maps to be selected according to the obtained associated area corresponding to the position information of the object to be rendered, specifically, the user terminal traverses the multiple maps to be selected according to the position information (coordinates) of the multiple points included in the associated area, and searches for the maps including the position information of the multiple points in the associated area in the multiple maps to be selected, wherein the maps are the target maps.
In the embodiment, the associated area corresponding to the position information is determined according to the position information of the object to be rendered, the target map is matched from the map to be selected according to the associated area, the corresponding target map can be obtained only by matching according to the parameter data instead of the image data, and the operation amount in the rendering process is reduced.
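A minimal Python sketch of this position-based matching is given below. It assumes that the associated area is the ring of grid points directly around the object's coordinates, as in the (60,120) example, and that each to-be-selected map records which scene positions it covers; the field name covered_positions is an assumption for illustration, not something specified by this application.

```python
def associated_area(position, radius=1):
    """Grid points within `radius` of the object's position, excluding the position itself."""
    x, y = position
    return {(x + dx, y + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if (dx, dy) != (0, 0)}

def match_target_map(candidate_maps, area):
    """Return the first to-be-selected map whose covered positions contain the whole area."""
    for candidate in candidate_maps:
        if area <= set(candidate.get("covered_positions", [])):
            return candidate
    return None

area = associated_area((60, 120))  # the eight coordinates around (60,120)
```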
In one embodiment, acquiring a correlation area corresponding to position information from a game scene according to the position information of an object to be rendered includes:
according to the position information of the object to be rendered, acquiring a ground identifier corresponding to the position information from a game scene;
and acquiring the associated area according to the ground identification.
In this embodiment, the user terminal obtains the ground identifier corresponding to the position information according to the position information of the object to be rendered. Specifically, the user terminal requests the map information of the game scene from the server; the map information includes the position information of each point constituting the game scene, and the position information may be coordinates in the game scene, so the user terminal can obtain the position information of the object to be rendered, such as the coordinates (60,120), from the map information. The user terminal then determines the ground type at that position in the game scene, such as forest, grassland, desert or sea, and the identifier of the ground type is the ground identifier. For example, if the ground type at the coordinates (60,120) in the game scene is desert, the corresponding ground identifier is desert.
The user terminal obtains the associated area according to the obtained ground identifier corresponding to the position information, the associated area is an area connected with the position corresponding to the position information, and the specific size of the associated area can be set, for example, the associated area can be a square area with coordinates (60,120) as the center and the distance between the center and four sides being 3 unit lengths.
In this embodiment, the association area is obtained through the ground identifier corresponding to the position information, and the association area can be obtained according to the correspondence between the position information and the ground identifier, so that the correspondence between the association area and the object to be rendered is clearer, and the calculation amount can be effectively reduced.
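As a hedged illustration of this step, the sketch below looks up an assumed per-coordinate ground identifier table and builds the square associated area described above (3 grid units on each side of the object's position); GROUND_ID is an invented name standing in for the scene's map data.

```python
GROUND_ID = {(60, 120): "desert"}  # assumed ground identifier per scene coordinate

def ground_id(position):
    """Ground identifier recorded for this coordinate of the game scene, if any."""
    return GROUND_ID.get(position)

def square_associated_area(position, half_size=3):
    """Square region centred on the position, half_size grid units to each side."""
    x, y = position
    return {(x + dx, y + dy)
            for dx in range(-half_size, half_size + 1)
            for dy in range(-half_size, half_size + 1)}
```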
In one embodiment, matching a corresponding target map from a plurality of candidate maps according to the associated region includes:
according to the position information, obtaining rendering parameters of corresponding positions in the associated area;
and traversing each to-be-selected map according to the rendering parameters, and matching a corresponding target map from the to-be-selected maps, wherein a target area of the target map comprises the rendering parameters, and the target area corresponds to a connecting part of the object to be rendered and the associated area.
In this embodiment, the user terminal obtains the rendering parameters of the corresponding positions in the associated area according to the position information. Specifically, the user terminal requests the map information of the game scene from the server; the map information includes the position information of each point constituting the game scene, and the position information may be coordinates in the game scene, so the user terminal can obtain the position information of the object to be rendered, such as the coordinates (60,120). The user terminal then obtains the associated area connected with the position corresponding to that position information, where the size of the associated area can be set. The corresponding positions in the associated area may be the positions directly connected with the position of the object to be rendered; for example, if the position to be rendered is the coordinates (60,120), the corresponding positions may be the coordinates (60,119), (59,120), (61,120) and (60,121). The rendering parameters of a corresponding position may include multiple parameters, for example parameters of multiple colors combined in a certain ratio.
The user terminal traverses the target area of each to-be-selected map according to the acquired rendering parameters of these adjacent positions; when the rendering parameters included in the target area of a traversed to-be-selected map are the same as the acquired rendering parameters, the to-be-selected map in which that target area is located is the target map. Here the rendering parameters are those contained in the target area of the target map, the target area corresponds to the connecting part of the object to be rendered and the associated area, and the connecting part is the directly connected part, i.e. the adjacent positions described above.
In this embodiment, the rendering parameters of the corresponding positions in the associated areas are obtained according to the position information, the to-be-selected maps are traversed according to the rendering parameters to match the target maps, the target maps can be directly matched according to the rendering parameters, the target maps can be quickly matched from the pre-stored to-be-selected maps, the efficiency of the rendering process is improved, and the conditions of the rendering parameters of the target areas in the target maps can be visually displayed.
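The parameter-based traversal can be sketched as follows, under the assumption that each to-be-selected map stores the rendering parameters of its target area (the part that adjoins the object) as a simple mapping such as {"illumination": 20, "texture": 10}; the field name target_area_params is illustrative only.

```python
def match_by_rendering_parameters(candidate_maps, scene_params):
    """Return the to-be-selected map whose target-area rendering parameters equal
    the parameters sampled at the positions adjoining the object to be rendered."""
    for candidate in candidate_maps:
        if candidate.get("target_area_params") == scene_params:
            return candidate
    return None
```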
And S13, rendering the object to be rendered according to the target map.
In this embodiment, the user terminal renders the object to be rendered according to the target map, and may specifically obtain rendering parameters in the target map, such as the color of each pixel, and correspondingly render the object to be rendered. Specifically, the map color of the target map can be applied to each unit in the object to be rendered for rendering. Wherein each cell may be each pixel or each point constituting the object to be rendered.
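A minimal sketch of this per-unit application is shown below, assuming the target map stores a colour for each unit (pixel) of the object; unit_colours and default_colour are illustrative field names rather than terms used by this application.

```python
def apply_target_map(object_units, target_map):
    """Copy the map colour onto each unit (pixel or point) that makes up the object."""
    colours = target_map.get("unit_colours", {})
    for unit in object_units:
        unit["colour"] = colours.get(unit["id"], target_map.get("default_colour"))
```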
In this embodiment, by obtaining a plurality of preset to-be-selected maps corresponding to an object to be rendered, matching a target map from a correlation area connected to the object to be rendered, and rendering the object to be rendered according to the target map, the virtual objects in different scenes can be rendered in a differentiated manner, so that the virtual objects are prevented from being linked with the scenes unnaturally when the virtual objects are rendered by using the same rendering parameters, and the display effects of a game scene and the virtual objects can be improved.
In one embodiment, after matching the corresponding target map from the multiple candidate maps, the method further includes:
when the target map is not matched, rendering an object to be rendered according to the basic map to obtain an initial object;
and according to the rendering parameters, rendering the connection part of the initial object and the associated area.
In this embodiment, the user terminal traverses the target area of each to-be-selected map according to the acquired rendering parameters of the adjacent positions, so as to match a target map from the plurality of to-be-selected maps. When a target map is matched, the user terminal renders the object to be rendered according to the target map. When no target map is matched after traversing all the to-be-selected maps, the user terminal renders the object to be rendered according to the basic map, where the basic map is the map of the game scene, that is, the map that can serve as the basis when the whole game scene is rendered. When rendering the object to be rendered according to the basic map, the user terminal obtains, according to the position information of the object to be rendered, the position and the associated area corresponding to that position information in the basic map, and renders the object to be rendered and the associated area according to the rendering effect at that position; the object rendered in this way is the initial object. After obtaining the initial object, the user terminal re-renders the connection part between the initial object and the associated area according to the acquired rendering parameters of the adjacent positions in the game scene, where the connection part is the part directly connected with the initial object; for example, if the position of the initial object is the coordinates (60,120), the connection part may be the coordinates (60,119), (59,120), (61,120) and (60,121).
In this embodiment, when the target map is not matched according to the rendering parameters, the object to be rendered is rendered according to the basic map, and then the connection part between the rendered object and the associated region is rendered according to the rendering parameters, so that the rendering effect of the connection part between the associated region which is not matched with the target map can be ensured, the connection between the virtual object and the connection part between the associated regions which are not matched with the target map is more natural, and the display effect of the game scene is improved.
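The fallback path can be sketched as below: if no target map matched, the object is rendered from the basic (scene) map to obtain the initial object, and only the connection part is then re-rendered from the scene's rendering parameters at the adjoining positions. Every function and field name here is a stand-in for engine behaviour the application does not spell out.

```python
def render_from_basic_map(obj, basic_map):
    """Initial object: the object rendered with the basic map of the game scene."""
    return {**obj, "colour": basic_map.get("colour", "neutral")}

def render_connection_part(initial_object, adjoining_params):
    """Re-render only the connection part using the scene's rendering parameters."""
    initial_object["connection_params"] = dict(adjoining_params)
    return initial_object

def render_with_fallback(obj, target_map, basic_map, adjoining_params):
    if target_map is not None:
        return {**obj, "map_id": target_map["map_id"]}   # normal path
    initial_object = render_from_basic_map(obj, basic_map)
    return render_connection_part(initial_object, adjoining_params)
```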
In one embodiment, rendering the connection portion with the associated region in the initial object according to the rendering parameters comprises:
and performing parameter superposition on the connecting part of the initial object and the associated area according to the rendering parameters.
In this embodiment, in the process of rendering the connection part between the initial object and the associated area according to the rendering parameters, the user terminal performs parameter superposition on the connection part according to the acquired rendering parameters of the adjacent positions in the game scene. Parameter superposition specifically means that data of the same parameter type are added together, while data of different parameter types are each retained. For example, suppose the connection part between the initial object and the associated area rendered according to the scene map comprises three groups of data, namely illumination: 20, texture: 10 and color: 30, and the acquired rendering parameters comprise two groups of data, namely illumination: 10 and others: 10; after the parameters are superposed, the four groups of data are illumination: 30, texture: 10, color: 30 and others: 10, where the data type is before the colon and the value of that data type is after the colon.
In the embodiment, when the target map is not matched according to the rendering parameters, the complexity of the rendering process can be reduced by performing parameter superposition on the connecting part of the initial object and the associated area, the natural connection of the connecting part of the initial object and the associated area can be ensured, and the display effect of the game scene is improved.
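The worked example above corresponds directly to adding values of the same parameter type and keeping types that appear on only one side, as in this short sketch (illustrative only):

```python
def superpose_parameters(base, extra):
    """Add values of the same parameter type; keep types present on only one side."""
    result = dict(base)
    for key, value in extra.items():
        result[key] = result.get(key, 0) + value
    return result

connection = {"illumination": 20, "texture": 10, "color": 30}
scene_params = {"illumination": 10, "others": 10}
print(superpose_parameters(connection, scene_params))
# {'illumination': 30, 'texture': 10, 'color': 30, 'others': 10}
```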
In one embodiment, as shown in fig. 4, there is provided a virtual object rendering apparatus including:
the map obtaining module 101 is configured to obtain multiple to-be-selected maps corresponding to an object to be rendered in a game scene, where the to-be-selected maps are preset according to the object to be rendered and an associated area that may be connected to the object to be rendered.
And the map matching module 102 is configured to match corresponding target maps from multiple candidate maps according to the associated regions connected with the objects to be rendered in the game scene.
And the object rendering module 103 is used for rendering the object to be rendered according to the target map.
In one embodiment, the map obtaining module 101 is further configured to:
acquiring an image identifier of an object to be rendered;
and acquiring a plurality of to-be-selected maps corresponding to the to-be-rendered objects according to the image identifiers.
In one embodiment, the map matching module 102 is further configured to:
acquiring a relevant area corresponding to the position information from a game scene according to the position information of the object to be rendered;
and matching the corresponding target map from the multiple candidate maps according to the associated area.
In one embodiment, the map matching module 102 is further configured to:
according to the position information of the object to be rendered, acquiring a ground identifier corresponding to the position information from a game scene;
and acquiring the associated area according to the ground identification.
In one embodiment, the map matching module 102 is further configured to:
according to the position information, obtaining rendering parameters of corresponding positions in the associated area;
and traversing each to-be-selected map according to the rendering parameters, and matching a corresponding target map from the to-be-selected maps, wherein a target area of the target map comprises the rendering parameters, and the target area corresponds to a connecting part of the object to be rendered and the associated area.
In one embodiment, the map matching module 102 is further configured to:
after matching corresponding target maps from a plurality of to-be-selected maps, when the target maps are not matched, rendering an object to be rendered according to the basic maps to obtain an initial object;
and according to the rendering parameters, rendering the connection part of the initial object and the associated area.
In one embodiment, the map matching module 102 is further configured to:
and performing parameter superposition on the connecting part of the initial object and the associated area according to the rendering parameters.
In one embodiment, a computer apparatus is provided, as shown in fig. 5, comprising a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement a virtual object rendering method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform a virtual object rendering method. Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or fewer components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the virtual object rendering apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 5. The memory of the computer device may store therein the respective program modules constituting the virtual object rendering apparatus. The program modules constitute computer programs that cause a processor to execute the steps in the virtual object rendering method according to the embodiments of the present application described in the present specification.
In one embodiment, a computer-readable storage medium is provided, having stored thereon computer-executable instructions for causing a computer to perform the steps of the above-described virtual object rendering method. Here, the steps of the virtual object rendering method may be steps in the virtual object rendering method of the above embodiments.
The foregoing is a preferred embodiment of the present application, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present application, and these modifications and improvements are also regarded as falling within the protection scope of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Claims (10)

1. A virtual object rendering method, comprising:
acquiring a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene, wherein the to-be-selected maps are preset according to the objects to be rendered and associated areas which can be connected with the objects to be rendered;
matching corresponding target maps from the multiple to-be-selected maps according to the associated areas connected with the to-be-rendered objects in the game scene;
and rendering the object to be rendered according to the target map.
2. The virtual object rendering method of claim 1, wherein obtaining a plurality of candidate maps corresponding to objects to be rendered in a game scene comprises:
acquiring an image identifier of the object to be rendered;
and acquiring a plurality of to-be-selected maps corresponding to the to-be-rendered objects according to the image identifiers.
3. The virtual object rendering method according to claim 1, wherein matching a corresponding target map from the plurality of maps to be selected according to the associated region connected to the object to be rendered in the game scene comprises:
acquiring the association area corresponding to the position information from the game scene according to the position information of the object to be rendered;
and matching corresponding target maps from the multiple to-be-selected maps according to the associated areas.
4. The virtual object rendering method according to claim 3, wherein acquiring the associated region corresponding to the position information from the game scene according to the position information of the object to be rendered comprises:
according to the position information of the object to be rendered, acquiring a ground identifier corresponding to the position information from the game scene;
and acquiring the associated area according to the ground identification.
5. The virtual object rendering method of claim 3, wherein matching a corresponding target map from the plurality of candidate maps according to the associated region comprises:
acquiring rendering parameters of corresponding positions in the associated area according to the position information;
and traversing each to-be-selected map according to the rendering parameters, and matching a corresponding target map from the to-be-selected maps, wherein a target area of the target map comprises the rendering parameters, and the target area corresponds to a connecting part of the to-be-rendered object and the associated area.
6. The virtual object rendering method of claim 5, further comprising, after matching a corresponding target map from the plurality of candidate maps:
when the target map is not matched, rendering the object to be rendered according to a basic map to obtain an initial object;
and rendering the connection part of the initial object and the associated area according to the rendering parameters.
7. The virtual object rendering method of claim 6, wherein rendering the connection portion of the initial object with the associated region according to the rendering parameters comprises:
and according to the rendering parameters, performing parameter superposition on the connecting part of the initial object and the associated area.
8. A virtual object rendering apparatus, comprising:
the system comprises a map obtaining module, a map selecting module and a map processing module, wherein the map obtaining module is used for obtaining a plurality of to-be-selected maps corresponding to objects to be rendered in a game scene, and the to-be-selected maps are preset according to the objects to be rendered and associated areas which can be connected with the objects to be rendered;
the map matching module is used for matching corresponding target maps from the multiple maps to be selected according to the associated areas connected with the objects to be rendered in the game scene;
and the object rendering module is used for rendering the object to be rendered according to the target map.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor implements the virtual object rendering method according to any of claims 1 to 7 when executing the program.
10. A computer-readable storage medium, in which a computer program is stored which is adapted to be loaded and executed by a processor to cause a computer device having said processor to carry out the method of any one of claims 1 to 7.
CN202110380010.2A 2021-04-08 2021-04-08 Virtual object rendering method and device and electronic equipment Active CN113140028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110380010.2A CN113140028B (en) 2021-04-08 2021-04-08 Virtual object rendering method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110380010.2A CN113140028B (en) 2021-04-08 2021-04-08 Virtual object rendering method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113140028A true CN113140028A (en) 2021-07-20
CN113140028B CN113140028B (en) 2024-08-16

Family

ID=76811478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110380010.2A Active CN113140028B (en) 2021-04-08 2021-04-08 Virtual object rendering method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113140028B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018072470A1 (en) * 2016-10-19 2018-04-26 华为技术有限公司 Image display method, and terminal
CN108154548A (en) * 2017-12-06 2018-06-12 北京像素软件科技股份有限公司 Image rendering method and device
US20180365518A1 (en) * 2016-03-29 2018-12-20 Tencent Technology (Shenzhen) Company Limited Target object presentation method and apparatus
CN109364486A (en) * 2018-10-30 2019-02-22 网易(杭州)网络有限公司 The method and device of HDR rendering, electronic equipment, storage medium in game
CN109544663A (en) * 2018-11-09 2019-03-29 腾讯科技(深圳)有限公司 The virtual scene of application program identifies and interacts key mapping matching process and device
CN109977731A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene identification method, scene identification equipment and terminal equipment
CN110136082A (en) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 Occlusion culling method, apparatus and computer equipment
CN111228801A (en) * 2020-01-07 2020-06-05 网易(杭州)网络有限公司 Rendering method and device of game scene, storage medium and processor
CN111798554A (en) * 2020-07-24 2020-10-20 上海米哈游天命科技有限公司 Rendering parameter determination method, device, equipment and storage medium
CN111882632A (en) * 2020-07-24 2020-11-03 上海米哈游天命科技有限公司 Rendering method, device and equipment of ground surface details and storage medium
CN111882640A (en) * 2020-07-24 2020-11-03 上海米哈游天命科技有限公司 Rendering parameter determination method, device, equipment and storage medium
CN112215934A (en) * 2020-10-23 2021-01-12 网易(杭州)网络有限公司 Rendering method and device of game model, storage medium and electronic device
CN112316424A (en) * 2021-01-06 2021-02-05 腾讯科技(深圳)有限公司 Game data processing method, device and storage medium
CN112370783A (en) * 2020-12-02 2021-02-19 网易(杭州)网络有限公司 Virtual object rendering method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113140028B (en) 2024-08-16

Similar Documents

Publication Publication Date Title
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
CN113192168B (en) Game scene rendering method and device and electronic equipment
CN107638690B (en) Method, device, server and medium for realizing augmented reality
CN113256781B (en) Virtual scene rendering device, storage medium and electronic equipment
CN108389241A (en) The methods, devices and systems of textures are generated in scene of game
US20230040777A1 (en) Method and apparatus for displaying virtual landscape picture, storage medium, and electronic device
CN111815786A (en) Information display method, device, equipment and storage medium
CN113069763A (en) Game role reloading method and device and electronic equipment
CN114863014A (en) Fusion display method and device for three-dimensional model
CN113034658B (en) Method and device for generating model map
CN111097169A (en) Game image processing method, device, equipment and storage medium
CN111242838B (en) Blurred image rendering method and device, storage medium and electronic device
CN114565709A (en) Data storage management method, object rendering method and device
CN113140028B (en) Virtual object rendering method and device and electronic equipment
CN116486018A (en) Three-dimensional reconstruction method, apparatus and storage medium
JP7301453B2 (en) IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, COMPUTER PROGRAM, AND ELECTRONIC DEVICE
CN113350786A (en) Skin rendering method and device for virtual character and electronic equipment
CN113069765A (en) Game picture rendering method and device and electronic equipment
CN113350787B (en) Game role rendering method and device and electronic equipment
CN114842127A (en) Terrain rendering method and device, electronic equipment, medium and product
CN111445572B (en) Method and device for displaying virtual three-dimensional model
CN109634567B (en) Information creating method, device, terminal and storage medium
CN113694519B (en) Applique effect processing method and device, storage medium and electronic equipment
CN115239869B (en) Shadow processing method, shadow rendering method and device
CN116723303B (en) Picture projection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant