CN117815652A - Virtual object rendering method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117815652A
Authority
CN
China
Prior art keywords
space
block
virtual object
spatial
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211185429.3A
Other languages
Chinese (zh)
Inventor
张道明
朱光育
李振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211185429.3A
Publication of CN117815652A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual object rendering method, device, equipment and storage medium, belonging to the technical field of virtual environments. The method comprises the following steps: performing collision detection between a field-of-view space range and space blocks in the virtual environment; in the case that a collision exists between the field-of-view space range and a first space block, performing collision detection between the field-of-view space range and the spatial sub-blocks in the first space block, the first space block comprising at least one spatial sub-block; and in the case that a collision exists between the field-of-view space range and a first spatial sub-block, rendering the second virtual objects in the first spatial sub-block. By performing collision detection between the field-of-view space range and the space blocks in the virtual environment, and performing the second-level collision detection only when a collision exists with a first space block, the method avoids the large number of collision detections that would otherwise grow with the number of second virtual objects, effectively reducing computational complexity.

Description

Virtual object rendering method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of virtual environments, and in particular, to a method, an apparatus, a device, and a storage medium for rendering a virtual object.
Background
With the development of internet technology, the number of virtual objects in a virtual environment is increasing in order to pursue immersion in the virtual environment.
In the related art, collision detection is performed one by one between the field-of-view space range and the edge points of each virtual object in the virtual environment to judge whether the virtual object lies within the field-of-view space range, and the virtual objects within the field-of-view space range are rendered to obtain an image of the virtual environment.
However, as the number of virtual objects increases, the above process requires a large number of collision detections, placing high demands on the computing power of the computer device in practical applications. How to reduce the computational complexity is therefore a problem to be solved.
Disclosure of Invention
The application provides a virtual object rendering method, device, equipment and storage medium. The technical scheme is as follows:
according to an aspect of the present application, there is provided a method of rendering a virtual object, the method including:
performing collision detection between a field-of-view space range and space blocks in a virtual environment, the virtual environment comprising at least one space block;
in the case that a collision exists between the field-of-view space range and a first space block, performing collision detection between the field-of-view space range and spatial sub-blocks in the first space block, the first space block comprising at least one spatial sub-block;
and in the case that a collision exists between the field-of-view space range and a first spatial sub-block, rendering a second virtual object in the first spatial sub-block.
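As a concrete illustration of the two detection levels above, the following sketch models the field-of-view space range, the space blocks, and the spatial sub-blocks as axis-aligned boxes. This is an illustrative sketch only: the class and function names (`Box`, `SpaceBlock`, `visible_objects`, etc.) are hypothetical and not part of the patent, and a real engine would typically test against a camera frustum rather than a box-shaped field of view.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box: lo/hi are (x, y, z) corner tuples."""
    lo: tuple
    hi: tuple

    def collides(self, other: "Box") -> bool:
        # Two boxes overlap iff their intervals overlap on every axis.
        return all(a <= d and c <= b
                   for a, b, c, d in zip(self.lo, self.hi, other.lo, other.hi))

@dataclass
class SpaceSubBlock:
    box: Box
    objects: list   # second virtual objects contained in this sub-block

@dataclass
class SpaceBlock:
    box: Box
    sub_blocks: list

def visible_objects(view: Box, blocks: list) -> list:
    """Two-level culling: sub-blocks are tested only inside colliding blocks."""
    result = []
    for block in blocks:
        if not view.collides(block.box):      # first-level collision detection
            continue                          # whole block skipped: no sub-block tests
        for sub in block.sub_blocks:          # second-level collision detection
            if view.collides(sub.box):
                result.extend(sub.objects)    # these objects would be rendered
    return result
```

The sub-blocks of a non-colliding block are never tested at all, which is where the reduction in collision-detection count comes from.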
According to another aspect of the present application, there is provided a rendering apparatus of a virtual object, the apparatus including:
the detection module is used for performing collision detection on the visual field space range and the space blocks in the virtual environment, and the virtual environment comprises at least one space block;
the detection module is further configured to, in the case that a collision exists between the field-of-view space range and a first space block, perform collision detection between the field-of-view space range and the spatial sub-blocks in the first space block, where the first space block includes at least one spatial sub-block;
and the rendering module is used for rendering the second virtual object in the first space sub-block under the condition that collision exists between the visual field space range and the first space sub-block.
In an optional design of the present application, the positions of the spatial sub-blocks are determined according to the positions of the second virtual objects, and the spatial sub-blocks are arranged in the first spatial block in a non-closely-spaced manner.
In an optional design of the present application, the first space block includes a first virtual object and the second virtual object, and the space sub-block includes the second virtual object;
the size of the first virtual object is larger than a size threshold corresponding to the space block; the size of the second virtual object is less than or equal to the size threshold.
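The size-threshold split described above can be sketched as follows. The function and its tuple layout are hypothetical, chosen only to illustrate the idea: objects larger than the threshold attach to the space block itself (first virtual objects), while each smaller object gets its own non-closely-arranged sub-block positioned at the object (second virtual objects).

```python
def assign_objects(objects, size_threshold):
    """Split objects by size: big ones attach to the space block itself,
    small ones each get a sub-block centred at the object's position.

    `objects` is a list of (name, size, position) tuples (illustrative)."""
    block_level, sub_blocks = [], []
    for name, size, pos in objects:
        if size > size_threshold:
            block_level.append(name)          # first virtual object
        else:
            half = size / 2                   # sub-block sized to the object
            box = (tuple(p - half for p in pos), tuple(p + half for p in pos))
            sub_blocks.append((name, box))    # second virtual object
    return block_level, sub_blocks
```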
In an alternative design of the present application, the rendering module is further configured to:
rendering the first virtual object in the first spatial block in the event of a collision between the field of view spatial extent and the first spatial block.
In an alternative design of the present application, the apparatus further comprises:
and the determining module is used for determining the first virtual object and/or the second virtual object in the second space block as a hidden state under the condition that no collision exists between the visual field space range and the second space block.
In an alternative design of the present application, the rendering module is further configured to:
determining the number of second virtual objects in the first spatial sub-block in the case that a collision exists between the field-of-view space range and the first spatial sub-block and no memory space has been allocated;
allocating the memory space according to the number of the second virtual objects;
setting parameter information and index information in a parameter matrix of the second virtual object based on the memory space, where the parameter information indicates the object form of the second virtual object in the virtual environment, and the index information indicates the index relationship between the second virtual object and the first spatial sub-block;
and rendering the second virtual object based on the parameter matrix of the second virtual object.
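A minimal sketch of the lazy allocation described in these steps, assuming a plain Python list stands in for the allocated memory space and a dictionary stands in for each parameter matrix; all names (`SubBlockRenderer`, `ensure_allocated`, etc.) are hypothetical and not from the patent.

```python
class SubBlockRenderer:
    """Lazily allocates a per-object parameter buffer on the first
    collision with the field-of-view range (illustrative sketch)."""

    def __init__(self):
        self.buffer = None        # no memory space allocated yet

    def ensure_allocated(self, sub_block_id, objects):
        if self.buffer is not None:
            return                # memory space already allocated
        # Allocate memory sized to the number of second virtual objects.
        self.buffer = []
        for obj in objects:
            self.buffer.append({
                "object": obj["name"],
                # Parameter information: the object's form in the environment.
                "params": {"position": obj["pos"], "scale": obj.get("scale", 1.0)},
                # Index information: link back to the owning sub-block.
                "index": sub_block_id,
            })

    def render(self):
        # Stand-in for submitting each parameter matrix to the renderer.
        return [entry["object"] for entry in self.buffer or []]
```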
In an alternative design of the present application, the rendering module is further configured to:
in the case that there is a collision between the field of view space range and the first space sub-block and memory space has been allocated, adding the second virtual object in the second space block to a display list of the memory space;
and rendering the second virtual object in the display list based on the parameter matrix of the second virtual object.
In an alternative design of the present application, the apparatus further comprises:
the processing module is used for setting parameter information and index information for the parameter matrix of the second virtual object under the condition that the memory space does not overflow;
The processing module is further configured to add the second virtual object to a to-be-processed list and allocate a new memory space if the memory space overflows;
the processing module is further configured to multiplex the list to be processed and the display list in the new memory space;
the processing module is further used for setting parameter information and index information for a parameter matrix of the second virtual object in the to-be-processed list;
the parameter information is used for indicating the object form of the second virtual object in the virtual environment, and the index information is used for indicating the index relation between the second virtual object and the first space subblock.
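The overflow handling above can be sketched as two steps: parking overflowing objects in a to-be-processed list, then reusing both lists in the newly allocated, larger memory space. The function names and the list-based memory model are hypothetical, for illustration only.

```python
def add_with_overflow(display_list, capacity, obj, pending):
    """Add an object to the display list; on overflow, park it in a
    to-be-processed list so a larger buffer can absorb both later."""
    if len(display_list) < capacity:
        display_list.append(obj)
        return False                      # no overflow
    pending.append(obj)
    return True                           # caller should allocate new memory

def grow_and_merge(display_list, pending, new_capacity):
    """Multiplex the display list and to-be-processed list in the
    newly allocated (larger) memory space."""
    assert new_capacity >= len(display_list) + len(pending)
    merged = display_list + pending       # both lists reused in the new space
    pending.clear()
    return merged
```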
In an alternative design of the present application, the apparatus further comprises:
a processing module, configured to add a second virtual object in a second spatial sub-block to a hidden list in a case where there is no collision between the field of view spatial range and the second spatial sub-block;
the processing module is further configured to determine a second virtual object in the hidden list as a hidden state.
In an alternative design of the present application, the rendering module is further configured to:
adding a shadow drawing identifier to the second virtual object in the case that the second virtual object has a shadow, and adding the second virtual object to a shadow list;
and rendering the second virtual object and its shadow in the shadow list based on the parameter matrix of the second virtual object.
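A sketch of the shadow handling described above, assuming objects are plain dictionaries and the shadow drawing identifier is a boolean flag; both assumptions are illustrative, not from the patent.

```python
def build_shadow_list(objects):
    """Tag shadow-casting objects and collect them for a shadow pass."""
    shadow_list = []
    for obj in objects:
        if obj.get("has_shadow"):
            obj["draw_shadow"] = True     # shadow drawing identifier
            shadow_list.append(obj)
    return shadow_list
```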
In an alternative design of the present application, the detection module is further configured to:
performing collision detection between a view projection and space block projections, where the view projection is the projection of the field-of-view space range on the horizontal plane of the virtual environment, and a space block projection is the projection of a space block on the horizontal plane of the virtual environment;
and in the case that a collision exists between the view projection and a first space block projection, performing collision detection between the view projection and spatial sub-block projections, where a spatial sub-block projection is the projection of a spatial sub-block on the horizontal plane of the virtual environment.
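Projecting both shapes onto the horizontal plane reduces the 3D test to a 2D rectangle-overlap test, as sketched below (boxes are modeled as corner-tuple pairs; the helper names are hypothetical):

```python
def project_to_ground(box3d):
    """Drop the vertical axis: keep (min_x, min_z, max_x, max_z)."""
    (x0, _, z0), (x1, _, z1) = box3d
    return (x0, z0, x1, z1)

def rects_collide(r1, r2):
    """2D axis-aligned overlap test on the virtual environment's ground plane."""
    ax0, az0, ax1, az1 = r1
    bx0, bz0, bx1, bz1 = r2
    return ax0 <= bx1 and bx0 <= ax1 and az0 <= bz1 and bz0 <= az1
```

Note that two shapes far apart in height can still collide in plan view; that coarsening is the price of cutting the test from three axes to two.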
In an alternative design of the present application, the apparatus further comprises:
the processing module is configured to determine a loading space range in the virtual environment centered on the field-of-view space range, where the distance between any spatial point in the loading space range and the field-of-view space range is less than a first distance threshold;
the processing module is further configured to load a loading space block in a memory space, where the loading space block is a space block that overlaps the loading space range.
In an alternative design of the present application, the processing module is further configured to:
determining a clearing space range in the virtual environment based on the field-of-view space range, where the distance between any spatial point in the clearing space range and the field-of-view space range is greater than a second distance threshold;
and deleting a clearing space block from the memory space, where the clearing space block is a space block that overlaps the clearing space range and does not overlap the loading space range.
In an alternative design of the present application, the processing module is further configured to:
determining a cache space range in the virtual environment based on the field-of-view space range, where the distance between any spatial point in the cache space range and the field-of-view space range is greater than a third distance threshold;
and establishing a cache for a cache space block, where the cache space block is a space block that overlaps the cache space range and does not overlap the loading space range.
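The loading, clearing, and caching ranges above amount to distance bands around the field-of-view space range. The sketch below assumes the thresholds are ordered first < third < second (load < cache < clear), which the text does not state explicitly; the function and its labels are illustrative only.

```python
def classify_block(distance, load_r, clear_r, cache_r):
    """Decide a space block's memory treatment from its distance to the
    field-of-view space range (threshold ordering is an assumption)."""
    if distance < load_r:
        return "load"        # within the loading range: keep loaded in memory
    if distance > clear_r:
        return "clear"       # beyond the clearing range: delete from memory
    if distance > cache_r:
        return "cache"       # beyond the caching range: keep a cache entry
    return "keep"            # in between: leave as-is
```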
According to another aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by the processor to implement the method of rendering a virtual object as described in the above aspect.
According to another aspect of the present application, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by a processor to implement the method of rendering a virtual object as described in the above aspect.
According to another aspect of the present application, there is provided a computer program product comprising computer instructions stored in a computer readable storage medium, from which a processor reads and executes the computer instructions to implement the method of rendering a virtual object as described in the above aspects.
The technical solutions provided by the present application bring at least the following beneficial effects:
rendering virtual objects based on the result of collision detection between the field-of-view space range and the space blocks in the virtual environment reduces the complexity of deciding whether to render a virtual object; performing the second-level collision detection only when a collision exists between the field-of-view space range and a first space block avoids the large number of collision detections that would otherwise grow with the number of second virtual objects, effectively reducing computational complexity.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a virtual environment provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a spatial block provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a spatial block provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a spatial block provided by an exemplary embodiment of the present application;
FIG. 7 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of a method for rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 10 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 12 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 13 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 14 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 15 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 16 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 17 is a schematic diagram of a virtual environment provided by an exemplary embodiment of the present application;
FIG. 18 is a flowchart of a method of rendering a virtual object provided by an exemplary embodiment of the present application;
FIG. 19 is a block diagram of a virtual object rendering apparatus provided in an exemplary embodiment of the present application;
fig. 20 is a block diagram of a computer device according to an exemplary embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be noted that the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. It should be understood that, although the terms first, second, etc. may be used in this disclosure to describe various information, this information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, a first parameter may also be referred to as a second parameter, and similarly, a second parameter may also be referred to as a first parameter, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
FIG. 1 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server 120, a second terminal 130.
The first terminal 110 has installed and running on it a client 111 supporting a virtual environment, which may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client 111 may be any one of a battle royale shooting game, a virtual reality (VR) application, an augmented reality (AR) program, a three-dimensional map program, a virtual reality game, an augmented reality game, a first-person shooting game (FPS), a third-person shooting game (TPS), a multiplayer online battle arena game (MOBA), or a strategy game (SLG). In this embodiment, the client 111 is exemplified as an FPS game. The first terminal 110 is the terminal used by the first user 112, who uses it to control a first virtual character located in the virtual environment; the first virtual character may be referred to as the virtual character of the first user 112. The activities of the first virtual character include, but are not limited to: at least one of moving, jumping, teleporting, releasing skills, using props, adjusting body posture, crawling, walking, running, riding, flying, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a virtual person, such as a simulated character or a cartoon character.
The second terminal 130 has installed and running on it a client 131 supporting a virtual environment, which may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a battle royale game, a VR application, an AR program, a three-dimensional map program, a virtual reality game, an augmented reality game, an FPS, a TPS, a MOBA, or an SLG; in this embodiment, the client is exemplified as a MOBA game. The second terminal 130 is the terminal used by the second user 132, who uses it to control a second virtual character located in the virtual environment; the second virtual character may be referred to as the virtual character of the second user 132. Illustratively, the second virtual character is a virtual person, such as a simulated character or a cartoon character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Alternatively, the first virtual character and the second virtual character may belong to the same camp, the same team, the same organization, have a friend relationship, or have temporary communication rights. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, different organizations, or have hostile relationships.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may refer broadly to one of a plurality of terminals, and the second terminal 130 to another; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include: at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments there are a plurality of other terminals 140 that can access the server 120. Optionally, there are one or more terminals 140 corresponding to the developer, a development and editing platform for supporting the client of the virtual environment is installed on the terminal 140, the developer can edit and update the client on the terminal 140, and transmit the updated client installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the client installation package from the server 120 to implement the update of the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server 120 through a wireless network or a wired network.
Server 120 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for clients supporting a three-dimensional virtual environment. Optionally, the server 120 takes on primary computing work and the terminal takes on secondary computing work; alternatively, the server 120 takes on secondary computing work and the terminal takes on primary computing work; alternatively, a distributed computing architecture is used for collaborative computing between the server 120 and the terminals.
In one illustrative example, the server 120 includes a processor 122, a user account database 123, a combat service module 124, and a user-oriented input/output interface (I/O interface) 125. The processor 122 is configured to load instructions stored in the server 120 and process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store the data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the avatar of the user account, the nickname of the user account, the combat index of the user account, and the region where the user account is located; the combat service module 124 is configured to provide a plurality of combat rooms, such as 1V1, 3V3, and 5V5 combat, for users to fight in; the user-oriented I/O interface 125 is configured to establish communication and exchange data with the first terminal 110 and/or the second terminal 130 through a wireless or wired network.
In an exemplary embodiment, the virtual object rendering method provided in the embodiment of the present application is described by taking a computer device as an example of an execution subject of each step, where the computer device refers to an electronic device, such as a terminal and/or a server, that has data computing, processing, and storage capabilities.
FIG. 2 illustrates a schematic diagram of a virtual environment 300 provided in one embodiment of the present application.
There are at least three spatial blocks in the virtual environment 300: a space block 310, B space block 320, and C space block 330; it will be appreciated that there may be more blocks of space in the virtual environment 300, which are not shown in FIG. 2 for ease of viewing in this embodiment; further, for the sake of convenience of observation, the virtual object in the a-space block 310 is also not shown in this embodiment.
The field-of-view space range 340 is the field-of-view range within which the virtual space is observed through a virtual camera. Collision detection between the field-of-view space range 340 and the three space blocks shows that a collision exists between the field-of-view space range 340 and the B space block 320, and that no collision exists between the field-of-view space range 340 and the A space block 310 or the C space block 330. Note that, in this embodiment, the field-of-view space range 340 is represented as a trapezoidal prism, the three-dimensional shape obtained by extruding a trapezoid; it is understood that showing the field-of-view space range 340 as a trapezoidal prism is only an exemplary description, and the field-of-view space range 340 may be implemented as any three-dimensional shape such as a cone, a pyramid, a truncated cone, a truncated pyramid, or a cuboid, or as a combination of at least one of these shapes; the shape of the field-of-view space range is not limited in any way.
There are two non-closely-arranged spatial sub-blocks in the B space block 320: the A space sub-block 322 and the B space sub-block 324. The A space sub-block 322 contains a first virtual fence, and the B space sub-block 324 contains a first virtual tree; each is smaller in size than its space sub-block. The B space block 320 further includes a virtual house 320a, whose size is larger than that of a space sub-block.
There are four spatial sub-blocks in the C-space block 330 that are not closely packed: c-space sub-block 332, D-space sub-block 334, E-space sub-block 336, F-space sub-block 338; the C space sub-block 332 contains a virtual bird, the D space sub-block 334 contains a second virtual tree, the E space sub-block 336 contains a second virtual fence, and the F space sub-block 338 contains a third virtual fence. By way of example, the virtual objects in the virtual environment 300 are typically static virtual objects, but the presence of dynamic virtual objects is not precluded.
Because there is a collision between field of view space range 340 and B-space block 320, collision detection is performed on field of view space range 340 and a-space sub-block 322, resulting in a collision between the two, rendering the first virtual fence in a-space sub-block 322.
Because of collision between the view space range 340 and the B space block 320, collision detection is performed on the view space range 340 and the B space sub-block 324; in this embodiment, there is a collision between the two, rendering the first virtual tree in B-space sub-block 324. In another embodiment, no collision exists between the two, the first virtual tree in B-space sub-block 324 is determined to be in a hidden state, and no rendering is performed.
Virtual house 320a in B-space block 320 is rendered due to the collision between field of view space range 340 and B-space block 320.
Since there is no collision between the view space range 340 and the C space block 330, the virtual bird, the second virtual tree, the second virtual fence, and the third virtual fence in the C space block 330 are all determined to be hidden, and are not rendered. Collision detection is not required for any of the spatial sub-blocks in the view space range 340 and the C space block 330.
Next, a method of rendering a virtual object will be described by the following embodiments.
Fig. 3 illustrates a flowchart of a method for rendering a virtual object according to an exemplary embodiment of the present application. The method may be performed by a computer device. The method comprises the following steps:
Step 510: performing collision detection on a visual field space range and a space block in the virtual environment;
illustratively, the virtual environment includes at least one block of space; the space block is a closed space in the virtual environment, and the shape of the space block can be any one of three-dimensional shapes such as a cube, a cuboid, a polygonal cylinder, a sphere and the like; the shape of the different spatial blocks is generally the same, but is not intended to exclude different situations. Further, the spatial blocks in the virtual space may be densely arranged or non-densely arranged, and the arrangement manner of the spatial blocks in the virtual space is not limited in this embodiment.
Further, the space blocks in this embodiment are only used for dividing the virtual environment; when the virtual environment is observed through the virtual camera, the space blocks themselves cannot be directly observed. The data of the virtual objects in the same space block may be stored in the same file or in different files; this embodiment places no limitation on the storage manner.
By way of example, collision detection between the field of view spatial range and a spatial block in the virtual environment determines whether an overlap region exists between the two. In the case of a collision, an overlap region exists between the field of view spatial range and the spatial block; in the absence of a collision, no overlap region exists between them.
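The overlap test described above can be sketched as follows, approximating both the field of view spatial range and the spatial block as axis-aligned bounding boxes (a simplifying assumption of this sketch; the application does not mandate a particular collision-detection algorithm):

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Return True when two axis-aligned boxes share an overlap region.

    Each argument is an (x, y, z) tuple.  A 'collision' in the sense of
    this method is simply a non-empty intersection of the two boxes.
    """
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

# A field of view box colliding with one space block but not another:
view_min, view_max = (0, 0, 0), (10, 10, 10)
print(aabb_overlap(view_min, view_max, (5, 5, 5), (20, 20, 20)))   # True
print(aabb_overlap(view_min, view_max, (11, 0, 0), (20, 10, 10)))  # False
```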
Step 520: under the condition that collision exists between the visual field space range and the first space block, collision detection is carried out on the spatial sub-blocks in the visual field space range and the first space block;
illustratively, the first spatial block is a spatial block that collides with the field of view spatial range, and the number of first spatial blocks may be one or more. It will be appreciated that the first spatial blocks are typically only some of the spatial blocks in the virtual environment, although the case in which every spatial block in the virtual environment is a first spatial block is not excluded.
The first spatial block includes at least one spatial sub-block; the shape of the spatial sub-blocks and the shape of the spatial blocks may be the same or different.
Step 530: rendering a second virtual object in the first spatial sub-block in the presence of a collision between the field of view spatial extent and the first spatial sub-block;
illustratively, the first spatial sub-block is a spatial sub-block that collides with the field of view spatial range, and the number of the first spatial sub-blocks may be one or more.
The second virtual object in the first spatial sub-block is located entirely within the first spatial sub-block, also referred to as the first spatial sub-block comprising the second virtual object. Illustratively, the second virtual object is rendered and the second virtual object in the virtual environment is displayed.
In summary, according to the method provided by this embodiment, collision detection is performed between the field of view spatial range and the spatial blocks in the virtual environment, and whether to render a virtual object is determined based on the collision result for the spatial block, which reduces the complexity of deciding whether to render the virtual object. By performing the second-level collision detection only when a collision exists between the field of view spatial range and the first spatial block, the excessive number of collision detections that would otherwise grow with the number of second virtual objects is avoided, and the computational complexity is effectively reduced.
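Steps 510 through 530 can be summarized as a two-level culling pass. The sketch below is illustrative only; all names are assumptions, and `overlap` stands in for whatever collision test the implementation uses:

```python
def overlap(a, b):
    # a and b are ((xmin, ymin, zmin), (xmax, ymax, zmax)) boxes
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def cull_two_level(view_box, space_blocks):
    """Steps 510-530 as one pass: test each space block against the view
    box, and only descend into a block's sub-blocks when that block
    collides.  `space_blocks` is a list of (block_box, sub_blocks) pairs,
    each sub-block a (sub_box, objects) pair.  Returns the objects to
    render and the number of collision tests performed.
    """
    to_render, tests = [], 0
    for block_box, sub_blocks in space_blocks:
        tests += 1
        if not overlap(view_box, block_box):
            continue  # second space block: its sub-blocks are never tested
        for sub_box, objects in sub_blocks:
            tests += 1
            if overlap(view_box, sub_box):  # first spatial sub-block
                to_render.extend(objects)
    return to_render, tests

view = ((0, 0, 0), (10, 10, 10))
blocks = [
    (((0, 0, 0), (15, 15, 15)),
     [(((1, 1, 1), (2, 2, 2)), ["tree"]),
      (((11, 11, 11), (12, 12, 12)), ["fence"])]),
    (((20, 20, 20), (30, 30, 30)),
     [(((21, 21, 21), (22, 22, 22)), ["bird"])]),
]
print(cull_two_level(view, blocks))  # (['tree'], 4)
```

Only four tests run here even though three sub-blocks exist, because the sub-blocks of the non-colliding block are skipped entirely.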
The spatial extent of the field of view in the embodiment shown in fig. 3 is described; illustratively, the field of view spatial range is the range in which the virtual environment is observed by the camera model.
Optionally, the camera model automatically follows the virtual character in the virtual environment; that is, when the position of the virtual character in the virtual environment changes, the camera model changes along with it, and the camera model always stays within a preset distance range of the virtual character. Optionally, the relative position of the camera model and the virtual character does not change during the automatic following.
The camera model is a three-dimensional model located around the virtual character in the virtual environment. When a first-person perspective is adopted, the camera model is located near or at the head of the virtual character. When a third-person perspective is adopted, the camera model may be located behind the virtual character and bound to it, or at any position a preset distance from the virtual character; through the camera model, the virtual character in the virtual environment can be observed from different angles. Optionally, the perspectives include other perspectives besides the first-person and third-person perspectives, such as a top-down perspective; when the top-down perspective is adopted, the camera model may be located above the head of the virtual character, and the top-down perspective observes the virtual environment from the air. Optionally, the camera model is not actually displayed in the virtual environment; that is, the camera model does not appear in the virtual environment displayed on the user interface.
Taking the case where the camera model is located at any position a preset distance from the virtual character as an example: optionally, one virtual character corresponds to one camera model, and the camera model may rotate with the virtual character as the rotation center. For example, the camera model rotates about any point of the virtual character; during the rotation the camera model not only turns in angle but also shifts in displacement, while the distance between the camera model and the rotation center remains unchanged. That is, the camera model moves on the surface of a sphere whose center is the rotation center, where "any point of the virtual character" may be any point of the head, the torso, or the periphery of the virtual character, which is not limited in the embodiments of this application. Optionally, when the camera model observes the virtual character, the center of the camera model's viewing angle points from the point on the sphere surface where the camera model is located toward the sphere center.
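The sphere-surface rotation described above can be expressed with spherical coordinates: the yaw and pitch angles change while the radius stays fixed, so the camera model stays on the sphere around the rotation center. A minimal sketch (the function and parameter names are illustrative, not from this application):

```python
import math

def orbit_camera(center, radius, yaw_deg, pitch_deg):
    """Place the camera model on the surface of a sphere of the given
    radius around the rotation center.  Rotating changes the yaw and
    pitch angles while the distance to the center stays constant; the
    view axis then points from the camera position toward the sphere
    center, matching the description above.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = center[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = center[1] + radius * math.sin(pitch)
    z = center[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# The distance to the rotation center is the same for any pair of angles:
pos = orbit_camera((1.0, 2.0, 3.0), 5.0, 40.0, 25.0)
print(round(math.dist(pos, (1.0, 2.0, 3.0)), 6))  # 5.0
```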
Optionally, the camera model may also observe the virtual character at a preset angle in different directions of the virtual character.
It should be noted that in the embodiment shown in fig. 3, only two-level spatial structures of the spatial block and the spatial sub-block are shown, and in one implementation, the virtual environment may be divided into more levels of spatial structures. Taking a three-layer space structure as an example, the virtual environment comprises at least one first-level space block, the first-level space block comprises at least one second-level space block, and the second-level space block comprises at least one third-level space block. In the three-level spatial structure, any two adjacent level spatial structure may be referred to as a spatial block and a spatial sub-block, for example: for the first-level spatial block and the second-level spatial block, the second-level spatial block is called a spatial sub-block, and the first-level spatial block is called a spatial block.
Optionally, the position of the spatial sub-block is determined according to the position of the second virtual object. Illustratively, in different spatial blocks, the relative positions of the spatial sub-blocks in the spatial blocks are different due to the different positions of the second virtual objects. Referring to fig. 2 above, the relative positions of the spatial sub-blocks in different spatial blocks are different. Optionally, the spatial sub-blocks are arranged in a non-tiled arrangement in the spatial block. Illustratively, there is a void space between at least two adjacent spatial sub-blocks, and at least one spatial point in a spatial block does not belong to any one spatial sub-block. Referring to the spatial blocks shown in fig. 2 above, the spatial sub-blocks are all arranged in a non-tiled manner.
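One way to realize "the position of the spatial sub-block is determined according to the position of the second virtual object" is to derive each sub-block's bounds from the objects it must contain, which naturally yields the non-tiled arrangement described above. A hypothetical sketch (neither the function name nor the `padding` margin comes from this application):

```python
def sub_block_bounds(object_positions, padding=0.0):
    """Derive a spatial sub-block's bounds from the positions of the
    second virtual objects it must contain, so that sub-blocks in
    different space blocks end up at different relative positions.
    `padding` is an illustrative extra margin.
    """
    mins = tuple(min(p[i] for p in object_positions) - padding for i in range(3))
    maxs = tuple(max(p[i] for p in object_positions) + padding for i in range(3))
    return mins, maxs

print(sub_block_bounds([(1, 2, 3), (4, 0, 6)]))  # ((1.0, 0.0, 3.0), (4.0, 2.0, 6.0))
```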
Taking the three-layer spatial structure above as an example, the space blocks and the virtual objects in a virtual environment are described below.
Fig. 4 shows a schematic diagram of a space block provided in an embodiment of the present application. Illustratively, the first-level space block 602 includes a virtual house 602a, where a dashed line on the periphery of the virtual house 602a is used to indicate the size of the virtual house, and for convenience of observation, the schematic diagram shown in fig. 4 is a front view of the space block.
Fig. 5 shows a schematic diagram of a space block provided in an embodiment of the present application. Illustratively, the first-level space block 612 includes a second-level space block 614, and the second-level space block 614 includes a virtual tree 614a; it is appreciated that the size of the virtual tree 614a is smaller than the size of the second-level space block 614.
Fig. 6 shows a schematic diagram of a space block provided in an embodiment of the present application. Illustratively, the first-level space block 622 includes a second-level space block 624 therein, and the second-level space block 624 includes a third-level space block 626 therein; third-level spatial block 626 includes virtual grass 626a; it is appreciated that the size of virtual grass 626a is smaller than the size of third level spatial block 626.
In the three-layer space structure, the sizes of the first-level space block, the second-level space block, and the third-level space block decrease in turn; for example, the side length of the first-level space block is 1024 meters, the side length of the second-level space block is 128 meters, and the side length of the third-level space block is 16 meters. Similarly, virtual objects in a space block are classified into large objects, medium objects, and small objects according to size. For example: a large object has a side length greater than 64 meters; a medium object has a side length greater than 8 meters and less than or equal to 64 meters; a small object has a side length less than or equal to 8 meters.
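The size classification above can be sketched as a simple threshold function (the 64-meter and 8-meter values are the example thresholds given here, not fixed requirements of the method):

```python
def classify_object(side_length_m):
    """Classify a virtual object by side length using the example
    thresholds above: large > 64 m, 8 m < medium <= 64 m, small <= 8 m."""
    if side_length_m > 64:
        return "large"
    if side_length_m > 8:
        return "medium"
    return "small"

print(classify_object(100), classify_object(20), classify_object(3))
# large medium small
```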
Optionally, for any two adjacent levels of the spatial structure, the space block includes a first virtual object and a second virtual object, while the spatial sub-block includes only the second virtual object. The size of the first virtual object is larger than the size threshold corresponding to the space block; the size of the second virtual object is smaller than or equal to that size threshold. The size threshold may be the size of the spatial sub-block, or may be independent of the size of the spatial sub-block. Taking the first-level space block above as the space block, the first virtual object is the virtual house shown in fig. 4, and the second virtual object is the virtual tree shown in fig. 5.
In one implementation, the number of virtual objects in a space block or a spatial sub-block may be one or more. A correspondence exists between the size threshold and the space block, but the size threshold is not necessarily determined based on the size of the space block. For example, the size threshold may be preset, independent of the size of the space block; alternatively, the size threshold may be the product of the size of the space block and a scaling factor, in which case it is determined based on the size of the space block. For any two adjacent levels of the spatial structure, the sizes of the spatial sub-blocks are typically the same, and the size thresholds corresponding to the space blocks are also typically the same, although differing cases are not excluded.
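The two ways of obtaining a size threshold described above, a preset value or the product of the space block size and a scaling factor, can be sketched as (argument names are assumptions of this sketch):

```python
def size_threshold(block_side_m, scale_factor=None, preset=None):
    """Return a space block's size threshold in one of the two ways
    described above: a preset value independent of the block, or the
    product of the block side length and a scaling factor."""
    if preset is not None:
        return preset
    return block_side_m * scale_factor

print(size_threshold(1024, scale_factor=0.0625))  # 64.0
print(size_threshold(1024, preset=64))            # 64
```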
Fig. 7 shows a flowchart of a virtual object rendering method according to an exemplary embodiment of the present application. The method may be performed by a computer device. I.e. on the basis of the embodiment shown in fig. 3, further comprises a step 542:
step 542: rendering a first virtual object in a first space block under the condition that collision exists between the visual field space range and the first space block;
illustratively, the first space block is a space block that collides with the field of view spatial range, and the first virtual object is an object in the first space block; a correspondence exists between the first virtual object and the first space block. Further, the size of the first virtual object is larger than the size threshold corresponding to the space block, so the first virtual object has a correspondence only with the first space block, and no correspondence exists between the first virtual object and any spatial sub-block in the first space block. It should be noted that this embodiment only defines the correspondence between the first virtual object and the spatial sub-blocks; it does not imply that no overlap region exists between the first virtual object and a spatial sub-block. That is, an overlap region may or may not exist between the first virtual object and a spatial sub-block.
In this embodiment, in the case that a collision exists between the field of view spatial range and the first space block, the first virtual object in the first space block is rendered. That is, the first virtual object is rendered without performing collision detection between the field of view spatial range and the spatial sub-blocks, regardless of whether a collision exists between the field of view spatial range and any spatial sub-block.
In summary, according to the method provided by the embodiment, the virtual object is determined to be rendered based on the collision result of the space block by performing collision detection on the view space range and the space block in the virtual environment, so that the complexity of judging whether to render the virtual object is reduced; by rendering the first virtual object in the presence of a collision between the field of view spatial extent and the first spatial block, repeated collision detection for the first virtual object is avoided.
Fig. 8 is a flowchart illustrating a method for rendering a virtual object according to an exemplary embodiment of the present application. The method may be performed by a computer device. I.e. on the basis of the embodiment shown in fig. 3, further comprises a step 544:
step 544: determining the first virtual object and/or the second virtual object in the second space block as a hidden state in the case that no collision exists between the field of view space range and the second space block;
Illustratively, the second space block is a space block that does not collide with the field of view spatial range, and the number of second space blocks may be one or more. At least one second space block includes a first virtual object and/or a second virtual object; the size of the first virtual object is larger than the size threshold corresponding to the space block, and the size of the second virtual object is smaller than or equal to the size threshold.
In the case where no collision exists between the field of view spatial range and the second space block, there is no need to perform collision detection between the field of view spatial range and the spatial sub-blocks in the second space block, and all virtual objects in the second space block, that is, the first virtual object and/or the second virtual object, are determined to be in a hidden state. It should be further noted that the first virtual object and/or the second virtual object constitute all virtual objects in the second space block. For example, in the case where only the first virtual object exists in the second space block, the first virtual object is determined to be in a hidden state; in the case where only the second virtual object exists in the second space block, the second virtual object is determined to be in a hidden state.
In summary, according to the method provided by this embodiment, collision detection is performed between the field of view spatial range and the spatial blocks in the virtual environment, and whether to render a virtual object is determined based on the collision result for the spatial block, which reduces the complexity of deciding whether to render the virtual object. By determining all virtual objects in the second space block to be in a hidden state when no collision exists between the field of view spatial range and the second space block, collision detection on the spatial sub-blocks of the second space block is avoided.
Fig. 9 shows a flowchart of a method for rendering a virtual object according to an embodiment of the present application. The method may be performed by a computer device.
Step 602: changing the visual field space range;
the field of view spatial extent of the virtual environment viewed by the camera is changed by at least one of moving the camera position, changing the camera height, and adjusting the camera angle.
Step 604: judging whether the space of the large-size data block and the space range of the visual field collide or not;
the large-size data block corresponds to a space in the virtual environment, and whether collision occurs is judged by collision detection of the visual field space range in the virtual environment and the space of the large-size data block. For example, if the determination is yes, step 610 is performed; otherwise, step 606 is performed.
Step 606: judging whether the display state of the large-size data block changes or not;
for example, in the case where there is no collision between the space of the large-size data block and the field-of-view space, it is determined whether the display state of the large-size data block is changed.
A change in the display state indicates that the display state of the large-size data block before the change of the field of view spatial range differs from its display state after the change.
For example, if the determination is yes, step 608 is executed; otherwise, virtual object one is not processed.
Step 608: determining virtual object one as a hidden state;
illustratively, in the case where the display state of the large-size data block has changed, virtual object one is determined to be in a hidden state. Virtual object one refers to all virtual objects in the large-size data block.
Step 610: judging whether the space of the middle-size data block and the space range of the visual field collide or not;
in an exemplary case where there is a collision between the space of the large-size data block and the visual field space, it is determined whether or not the space of the medium-size data block and the visual field space collide.
The medium-size data block corresponds to a space in the virtual environment, and whether collision occurs is judged by collision detection of the visual field space range in the virtual environment and the space of the medium-size data block. Illustratively, if the determination is yes, step 616 is performed; otherwise, step 612 is performed.
Step 612: judging whether the display state of the data block with the middle size is changed or not;
for example, in the case where there is no collision between the space of the middle-size data block and the field of view space, it is determined whether the display state of the middle-size data block is changed.
A change in the display state indicates that the display state of the medium-size data block before the change of the field of view spatial range differs from its display state after the change.
For example, if the determination is yes, step 614 is performed; otherwise, virtual object two is not processed.
Step 614: determining virtual object two as a hidden state;
illustratively, in the case where the display state of the medium-size data block has changed, virtual object two is determined to be in a hidden state. Virtual object two refers to all virtual objects in the medium-size data block.
Step 616: judging whether the space of the small-size data block and the space range of the visual field collide or not;
in an exemplary case where there is a collision between the space of the medium-sized data block and the field of view space, it is determined whether the space of the small-sized data block and the field of view space collide.
The small-size data block corresponds to a space in the virtual environment, and whether collision occurs is judged by collision detection of the visual field space range in the virtual environment and the space of the small-size data block. For example, if the determination is yes, step 620 is performed; otherwise, step 618 is performed.
Step 618: determining the virtual object III as a hidden state;
For example, in the case where there is no collision between the space of the small-sized data block and the field-of-view space, the virtual object three is determined to be in a hidden state. The third virtual object is a virtual object in the small-size data block, and the size of the third virtual object is smaller than or equal to the size of the small-size data block.
Step 620: determining the third virtual object as a display state, and rendering the third virtual object;
for example, in the case that there is a collision between the space of the small-sized data block and the field of view space, the virtual object three is determined to be in a display state, and rendered.
For example, in the case that the number of virtual objects three exceeds a number threshold, or reusable resources are adopted, rendering is performed in a GPU instancing (GPUInstance) mode; in the case that the number of virtual objects three does not exceed the number threshold, or non-reusable resources are adopted, rendering is performed in a game object (GameObject) mode. The number threshold is preset.
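The choice between the two rendering modes can be sketched as follows. The threshold value is illustrative (the patent only says it is preset), and the first branch is given priority when the two conditions overlap:

```python
NUMBER_THRESHOLD = 50  # preset; the value is chosen only for illustration

def choose_render_path(object_count, reusable_resources):
    """Pick the rendering mode for virtual object three as described
    above: GPU instancing when the count exceeds the threshold or the
    objects reuse the same resources, per-game-object rendering
    otherwise."""
    if object_count > NUMBER_THRESHOLD or reusable_resources:
        return "gpu_instancing"
    return "game_object"

print(choose_render_path(200, False))  # gpu_instancing
print(choose_render_path(10, False))   # game_object
```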
In summary, according to the method provided by the embodiment, the virtual object is determined to be rendered based on the collision result of the space block by performing collision detection on the view space range and the space block in the virtual environment, so that the complexity of judging whether to render the virtual object is reduced; by performing secondary collision detection only under the condition that collision exists between the visual field space range and the first space block, the problem of multiple collision detection times caused by the increase of the number of the second virtual objects is avoided, and the calculation complexity is effectively reduced.
Fig. 10 is a flowchart illustrating a method for rendering a virtual object according to an exemplary embodiment of the present application. The method may be performed by a computer device. That is, in the embodiment shown in fig. 3, step 530 may be implemented as step 531, step 532, step 533, and step 534:
step 531: under the condition that collision exists between the visual field space range and the first space sub-block and no memory space is allocated, determining the number of second virtual objects in the first space sub-block;
the memory space is, for example, a memory space for processing the parameter matrices of the second virtual objects. In one example, the unallocated case typically occurs when first entering a virtual environment or when switching to a new virtual environment, although other scenarios in which memory space is unallocated are not excluded.
Step 532: according to the number of the second virtual objects, memory space is allocated;
the second virtual objects are respectively provided with a parameter matrix, the space size required for processing the parameter matrix is determined according to the number of the second virtual objects, and the allocated memory space is larger than or equal to the space size required for processing the parameter matrix.
Step 533: setting parameter information and index information for a parameter matrix of the second virtual object based on the memory space;
illustratively, the parameter matrix includes parameter information and index information. The parameter information is used to indicate the object shape of a second virtual object in the virtual environment, such as at least one of the coordinate position, the rotation angle, and the scaling of the second virtual object in the virtual environment. The index information is used to indicate the index relation between the second virtual object and the first spatial sub-block; further, the index information may also indicate the relation between the second virtual object and at least one of the space block and the spatial sub-block. In the case where a multi-level spatial structure exists in the virtual environment, such as the three-level spatial structure shown above, the index information may record the index relation between the second virtual object and spatial blocks at more levels. Illustratively, setting the parameter matrix converts transform data into matrix (Matrix4x4) data.
Step 534: rendering the second virtual object based on the parameter matrix of the second virtual object;
illustratively, the second virtual object is rendered according to the parameter information and the index information in the parameter matrix of the second virtual object, and the second virtual object in the virtual environment is displayed.
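Steps 533 and 534 can be illustrated with a sketch that converts transform data into a 4x4 matrix and pairs it with index information. The rotation is restricted to yaw to keep the example short, and all field names are assumptions of this sketch:

```python
import math

def trs_matrix(position, yaw_deg, scale):
    """Convert transform data into 4x4 matrix data (step 533).  Only a
    yaw rotation is modeled; a full implementation would compose all
    three rotation axes."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    sx, sy, sz = scale
    tx, ty, tz = position
    return [
        [c * sx,  0.0, s * sz, tx],
        [0.0,     sy,  0.0,    ty],
        [-s * sx, 0.0, c * sz, tz],
        [0.0,     0.0, 0.0,    1.0],
    ]

def matrix_entry(obj_id, block_id, sub_block_id, position, yaw_deg, scale):
    """One parameter-matrix record holding both the parameter
    information (the matrix) and the index information (which space
    block and sub-block the object belongs to)."""
    return {
        "object": obj_id,
        "index": (block_id, sub_block_id),
        "matrix": trs_matrix(position, yaw_deg, scale),
    }

entry = matrix_entry("tree_7", "block_B", "sub_324", (1.0, 2.0, 3.0), 0.0, (1.0, 1.0, 1.0))
print(entry["matrix"][0][3], entry["index"])  # 1.0 ('block_B', 'sub_324')
```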
In summary, in the method provided in this embodiment, the parameter matrix of the second virtual object is set by allocating the memory space, so that the manner of rendering the second virtual object is perfected; by performing secondary collision detection only under the condition that collision exists between the visual field space range and the first space block, the problem of multiple collision detection times caused by the increase of the number of the second virtual objects is avoided, and the calculation complexity is effectively reduced.
Fig. 11 is a flowchart illustrating a method for rendering a virtual object according to an exemplary embodiment of the present application. The method may be performed by a computer device. That is, in the embodiment shown in fig. 3, step 530 may be implemented as steps 536, 538:
step 536: in the case that a collision exists between the field of view spatial range and the first spatial sub-block and the memory space has been allocated, adding the second virtual object in the first spatial sub-block to a display list of the memory space;
the memory space is, for example, a memory space for processing the parameter matrices of the second virtual objects. In one example, the allocated case typically occurs after an image of the observed virtual environment has already been displayed, although other scenarios in which memory space has been allocated are not excluded. Illustratively, the rendering process in this embodiment reuses the already allocated memory space, and there is no need to reallocate memory space each time rendering is performed.
Optionally, in an optional design of this embodiment, the method further includes the following steps:
adding a second virtual object in a second spatial sub-block to the hidden list in the absence of a collision between the field of view spatial extent and the second spatial sub-block;
and determining the second virtual object in the hidden list as a hidden state.
Illustratively, the second spatial sub-block is a spatial sub-block of the first space block that does not collide with the field of view spatial range, and the number of second spatial sub-blocks may be one or more. It is understood that the second spatial sub-blocks may be some or all of the spatial sub-blocks in the first space block. Since no collision exists between the field of view spatial range and the second spatial sub-block, no rendering processing is required for the objects in the second spatial sub-block: the second virtual object in the second spatial sub-block is added to the hidden list and determined to be in a hidden state. Like the display list, the hidden list is a list in the memory space. In one implementation, the display list and the hidden list are located in the same queue, with the display list adjusted to be in front of the hidden list. If the number of second virtual objects in the display list is i, the first i virtual objects in the queue are the objects in the display list and undergo rendering processing; the objects after the i-th virtual object in the queue are the objects in the hidden list and are determined to be in a hidden state.
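The single-queue arrangement described above, with the display list placed in front of the hidden list, can be sketched as a stable partition (names are illustrative):

```python
def partition_queue(queue, visible_ids):
    """Reorder a single queue so the display-list entries come before
    the hidden-list entries.  Returns the reordered queue and i, the
    count of objects to render: the first i entries are rendered, the
    rest are treated as hidden."""
    display = [obj for obj in queue if obj in visible_ids]
    hidden = [obj for obj in queue if obj not in visible_ids]
    return display + hidden, len(display)

queue, i = partition_queue(["tree", "fence", "grass"], {"tree", "grass"})
print(queue, i)  # ['tree', 'grass', 'fence'] 2
```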
Optionally, in another optional design of this embodiment, the method further includes the following steps:
adding a shadow drawing identifier for the second virtual object and adding the second virtual object to a shadow list under the condition that the second virtual object has shadows;
and rendering the second virtual object and the shadow of the second virtual object in the shadow list based on the parameter matrix of the second virtual object.
Illustratively, the second virtual objects that have shadows are some or all of the second virtual objects in the display list; a second virtual object with a shadow is added to the shadow list. In one implementation, the display list and the shadow list are located in the same queue, with the shadow list adjusted to be in front of the display list. When a second virtual object with a shadow is drawn, its shadow is drawn in a separate map, so the shadow does not need to be drawn repeatedly: the second virtual object is rendered every frame, while the shadow is rendered into a separate map to avoid rendering the shadow multiple times.
Step 538: and rendering the second virtual object in the display list based on the parameter matrix of the second virtual object.
Illustratively, the second virtual object is rendered according to the parameter information and the index information in the parameter matrix of the second virtual object, and the second virtual object in the virtual environment is displayed. The display list includes a second virtual object that needs to be rendered.
In summary, according to the method provided by the embodiment, the parameter matrix of the second virtual object is set by using the allocated memory space, so that the manner of rendering the second virtual object is expanded, and the incremental refreshing of the parameter matrix is realized; by performing secondary collision detection only under the condition that collision exists between the visual field space range and the first space block, the problem of multiple collision detection times caused by the increase of the number of the second virtual objects is avoided, and the calculation complexity is effectively reduced.
Next, a description is given of a parameter matrix of the second virtual object:
in one example, the data structure of the parameter matrix of the second virtual object includes at least the fields shown in the accompanying figure.
it should be noted that the number of instances per GPU instancing batch is at most 1023; when the number exceeds 1023, DrawMeshInstanced needs to be called again to draw the excess, so as to complete the rendering processing. A 2-dimensional array of matrices is therefore used: the first dimension records the batch that needs to be rendered, and the second dimension records the matrices that the current batch needs to process.
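The per-batch limit of 1023 instances leads naturally to the 2-dimensional matrix array described above. A sketch of the batching (the 1023 limit matches Unity's Graphics.DrawMeshInstanced, which this example assumes as the rendering call):

```python
MAX_INSTANCES_PER_BATCH = 1023  # per-call limit of Unity's DrawMeshInstanced

def batch_matrices(matrices):
    """Split a flat list of instance matrices into batches of at most
    1023, mirroring the 2-dimensional array described above: the first
    dimension records the batch to render, the second the matrices that
    batch must process."""
    return [matrices[i:i + MAX_INSTANCES_PER_BATCH]
            for i in range(0, len(matrices), MAX_INSTANCES_PER_BATCH)]

batches = batch_matrices([None] * 2500)  # placeholders standing in for matrices
print([len(b) for b in batches])  # [1023, 1023, 454]
```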
Next, preprocessing content in the process of rendering shadows of virtual objects is described by way of example.
Fig. 12 shows a flowchart of a method for rendering a virtual object according to an embodiment of the present application. The method may be performed by a computer device.
Step 902: judging whether the virtual object needs to draw shadows;
illustratively, determining whether the virtual object requires shading is based on at least one of the following: the position of the virtual object in the virtual environment, illumination information in the virtual environment, and rendering settings of the virtual environment.
Shadows are created by the virtual object being illuminated by light in the virtual environment, and shadows of the virtual object are typically displayed on the virtual object or on the peripheral side of the virtual object.
Step 904: adding a drawing shadow mark for the virtual object;
for example, in the case where the virtual object needs to draw a shadow, a shadow-drawing identification is added to the virtual object, e.g. a boolean flag such as needDrawShadow set to true.
Step 906: determining the number of virtual objects needing to draw shadows;
illustratively, each time a shadow-drawing identification is added, the count of virtual objects that need to draw shadows is increased by one, so that the number of such virtual objects is obtained by counting.
Step 908: adding a virtual object needing to draw shadows to a shadow list;
illustratively, the shadow list is a dedicated list structure (named, e.g., lstShadowModel in one implementation). Illustratively, the steps in this embodiment are performed before refreshing the parameter matrix of the virtual object, that is, before shadow rendering of the virtual object. The position of the shadow list is adjusted to be in front of the display list. If the number of virtual objects that need to draw shadows is j, the first j virtual objects in the queue containing the shadow list are the objects in the shadow list; rendering processing is then performed on these virtual objects and on their shadows.
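The front-ordering described above can be sketched as follows. This is a hypothetical sketch; the function name and the shadow predicate are assumptions, and only the property that the first j entries of the queue are exactly the shadow-casting objects is modeled.

```python
# Hypothetical sketch: stable partition so that shadow-casting objects
# occupy the first j positions of the render queue, as described above.

def order_shadow_first(objects, needs_shadow):
    """Return (queue, j): shadow casters first, then the rest, order kept."""
    shadow_list = [o for o in objects if needs_shadow(o)]
    rest = [o for o in objects if not needs_shadow(o)]
    return shadow_list + rest, len(shadow_list)

queue, j = order_shadow_first(["a", "b", "c", "d"],
                              needs_shadow=lambda o: o in ("b", "d"))
print(queue, j)  # first j = 2 entries are the shadow list
```

Rendering then processes the first j entries together with their shadows, and the remaining entries without shadow drawing.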
In summary, in the method provided in this embodiment, virtual objects having shadows are distinguished by the shadow list, and repeated drawing of shadows is avoided by performing rendering processing on the virtual objects having shadows first.
Fig. 13 shows a flowchart of a virtual object rendering method according to an exemplary embodiment of the present application. The method may be performed by a computer device. That is, on the basis of the embodiment shown in fig. 11, steps 537a to 537d are further included:
step 537a: setting parameter information and index information for a parameter matrix of the second virtual object under the condition that the memory space does not overflow;
It should be noted that, in the embodiment shown in fig. 11, the second virtual object may be a virtual object that has completed the parameter matrix setting, or may be a virtual object that has not performed the parameter matrix setting; in the present embodiment, description is made of a parameter matrix setting process of a virtual object for which parameter matrix setting is not performed.
For example, since the second virtual object is added to the display list, it is necessary to determine whether the memory space of the display list overflows. And under the condition that no overflow is generated, setting matrix parameters of the second virtual object directly. The parameter matrix of the second virtual object comprises parameter information and index information.
Step 537b: under the condition that the memory space overflows, adding a second virtual object to the to-be-processed list, and distributing a new memory space;
illustratively, in the event of an overflow, new memory space needs to be allocated. Since the memory space overflows, the second virtual object needs to be added to the to-be-processed list, and matrix parameter setting is performed in the new memory space. For example, since only the identification information of the second virtual object needs to be determined in the to-be-processed list, matrix parameter setting is not needed, the space occupation of the identification information is small, and overflow is not generated. In another implementation, the list to be processed does not belong to memory space and does not cause overflow.
Step 537c: multiplexing a list to be processed and a display list in a new memory space;
multiplexing the list to be processed and the display list based on the new memory space, namely, all data in the list to be processed and the display list can be obtained in the new memory space.
Step 537d: setting parameter information and index information for a parameter matrix of a second virtual object in the list to be processed;
illustratively, after the new memory space is allocated, setting a parameter matrix for the second virtual object in the to-be-processed list based on the new memory space.
For example, the parameter information is used to indicate an object morphology of the second virtual object in the virtual environment, and the index information is used to indicate an index relationship between the second virtual object and the first spatial sub-block. For detailed description of the parameter information and the index information, please refer to the parameter matrix of the second virtual object in the above description, and the detailed description is omitted in this embodiment.
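Steps 537a to 537d can be sketched as follows. This is a minimal sketch under assumed names (MatrixStore, set_matrix, grow_and_flush are not from the patent); it models only the decision between direct setting (no overflow) and deferring to a to-be-processed list, followed by allocating a larger space, reusing the existing data, and flushing the pending objects.

```python
# Hypothetical sketch of steps 537a-537d: set the matrix directly while
# the allocated space has room; on overflow, record the object in a
# pending (to-be-processed) list, then allocate a larger space, reuse
# ("multiplex") the existing data, and set matrices for pending objects.

class MatrixStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.matrices = {}          # object id -> (params, index)
        self.pending = []           # to-be-processed list

    def set_matrix(self, obj_id, params, index):
        if len(self.matrices) < self.capacity:     # step 537a: no overflow
            self.matrices[obj_id] = (params, index)
        else:                                      # step 537b: defer
            self.pending.append((obj_id, params, index))

    def grow_and_flush(self, new_capacity):
        self.capacity = new_capacity               # new memory space
        # step 537c: existing matrices are reused unchanged
        for obj_id, params, index in self.pending:
            self.matrices[obj_id] = (params, index)  # step 537d
        self.pending.clear()

store = MatrixStore(capacity=2)
for i in range(3):
    store.set_matrix(i, params=f"p{i}", index=i)
print(len(store.matrices), len(store.pending))  # 2 stored, 1 pending
store.grow_and_flush(new_capacity=8)
print(len(store.matrices), len(store.pending))  # 3 stored, 0 pending
```

Only identification data is kept in the pending list here, matching the observation above that the to-be-processed list itself has a small footprint.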
In summary, according to the method provided by the embodiment, the parameter matrix of the second virtual object is set by using the allocated memory space, so that the manner of rendering the second virtual object is expanded, and the incremental refreshing of the parameter matrix is realized; a reasonable technical scheme is provided for whether the allocated memory space overflows or not, and the setting mode of the parameter matrix is perfected.
Fig. 14 shows a flowchart of a method for rendering a virtual object according to an embodiment of the present application. The method may be performed by a computer device.
Step 622: judging whether the matrix is refreshed for the first time;
by way of example, the first refresh of the matrix typically occurs when the virtual environment is entered for the first time, or when switching to a new virtual environment. Illustratively, in the case of refreshing the matrix for the first time, no memory space has yet been allocated for processing the parameter matrix. For example, if the determination is yes, step 624 is performed; otherwise, step 630 is performed.
Step 624: judging whether the virtual object is in a display state or not;
illustratively, in the case of refreshing the matrix for the first time, it is determined whether the virtual object is in a display state; in the case where the space block in which the virtual object is located collides with the field-of-view space range, the virtual object is in the display state; otherwise, it is not in the display state. Illustratively, if the determination is yes, step 626 is performed; otherwise, the virtual object is not processed.
Step 626: increasing the display quantity of the virtual objects;
for example, in the case where the virtual object is in the display state, the display number of the virtual object is increased; by increasing the number of virtual objects displayed, the number of virtual objects in the display state is statistically determined.
Step 628: creating a memory space according to the display quantity of the virtual objects, and executing a full refresh matrix;
based on the display number of the virtual objects, a memory space is created that is large enough to hold the parameter matrices of the virtual objects in the display state. A parameter matrix is set for each such virtual object by executing the full refresh of the matrix, and the virtual objects in the display state are rendered.
Step 630: judging whether the virtual object is in a display state or not;
for example, if the matrix is not refreshed for the first time, it is determined whether the virtual object is in a display state. For example, if the determination is yes, step 636 is performed; otherwise, step 632 is performed.
Step 632: judging whether the virtual object is initialized;
for example, in the case that the virtual object is not in the display state, whether the virtual object is initialized is judged; illustratively, the initializing of the virtual object includes setting parameter information and index information for a parameter matrix of the virtual object. For example, if the determination is yes, step 634 is performed; and otherwise, the virtual object is not processed.
Step 634: adding a hidden list;
for example, in the case where the initialization of the virtual object is completed, the virtual object is added to the hidden list, and the virtual object in the hidden list is determined to be in a hidden state, and rendering processing is not performed.
Optionally, for the virtual object that is not initialized, no processing is performed, and since the virtual object is not initialized, the parameter matrix is not set, and rendering processing cannot be performed on the virtual object.
Step 636: adding a display list;
for example, in the case that the virtual object is in the display state, adding the virtual object to the display list; rendering is required for virtual objects in the display list.
Step 638: judging whether the virtual object is initialized;
illustratively, the initializing of the virtual object includes setting parameter information and index information for a parameter matrix of the virtual object. Illustratively, if the determination is negative, step 640 is performed; and otherwise, the virtual object is not processed.
Step 640: judging whether the memory space overflows or not;
for example, since matrix parameter setting is performed based on the already allocated memory space when the matrix is not refreshed for the first time, it is necessary to determine whether the existing memory space overflows. For example, if the determination is yes, step 642 is performed; otherwise, step 644 is performed.
Step 642: adding a list to be initialized;
illustratively, in the case of overflow of the memory space, the virtual object that is not initialized is added to the list to be initialized.
Step 644: performing object initialization;
for example, in the case that the memory space does not overflow, object initialization is performed for the uninitialized objects in the display list, and parameter information and index information are set for the parameter matrices of these virtual objects.
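The frame-start routing of steps 622 to 644 (after the first full refresh) can be sketched as follows. This is a hedged sketch; the function name, flag names, and list labels are assumptions, and the sketch only models which list or action each object is routed to.

```python
# Hypothetical sketch of the non-first-refresh branch of Fig. 14: route
# each object to the display list, hidden list, or to-be-initialized list.

def frame_start_route(displayed, initialized, space_full):
    """Return the list(s)/action(s) applied to one object at frame start."""
    if not displayed:
        # steps 630-634: only initialized objects join the hidden list;
        # uninitialized hidden objects are not processed at all
        return ["hidden"] if initialized else []
    actions = ["display"]                        # step 636
    if not initialized:                          # step 638
        if space_full:                           # step 640: overflow
            actions.append("to_initialize")      # step 642
        else:
            actions.append("initialize_now")     # step 644
    return actions

print(frame_start_route(displayed=True,  initialized=True,  space_full=False))
print(frame_start_route(displayed=True,  initialized=False, space_full=True))
print(frame_start_route(displayed=False, initialized=True,  space_full=False))
```

Note that a displayed but uninitialized object always enters the display list and, additionally, either is initialized immediately or is deferred to the to-be-initialized list.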
Illustratively, what is shown in the present embodiment is an embodiment that is performed at the frame start time.
In summary, the method provided by the embodiment perfects the implementation mode of initializing the object, provides a technical scheme for the situation of overflowing the memory space, perfects the setting mode of the parameter matrix, and lays a foundation for rendering the virtual object.
Fig. 15 shows a flowchart of a method for rendering a virtual object according to an embodiment of the present application. The method may be performed by a computer device.
Step 652: judging whether the memory space overflows or not;
illustratively, the content shown in the present embodiment is an embodiment performed at the end of frame;
illustratively, since the already allocated memory space is utilized, it is necessary to determine that overflow occurs when the already allocated memory space is utilized. For example, if the determination is yes, step 654 is performed; otherwise, step 660 is performed.
Step 654: creating a new memory space according to the display quantity of the virtual objects;
illustratively, in the case of memory-space overflow, a new memory space is created according to the display number of virtual objects counted in the above embodiments.
Step 656: multiplexing the matrix in the new memory space;
illustratively, the parameter matrix of the virtual object in the display list is multiplexed in the new memory space.
Step 658: initializing the virtual objects in the list to be initialized;
illustratively, the virtual objects in the to-be-initialized list obtained in the above embodiment are initialized.
Step 660: calculating the minimum value of the number of virtual objects in the display list and the hidden list;
by way of example, the present embodiment shows only one exemplary manner of exchanging the positions of the display list and the hidden list. In other implementations, other manners of adjusting the positions of the virtual objects in the display list to be before the hidden list may be used.
Step 662: performing an exchange of the first a virtual objects in the display list and the hidden list;
illustratively, a is the minimum of the number of virtual objects in the show list and the hidden list; the positions of the virtual objects in the display list are adjusted to be in front of the hidden list by performing exchange on the first a virtual objects in the display list and the hidden list.
Step 664: judging whether the number of objects in the display list is larger than that of the hidden list;
by way of example, by determining whether the number of objects in the display list is greater than that in the hidden list, it is determined whether the objects whose positions did not change during the exchange belong to the display list or to the hidden list. For example, if the determination is yes, step 666 is executed; otherwise, step 668 is performed.
Step 666: inserting the virtual object at the tail of the display list in front of the hidden list;
for example, in the case where the number of objects in the display list is greater than the hidden list, the virtual object at the end of the display list is inserted before the hidden list. It will be appreciated that the object that is not subject to a change in position is an object in the display list, the virtual object at the end of the display list is an object after the a-th virtual object, and the virtual object at the end is inserted before the hidden list.
Step 668: adjusting the virtual object at the tail of the hidden list to the tail of the queue;
for example, in the case that the number of objects in the display list is less than or equal to the number of objects in the hidden list, the virtual object at the tail of the hidden list is adjusted to the tail of the queue. It can be understood that the object that does not undergo the position change is an object in the hidden list, the virtual object at the tail of the hidden list is an object after the a-th virtual object, and the virtual object at the tail of the hidden list is adjusted to the tail of the queue.
Step 670: clearing the display list, the hidden list and the list to be processed;
and the data of the display list, the hidden list and the to-be-processed list is deleted.
Step 672: executing the incremental refresh matrix;
and performing rendering processing on the virtual objects positioned in the display list by executing the increment refreshing matrix.
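The swap scheme of steps 660 to 668 can be sketched as follows. This is a hedged sketch under assumed names; it models the pairwise exchange of the first a entries and the tail handling, whose net effect is that every display-list object ends up before every hidden-list object in the queue.

```python
# Hypothetical sketch of steps 660-668: a = the smaller list's length;
# swap the first a entries pairwise, then place the longer list's tail so
# that display objects precede hidden objects in the resulting queue.

def reorder_display_first(display, hidden):
    a = min(len(display), len(hidden))            # step 660
    for i in range(a):                            # step 662: pairwise swap
        display[i], hidden[i] = hidden[i], display[i]
    # after the swap, `hidden` begins with a display objects and
    # `display` begins with a hidden objects
    if len(display) > len(hidden):                # steps 664/666
        # tail display objects (after the a-th) are inserted in front
        # of the hidden objects
        return hidden[:a] + display[a:] + display[:a] + hidden[a:]
    # step 668: tail hidden objects are moved to the end of the queue
    return hidden[:a] + display[:a] + hidden[a:]

print(reorder_display_first(["d1", "d2", "d3"], ["h1"]))
print(reorder_display_first(["d1"], ["h1", "h2"]))
```

In both branches the result is the display objects in their original order followed by the hidden objects in their original order, which is exactly the precondition the incremental refresh of step 672 relies on.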
In summary, the method provided by the embodiment provides a technical scheme for creating a new memory space for the situation of overflowing the memory space, and sets a parameter matrix in the new memory space, thereby laying a foundation for rendering the virtual object.
Fig. 16 shows a flowchart of a virtual object rendering method according to an exemplary embodiment of the present application. The method may be performed by a computer device. That is, on the basis of the embodiment shown in fig. 3, steps 502 and 504 are further included:
step 502: determining a loading space range in the virtual environment by taking the visual field space range as the center;
illustratively, the distance between any spatial point in the loading space range and the field-of-view space range is less than a first distance threshold. In one implementation, the shape of the loading space range is the same as that of the field-of-view space range. The field-of-view space range is generally the central region of the loading space range, but the case where the loading space range contains the field-of-view space range without the latter being its central region is not excluded.
Step 504: loading the loading space block in the memory space;
illustratively, a load space block is a space block that overlaps with a load space range; by loading the loading space block, the virtual environment is constructed.
The loading of the load space block is illustratively accomplished by reading data of the load space block from the storage space to the memory space.
In an alternative design of this embodiment, the method further comprises the steps of:
determining a clearance space range in the virtual environment based on the visual field space range, wherein the distance between any spatial point in the clearance space range and the visual field space range is larger than a second distance threshold;
and deleting the clearing space blocks in the memory space, wherein the clearing space blocks are space blocks which are overlapped with the clearing space range and are not overlapped with the loading space range.
Illustratively, the purge space block and the load space block are different space blocks. Illustratively, the purge space range does not contain a load space range; the purge space range and the load space range may or may not be adjacent to each other. Illustratively, the processing of the purge space block is accomplished by deleting the data of the purge space block in the memory space.
In another alternative design of this embodiment, the method further comprises the steps of:
determining a cache space range in the virtual environment based on the view space range, wherein the distance between any spatial point in the cache space range and the view space range is greater than a third distance threshold;
and establishing a cache for the cache space block, wherein the cache space block is a space block which has an overlapping with the cache space range and has no overlapping with the loading space range.
Illustratively, the cache space block and the load space block are different space blocks. Illustratively, the cache space range does not contain a load space range; the cache space range and the loading space range may or may not be adjacent to each other. Illustratively, processing the cache space block is accomplished by establishing a cache in memory space for data of the cache space block.
It should be noted that, in one implementation, a loading space range, a cache space range, and a clearing space range exist in the virtual environment at the same time; wherein the second distance threshold is greater than the third distance threshold.
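The three nested ranges can be sketched as a per-block classification by distance. This is an illustrative sketch; the function, thresholds, and the 1-D distance model are assumptions made for brevity, while the ordering constraint (the second threshold greater than the third) follows the note above.

```python
# Hypothetical sketch: classify a space block by the distance between it
# and the field-of-view space range, matching the load / cache / purge
# regions described above.

def classify_block(dist, load_radius, cache_threshold, purge_threshold):
    """cache_threshold = third threshold < purge_threshold = second."""
    assert cache_threshold < purge_threshold
    if dist <= load_radius:
        return "load"          # overlaps the loading range: load into memory
    if dist > purge_threshold:
        return "purge"         # delete its data from memory
    if dist > cache_threshold:
        return "cache"         # establish a cache for its data
    return "keep"              # between the loading range and the cache ring

print(classify_block(0.5, load_radius=1, cache_threshold=2, purge_threshold=4))
print(classify_block(3.0, load_radius=1, cache_threshold=2, purge_threshold=4))
print(classify_block(5.0, load_radius=1, cache_threshold=2, purge_threshold=4))
```

As the field of view moves, re-running the classification yields exactly the blocks to load, cache, or delete, so no loading stall occurs during rendering.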
In summary, according to the method provided by the embodiment, the loading space range is determined, the data of the loading space block is loaded in the memory space, and the technical scheme of loading in advance avoids the time delay caused by loading in the rendering process of the virtual object, so that the computational complexity is effectively reduced, and the time required for rendering the virtual object is saved.
FIG. 17 illustrates a schematic diagram of a virtual environment provided by one embodiment of the present application.
For ease of viewing, the schematic diagram of the virtual environment shown in fig. 17 is an observation diagram from a top view perspective.
At least fifty-six space blocks 402 exist in the virtual environment, as shown in fig. 17, the fifty-six space blocks 402 are arranged in the virtual space in 7 rows and 8 columns;
a loading space range 406 is determined in the virtual environment centered on the field-of-view space range 404; in the present embodiment, the field-of-view space range 404 and the loading space range 406 are both cuboid, so their top-view projections are rectangular; in other embodiments, the field-of-view space range 404 may have other shapes such as a pyramid or a cone, which is not limited in this embodiment.
In the virtual environment, a space block 402 overlapping the loading space range 406 is determined as a loading space block, and loading processing is performed on the loading space block in the memory. In this embodiment, the loading space block includes twelve space blocks 402, specifically: the third to sixth space blocks 402 from the left in each of the third to fifth rows from top to bottom.
Determining a cache space range and a purge space range based on the field of view space range 404; illustratively, the cache space range is an annular enclosed region enclosed between the load space range 406 and the first boundary line 408, and the purge space range is an annular enclosed region enclosed between the first boundary line 408 and the second boundary line 410.
A cache space block is determined according to the cache space range, and a cache is established for the cache space block. In this embodiment, the cache space block includes eighteen space blocks 402, specifically: the second to seventh space blocks 402 from the left in the second and sixth rows from top to bottom; and the third to fifth space blocks 402 from the top in the second and seventh columns from left to right.
A clearing space block is determined according to the clearing space range, and the clearing space block is deleted in the memory. In this embodiment, the clearing space block includes twenty-six space blocks 402, specifically: all space blocks 402 in the first and seventh rows from top to bottom; and the second to sixth space blocks 402 from the top in the first and eighth columns from left to right.
Fig. 18 shows a flowchart of a virtual object rendering method according to an exemplary embodiment of the present application. The method may be performed by a computer device. That is, in the embodiment shown in fig. 3, step 510 may be implemented as step 510a, and step 520 may be implemented as step 520a:
step 510a: performing collision detection on the view projection and the space block projection;
illustratively, in the present embodiment, collision detection is performed based on projections on a two-dimensional plane.
For example, the field of view projection is a projection of a field of view spatial extent on a horizontal plane of the virtual environment, and the spatial block projection is a projection of a spatial block on a horizontal plane of the virtual environment; for example, collision detection is performed on the view projection and the spatial block projection by determining whether there is an overlapping region of the view projection and the spatial block projection.
Step 520a: under the condition that collision exists between the view projection and the first space block projection, collision detection is carried out on the view projection and the space sub-block projection;
the projection of the spatial sub-block is, for example, a projection of the spatial sub-block onto a horizontal plane of the virtual environment. Similarly, collision detection is performed on the view projection and the spatial sub-block projection by determining whether there is an overlap region between the view projection and the spatial sub-block projection.
In summary, according to the method provided by the embodiment, the collision detection is performed by projection on the two-dimensional horizontal plane, so that the collision detection in the three-dimensional space is simplified to the two-dimensional space, the calculation complexity is effectively reduced, and the time required for executing the collision detection is saved.
Those skilled in the art can understand that the above embodiments may be implemented independently, or the above embodiments may be combined freely to form a new embodiment to implement the virtual object rendering method of the present application.
Fig. 19 shows a block diagram of a virtual object rendering apparatus provided by an exemplary embodiment of the present application. The apparatus comprises:
a detection module 810 for collision detection of a field of view spatial range and a spatial block in a virtual environment, the virtual environment comprising at least one of the spatial blocks;
the detection module 810 is further configured to perform collision detection on a spatial sub-block in the view space range and the first spatial block in a case where there is a collision between the view space range and the first spatial block, where the first spatial block includes at least one spatial sub-block;
and a rendering module 820, configured to render the second virtual object in the first spatial sub-block in the case that there is a collision between the field of view spatial range and the first spatial sub-block.
In an optional design of this embodiment, the positions of the spatial sub-blocks are determined according to the positions of the second virtual objects, and the spatial sub-blocks are arranged in the first spatial block in a non-closely-spaced manner.
In an optional design of this embodiment, the first space block includes a first virtual object and the second virtual object, and the space sub-block includes the second virtual object;
The size of the first virtual object is larger than a size threshold corresponding to the space block; the size of the second virtual object is less than or equal to the size threshold.
In an alternative design of this embodiment, the rendering module 820 is further configured to:
rendering the first virtual object in the first spatial block in the event of a collision between the field of view spatial extent and the first spatial block.
In an alternative design of this embodiment, the apparatus further comprises:
a determining module 830 is configured to determine the first virtual object and/or the second virtual object in the second spatial block as a hidden state in a case where there is no collision between the field of view spatial range and the second spatial block.
In an alternative design of this embodiment, the rendering module 820 is further configured to:
determining the number of the second virtual objects in the first space sub-block under the condition that collision exists between the visual field space range and the first space sub-block and memory space is not allocated;
distributing the memory space according to the number of the second virtual objects;
setting parameter information and index information for a parameter matrix of the second virtual object based on the memory space, wherein the parameter information is used for indicating the object form of the second virtual object in the virtual environment, and the index information is used for indicating the index relation between the second virtual object and the first space subblock;
And rendering the second virtual object based on the parameter matrix of the second virtual object.
In an alternative design of this embodiment, the rendering module 820 is further configured to:
in the case that there is a collision between the field of view space range and the first space sub-block and memory space has been allocated, adding the second virtual object in the first space sub-block to a display list of the memory space;
and rendering the second virtual object in the display list based on the parameter matrix of the second virtual object.
In an alternative design of this embodiment, the apparatus further comprises:
a processing module 840, configured to set parameter information and index information for the parameter matrix of the second virtual object if the memory space does not overflow;
the processing module 840 is further configured to, in case that the memory space overflows, add the second virtual object to a to-be-processed list, and allocate a new memory space;
the processing module 840 is further configured to multiplex the to-be-processed list and the display list in the new memory space;
the processing module 840 is further configured to set parameter information and index information for a parameter matrix of the second virtual object in the to-be-processed list;
The parameter information is used for indicating the object form of the second virtual object in the virtual environment, and the index information is used for indicating the index relation between the second virtual object and the first space subblock.
In an alternative design of this embodiment, the apparatus further comprises:
a processing module 840 for adding a second virtual object in a second spatial sub-block to the hidden list in the absence of a collision between the field of view spatial extent and the second spatial sub-block;
the processing module 840 is further configured to determine a second virtual object in the hidden list as a hidden state.
In an alternative design of this embodiment, the rendering module 820 is further configured to:
adding a shadow drawing identifier to the second virtual object under the condition that the second virtual object has shadows, and adding the second virtual object to a shadow list;
and rendering the second virtual object and the shadow of the second virtual object in the shadow list based on the parameter matrix of the second virtual object.
In an alternative design of this embodiment, the detection module 810 is further configured to:
Performing collision detection on a view projection and a space block projection, wherein the view projection is the projection of the view space range on the horizontal plane of the virtual environment, and the space block projection is the projection of the space block on the horizontal plane of the virtual environment;
and in the case that collision exists between the view projection and the first space block projection, collision detection is carried out on the view projection and the space sub-block projection, wherein the space sub-block projection is the projection of the space sub-block on the horizontal plane of the virtual environment.
In an alternative design of this embodiment, the apparatus further comprises:
a processing module 840 configured to determine a loading spatial range in the virtual environment centered on the field of view spatial range, and a distance between any spatial point in the loading spatial range and the field of view spatial range is less than a first distance threshold;
the processing module 840 is further configured to perform a loading process on a loading space block in a memory space, where the loading space block is the space block that overlaps the loading space range.
In an alternative design of the present embodiment, the processing module 840 is further configured to:
determining a clearance space range in the virtual environment based on the field of view space range, wherein a distance between any spatial point in the clearance space range and the field of view space range is greater than a second distance threshold;
And deleting the clearing space block in the memory space, wherein the clearing space block is the space block which has overlap with the clearing space range and has no overlap with the loading space range.
In an alternative design of the present embodiment, the processing module 840 is further configured to:
determining a cache space range in the virtual environment based on the view space range, wherein the distance between any spatial point in the cache space range and the view space range is greater than a third distance threshold;
and establishing a cache for a cache space block, wherein the cache space block is the space block which has an overlapping with the cache space range and has no overlapping with the loading space range.
It should be noted that, when the apparatus provided in the foregoing embodiment performs its functions, the division into the respective functional modules is used only as an example; in practical application, the foregoing functions may be allocated to different functional modules according to actual needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above.
With respect to the apparatus in the above embodiments, the specific manner in which the respective modules perform the operations has been described in detail in the embodiments regarding the method; the technical effects achieved by the execution of the operations by the respective modules are the same as those in the embodiments related to the method, and will not be described in detail herein.
Fig. 20 shows a block diagram of a computer device according to an exemplary embodiment of the present application. The computer device 900 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player. The computer device 900 may also be referred to by other names such as user equipment, portable terminal, and the like.
In general, the computer device 900 includes: a processor 901 and a memory 902.
The processor 901 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor. The main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 901 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 901 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 902 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one instruction for execution by processor 901 to implement the virtual object rendering methods provided in embodiments of the present application.
In some embodiments, the computer device 900 may also optionally include a peripheral interface 903 and at least one peripheral device. Specifically, the peripheral device includes at least one of a radio frequency circuit 904, a touch display 905, a camera assembly 906, an audio circuit 907, and a power supply 908.
The peripheral interface 903 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 901 and the memory 902. In some embodiments, the processor 901, the memory 902, and the peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 904 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The touch display 905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display 905 can also capture touch signals at or above its surface; such a touch signal may be input to the processor 901 as a control signal for processing. The touch display 905 is used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display 905, disposed on the front panel of the computer device 900; in other embodiments, there may be at least two touch displays 905, disposed on different surfaces of the computer device 900 or in a folded design; in still other embodiments, the touch display 905 may be a flexible display disposed on a curved or folded surface of the computer device 900. The touch display 905 may even be arranged in an irregular, non-rectangular pattern, that is, a shaped screen. The touch display 905 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. In general, the front camera is used for video calls or self-portraits, and the rear camera is used for photographing pictures or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions. In some embodiments, the camera assembly 906 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 907 is used to provide an audio interface between the user and the computer device 900. The audio circuit 907 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 901 for processing or to the radio frequency circuit 904 for voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, each disposed at a different location of the computer device 900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 907 may also include a headphone jack.
The power supply 908 is used to power the various components in the computer device 900. The power supply 908 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 908 includes a rechargeable battery, it may be a wired rechargeable battery charged through a wired line or a wireless rechargeable battery charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, computer device 900 also includes one or more sensors 909. The one or more sensors 909 include, but are not limited to: acceleration sensor 910, gyroscope sensor 911, pressure sensor 912, optical sensor 913, and proximity sensor 914.
The acceleration sensor 910 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the computer device 900. For example, the acceleration sensor 910 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 901 may control the touch display 905 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 910. The acceleration sensor 910 may also be used to collect motion data for games or users.
The gyroscope sensor 911 may detect the body orientation and rotation angle of the computer device 900, and may cooperate with the acceleration sensor 910 to collect the user's 3D actions on the computer device 900. Based on the data collected by the gyroscope sensor 911, the processor 901 may implement functions such as motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 912 may be disposed on a side frame of the computer device 900 and/or on a lower layer of the touch display 905. When disposed on a side frame of the computer device 900, the pressure sensor 912 can detect the user's grip signal on the computer device 900, enabling left/right-hand recognition or shortcut operations based on that signal. When disposed on the lower layer of the touch display 905, the pressure sensor 912 enables control of operability controls on the UI according to the user's pressure operations on the touch display 905. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The optical sensor 913 is used to collect the ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the touch display 905 based on the ambient light intensity collected by the optical sensor 913: when the ambient light intensity is high, the display brightness of the touch display 905 is turned up; when the ambient light intensity is low, it is turned down. In another embodiment, the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 913.
The proximity sensor 914, also known as a distance sensor, is typically disposed on the front of the computer device 900 and is used to collect the distance between the user and the front of the device. In one embodiment, when the proximity sensor 914 detects that the distance between the user and the front face of the computer device 900 is gradually decreasing, the processor 901 controls the touch display 905 to switch from the screen-on state to the screen-off state; when the proximity sensor 914 detects that the distance is gradually increasing, the processor 901 controls the touch display 905 to switch from the screen-off state to the screen-on state.
It will be appreciated by those skilled in the art that the structure shown above does not limit the computer device 900, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a chip is also provided, which includes programmable logic circuits and/or program instructions for implementing the virtual object rendering method described in the above aspect when the chip is run on a computer device.
In an exemplary embodiment, a computer program product is also provided. The computer program product comprises computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them to implement the virtual object rendering method provided by the above method embodiments.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program loaded and executed by a processor to implement the virtual object rendering method provided by the above-described method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (18)

1. A method of rendering a virtual object, the method comprising:
performing collision detection on a field of view spatial range and a spatial block in a virtual environment, wherein the virtual environment comprises at least one spatial block;
performing collision detection on the field of view spatial range and a spatial sub-block in a first spatial block in the case that a collision exists between the field of view spatial range and the first spatial block, wherein the first spatial block comprises at least one spatial sub-block;
and rendering a second virtual object in a first spatial sub-block in the case that a collision exists between the field of view spatial range and the first spatial sub-block.
2. The method of claim 1, wherein the locations of the spatial sub-blocks are determined based on the locations of the second virtual objects, the spatial sub-blocks being arranged in the first spatial block in a non-tiled arrangement.
3. The method of claim 2, wherein the first spatial block contains a first virtual object and the second virtual object, and wherein the spatial sub-block contains the second virtual object;
the size of the first virtual object is larger than a size threshold corresponding to the space block; the size of the second virtual object is less than or equal to the size threshold.
4. A method according to claim 3, characterized in that the method further comprises:
rendering the first virtual object in the first spatial block in the event of a collision between the field of view spatial extent and the first spatial block.
5. The method according to claim 1, wherein the method further comprises:
and setting a first virtual object and/or a second virtual object in a second spatial block to a hidden state in the case that no collision exists between the field of view spatial range and the second spatial block.
6. The method of any one of claims 1 to 5, wherein said rendering a second virtual object in said first spatial sub-block in the presence of a collision between said field of view spatial extent and said first spatial sub-block comprises:
determining the number of the second virtual objects in the first spatial sub-block in the case that a collision exists between the field of view spatial range and the first spatial sub-block and no memory space has been allocated;
distributing the memory space according to the number of the second virtual objects;
setting parameter information and index information for a parameter matrix of the second virtual object based on the memory space, wherein the parameter information is used for indicating the object form of the second virtual object in the virtual environment, and the index information is used for indicating the index relation between the second virtual object and the first space subblock;
and rendering the second virtual object based on the parameter matrix of the second virtual object.
7. The method of any one of claims 1 to 5, wherein said rendering a second virtual object in said first spatial sub-block in the presence of a collision between said field of view spatial extent and said first spatial sub-block comprises:
in the case that there is a collision between the field of view space range and the first space sub-block and memory space has been allocated, adding the second virtual object in the second space block to a display list of the memory space;
and rendering the second virtual object in the display list based on the parameter matrix of the second virtual object.
8. The method of claim 7, wherein the method further comprises:
setting parameter information and index information for the parameter matrix of the second virtual object under the condition that the memory space does not overflow;
under the condition that the memory space overflows, adding the second virtual object to a to-be-processed list, and distributing a new memory space;
multiplexing the list to be processed and the display list in the new memory space;
setting parameter information and index information for a parameter matrix of a second virtual object in the to-be-processed list;
the parameter information is used for indicating the object form of the second virtual object in the virtual environment, and the index information is used for indicating the index relation between the second virtual object and the first space subblock.
9. The method of claim 7, wherein the method further comprises:
adding a second virtual object in a second spatial sub-block to a hidden list in the absence of a collision between the field of view spatial range and the second spatial sub-block;
and setting the second virtual object in the hidden list to a hidden state.
10. The method of claim 7, wherein the method further comprises:
adding a shadow drawing identifier to the second virtual object in the case that the second virtual object has a shadow, and adding the second virtual object to a shadow list;
and rendering the second virtual object and the shadow of the second virtual object in the shadow list based on the parameter matrix of the second virtual object.
11. The method of any one of claims 1 to 5, wherein said collision detection of field of view spatial extent and spatial blocks in the virtual environment comprises:
performing collision detection on a view projection and a space block projection, wherein the view projection is the projection of the view space range on the horizontal plane of the virtual environment, and the space block projection is the projection of the space block on the horizontal plane of the virtual environment;
and the performing collision detection on the field of view spatial range and a spatial sub-block in the first spatial block in the case that a collision exists between the field of view spatial range and the first spatial block comprises:
performing collision detection on the view projection and a spatial sub-block projection in the case that a collision exists between the view projection and a first spatial block projection, wherein the spatial sub-block projection is the projection of the spatial sub-block on the horizontal plane of the virtual environment.
12. The method according to any one of claims 1 to 5, further comprising:
determining a loading space range in the virtual environment by taking the visual field space range as a center, wherein the distance between any spatial point in the loading space range and the visual field space range is smaller than a first distance threshold;
and loading a loading space block into the memory space, wherein the loading space block is a space block that overlaps the loading space range.
13. The method according to claim 12, wherein the method further comprises:
determining a clearance space range in the virtual environment based on the field of view space range, wherein a distance between any spatial point in the clearance space range and the field of view space range is greater than a second distance threshold;
and deleting a clearing space block from the memory space, wherein the clearing space block is a space block that overlaps the clearing space range and does not overlap the loading space range.
14. The method according to claim 12, wherein the method further comprises:
determining a cache space range in the virtual environment based on the view space range, wherein the distance between any spatial point in the cache space range and the view space range is greater than a third distance threshold;
and establishing a cache for a cache space block, wherein the cache space block is a space block that overlaps the cache space range and does not overlap the loading space range.
15. A virtual object rendering apparatus, the apparatus comprising:
the detection module is used for performing collision detection on the visual field space range and the space blocks in the virtual environment, and the virtual environment comprises at least one space block;
the detection module is further configured to perform collision detection on the view space range and a spatial sub-block in a first spatial block in the case that a collision exists between the view space range and the first spatial block, where the first spatial block includes at least one spatial sub-block;
and the rendering module is used for rendering the second virtual object in the first space sub-block under the condition that collision exists between the visual field space range and the first space sub-block.
16. A computer device, comprising: a processor and a memory, wherein at least one program is stored in the memory; the processor is configured to execute the at least one program in the memory to implement the virtual object rendering method according to any one of claims 1 to 14.
17. A computer readable storage medium having stored therein executable instructions that are loaded and executed by a processor to implement a method of rendering a virtual object as claimed in any one of claims 1 to 14.
18. A computer program product, characterized in that it comprises computer instructions stored in a computer-readable storage medium; a processor reads the computer instructions from the computer-readable storage medium and executes them to implement the method of rendering a virtual object according to any one of claims 1 to 14.
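The two-level detection of claim 1 can be sketched as follows. This is a hypothetical Python illustration, not the patented implementation: the `SpatialBlock`/`SpatialSubBlock` structures and the axis-aligned overlap test are assumptions. A single test against a spatial block's bounds culls all of its sub-blocks at once, so the per-sub-block tests run only for blocks that actually intersect the field of view spatial range.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AABB:
    """Axis-aligned box on the horizontal plane of the virtual environment."""
    min_x: float
    min_z: float
    max_x: float
    max_z: float

    def overlaps(self, other: "AABB") -> bool:
        # Standard AABB-vs-AABB intersection test.
        return (self.min_x <= other.max_x and self.max_x >= other.min_x and
                self.min_z <= other.max_z and self.max_z >= other.min_z)

@dataclass
class SpatialSubBlock:
    bounds: AABB
    objects: List[str] = field(default_factory=list)  # "second" (small) virtual objects

@dataclass
class SpatialBlock:
    bounds: AABB
    sub_blocks: List[SpatialSubBlock] = field(default_factory=list)

def visible_objects(view: AABB, blocks: List[SpatialBlock]) -> List[str]:
    """Two-level culling: sub-blocks are tested only inside blocks the view hits."""
    visible: List[str] = []
    for block in blocks:
        if not view.overlaps(block.bounds):
            continue  # one test culls every sub-block (and object) in this block
        for sub in block.sub_blocks:
            if view.overlaps(sub.bounds):
                visible.extend(sub.objects)  # these second virtual objects get rendered
    return visible
```

Because a miss at the block level skips all of that block's sub-block tests, the number of collision tests stays far below one test per object as the second virtual objects grow in number, which is the complexity reduction the abstract describes.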
CN202211185429.3A 2022-09-27 2022-09-27 Virtual object rendering method, device, equipment and storage medium Pending CN117815652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211185429.3A CN117815652A (en) 2022-09-27 2022-09-27 Virtual object rendering method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117815652A true CN117815652A (en) 2024-04-05

Family

ID=90508366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211185429.3A Pending CN117815652A (en) 2022-09-27 2022-09-27 Virtual object rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117815652A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination