CN111784612A - Method and device for culling scene object models in a game - Google Patents

Method and device for culling scene object models in a game

Info

Publication number
CN111784612A
CN111784612A
Authority
CN
China
Prior art keywords
object model
scene
view frustum
culling
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010652238.8A
Other languages
Chinese (zh)
Inventor
蒋松佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010652238.8A priority Critical patent/CN111784612A/en
Publication of CN111784612A publication Critical patent/CN111784612A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method and a device for culling scene object models in a game. The method includes: culling each scene object model in the game scene using a predetermined culling mode, where the predetermined culling mode includes view frustum culling and/or potentially-visible-object culling; obtaining occlusion information of the scene object models remaining after culling; determining the occlusion type of those models from the occlusion information; detecting a target scene object model among them based on the occlusion type; and rendering the target scene object model. The invention solves the prior-art technical problem that computational redundancy during CPU-side culling of scene object models results in low CPU computing efficiency.

Description

Method and device for culling scene object models in a game
Technical Field
The invention relates to the technical field of games, and in particular to a method and a device for culling scene object models in a game.
Background
A game scene usually contains both complex static scene object models and a large number of dynamic player or NPC models. Submitting every scene object model to the GPU for rendering without any culling would incur a very large performance cost. Therefore, culling algorithms are usually run on the CPU to discard object models that contribute nothing to the current frame, so that they never enter the GPU rendering pipeline; this reduces GPU pressure at the price of some CPU computation.
In the prior art, when scene object models are culled on the CPU, a view frustum culling pass first discards models outside the camera frustum; some static object models are then culled by exploiting the fact that static objects do not move, to reduce performance cost; finally, occluded dynamic object models are culled during rendering. However, because the culling stages in the prior art do not share data with one another, computational redundancy arises in the culling process: for example, an object model already discarded in the frustum culling stage may have its visibility queried again in the potentially-visible-object culling stage, and be computed over yet again in the software occlusion culling stage.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
Embodiments of the invention provide a method and a device for culling scene object models in a game, to solve at least the prior-art technical problem that computational redundancy during CPU-side culling of scene object models results in low CPU computing efficiency.
According to one aspect of the embodiments of the invention, a method for culling scene object models in a game is provided, including: culling each scene object model in a game scene using a predetermined culling mode, where the predetermined culling mode includes view frustum culling and/or potentially-visible-object culling; obtaining occlusion information of the scene object models remaining after culling; determining the occlusion type of those models from the occlusion information; detecting a target scene object model among them based on the occlusion type; and rendering the target scene object model.
According to another aspect of the embodiments of the invention, an apparatus for culling scene object models in a game is also provided, including: a culling processing module configured to cull each scene object model in the game scene using a predetermined culling mode, where the predetermined culling mode includes view frustum culling and/or potentially-visible-object culling; a first obtaining module configured to obtain occlusion information of the scene object models remaining after culling; a determining module configured to determine the occlusion type of those models from the occlusion information; a second obtaining module configured to detect a target scene object model among them based on the occlusion type; and a rendering processing module configured to render the target scene object model.
According to another aspect of the embodiments of the invention, a non-volatile storage medium is also provided, storing a program that, when run, controls a device containing the storage medium to execute any one of the above methods for culling scene object models in a game.
According to another aspect of the embodiments of the invention, a processor is also provided, configured to run a program stored in a memory, where the program, when run, performs any one of the above methods for culling scene object models in a game.
In the embodiments of the invention, each scene object model in a game scene is culled using a predetermined culling mode that includes view frustum culling and/or potentially-visible-object culling; occlusion information of the remaining models is obtained; their occlusion type is determined from the occlusion information; a target scene object model is detected among them based on the occlusion type; and the target scene object model is rendered. This reduces the computational redundancy of CPU-side culling of scene object models, achieving the technical effects of improving CPU computing efficiency and reducing CPU load, and thereby solving the prior-art technical problem that such redundancy makes CPU computing efficiency low.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of a method for culling scene object models in a game according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an optional game scene in which scene object models are culled according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an optional game scene in which scene object models are culled according to an embodiment of the invention;
FIG. 4 is a schematic diagram of an optional game scene in which scene object models are culled according to an embodiment of the invention;
FIG. 5 is a schematic diagram of an optional game scene in which scene object models are culled according to an embodiment of the invention;
FIG. 6 is a schematic structural diagram of an apparatus for culling scene object models in a game according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, to aid understanding of the embodiments of the invention, some terms used herein are explained as follows:
Potentially Visible Set (PVS): for a scene partitioned into cells, the set of cells together with precomputed visibility information determining which cells can be seen from any given cell; the object models it references are called potentially visible objects.
Software Occlusion Culling (SOC): culling an object model that is blocked by other objects and therefore not within the camera's visible range, using CPU-side (software) depth computation.
Frustum Culling: culling an object model that is not within the view frustum.
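The frustum culling operation defined above can be illustrated with a minimal sketch (an illustration under assumed conventions, not the patented implementation): a model's bounding sphere is culled as soon as it lies entirely on the outside of any one of the six frustum planes.

```python
# Minimal frustum-culling sketch: a bounding sphere is culled when it lies
# entirely on the negative side of any of the six frustum planes.
# Planes are (nx, ny, nz, d) with the normal pointing into the frustum,
# so a point p is inside the half-space when dot(n, p) + d >= 0.

def sphere_in_frustum(center, radius, planes):
    cx, cy, cz = center
    for nx, ny, nz, d in planes:
        if nx * cx + ny * cy + nz * cz + d < -radius:
            return False  # completely outside this plane: cull
    return True  # inside or intersecting: keep

# A toy axis-aligned "frustum" (really the unit box) for demonstration.
BOX_PLANES = [
    (1, 0, 0, 1), (-1, 0, 0, 1),   # x in [-1, 1]
    (0, 1, 0, 1), (0, -1, 0, 1),   # y in [-1, 1]
    (0, 0, 1, 1), (0, 0, -1, 1),   # z in [-1, 1]
]
```

In practice the six planes would be extracted from the camera's view-projection matrix; the unit box here only keeps the example self-contained.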
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for rejecting scene object models in a game, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The technical solution of this method embodiment may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. The mobile terminal may include one or more processors (which may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, etc.) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device for communication functions, an input/output device, and a display device. Those skilled in the art will understand that the foregoing structural description is merely illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than described above, or have a different configuration.
The memory may be used to store a computer program, for example, a software program and modules of application software, such as a computer program corresponding to the method for culling scene object models in a game in the embodiments of the present invention; the processor executes various functional applications and data processing by running the computer program stored in the memory, that is, implements the above method. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and such remote memory may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or transmit data via a network. Specific examples of the above network may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly. The technical solution of this method embodiment can be applied to various communication systems, such as: a Global System for Mobile communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS), a Long Term Evolution (LTE) system, a Frequency Division Duplex (FDD) system, a Time Division Duplex (TDD) system, a Universal Mobile Telecommunications System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) communication system, or a 5G system. Optionally, device-to-device (D2D) communication may be performed between multiple mobile terminals. Optionally, the 5G system or 5G network is also referred to as a New Radio (NR) system or NR network.
The display device may be, for example, a touch screen type Liquid Crystal Display (LCD) and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable non-volatile storage media.
FIG. 1 is a flowchart of a method for culling scene object models in a game according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
Step S102: cull each scene object model in the game scene using a predetermined culling mode, where the predetermined culling mode includes view frustum culling and/or potentially-visible-object culling;
Step S104: obtain occlusion information of the scene object models remaining after culling;
Step S106: determine the occlusion type of those models from the occlusion information;
Step S108: detect a target scene object model among them based on the occlusion type;
Step S110: render the target scene object model.
In the embodiments of the invention, each scene object model in a game scene is culled using a predetermined culling mode that includes view frustum culling and/or potentially-visible-object culling; occlusion information of the remaining models is obtained; their occlusion type is determined from the occlusion information; a target scene object model is detected among them based on the occlusion type; and the target scene object model is rendered. This reduces the computational redundancy of CPU-side culling of scene object models, achieving the technical effects of improving CPU computing efficiency and reducing CPU load, and thereby solving the prior-art technical problem that such redundancy makes CPU computing efficiency low.
It should be noted that, in the embodiments of the present application, view frustum culling may first be performed on each scene object model in the game scene; potentially-visible-object culling is then skipped for any scene object model already determined to be outside the frustum and performed only for models determined to be within it.
The software occlusion culling system then performs occlusion culling based on the result of the visible-object culling: for an occluder model that has already been culled, it does not run the screen depth computation, and for an occludee model that has already been culled, it does not run the visibility-judgment computation.
Optionally, in the embodiments of the present application, view frustum culling and/or potentially-visible-object culling is first performed on each scene object model in the game scene, discarding models outside the frustum as well as static models invisible from the current camera position, and yielding the remaining scene object models. The software occlusion culling system (SOC) then obtains the occlusion information of these models, determines from it each model's occlusion type (for example, occluder model or occludee model), detects among them the target scene object models, i.e. the models that are not culled, and submits the target scene object models to the graphics processing unit (GPU) for rendering. This optimization effectively reduces CPU performance consumption and rendering pressure and effectively increases the game frame rate.
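The staged pipeline described above (frustum culling, then PVS culling for static models, then software occlusion culling on the survivors only) can be sketched as follows; all function and field names are illustrative assumptions, not the patented implementation:

```python
# Sketch of the staged culling pipeline: each stage only processes models
# that survived the previous stage, so no visibility work is repeated.

def cull_pipeline(models, in_frustum, pvs_visible, passes_depth_test):
    visible = []
    for m in models:
        if not in_frustum(m):                   # stage 1: frustum culling
            continue
        if m["static"] and not pvs_visible(m):  # stage 2: PVS (static only)
            continue
        visible.append(m)                       # occlusion info kept for stage 3
    # stage 3: software occlusion culling on the survivors only;
    # occluders are always rendered, occludees must pass the depth test
    return [m for m in visible if m["occluder"] or passes_depth_test(m)]
```

The point of the structure is that a model rejected in stage 1 never reaches the PVS query or the depth test, which is exactly the redundancy the background section identifies in the prior art.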
In an optional embodiment, if the scene object model is a static object model, culling each scene object model in the game scene using the predetermined culling mode includes:
Step S202: traverse each static object model to test whether it is within the view frustum, obtaining a frustum culling detection result;
Step S204: if the frustum culling detection result indicates that the static object model is within the view frustum, perform potentially-visible-object culling detection on it to obtain a potentially-visible-object culling detection result;
Step S206: if the frustum culling detection result indicates that the static object model is not within the view frustum, cull the static object model.
In this optional embodiment, a static-scene octree is traversed by multiple threads, and each static object model is tested against the view frustum to obtain the frustum culling detection result; if the result indicates that a static object model is not within the frustum, it is culled. As shown in FIG. 2, static object models inside the frustum proceed to the next step, where potentially-visible-object culling detection is performed on them to obtain the potentially-visible-object culling detection result; static object models outside the frustum are culled.
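The benefit of the static-scene octree traversal can be sketched as follows (the node layout is an assumption for illustration): when a node's bounds fail the frustum test, its entire subtree is skipped, which is what makes the traversal cheaper than a flat scan of every model.

```python
# Sketch of frustum culling over a static-scene octree: prune whole
# subtrees whose bounding volumes fail the frustum test.

def collect_visible(node, node_in_frustum):
    if not node_in_frustum(node["bounds"]):
        return []                      # prune the entire subtree
    visible = list(node["models"])
    for child in node.get("children", []):
        visible += collect_visible(child, node_in_frustum)
    return visible
```

In a multithreaded variant, as described above, each thread would take a disjoint set of subtrees and the per-thread results would be merged afterwards.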
In an optional embodiment, performing potentially-visible-object culling detection on the static object model to obtain the potentially-visible-object culling detection result includes:
Step S302: detect whether the static object model is visible from the current camera position to obtain the potentially-visible-object culling detection result;
Step S304: if the potentially-visible-object culling detection result indicates that the static object model is invisible, cull the static object model;
Step S306: if the potentially-visible-object culling detection result indicates that the static object model is visible, store its occlusion information in a visible-object array.
In this optional embodiment, the potentially visible set data is queried to detect whether the static object model is visible from the current camera position, obtaining the potentially-visible-object culling detection result; if the result indicates that the static object model is invisible, it is culled; if it is visible, its occlusion information is stored in the visible-object array.
As shown in FIG. 3, if the potentially-visible-object culling detection result indicates that the rightmost static object model inside the view frustum is invisible, that model is culled.
As an optional embodiment, after all query threads finish running, the query results of all threads are merged into the same visible-object array.
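A minimal sketch of the PVS query and the per-thread result merge mentioned above (the dictionary-of-sets PVS layout and the function names are assumptions):

```python
# Sketch of a PVS query: the PVS maps each camera cell to the set of
# cells whose contents may be visible from it.

def pvs_visible(pvs, camera_cell, model_cell):
    return model_cell in pvs.get(camera_cell, set())

def merge_thread_results(per_thread_arrays):
    # Combine each query thread's partial result into one visible-object array.
    merged = []
    for arr in per_thread_arrays:
        merged.extend(arr)
    return merged
```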
In an optional embodiment, if the scene object model is a dynamic object model, detecting each scene object model in the game scene in the predetermined detection manner to obtain a detection result includes:
Step S402: traverse each dynamic object model to test whether it is within the view frustum, obtaining a frustum culling detection result;
Step S404: if the frustum culling detection result indicates that the dynamic object model is within the view frustum, store its occlusion information in the visible-object array;
Step S406: if the frustum culling detection result indicates that the dynamic object model is not within the view frustum, cull the dynamic object model.
In this optional embodiment, each dynamic object model in the game scene is traversed and tested against the view frustum to obtain the frustum culling detection result; models outside the frustum are culled, and the occlusion information of models inside the frustum is stored in the visible-object array.
As shown in FIG. 4, an NPC model inside the view frustum is not culled and its occlusion information is stored in the visible-object array, so that it can be depth-tested in the software occlusion culling stage; an NPC model outside the view frustum is culled and does not enter the depth-test processing of the software occlusion culling stage.
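The dynamic-object path can be sketched as follows (names assumed): each dynamic model is only frustum-tested, and occlusion information for the survivors is recorded in the visible-object array so the later software occlusion stage can depth-test them without re-deriving visibility.

```python
# Sketch of the dynamic-object culling path: frustum test only, then
# record occlusion info for survivors in the shared visible-object array.

def cull_dynamic(models, in_frustum, visible_array):
    for m in models:
        if in_frustum(m):
            visible_array.append({"model": m, "occludee": True})
    return visible_array
```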
In an optional embodiment, if the occlusion type indicates that a remaining scene object model is an occluder model, the method further includes:
Step S502: traverse the occluder models through multiple traversal threads to obtain a first traversal result;
Step S504: perform soft rasterization on the occluder models according to the first traversal result to obtain the screen depth information of the multiple traversal threads;
Step S506: merge the screen depth information of the multiple traversal threads.
In this optional embodiment, the occluder models are traversed through multiple traversal threads to obtain the first traversal result; if the first traversal result indicates that an occluder model has not been culled in the preceding steps, soft rasterization is performed on it to obtain the screen depth information of each traversal thread, and the screen depth information computed by the multiple threads is merged.
In an optional embodiment, if the occlusion type indicates that a remaining scene object model is an occludee model, detecting the target scene object model among the remaining models based on the occlusion type includes:
Step S602: traverse the occludee models through multiple traversal threads to obtain a second traversal result;
Step S604: obtain depth detection information of the occludee model based on the second traversal result;
Step S606: if the depth detection information satisfies the screen depth information requirement, determine that the occludee model is a target scene object model.
In this optional embodiment, the occludee models are traversed through multiple traversal threads to obtain the second traversal result; if the second traversal result indicates that an occludee model has not been culled in the preceding steps, a depth test is performed on it to obtain its depth detection information; if the depth detection information satisfies the screen depth information requirement, i.e. the depth test passes, the occludee model is determined to be a target scene object model.
In another optional embodiment, if the depth detection information does not satisfy the screen depth information requirement, i.e. the depth test fails, the occludee model is culled. As shown in FIG. 5, if the depth detection information of the two topmost NPC models inside the view frustum does not satisfy the screen depth information requirement, those models are culled and do not enter the GPU rendering stage.
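A conservative version of the occludee depth test can be sketched as follows (pixel coverage and the depth convention are assumptions): the occludee passes, i.e. remains a target scene object model, if its nearest depth is closer than the stored depth at any pixel it covers.

```python
# Sketch of the occludee depth test against the merged software depth
# buffer: visible if nearer than the stored depth at any covered pixel.

def occludee_visible(pixels, occludee_depth, depth_buffer):
    return any(occludee_depth < depth_buffer[p] for p in pixels)
```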
According to an embodiment of the present invention, an embodiment of an apparatus for implementing the above method for culling scene object models in a game is further provided. FIG. 6 is a schematic structural diagram of an apparatus for culling scene object models in a game according to an embodiment of the present invention. As shown in FIG. 6, the apparatus includes: a culling processing module 60, a first obtaining module 62, a determining module 64, a second obtaining module 66, and a rendering processing module 68, wherein:
the culling processing module 60 is configured to cull each scene object model in the game scene using a predetermined culling mode, where the predetermined culling mode includes view frustum culling and/or potentially-visible-object culling; the first obtaining module 62 is configured to obtain occlusion information of the scene object models remaining after culling; the determining module 64 is configured to determine the occlusion type of those models from the occlusion information; the second obtaining module 66 is configured to detect a target scene object model among them based on the occlusion type; and the rendering processing module 68 is configured to render the target scene object model.
In an alternative embodiment, if the scene object models are static object models, the culling processing module 60 includes a traversal unit 70, a first processing unit 72, and a second processing unit 74, wherein:
the traversal unit 70 is configured to traverse each static object model to determine whether it is within the view frustum, obtaining a frustum culling detection result; the first processing unit 72 is configured to, if the frustum culling detection result indicates that the static object model is within the view frustum, perform potentially visible object culling detection on the static object model to obtain a potentially visible object culling detection result; and the second processing unit 74 is configured to cull the static object model if the frustum culling detection result indicates that it is not within the view frustum.
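The frustum test performed by the traversal unit can be sketched for a bounding sphere as follows. The plane representation (six inward-facing unit normals in point-normal form) is an assumption for illustration, not the patent's data structure:

```python
def sphere_in_frustum(center, radius, planes):
    """Return True if a bounding sphere is at least partly inside the view
    frustum. Each plane is (normal, d) with an inward-facing unit normal,
    so points inside the frustum satisfy dot(normal, p) + d >= 0."""
    for normal, d in planes:
        dist = sum(n * c for n, c in zip(normal, center)) + d
        if dist < -radius:        # sphere lies entirely outside this plane
            return False
    return True                   # inside or intersecting every plane

# A toy axis-aligned "frustum": the box -10 <= x, y, z <= 10 as six planes.
planes = [
    ((1, 0, 0), 10), ((-1, 0, 0), 10),
    ((0, 1, 0), 10), ((0, -1, 0), 10),
    ((0, 0, 1), 10), ((0, 0, -1), 10),
]
print(sphere_in_frustum((0, 0, 5), 1.0, planes))    # True: kept
print(sphere_in_frustum((0, 0, 25), 1.0, planes))   # False: culled
```

A model whose test returns False corresponds to the second processing unit's branch and is culled outright; one that returns True proceeds to potentially visible object culling detection.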
It should be noted that the above modules may be implemented by software or by hardware. In the latter case, the modules may all be located in the same processor, or distributed across different processors in any combination.
It should be noted here that the culling processing module 60, the first obtaining module 62, the determining module 64, the second obtaining module 66, and the rendering processing module 68 correspond to steps S102 to S110 in the method embodiment, and the traversal unit 70, the first processing unit 72, and the second processing unit 74 correspond to steps S202 to S206 in the method embodiment. These modules share the examples and application scenarios of their corresponding steps, but are not limited to the disclosure of the method embodiment. The modules described above may be implemented in a computer terminal as part of an apparatus.
It should be noted that, for alternative or preferred embodiments of the present embodiment, reference may be made to the related description in the method embodiment, and details are not described herein again.
The apparatus for culling scene object models in a game may further include a processor and a memory. The culling processing module 60, the first obtaining module 62, the determining module 64, the second obtaining module 66, the rendering processing module 68, the traversal unit 70, the first processing unit 72, the second processing unit 74, and the like are stored in the memory as program units, and the processor executes these program units to implement the corresponding functions.
The processor includes one or more cores, and a core calls the corresponding program unit from the memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to an embodiment of the present application, a nonvolatile storage medium is also provided. Optionally, in this embodiment, the nonvolatile storage medium includes a stored program, and when the program runs, a device on which the nonvolatile storage medium is located is controlled to execute any of the above methods for culling scene object models in a game.
Optionally, in this embodiment, the nonvolatile storage medium may be located in any computer terminal of a computer terminal group in a computer network, or in any mobile terminal of a mobile terminal group.
Optionally, when the program is executed, the device on which the nonvolatile storage medium is located is controlled to perform the following functions: culling each scene object model in the game scene using a predetermined culling process, where the predetermined culling process includes view frustum culling and/or potentially visible object culling; obtaining occlusion information of the scene object models remaining after culling; determining an occlusion type of the remaining scene object models according to the occlusion information; acquiring a target scene object model from the remaining scene object models based on occlusion type detection; and rendering the target scene object model.
According to an embodiment of the present application, a processor is also provided. Optionally, in this embodiment, the processor is configured to run a program, and when running, the program executes any of the above methods for culling scene object models in a game.
An embodiment of the present application provides a device including a processor, a memory, and a program stored on the memory and executable on the processor. When executing the program, the processor implements the following steps: culling each scene object model in the game scene using a predetermined culling process, where the predetermined culling process includes view frustum culling and/or potentially visible object culling; obtaining occlusion information of the scene object models remaining after culling; determining an occlusion type of the remaining scene object models according to the occlusion information; acquiring a target scene object model from the remaining scene object models based on occlusion type detection; and rendering the target scene object model.
The present application further provides a computer program product which, when executed on a data processing device, is adapted to perform a program initializing the following method steps: culling each scene object model in the game scene using a predetermined culling process, where the predetermined culling process includes view frustum culling and/or potentially visible object culling; obtaining occlusion information of the scene object models remaining after culling; determining an occlusion type of the remaining scene object models according to the occlusion information; acquiring a target scene object model from the remaining scene object models based on occlusion type detection; and rendering the target scene object model.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units may be a logical or functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable nonvolatile storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a nonvolatile storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned nonvolatile storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (10)

1. A method for culling scene object models in a game, characterized by comprising the following steps:
culling each scene object model in a game scene using a predetermined culling process, wherein the predetermined culling process comprises: view frustum culling and/or potentially visible object culling;
obtaining occlusion information of the scene object models remaining after culling;
determining an occlusion type of the remaining scene object models according to the occlusion information;
acquiring a target scene object model from the remaining scene object models based on occlusion type detection; and
rendering the target scene object model.
2. The method of claim 1, wherein if the scene object models are static object models, culling each scene object model in the game scene using the predetermined culling process comprises:
traversing each static object model to determine whether it is within a view frustum, obtaining a frustum culling detection result;
if the frustum culling detection result indicates that the static object model is within the view frustum, performing potentially visible object culling detection on the static object model to obtain a potentially visible object culling detection result; and
if the frustum culling detection result indicates that the static object model is not within the view frustum, culling the static object model.
3. The method of claim 2, wherein performing potentially visible object culling detection on the static object model to obtain the potentially visible object culling detection result comprises:
detecting whether the static object model is visible from the current camera position to obtain the potentially visible object culling detection result;
if the potentially visible object culling detection result indicates that the static object model is invisible, culling the static object model; and
if the potentially visible object culling detection result indicates that the static object model is visible, storing occlusion information of the static object model into a visible object array.
4. The method of claim 1, wherein if the scene object models are dynamic object models, detecting each scene object model in the game scene using a predetermined detection method to obtain a detection result comprises:
traversing each dynamic object model to determine whether it is within a view frustum, obtaining a frustum culling detection result;
if the frustum culling detection result indicates that the dynamic object model is within the view frustum, storing occlusion information of the dynamic object model into a visible object array; and
if the frustum culling detection result indicates that the dynamic object model is not within the view frustum, culling the dynamic object model.
5. The method according to claim 1, wherein if the occlusion type indicates that a culled scene object model is an occluder object model, the method further comprises:
traversing the occluder object model through a plurality of traversal threads to obtain a first traversal result;
performing software rasterization on the occluder object model according to the first traversal result to obtain screen depth information for each of the plurality of traversal threads; and
merging the screen depth information of the plurality of traversal threads.
6. The method according to claim 5, wherein if the occlusion type indicates that a culled scene object model is an occluded object model, acquiring the target scene object model from the remaining scene object models based on occlusion type detection comprises:
traversing the occluded object model through a plurality of traversal threads to obtain a second traversal result;
obtaining depth detection information of the occluded object model based on the second traversal result; and
if the depth detection information meets the screen depth information requirement, determining the occluded object model to be the target scene object model.
7. An apparatus for culling scene object models in a game, comprising:
a culling processing module, configured to cull each scene object model in a game scene using a predetermined culling process, wherein the predetermined culling process comprises: view frustum culling and/or potentially visible object culling;
a first obtaining module, configured to obtain occlusion information of the scene object models remaining after culling;
a determining module, configured to determine an occlusion type of the remaining scene object models according to the occlusion information;
a second obtaining module, configured to acquire a target scene object model from the remaining scene object models based on occlusion type detection; and
a rendering processing module, configured to render the target scene object model.
8. The apparatus of claim 7, wherein if the scene object models are static object models, the culling processing module comprises:
a traversal unit, configured to traverse each static object model to determine whether it is within a view frustum, obtaining a frustum culling detection result;
a first processing unit, configured to, if the frustum culling detection result indicates that the static object model is within the view frustum, perform potentially visible object culling detection on the static object model to obtain a potentially visible object culling detection result; and
a second processing unit, configured to cull the static object model if the frustum culling detection result indicates that the static object model is not within the view frustum.
9. A nonvolatile storage medium comprising a stored program, wherein when the program runs, a device on which the nonvolatile storage medium is located is controlled to execute the method for culling scene object models in a game according to any one of claims 1 to 6.
10. A processor configured to run a program stored in a memory, wherein, when running, the program executes the method for culling scene object models in a game according to any one of claims 1 to 6.
CN202010652238.8A 2020-07-08 2020-07-08 Method and device for eliminating scene object model in game Pending CN111784612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010652238.8A CN111784612A (en) 2020-07-08 2020-07-08 Method and device for eliminating scene object model in game

Publications (1)

Publication Number Publication Date
CN111784612A true CN111784612A (en) 2020-10-16

Family

ID=72758440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010652238.8A Pending CN111784612A (en) 2020-07-08 2020-07-08 Method and device for eliminating scene object model in game

Country Status (1)

Country Link
CN (1) CN111784612A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686992A (en) * 2021-01-12 2021-04-20 北京知优科技有限公司 Geometric figure view frustum realization method and device for OCC tree in smart city and storage medium
WO2022142547A1 (en) * 2020-12-29 2022-07-07 完美世界(北京)软件科技发展有限公司 Data driving method and apparatus for tile based deferred rendering

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010012018A1 (en) * 1998-05-06 2001-08-09 Simon Hayhurst Occlusion culling for complex transparent scenes in computer generated graphics
US20100231588A1 (en) * 2008-07-11 2010-09-16 Advanced Micro Devices, Inc. Method and apparatus for rendering instance geometry
CN102831631A (en) * 2012-08-23 2012-12-19 上海创图网络科技发展有限公司 Rendering method and rendering device for large-scale three-dimensional animations
CN103700137A (en) * 2013-12-01 2014-04-02 北京航空航天大学 Space-time related hierachical shielding removal method
CN104331918A (en) * 2014-10-21 2015-02-04 无锡梵天信息技术股份有限公司 Occlusion culling and acceleration method for drawing outdoor ground surface in real time based on depth map
CN106355644A (en) * 2016-08-31 2017-01-25 北京像素软件科技股份有限公司 Method and device for culling object models from three-dimensional video game pictures
CN108038816A (en) * 2017-12-20 2018-05-15 浙江煮艺文化科技有限公司 A kind of virtual reality image processing unit and method
CN109754454A (en) * 2019-01-30 2019-05-14 腾讯科技(深圳)有限公司 Rendering method, device, storage medium and the equipment of object model
CN110136082A (en) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 Occlusion culling method, apparatus and computer equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination