WO2020114274A1 - Method, apparatus, device, and storage medium for determining a potentially visible set - Google Patents

Method, apparatus, device, and storage medium for determining a potentially visible set

Info

Publication number
WO2020114274A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection point
area
point area
dimensional
pvs
Prior art date
Application number
PCT/CN2019/120864
Other languages
English (en)
French (fr)
Inventor
李海全
赖贵雄
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP19893835.9A (EP3832605B1)
Publication of WO2020114274A1
Priority to US17/185,328 (US11798223B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/803 Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • The present application relates to the field of data processing technology, and in particular, to a method, apparatus, device, and storage medium for determining a potentially visible set (PVS).
  • Rendering performance is an important factor for applications that present a three-dimensional virtual environment.
  • Rendering performance determines how smoothly an application based on the three-dimensional virtual environment runs.
  • In many cases, the bottleneck that limits rendering performance is the central processing unit (CPU).
  • The CPU frequently sends rendering commands to the graphics processing unit (GPU) through Draw Call, so that Draw Call can account for about half of the CPU consumption of a 3D racing game. Therefore, the CPU can perform visibility detection on the map scene of the 3D racing game in a pre-computation step, which reduces the Draw Call consumption on the CPU.
  • In the related art, the camera model itself is used as an origin from which rays are cast at random in all directions to determine the visible objects around the camera model.
  • When a cast ray intersects an object, the CPU determines that the object is a visible object; when no ray intersects an object, the CPU determines that the object is an invisible object.
  • In other words, the CPU pre-computes visibility by casting rays at random.
  • When the number of cast rays is not large enough, some objects that are not hit by any ray may be misjudged as invisible, so the CPU does not send the commands for rendering those objects to the GPU through Draw Call.
  • Although the Draw Call consumption on the CPU is reduced, the result displayed according to the Potentially Visible Set (PVS) is wrong.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for determining a potentially visible set, and a method, apparatus, device, and storage medium for rendering a three-dimensional scene.
  • A method for determining a potentially visible set is executed by a computer device. The method includes:
  • a rendering method of a three-dimensional scene is applied to a computer device that stores a detection point area and a PVS.
  • the PVS is generated using the method described above. The method includes:
  • A method for determining a potentially visible set is performed by a computer device.
  • the method is applied in a 3D racing game.
  • the 3D racing game includes a track area in a virtual environment.
  • the method includes:
  • An apparatus for determining a potentially visible set includes:
  • the first dividing module is used to divide the map area into multiple detection point areas
  • the first replacement module is used to replace the texture material of the three-dimensional object in the detection point area with a single-color material, and the color identification of the single-color material corresponding to each three-dimensional object is different;
  • a first determination module configured to determine at least one detection point in the detection point area
  • a first rendering module configured to render a cube map corresponding to the detection point, and determine a target color identifier appearing on the cube map;
  • the first adding module is used to add the three-dimensional object corresponding to the target color identifier to the potentially visible set (PVS) of the detection point area.
  • a three-dimensional scene rendering device is applied to a terminal storing a detection point area and a PVS.
  • the PVS is generated using the method described above.
  • the device includes:
  • the detection module is used to detect whether the detection point area where the camera model is located in the current frame is the same as the detection point area where the camera model was located in the previous frame;
  • a reading module configured to read the PVS of the detection point area where the camera model is located in the current frame when that detection point area is different from the detection point area of the previous frame;
  • the second rendering module is configured to render a lens image of the camera model according to the PVS of the detection point area where the camera model is located in the current frame, as sketched below.
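A minimal sketch of the per-frame flow described by these three modules is shown below. The helper names load_pvs, render and the PvsRenderer class are illustrative assumptions, not names from the original disclosure.

```python
# Sketch of the per-frame flow: the PVS is only reloaded when the camera model
# crosses into a different detection point area (hypothetical helper names).
class PvsRenderer:
    def __init__(self, load_pvs):
        self.load_pvs = load_pvs          # area_id -> list of visible objects
        self.current_area = None
        self.current_pvs = []

    def render_frame(self, camera_area_id, render):
        # Detection module: compare this frame's area with the previous frame's.
        if camera_area_id != self.current_area:
            # Reading module: fetch the PVS of the new detection point area.
            self.current_area = camera_area_id
            self.current_pvs = self.load_pvs(camera_area_id)
        # Rendering module: draw the lens image from the cached PVS only.
        render(self.current_pvs)
```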
  • An apparatus for determining a potentially visible set is used in a 3D racing game.
  • the 3D racing game includes a track area in a virtual environment.
  • the device includes:
  • the second division module is used to divide the track area into multiple detection point areas
  • the second replacement module is used to replace the texture material of the three-dimensional object in the detection point area with a single-color material, and the color identification of the single-color material corresponding to each three-dimensional object is different;
  • a second determination module configured to determine at least one detection point in the detection point area
  • a third rendering module used to render the cube map corresponding to the detection point, and determine the target color identifier appearing on the cube map;
  • the second adding module is used to add the three-dimensional object corresponding to the target color identifier to the track PVS of the detection point area.
  • A computer device includes a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the above method for determining a potentially visible set.
  • A computer device includes a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the above three-dimensional scene rendering method.
  • A computer-readable storage medium stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above method for determining a potentially visible set.
  • A computer-readable storage medium stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above three-dimensional scene rendering method.
  • FIG. 1 is a structural block diagram of a computer system provided by an exemplary embodiment of the present application
  • FIG. 2 is a flowchart of a method for determining a PVS provided by an exemplary embodiment of the present application
  • FIG. 3 is a flowchart of a method for determining a PVS provided by another exemplary embodiment of the present application.
  • FIG. 4 is a schematic diagram of a three-dimensional object in a virtual environment provided by another exemplary embodiment of the present application.
  • FIG. 5 is a schematic diagram of a three-dimensional object in a virtual environment provided by another exemplary embodiment of the present application after mapping;
  • FIG. 6 is a schematic diagram of road surface height at a plane detection point provided by another exemplary embodiment of the present application.
  • FIG. 7 is a schematic diagram of a two-dimensional texture map at a detection point provided by another exemplary embodiment of the present application.
  • FIG. 8 is a flowchart of a visual detection method for a semi-transparent three-dimensional object provided by another exemplary embodiment of the present application.
  • FIG. 9 is a flowchart of a method for determining a PVS provided by another exemplary embodiment of the present application.
  • FIG. 10 is a schematic diagram of a first dialog box provided by another exemplary embodiment of the present application.
  • FIG. 11 is a schematic diagram of a second dialog box provided by another exemplary embodiment of the present application.
  • FIG. 12 is a schematic diagram of a third dialog box provided by another exemplary embodiment of the present application.
  • FIG. 13 is a flowchart of a three-dimensional scene rendering method provided by an exemplary embodiment of the present application.
  • FIG. 14 is a schematic diagram of a detection point area divided along a track provided by an exemplary embodiment of the present application.
  • FIG. 15 is a schematic diagram of determining whether a camera model is in a detection point area provided by an exemplary embodiment of the present application;
  • FIG. 16 is a schematic diagram of an indoor scene of a camera model provided by an exemplary embodiment of the present application.
  • FIG. 17 is a schematic diagram of a lens image of a camera model in an indoor scene provided by an exemplary embodiment of the present application.
  • FIG. 18 is a schematic diagram of an outdoor mountain foot scene of a camera model provided by an exemplary embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a PVS determination device provided by an exemplary embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a first replacement module provided by an exemplary embodiment of the present application.
  • FIG. 21 is a schematic structural diagram of a first rendering module provided by an exemplary embodiment of the present application;
  • FIG. 22 is a schematic structural diagram of a three-dimensional scene rendering device provided by an exemplary embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of a PVS determination device provided by another exemplary embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • FIG. 25 is a structural block diagram of a terminal provided by an exemplary embodiment of the present invention.
  • Virtual environment: a virtual environment that is displayed (or provided) when an application is running on a terminal.
  • the virtual environment may be a simulation environment for the real world, a semi-simulation semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment.
  • the following embodiments illustrate the virtual environment as a three-dimensional virtual environment, but this is not limited.
  • the virtual environment includes at least one virtual character, and the virtual character is active in the virtual environment.
  • the virtual environment is also used for virtual environment racing between at least two virtual characters.
  • the virtual environment is also used for racing between at least two virtual characters using virtual vehicles.
  • Virtual object refers to the movable object in the virtual environment.
  • The movable object may be at least one of a virtual character, a virtual animal, an anime character, and a virtual vehicle.
  • the virtual object is a three-dimensional stereoscopic model.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment.
  • GPU: also known as the display core, visual processor, or display chip, it is a microprocessor that specifically performs image and graphics operations on the terminal.
  • the GPU is used to render the three-dimensional object in the detection point area according to the command of the CPU, so that the rendered three-dimensional object has a three-dimensional effect.
  • Draw Call is a function interface through which the CPU invokes graphics programming commands.
  • the CPU issues a rendering command to the GPU through Draw Call, and the GPU performs the rendering operation according to the rendering command.
  • Draw Call is used to transfer the parameters of rendering commands between CPU and GPU.
  • Frustum: a solid shaped like a pyramid with its top cut off by a plane parallel to the base.
  • The frustum has six planes: top, bottom, left, right, near, and far.
  • The view frustum takes the camera model as its origin and determines the visual range according to the angle of the camera lens of the camera model; that is, a pyramid-like visual range is formed according to the lens angle, and objects that are too close to or too far away from the camera model within that pyramid-like range are not displayed in the camera lens.
  • The visual range that can actually be displayed in the camera lens is therefore the view frustum.
  • The frustum is a volume of space used to determine what is displayed in the camera lens: objects inside the frustum are visible objects, and objects outside the frustum are invisible objects.
  • A three-dimensional object located in the view frustum is displayed on the two-dimensional plane of the camera lens by projection, and the projected image is the image that the human eye sees of the map scene.
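As an illustration of this definition, the following sketch tests whether a point lies inside a symmetric view frustum. The camera pose, field-of-view angle, aspect ratio, and near/far distances are illustrative parameters; the original text does not prescribe any particular representation.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0] / n, v[1] / n, v[2] / n)

def in_frustum(point, cam_pos, forward, up, fov_y_deg, aspect, near, far):
    """Return True if `point` lies inside the symmetric view frustum (a pyramid
    with its top cut off) defined by the camera pose, lens angle and near/far planes."""
    forward = normalize(forward)
    right = normalize(cross(forward, up))
    true_up = cross(right, forward)
    v = tuple(p - c for p, c in zip(point, cam_pos))
    z = dot(v, forward)                      # depth along the viewing direction
    if not (near <= z <= far):               # too close or too far: not displayed
        return False
    half_h = z * math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect
    return abs(dot(v, true_up)) <= half_h and abs(dot(v, right)) <= half_w

# A point 10 units straight ahead of a camera at the origin is inside the frustum.
print(in_frustum((0, 0, 10), (0, 0, 0), (0, 0, 1), (0, 1, 0), 60, 16 / 9, 0.1, 1000))
```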
  • Detection point area: a convex quadrilateral area divided according to division rules in the virtual environment.
  • the division rule includes division along a predetermined route traveled by the virtual object, and at least one of division according to the range of activity of the virtual object.
  • the detection point area is a convex quadrilateral area divided according to the race line in the virtual environment of the 3D racing game, and the detection point area will not change after being divided.
  • Occlusion culling: when, in the camera lens, an object is blocked by other objects and is therefore invisible, the CPU does not render that object.
  • The CPU reduces the rendering of objects in the map scene by occlusion culling, thereby reducing the Draw Call consumption on the CPU.
  • In some schemes, the distance between each object and the camera is measured, and objects are occlusion-culled from near to far according to that distance.
  • Cube map: a cube composed of six two-dimensional texture maps.
  • The cube map is centered on the camera model.
  • The camera model obtains a cube with six two-dimensional texture maps by rendering once in each of six directions (that is, rotating the camera lens by 90° each time).
  • In the related art, cube maps are used to render distant scenery such as the sky, so that the distant scenery remains relatively still with respect to moving near scenery (such as people), thereby achieving the effect of a distant backdrop.
  • PVS: the set of objects that remain visible, after occlusion culling, at the position of a viewpoint or within the area where the viewpoint is located.
  • Vertex shader (Vertex Shader): used to perform various operations on vertex data carrying vertex attributes.
  • Pixel shader (Pixel Shader): used to perform various operations on pixel data carrying pixel attributes.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for determining a potentially visible set, which solve the following problem: when the CPU performs pre-computation by casting rays at random and the number of emitted rays is not large enough, objects that are actually visible in the camera lens may be misjudged as invisible because they are not hit by any ray, so that the rendering result finally displayed in the camera lens is incomplete.
  • FIG. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 100 includes: a first terminal 110, a second terminal 130, and a server 120.
  • the first terminal 110 installs and runs an application that supports the virtual environment.
  • the application can be any of sports games, vehicle simulation games, and action games.
  • the first terminal 110 is a terminal used by the first user 101.
  • the first user 101 uses the first terminal 110 to control a first virtual object located in a virtual environment for racing.
  • The first virtual object includes, but is not limited to, at least one of a racing car, an off-road vehicle, a kart, a flying car, an airplane, a motorcycle, a mountain bike, and an avatar.
  • the first virtual object is a first virtual vehicle, such as a simulated flying car or a simulated racing car.
  • the second terminal 130 has an application program supporting the virtual environment installed and running.
  • the application program may be any one of a sports game, a vehicle simulation game, and an action game.
  • the user interface 131 of the application program is displayed on the screen of the second terminal 130.
  • the second terminal 130 is a terminal used by the second user 102, and the second user 102 uses the second terminal 130 to control a second virtual object located in a virtual environment for racing.
  • The second virtual object includes, but is not limited to, at least one of a racing car, an off-road vehicle, a kart, a flying car, an airplane, a motorcycle, a mountain bike, and an avatar.
  • the second virtual object is a second virtual vehicle, such as a simulated flying car or a simulated racing car.
  • first virtual object and the second virtual object are in the same virtual environment.
  • first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, have a friend relationship, or have temporary communication rights.
  • first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have a hostile relationship.
  • the application programs installed on the first terminal 110 and the second terminal 130 are the same, or the application programs installed on the two terminals are the same type of application programs on different control system platforms.
  • the first terminal 110 may refer to one of a plurality of terminals
  • the second terminal 130 may refer to one of a plurality of terminals. In this embodiment, only the first terminal 110 and the second terminal 130 are used as examples.
  • the device types of the first terminal 110 and the second terminal 130 are the same or different.
  • The device types include at least one of a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, and an MP4 (Moving Picture Experts Group Audio Layer IV) player.
  • the other terminal 140 may be a terminal corresponding to a developer, and a development and editing platform of an application program of a virtual environment is installed on the terminal 140.
  • The developer may edit the application program on the terminal 140 and transmit the edited application program file to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 may download an update package corresponding to the application program from the server 120 to update the application program.
  • the first terminal 110, the second terminal 130, and other terminals 140 are connected to the server 120 through a wireless network or a wired network.
  • the server 120 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 120 is used to provide background services for applications that support a three-dimensional virtual environment.
  • The server 120 undertakes the main computing work and the terminal undertakes the secondary computing work; or the server 120 undertakes the secondary computing work and the terminal undertakes the main computing work; or a distributed computing architecture is used for collaborative computing between the server 120 and the terminal.
  • both the terminal and the server mentioned in the embodiments of the present application may be used independently to perform the method for determining the potential visual set and/or the method for rendering the three-dimensional scene provided in the embodiments of the present application.
  • the terminal and the server may also be used in cooperation to execute the method for determining the potential visual set and/or the method for rendering the three-dimensional scene provided in the embodiments of the present application.
  • the server 120 includes at least one server module 121.
  • the server module 121 includes a processor 122, a user database 123, an application database 124, a user-oriented input/output interface (I/O interface) 125, and a developer-oriented input/output interface (I/O interface) 126.
  • the processor 122 is used to load instructions stored in the server module 121 and process data in the user database 123 and the application database 124; the user database 123 is used to store the first terminal 110 and/or the second terminal 130 through the wireless network Or user data uploaded by a wired network; the application database 124 is used to store data in applications in a 2.5-dimensional virtual environment; the user-oriented I/O interface 125 is used to connect the first terminal 110 and/or via a wireless network or a wired network The second terminal 130 establishes communication and exchanges data; the developer-oriented I/O interface 126 is used to establish communication and exchange data with other terminals 140 through a wireless network or a wired network.
  • the number of the above-mentioned terminals may be more or less.
  • the above-mentioned terminals may be only one, or the above-mentioned terminals may be dozens or hundreds, or more.
  • the embodiments of the present application do not limit the number of terminals and device types.
  • FIG. 2 shows a flowchart of a method for determining a PVS provided by an exemplary embodiment of the present application.
  • The method may be applied to a computer device, and the computer device may specifically be the first terminal 110, the second terminal 130, the other terminal 140, or the server 120 in the computer system shown in FIG. 1. The method includes:
  • Step S201 the map area is divided into multiple detection point areas.
  • The map area is the range of activity of virtual objects in the virtual environment. The map area includes opaque objects and translucent objects, and virtual objects are active in the map area. Both opaque objects and translucent objects are visible objects.
  • the processor of the computer device calls the GPU to draw the visible objects through Draw Call, so that the visible objects have a three-dimensional effect when displayed.
  • opaque objects are called three-dimensional objects in this embodiment
  • semi-transparent objects are called semi-transparent three-dimensional objects in this embodiment.
  • the processor of the computer device divides the map area into multiple detection point areas according to the division rule, and the detection point area is a convex quadrilateral area divided according to the division rule.
  • the division rule includes division along a predetermined route traveled by the virtual object, and, at least one of division according to the range of activity of the virtual object.
  • The shapes of the multiple detection point areas may be the same or different; the areas of the multiple detection point areas may be equal or unequal; and the number of detection point areas into which different map areas are divided may be the same or different.
  • step S202 the texture material of the three-dimensional object in the detection point area is replaced with a single-color material, and the color identification of the single-color material corresponding to each three-dimensional object is different.
  • the processor of the computer device uniquely identifies each three-dimensional object in the detection point area, that is, different three-dimensional objects correspond to a unique object identifier.
  • the unique object identifier is used to mark the three-dimensional object in the detection point area, and the processor determines the corresponding three-dimensional object in the three-dimensional virtual world according to the unique object identifier.
  • the processor of the computer device maps the unique object identifier and replaces the texture material of the three-dimensional object in the detection point area with a single color material. Due to the uniqueness of the unique object identification, the color identification of the single-color material corresponding to each three-dimensional object is different. The color identification is used to identify the single color corresponding to the single color material after the texture material of the three-dimensional object is replaced.
  • Step S203 Determine at least one detection point in the detection point area.
  • the processor of the computer device sets a plurality of detection points in the detection point area.
  • the detection point is used to monitor the position of the visible object.
  • the multiple detection points are scattered at different positions in the detection point area.
  • this embodiment does not limit the positions and number of detection points in the detection point area.
  • step S204 a cube map corresponding to the detection point is rendered, and the target color identification appearing on the cube map is determined.
  • The computer device uses the determined detection point as the location of the camera model, and at that detection point the camera lens renders each of the six direction planes once by rotating 90° each time, to obtain a two-dimensional texture map for each direction.
  • the processor of the computer device obtains the cube map corresponding to the detection point according to the two-dimensional texture map rendered in the six directions.
  • the processor performs visual detection of the three-dimensional object according to the cube map, and determines the single color that appears on the cube map by detecting the target color identification on the cube map.
  • step S205 the three-dimensional object corresponding to the target color mark is added to the potential visual set PVS of the detection point area.
  • The processor of the computer device determines the three-dimensional object corresponding to the target color identifier according to the target color identifier determined on the cube map, thereby determining which three-dimensional objects the camera lens can see when the camera model is located at the detection point from which the cube map was formed.
  • the target color identification is used to identify the three-dimensional objects existing on the cube map.
  • The processor of the computer device adds the determined three-dimensional objects to the PVS of the detection point area, so that when the application runs on the user side, all visible objects in the detection point area are rendered according to the detection point area in which the user is currently detected to be located. The PVS is therefore a collection of all visible objects in the detection point area.
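A compact sketch of steps S201 to S205 for one detection point area. The callbacks render_face and color_to_id are assumptions standing in for the engine-specific cube-map rendering and for the inverse of the color mapping of step S202; they are not named in the original text.

```python
DIRECTIONS = ("front", "back", "left", "right", "up", "down")

def build_pvs_for_area(detection_points, render_face, color_to_id):
    """Collect, for one detection point area, every object whose single color
    appears on any face of any detection point's cube map (steps S203-S205).

    render_face(point, direction) is assumed to return the pixels of one cube
    map face as an iterable of (r, g, b) values, rendered after every
    three-dimensional object's texture material has been replaced by its
    single-color material; color_to_id inverts that color mapping."""
    pvs = set()
    for point in detection_points:
        for direction in DIRECTIONS:
            for pixel in render_face(point, direction):
                pvs.add(color_to_id(pixel))   # target color -> object identifier
    return pvs
```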
  • In the method provided in this embodiment, the map area is divided into multiple detection point areas, and the texture material of the three-dimensional objects in each detection point area is replaced with single-color materials, where the color identification of the single-color material corresponding to each three-dimensional object is different, so that each three-dimensional object in the detection point area has a unique identification.
  • The target color identification is then determined on the cube map corresponding to a detection point determined in the detection point area, and the three-dimensional object corresponding to the target color identification is added to the PVS of the detection point area, so that the visible objects on the cube map are determined. This differs from the related art, which uses cube maps to render distant scenery such as the sky to achieve the effect of a distant backdrop.
  • This application instead uses cube maps to detect the visible objects in a detection point area: by detecting the target color identifications on the six direction planes of the cube map, visible objects can be detected at any angle in the detection point area. Compared with the related art, this avoids the randomness and instability of casting rays at random, and the accuracy of visible-object detection is guaranteed, so that the final result displayed according to the PVS is correct.
  • In addition, the three-dimensional objects in the detection point area are replaced with two-dimensional single colors, which reduces the amount of calculation during detection.
  • FIG. 3 shows a flowchart of a method for determining a PVS provided by another exemplary embodiment of the present application.
  • the method may be applied to the computer system shown in FIG. 1.
  • the method includes:
  • step S301 the map area is divided into multiple detection point areas.
  • The map area is the range of activity of virtual objects in the virtual environment. The map area includes opaque objects and translucent objects, and virtual objects are active in the map area. Both opaque objects and translucent objects are visible objects.
  • The processor calls the GPU through Draw Call to render the visible objects, so that the visible objects have a three-dimensional effect when displayed.
  • opaque objects are called three-dimensional objects in this embodiment
  • semi-transparent objects are called semi-transparent three-dimensional objects in this embodiment.
  • the processor of the computer device divides the map area into multiple detection point areas according to the division rule, and the detection point area is a convex quadrilateral area divided according to the division rule.
  • the division rule includes division along a predetermined route traveled by the virtual object, and, at least one of division according to the range of activity of the virtual object.
  • The shapes of the multiple detection point areas may be the same or different; the areas of the multiple detection point areas may be equal or unequal; and the number of detection point areas into which different map areas are divided may be the same or different.
  • Step S302 Map the unique object identifier of the three-dimensional object in the detection point area to a color identifier.
  • the unique object identifier is a unique mark of each three-dimensional object in the detection point area.
  • the processor of the computer device uses the unique object identifier to uniquely identify the three-dimensional objects in the detection point area, that is, each three-dimensional object in the detection point area corresponds to a unique object identifier, and the unique object identifier is not repeated.
  • the unique object identifier is used to uniquely identify the three-dimensional object in the three-dimensional scene.
  • Referring to FIG. 4, three-dimensional objects in a virtual environment are shown. After the three-dimensional objects in FIG. 4 are rendered by the GPU, a three-dimensional effect is displayed, and each three-dimensional object is a visible object. The processor reads the unique object identifier corresponding to each three-dimensional object.
  • When the number of visible objects in each detection point area of the map area shown in FIG. 4 does not exceed 255, the processor maps the last three digits of the unique object identifier of each three-dimensional object to the red channel value, the green channel value, and the blue channel value in the red-green-blue color space. After the map area shown in FIG. 4 is mapped, the processor obtains the map shown in FIG. 5. Comparing the visible objects in FIG. 4 and FIG. 5, each three-dimensional object corresponds to a unique single color; that is, each three-dimensional object is mapped to a different single color, and each three-dimensional object obtains a unique color identification according to its single color.
  • The last three digits of the unique object identifier of the three-dimensional object are mapped in code to the red channel value, the green channel value, and the blue channel value.
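The original code listing is not reproduced in this text. The sketch below shows one possible form of such a mapping; treating each of the last three decimal digits as one channel and stretching it over the 0-255 range is an assumption made here so that the resulting single colors are clearly distinguishable.

```python
def object_id_to_color(object_id: int) -> tuple[int, int, int]:
    """Map the last three decimal digits of a unique object identifier to the
    red, green and blue channel values of its single-color material."""
    scale = 255 // 9                      # stretch a digit (0-9) over 0-252
    r = (object_id // 100) % 10
    g = (object_id // 10) % 10
    b = object_id % 10
    return (r * scale, g * scale, b * scale)

def color_to_object_id(color: tuple[int, int, int]) -> int:
    """Invert the mapping: recover the last three digits from a sampled pixel,
    which is how a target color identifier is resolved back to an object."""
    scale = 255 // 9
    r, g, b = (round(c / scale) for c in color)
    return r * 100 + g * 10 + b

assert color_to_object_id(object_id_to_color(123)) == 123
```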
  • the processor of the computer device determines the color identification corresponding to the three-dimensional object according to the red channel value, the green channel value, and the blue channel value.
  • the color identification is used to identify the three-dimensional object in the detection point area according to the uniqueness of the color identification.
  • Step S303 Replace the texture material of the three-dimensional object with a single color material corresponding to the color identification.
  • the color identification is the unique identification obtained after mapping the unique object identification of the three-dimensional object.
  • the color identification includes a red channel value, a green channel value, and a blue channel value.
  • the single-color material is a material corresponding to a single color synthesized according to the red channel value, the green channel value, and the blue channel value.
  • step S304 a plurality of discrete plane detection points are determined on the detection point area.
  • The processor of the computer device interpolates on the plane of the detection point area and determines the interpolated positions as multiple discrete plane detection points.
  • the processor of the computer device obtains a plurality of plane detection points that are evenly distributed on the plane of the detection point area by equal interpolation on the plane of the detection point area.
  • how to determine a plurality of plane detection points on the detection point area and the number of plane detection points determined on the detection point area are not specifically limited.
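One possible realization of this interpolation, sketched under the assumption that the detection point area is given by its four corner points in order; the original text does not prescribe a particular interpolation scheme or grid density.

```python
def plane_detection_points(corners, steps=4):
    """Bilinearly interpolate a grid of evenly spaced plane detection points
    inside a convex quadrilateral detection point area.
    `corners` are the four (x, z) corner points in order: a, b, c, d."""
    (ax, az), (bx, bz), (cx, cz), (dx, dz) = corners
    points = []
    for i in range(steps + 1):
        u = i / steps
        # Interpolate along the two opposite edges a-b and d-c ...
        top = (ax + u * (bx - ax), az + u * (bz - az))
        bottom = (dx + u * (cx - dx), dz + u * (cz - dz))
        for j in range(steps + 1):
            v = j / steps
            # ... then between them, giving an evenly spread grid of points.
            points.append((top[0] + v * (bottom[0] - top[0]),
                           top[1] + v * (bottom[1] - top[1])))
    return points
```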
  • Step S305 For each plane detection point of the plurality of plane detection points, the plane detection point is detected by physical model ray detection to obtain the road surface height at the plane detection point.
  • the processor of the computer device performs collision detection on each plane detection point through physical model ray detection.
  • The plurality of plane detection points in the detection point area are on the same horizontal line. The processor of the computer device determines at least one plane detection point, emits a ray vertically downward at the plane detection point, and determines the road surface height at the plane detection point according to the collision point where the emitted ray hits the road surface.
  • Schematically, referring to FIG. 6, a schematic diagram of performing collision detection on plane detection points to obtain road surface heights is shown. FIG. 6 includes a first plane detection point 601 and a second plane detection point 602, and the two points are on the same horizontal line.
  • The processor of the computer device performs collision detection by physical model ray detection at the first plane detection point 601: a ray is emitted vertically downward from the first plane detection point 601, the processor obtains the collision point 603 of the first plane detection point 601, and the road surface height h2 at the first plane detection point 601 is determined according to the collision point 603.
  • Similarly, the processor of the computer device performs collision detection by physical model ray detection at the second plane detection point 602: a ray is emitted vertically downward from the second plane detection point 602, the processor obtains the collision point of the second plane detection point 602, and the road surface height h4 at the second plane detection point 602 is determined according to that collision point.
  • the planar detection points are several detection points uniformly distributed in the detection point area.
  • step S306 the low-area detection point corresponding to the plane detection point is determined according to the first sum value after the road surface height is added to the first height.
  • the first height is the height of the virtual object in the active state.
  • The processor of the computer device adds the road surface height detected at the plane detection point to the first height to obtain a first sum value, where the first sum value is the height of the virtual object in the state of moving close to the road surface.
  • the processor of the computer device determines the plane detection point corresponding to the first sum value as the low-area detection point according to the first sum value, and the low-area detection point is a detection point where the virtual object moves closely to the road surface.
  • When the virtual object 605 races onto the road surface corresponding to the second plane detection point 602, the virtual object 605 clings to the road surface, so the first height is the height h1 of the virtual object 605 itself.
  • The processor adds h1 and the road surface height h4 to obtain a first sum value, that is, h1+h4, and determines the low-area detection point 606 corresponding to the second plane detection point 602 according to the first sum value.
  • the low-area detection points are several detection points uniformly distributed in the detection point area.
  • step S307 the high-area detection point corresponding to the plane detection point is determined according to the second sum of the road surface height and the second height.
  • The second height is the sum of the height of the virtual object in the active state and the height by which the virtual object is vacated above the road surface when in motion.
  • the processor of the computer device adds the height of the road surface at the detected plane detection point to the second height to obtain a second sum value.
  • The second sum value is the height of the virtual object in the state of being vacated above the road surface when moving. The processor of the computer device determines, according to the second sum value, the detection point corresponding to the second sum value as the high-area detection point, and the high-area detection point is a detection point at which the virtual object is vacated above the road surface.
  • The height of the virtual object 605 in the active state is h1, and the road surface height at the first plane detection point 601 is h2.
  • When the virtual object 605 races onto the road surface corresponding to the first plane detection point 601, the virtual object 605 is vacated above the road surface, and the vacated height is h3, so the second height is the sum of the height h1 of the virtual object 605 and the vacated height h3.
  • The processor adds the second height to h2 to obtain a second sum value, that is, h1+h3+h2, and determines the high-area detection point corresponding to the first plane detection point 601 according to the second sum value.
  • the high-area detection points are several detection points uniformly distributed in the detection point area.
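A sketch combining steps S305 to S307. The road surface is modelled here as a height-field callback standing in for physical model ray detection, and the names h1 (object height) and h3 (airborne height) follow the description of FIG. 6; all function and parameter names are assumptions.

```python
def detection_points_for(plane_point, road_height_at, h1, h3):
    """Derive the low-area and high-area detection points above one plane
    detection point (steps S305-S307).

    plane_point    -- (x, z) of the plane detection point
    road_height_at -- stand-in for physical-model ray detection: casting a ray
                      straight down and returning the height of the collision point
    h1             -- height of the virtual object in its active state
    h3             -- additional height when the object is vacated above the road
    """
    x, z = plane_point
    road = road_height_at(x, z)            # collision point of the downward ray
    low_point = (x, road + h1, z)          # object moving close to the road surface
    high_point = (x, road + h1 + h3, z)    # object vacated above the road surface
    return low_point, high_point

# Example with a flat road at height 0 and illustrative h1/h3 values:
low, high = detection_points_for((3.0, 5.0), lambda x, z: 0.0, h1=1.2, h3=0.8)
# low  == (3.0, 1.2, 5.0)  -> h1 above the road (first sum value)
# high == (3.0, 2.0, 5.0)  -> h1 + h3 above the road (second sum value)
```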
  • Step S308 respectively rendering the six directional surfaces at the detection point to obtain corresponding two-dimensional texture maps on each directional surface.
  • The processor of the computer device performs two-dimensional rendering on the front, back, left, right, bottom, and top surfaces at the low-area detection point, respectively, to obtain six two-dimensional texture maps, and the six two-dimensional texture maps correspond to the six direction planes at the low-area detection point.
  • The processor of the computer device likewise performs two-dimensional rendering on the six direction planes at the high-area detection point to obtain six two-dimensional texture maps, and the six two-dimensional texture maps correspond to the six direction planes at the high-area detection point.
  • FIG. 7 shows a two-dimensional texture map obtained by a processor of a computer device after two-dimensional rendering at a determined detection point.
  • the detection point may be a low-area detection point or a high-area detection point.
  • The processor of the computer device uses the detection point as the position of the camera model, and uses the camera lens to two-dimensionally render a total of six directions (front, back, left, right, bottom, and top) at the detection point, so as to obtain the two-dimensional texture maps on the six direction planes of the detection point.
  • step S309 the two-dimensional texture maps in the six directions are combined to obtain a cube map corresponding to the detection point.
  • The processor of the computer device merges the six two-dimensional texture maps in the six directions obtained at the low-area detection point to obtain the cube map corresponding to the low-area detection point.
  • The processor of the computer device merges the six two-dimensional texture maps in the six directions obtained at the high-area detection point to obtain the cube map corresponding to the high-area detection point.
  • That is, the processor of the computer device merges the two-dimensional texture maps on the six direction planes of a detection point to obtain the cube map corresponding to that detection point.
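A structural sketch of steps S308 and S309. The actual two-dimensional rendering is abstracted behind a hypothetical render_direction callback that returns one face as a two-dimensional array of (r, g, b) pixels; the cube map is represented simply as a dictionary of the six face textures.

```python
FACE_DIRECTIONS = ("front", "back", "left", "right", "bottom", "top")

def render_cube_map(detection_point, render_direction):
    """Render the six direction planes at a detection point once each
    (rotating the camera lens by 90 degrees between faces) and merge the six
    two-dimensional texture maps into one cube map."""
    return {direction: render_direction(detection_point, direction)
            for direction in FACE_DIRECTIONS}
```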
  • Step S310 traverse the pixel values of the two-dimensional texture map on the six directions of the cube map, and determine the target color identifier appearing on the cube map according to the pixel values appearing on the two-dimensional texture map.
  • the target color identification is the color identification that appears on the cube map corresponding to the detection point.
  • The processor of the computer device traverses the pixel values of the two-dimensional texture maps on the six direction planes of the cube map corresponding to the low-area detection point, determines, according to the pixel values appearing on the two-dimensional texture maps, which single colors those pixel values belong to, and thereby determines the target color identifications that appear on the cube map corresponding to the low-area detection point.
  • The processor of the computer device likewise traverses the pixel values of the two-dimensional texture maps on the six direction planes of the cube map corresponding to the high-area detection point, determines which single colors those pixel values belong to, and thereby determines the target color identifications that appear on the cube map corresponding to the high-area detection point.
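A sketch of the pixel traversal of step S310, reusing the cube-map dictionary from the previous sketch and the color_to_object_id inverse mapping introduced earlier; treating pure black as a background color belonging to no object is an assumption made here.

```python
def target_object_ids(cube_map, color_to_object_id, background=(0, 0, 0)):
    """Traverse the pixel values of the two-dimensional texture maps on all six
    faces of a cube map and return the identifier of every object whose single
    color appears (step S310). Pixels equal to `background` are assumed to
    belong to no object and are skipped."""
    ids = set()
    for face in cube_map.values():          # face: rows of (r, g, b) pixels
        for row in face:
            for pixel in row:
                if tuple(pixel) != background:
                    ids.add(color_to_object_id(tuple(pixel)))
    return ids
```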
  • Step S311 Add a three-dimensional object corresponding to the target color mark to the PVS of the detection point area.
  • the target color identification is used to determine the three-dimensional object existing on the cube map corresponding to the detection point.
  • the PVS of the detection point area includes a first PVS and a second PVS.
  • the processor of the computer device determines the three-dimensional object corresponding to the target color identifier according to the target color identifier on the cube map corresponding to the low-region detection point, and adds the three-dimensional object to the first PVS in the detection point area.
  • the processor of the computer device determines the three-dimensional object corresponding to the target color identifier according to the target color identifier on the cube map corresponding to the high-region detection point, and adds the three-dimensional object to the second PVS in the detection point area.
  • the first PVS and the second PVS may be combined into one PVS.
  • In the method provided in this embodiment, the map area is divided into multiple detection point areas, and the texture material of the three-dimensional objects in each detection point area is replaced with single-color materials, where the color identification of the single-color material corresponding to each three-dimensional object is different, so that each three-dimensional object in the detection point area has a unique identification.
  • The target color identification is then determined on the cube map corresponding to a detection point determined in the detection point area, and the three-dimensional object corresponding to the target color identification is added to the PVS of the detection point area, so that the visible objects on the cube map are determined. This differs from the related art, which uses cube maps to render distant scenery such as the sky to achieve the effect of a distant backdrop.
  • This application instead uses cube maps to detect the visible objects in a detection point area: by detecting the target color identifications on the six direction planes of the cube map, visible objects can be detected at any angle in the detection point area. Compared with the related art, this avoids the randomness and instability of casting rays at random, and the accuracy of visible-object detection is guaranteed, so that the final result displayed according to the PVS is correct.
  • In addition, the three-dimensional objects in the detection point area are replaced with two-dimensional single colors, which reduces the amount of calculation during detection.
  • In the method provided in this embodiment, the corresponding two-dimensional texture map on each direction plane is obtained by two-dimensionally rendering the six direction planes at a detection point, and the six two-dimensional texture maps are then merged to obtain the cube map corresponding to the detection point. The GPU consumption of such two-dimensional rendering is smaller than that of three-dimensional rendering. By determining the target color identification on the cube map, the visibility detection of the three-dimensional objects on the cube map is realized.
  • The method provided in this embodiment maps the last three digits of the unique object identifier of each three-dimensional object to the red channel value, the green channel value, and the blue channel value in the red-green-blue color space, respectively, to obtain the color identifier corresponding to the three-dimensional object. The color identification of each three-dimensional object is different, which guarantees the uniqueness of each three-dimensional object, so that identical color identifications do not occur during visibility detection and cause errors in the visibility detection result.
  • The method provided in this embodiment divides the detection points in the detection point area into low-area detection points and high-area detection points, adds the three-dimensional objects visible at the low-area detection points to the first PVS, and adds the three-dimensional objects visible at the high-area detection points to the second PVS, so that when the user controls the virtual object, the user has a realistic feeling of moving in two different states: close to the road surface and vacated above the road surface.
  • Step S701 Set the semi-transparent three-dimensional object in the detection point area as a hidden attribute.
  • A translucent three-dimensional object is a three-dimensional object that is translucent relative to the opaque three-dimensional objects; that is, the three-dimensional object behind a translucent three-dimensional object can be seen through it. In other words, a semi-transparent three-dimensional object does not occlude three-dimensional objects, but can itself be occluded by three-dimensional objects.
  • In step S301 to step S303 shown in FIG. 3, the processor of the computer device sets the attribute of the semi-transparent three-dimensional objects in the detection point area to a hidden attribute, and keeps the display attribute of the three-dimensional objects in the detection point area unchanged, so that the semi-transparent objects are invisible in the detection point area while the three-dimensional objects are visible in the detection point area.
  • step S702 the translucent three-dimensional object in the detection point area is reset as the display attribute, and the three-dimensional object other than the translucent three-dimensional object is set as the hidden attribute.
  • the processor of the computer device visually detects the three-dimensional object, and determines the three-dimensional object in the detection point area.
  • The processor of the computer device resets the attribute of the translucent three-dimensional objects in the detection point area from the hidden attribute to the display attribute, and resets the attribute of the three-dimensional objects from the display attribute to the hidden attribute, so that the translucent three-dimensional objects are visible in the detection point area while the three-dimensional objects are invisible in the detection point area.
  • step S703 the texture material of the translucent three-dimensional object is replaced with a single color material, and the color identification of the single color material corresponding to each translucent three-dimensional object is different.
  • Each translucent three-dimensional object has a unique object identifier, and the color identifier is the unique identifier obtained after mapping the unique object identifier of the translucent three-dimensional object.
  • the single-color material is a material corresponding to a single color synthesized according to the red channel value, the green channel value, and the blue channel value.
  • the unique object identifier of each translucent three-dimensional object is different, so that the color identifier of the single-color material obtained after mapping according to the unique object identifier is different.
  • Step S704 Determine at least one detection point in the detection point area.
  • step S705 a cube map corresponding to the detection point is rendered, and the target color identification appearing on the cube map is determined.
  • Step S706 the three-dimensional object corresponding to the target color identifier and the semi-transparent three-dimensional object are combined and added to the PVS of the detection point area.
  • the target color identification is used to determine the semi-transparent three-dimensional object existing on the cube map corresponding to the detection point.
  • the PVS in the detection point area includes a first PVS and a second PVS.
  • The processor of the computer device determines the translucent three-dimensional object corresponding to the target color identification according to the target color identification on the cube map corresponding to the low-area detection point, and adds the translucent three-dimensional object to the first PVS of the detection point area.
  • the processor of the computer device determines the translucent three-dimensional object corresponding to the target color identifier according to the target color identifier on the cube map corresponding to the high-region detection point, and adds the translucent three-dimensional object to the second PVS in the detection point area.
  • That is, at the low-area detection point, the processor of the computer device adds the three-dimensional objects and translucent three-dimensional objects determined according to the target color identifications to the first PVS of the detection point area; at the high-area detection point, the three-dimensional objects and semi-transparent three-dimensional objects determined according to the target color identifications are added to the second PVS of the detection point area.
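A sketch of the two-pass scheme of steps S701 to S706. The helpers set_visible and visible_ids_from_cube_maps are assumptions that wrap the attribute switching and the per-detection-point cube-map detection described above.

```python
def build_area_pvs(opaque_objects, translucent_objects,
                   set_visible, visible_ids_from_cube_maps):
    """Detect opaque and semi-transparent three-dimensional objects in two
    separate passes, then merge both results into the PVS of the area."""
    # Pass 1: hide the semi-transparent objects and detect the opaque ones.
    set_visible(translucent_objects, False)
    set_visible(opaque_objects, True)
    opaque_visible = visible_ids_from_cube_maps()

    # Pass 2: show only the semi-transparent objects and detect them.
    set_visible(opaque_objects, False)
    set_visible(translucent_objects, True)
    translucent_visible = visible_ids_from_cube_maps()

    # Merge: the PVS of the detection point area contains both kinds of object.
    return opaque_visible | translucent_visible
```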
  • the processor of the computer device renders the three-dimensional object first and then the semi-transparent three-dimensional object to avoid repeated rendering caused by rendering the semi-transparent three-dimensional object first and then rendering the three-dimensional object.
  • the artist can find the problem that part of the material of the three-dimensional object changes due to the presence of the semi-transparent three-dimensional object.
  • the artist can modify the problem to ensure that the final result displayed in the PVS conforms to the three-dimensional effect.
  • the PVS is determined.
  • the 3D racing game can be any of sports games, vehicle simulation games, and action games.
  • the 3D racing game includes multiple tracks, and multiple virtual objects race along the track.
  • the virtual object is where the camera model is located.
  • the lens picture of the camera model shows at least one kind of visible object such as mountains, sea, flowers and trees, houses, or tunnels, so the multiple tracks and the visible objects distributed along them constitute the track area.
  • FIG. 9 shows a flowchart of a method for determining a PVS provided by another exemplary embodiment of the present application.
  • the method is applied to a 3D racing game.
  • the 3D racing game includes a track area located in a virtual environment.
  • the method may be applied to the computer system shown in FIG. 1, and the method includes:
  • Step S801 Divide the track area into multiple detection point areas.
  • the track area is the racing range of virtual objects in the virtual environment.
  • the track area includes the race line, opaque objects and translucent objects.
  • the race line is the predetermined route along which the virtual objects race; the opaque objects and translucent objects are distributed along the race line, and both opaque objects and translucent objects are visible objects.
  • the processor of the computer device calls the GPU to draw the visible objects through Draw Call, so that the visible objects have a three-dimensional effect when displayed.
  • opaque objects are called three-dimensional objects in this embodiment
  • semi-transparent objects are called semi-transparent three-dimensional objects in this embodiment.
  • the three-dimensional object may be a visible object such as flowers, trees, mountains, houses, sea, or cartoon characters
  • the translucent three-dimensional object may be a visible object such as smoke, nitrogen gas jets, or splashed water droplets.
  • the processor of the computer device divides the map area into a plurality of detection point areas according to the track line in the track area; each detection point area is a convex quadrilateral area formed by dividing along the track line in the track area.
  • the shapes of the multiple detection point areas may be the same or different; their areas may be equal or unequal; and the number of detection point areas obtained from different map areas may be the same or different.
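  • The text does not specify how the convex quadrilaterals are constructed. Under the assumption that the track is described by a 2D centerline polyline and a fixed half-width, one plausible construction, sketched below purely for illustration, slices the track into consecutive quadrilaterals whose corners are the left and right road edges at successive centerline samples.

```python
import math


def split_track_into_quads(centerline, half_width):
    """Divide a track, given as a 2D centerline polyline, into quadrilateral areas.

    Each area is the quad (left_i, right_i, right_{i+1}, left_{i+1}) spanned by the
    road edges between two consecutive centerline samples. This is only one way to
    obtain convex quadrilateral detection point areas along the race line.
    """
    quads = []
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        # Unit normal pointing to the left of the driving direction.
        nx, ny = -dy / length, dx / length
        left0 = (x0 + nx * half_width, y0 + ny * half_width)
        right0 = (x0 - nx * half_width, y0 - ny * half_width)
        left1 = (x1 + nx * half_width, y1 + ny * half_width)
        right1 = (x1 - nx * half_width, y1 - ny * half_width)
        quads.append((left0, right0, right1, left1))
    return quads


areas = split_track_into_quads([(0, 0), (10, 0), (20, 5)], half_width=4.0)
```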
  • Step S802 the texture material of the three-dimensional object in the detection point area is replaced with a single-color material, and the color identification of the single-color material corresponding to each three-dimensional object is different.
  • Step S803 Determine at least one detection point in the detection point area.
  • step S804 a cube map corresponding to the detection point is rendered, and the target color identification appearing on the cube map is determined.
  • The content of steps S802 to S804 is the same as that shown in FIG. 2 and FIG. 3 and will not be repeated here.
  • step S805 the three-dimensional object corresponding to the target color mark is added to the track PVS in the detection point area.
  • the processor of the computer device determines the three-dimensional objects corresponding to the determined target color identifiers and adds them to the track PVS of the detection point area, so that when the game runs on the user side, all visible objects in the detection point area where the user is currently detected to be located are rendered; the track PVS is therefore the set of visible objects in a detection point area, distributed along the track line.
  • After step S802, the processor hides the translucent three-dimensional objects; after step S805, the processor displays the translucent three-dimensional objects again and then repeats steps S803 to S805.
  • By rendering the three-dimensional objects first and the semi-transparent three-dimensional objects afterwards, the processor of the computer device avoids the repeated rendering that would be caused by rendering the semi-transparent three-dimensional objects before the three-dimensional objects (a sketch of this two-pass collection follows).
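  • The two-pass visibility collection can be sketched as below. `detect_visible_ids` is a stand-in for the cube-map rendering and color-identifier lookup described above; the area layout and names are illustrative assumptions, not the patent's data structures.

```python
def build_track_pvs(area, detect_visible_ids):
    """Two-pass visibility collection for one detection point area.

    `detect_visible_ids(visible_objects)` stands in for rendering cube maps with the
    given objects shown and returning the identifiers of those that appear on them.
    Opaque objects are processed first (translucent ones hidden), translucent
    objects second (opaque ones hidden), so translucent objects are never rendered
    before opaque ones.
    """
    # Pass 1: translucent objects hidden, only opaque objects are candidates.
    opaque_visible = detect_visible_ids(area["opaque"])
    # Pass 2: opaque objects hidden, only translucent objects are candidates.
    translucent_visible = detect_visible_ids(area["translucent"])
    return opaque_visible | translucent_visible


area = {"opaque": {1, 2, 3}, "translucent": {100, 101}}
track_pvs = build_track_pvs(area, detect_visible_ids=lambda objs: set(objs))
```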
  • When the artist processes the image, the artist can find the problem that the presence of a semi-transparent three-dimensional object causes part of the material of a three-dimensional object to change. The artist can correct this problem to ensure that the final result displayed in the track PVS conforms to the three-dimensional effect.
  • the PVS of each detection point area in the map area is determined. Artists need to process the PVS of the detection point area obtained by developers to ensure the accuracy of the final PVS display results.
  • the processor runs the program, and a first dialog box 901 shown in FIG. 10 is displayed on the interface of the terminal.
  • the first dialog box 901 is PVS occlusion and clipping.
  • the processor runs the program of the track editor, and a second dialog box shown in FIG. 11 pops up on the interface of the terminal.
  • the second dialog box is the track editor.
  • the processor imports the calculated PVS of the detection point area into the track editor.
  • the track editor is divided into a map area 1001 and a manual adjustment area 1002.
  • When the processor receives the artist's selection of the target detection point area 1003 in the map area 1001 and also receives the artist's selection of the check box of the display area 1004 in the manual adjustment area 1002, it determines that the artist needs to manually adjust the PVS of the target detection point area 1003.
  • The processor then displays the PVS of the target detection point area 1003.
  • the processor of the computer device runs the program of the Draw Call monitor, and a third dialog box 1101 as shown in FIG. 12 appears on the interface of the terminal.
  • the third dialog box 1101 is the Draw Call monitor.
  • the processor determines to automatically run the map area.
  • the processor of the computer device monitors its own consumption when instructing the GPU through Draw Call to render the PVS of each detection point area, and raises an alarm for any detection point area whose consumption is excessive, reminding the artist to manually cull that detection point area again.
  • When the map area is run automatically and the Draw Call monitor 1101 no longer raises an alarm, the artist can confirm that the manual adjustment is complete.
  • the repeated rendering caused by rendering the translucent three-dimensional object first and then rendering the three-dimensional object is avoided.
  • the artist can find the problem that part of the material of the three-dimensional object changes due to the presence of the semi-transparent three-dimensional object. The artist can modify this problem to ensure that the final result displayed in the track PVS conforms to the three-dimensional effect.
  • the developer obtains the track PVS of each detection point area of the 3D racing game based on the method shown in FIG. 9, packs the track PVS of the detection point areas in each track area into a compressed package, and saves the compressed package on the server.
  • When a user playing the 3D racing game downloads the compressed package from the server, the position of the camera model of the user terminal in the current frame is detected, the track PVS of the detection point area where the current frame is located is read, and 3D rendering is performed on the read track PVS.
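  • The packaging step can be pictured with the sketch below. The file layout and names are invented for illustration; the text only states that the per-area track PVS data is packed into a compressed package stored on the server and downloaded by the client.

```python
import json
import zipfile


def pack_track_pvs(track_pvs_by_area: dict[int, list[int]], path: str) -> None:
    """Write one JSON entry per detection point area into a single zip archive."""
    with zipfile.ZipFile(path, "w", compression=zipfile.ZIP_DEFLATED) as archive:
        for area_id, visible_object_ids in track_pvs_by_area.items():
            archive.writestr(f"area_{area_id}.json", json.dumps(sorted(visible_object_ids)))


def load_area_pvs(path: str, area_id: int) -> list[int]:
    """Read back the track PVS of one detection point area from the package."""
    with zipfile.ZipFile(path) as archive:
        return json.loads(archive.read(f"area_{area_id}.json"))


pack_track_pvs({0: [7, 42], 1: [42, 1001]}, "track_pvs.zip")
assert load_area_pvs("track_pvs.zip", 1) == [42, 1001]
```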
  • FIG. 13 shows a flowchart of a method for rendering a three-dimensional scene provided by an exemplary embodiment of the present application. Taking a virtual environment provided in a 3D racing game as an example, the method is applied to a terminal that stores the detection point areas and their PVS, the PVS being obtained using the method described above. The method can be applied to the computer system shown in FIG. 1 and includes:
  • Step S1201 Detect whether the detection point area where the camera model is located in the current frame is the same as the detection point area where the previous frame is located.
  • the map area includes multiple detection point areas, which are convex quadrilateral areas divided along the track. Referring to FIG. 14, the detection point areas divided along the track are exemplarily shown, and each detection point area is assigned a unique area identifier.
  • the gray portion 1301 in FIG. 14 is a detection point area in the map area.
  • the detection point area composed of points A, B, C, and D, and point P as the location of the camera model is taken as an example.
  • the algorithm for determining whether the camera model is in the detection point area is as follows:
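  • The algorithm itself is not reproduced in this text (it is illustrated with reference to FIG. 15 in the original). A common test for a convex quadrilateral ABCD, given here as an assumed stand-in rather than the patent's exact formula, is the same-side cross-product test: P is inside if it lies on the same side of all four edges.

```python
def point_in_convex_quad(p, a, b, c, d):
    """Return True if 2D point p lies inside (or on) the convex quadrilateral a-b-c-d.

    The corners must be given in a consistent order (clockwise or counter-clockwise).
    p is inside iff it lies on the same side of every edge, which is checked with the
    sign of the 2D cross product of each edge vector with the edge-to-p vector.
    """
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    corners = [a, b, c, d]
    signs = [cross(corners[i], corners[(i + 1) % 4], p) for i in range(4)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)


# Example: camera position P tested against a detection point area with corners A, B, C, D.
inside = point_in_convex_quad((2.0, 1.0), (0, 0), (4, 0), (5, 3), (0, 3))
```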
  • the processor of the terminal detects whether the detection point area where the current frame of the camera model is located is the same as the detection point area where the previous frame is located according to the algorithm described above.
  • When the processor detects that the detection point area of the camera model in the current frame is the same as that in the previous frame, the processor has already read the PVS of that detection point area and rendered the visible objects in it; when the processor detects that the detection point area of the current frame is different from that of the previous frame, the process proceeds to step S1202.
  • the PVS in the detection point area where the current frame is located is obtained according to any one of the methods in FIG. 2, FIG. 3, and FIG. 8.
  • The processor saves the obtained PVS of each detection point area, and directly reads the saved PVS of the detection point area when rendering the three-dimensional scene.
  • Step S1202 when the detection point area where the camera model is located in the current frame is different from the detection point area where the previous frame is located, the track PVS of the detection point area where the current frame is located is read.
  • the processor of the computer device detects that the detection point area where the camera model is located in the current frame is different from the detection point area where the previous frame is located, it re-reads the track PVS of the detection point area where the current frame is located, and renders the current The visible objects in the track PVS in the detection point area where the frame is located.
  • step S1203 according to the PVS of the track at the detection point area where the current frame is located, a lens image of the camera model is rendered.
  • According to the track PVS of the detection point area where the current frame is located, the processor of the computer device renders the visible objects in that track PVS to obtain the lens picture of the camera model.
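  • The per-frame lookup can be sketched as below. The callbacks stand in for storage access and GPU rendering, which are not specified here, and the choice between the first (low-area) and second (high-area) track PVS described in the following paragraphs is folded in as a boolean flag; all names are illustrative.

```python
class PvsRenderer:
    """Minimal sketch of the per-frame PVS lookup described above.

    `read_area_pvs(area_id, high_area)` stands in for reading the first (low-area)
    or second (high-area) track PVS of a detection point area from storage, and
    `render(objects)` stands in for drawing the visible objects of the lens picture.
    """

    def __init__(self, read_area_pvs, render):
        self.read_area_pvs = read_area_pvs
        self.render = render
        self.current_area_id = None
        self.current_high_area = None
        self.current_pvs = set()

    def on_frame(self, camera_area_id: int, airborne: bool) -> None:
        # Re-read the PVS only when the camera entered a different detection point
        # area, or switched between the low-area and high-area detection points.
        if (camera_area_id, airborne) != (self.current_area_id, self.current_high_area):
            self.current_pvs = self.read_area_pvs(camera_area_id, high_area=airborne)
            self.current_area_id, self.current_high_area = camera_area_id, airborne
        self.render(self.current_pvs)


renderer = PvsRenderer(read_area_pvs=lambda aid, high_area: {aid, 42}, render=lambda objs: None)
renderer.on_frame(camera_area_id=3, airborne=False)
renderer.on_frame(camera_area_id=3, airborne=False)  # same area: the PVS is not re-read
```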
  • the processor of the computer device detects that the area where the current frame is located is as shown in FIG. 16; the area 1501 enclosed by the black dots in FIG. 16 is the detection point area where the current frame is located, and it is an indoor scene.
  • the processor of the computer device reads the track PVS of the area 1501 enclosed by the black dots and renders the visible objects in that track PVS, obtaining the lens picture of the camera model shown in FIG. 17.
  • the lens picture of the camera model shown in FIG. 17 is the picture displayed when the user plays the 3D racing game; the left button 1601, the right button 1602, and the virtual object 1603 are displayed on it, the left button 1601 is used to steer the virtual object 1603 to the left during racing, the right button 1602 is used to steer the virtual object 1603 to the right during racing, and the virtual object 1603 is travelling in an indoor scene.
  • the user sees the same lens scene as usual, but the visible objects in other detection point areas outside the indoor scene are not rendered, thereby reducing processor consumption.
  • the detection point area where the current frame is located, as detected by the computer device, is the area shown in FIG. 18; the area 1701 enclosed by the black dots in FIG. 18 is the detection point area where the current frame is located.
  • the area 1701 enclosed by the black dots is an outdoor scene at the foot of a mountain.
  • the processor of the computer device reads the track PVS of the area 1701 enclosed by the black dots and renders the visible objects in that track PVS; visible objects outside the outdoor mountain-foot scene of area 1701 are culled, reducing processor consumption.
  • When the processor of the computer device determines that the virtual object is racing close to the road surface, it determines that the camera model is located at the low-area detection point, and the processor reads the first track PVS corresponding to the low-area detection point.
  • the processor of the computer device renders the first lens image of the camera model according to the first track PVS of the detection point area where the current frame is located.
  • the first track PVS is a collection of visible objects distributed along the track line at the detection points in the low zone in the detection point area.
  • the first lens picture is the picture displayed after the camera model renders the visible objects at the low-area detection point of the detection point area, that is, the picture displayed when the virtual object races close to the road surface.
  • After determining the detection point area where the current frame is located, when the processor of the computer device determines that the virtual object is racing airborne above the road surface, it determines that the camera model is located at the high-area detection point, and the processor reads the second track PVS corresponding to the high-area detection point.
  • the processor of the computer device renders the second lens image of the camera model according to the second track PVS of the detection point area where the current frame is located.
  • the second track PVS is a collection of visible objects distributed along the track line at the detection point in the high zone in the detection point area.
  • the second lens picture is the picture displayed after the camera model renders the visible objects at the high-area detection point of the detection point area, that is, the picture displayed when the virtual object is airborne above the road surface while racing.
  • After determining the detection point area where the current frame is located, the processor of the computer device first uses the vertex renderer (vertex shader) to render the vertices of the visible objects in the detection point area to obtain the outline of each visible object, and then uses the pixel renderer (pixel shader) to render the pixels of each visible object to obtain the three-dimensional effect of each visible object displayed in the lens picture.
  • In this way, the detection point area that the GPU currently needs to render is determined, and no redundant rendering needs to be performed, thereby reducing processor consumption.
  • FIG. 19 shows a schematic structural diagram of a PVS determination device provided by an exemplary embodiment of the present application.
  • the device includes a first dividing module 1810, a first replacement module 1820, a first determination module 1830, a first rendering module 1840, and a first adding module 1850, where:
  • the first dividing module 1810 is used to divide the map area into multiple detection point areas.
  • the first replacement module 1820 is used to replace the texture material of the three-dimensional object in the detection point area with a single-color material, and the color identification of the single-color material corresponding to each three-dimensional object is different.
  • the first replacement module 1820 includes:
  • the first mapping unit 1821 is configured to map the unique object identifier of the three-dimensional object in the detection point area to a color identifier.
  • the first mapping unit 1821 is configured to map the last three digits of the unique object identifier of a three-dimensional object in the detection point area to the red channel value, green channel value, and blue channel value in the red-green-blue color space, respectively, and to determine the color identifier corresponding to the three-dimensional object according to the red channel value, green channel value, and blue channel value.
  • the first replacement unit 1822 is used to replace the texture material of the three-dimensional object with a single color material corresponding to the color identification.
  • the first determination module 1830 is configured to determine at least one detection point in the detection point area.
  • the first determining module 1830 is used to determine a plurality of discrete plane detection points in the detection point area; for each of the plane detection points, detect the plane detection point by physical-model ray detection to obtain the road surface height at the plane detection point; determine the low-area detection point corresponding to the plane detection point according to the first sum of the road surface height and the first height; and determine the high-area detection point corresponding to the plane detection point according to the second sum of the road surface height and the second height.
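  • The low-area/high-area construction can be sketched as below. The height query stands in for the downward physical-model ray detection; the sample values and names are illustrative assumptions only.

```python
def build_detection_points(plane_points, road_height_at, first_height, second_height):
    """Compute low-area and high-area detection points for one detection point area.

    `plane_points` are discrete (x, y) samples on the area, and `road_height_at(x, y)`
    stands in for the downward physical-model ray detection that returns the road
    surface height at that sample. The low-area point sits at road height plus the
    first height (object hugging the road); the high-area point sits at road height
    plus the second height (object airborne above the road).
    """
    low_points, high_points = [], []
    for x, y in plane_points:
        road_height = road_height_at(x, y)
        low_points.append((x, y, road_height + first_height))
        high_points.append((x, y, road_height + second_height))
    return low_points, high_points


# Example with a flat road at height 0, a 1.5-unit vehicle, and a 4.0-unit jump height.
low, high = build_detection_points(
    plane_points=[(0, 0), (5, 0)],
    road_height_at=lambda x, y: 0.0,
    first_height=1.5,
    second_height=1.5 + 4.0,
)
```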
  • the first rendering module 1840 is used to render a cube map corresponding to the detection point, and determine the target color identifier appearing on the cube map.
  • the first adding module 1850 is described below; as shown in FIG. 21, the first rendering module 1840 includes:
  • the rendering unit 1841 is configured to respectively render the six directional surfaces at the detection point to obtain corresponding two-dimensional texture maps on each directional surface.
  • the merging unit 1842 is used to assemble the two-dimensional texture maps in the six directions to obtain cube maps corresponding to the detection points.
  • the traversing unit 1843 is used to traverse the pixel values of the two-dimensional texture map on the six directions of the cubemap, and determine the target color identifier appearing on the cubemap according to the pixel values appearing on the two-dimensional texture map.
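  • The traversal can be pictured as below: each of the six faces is treated as a 2D array of RGB pixels, the distinct colors found are collected, and (using the inverse of the identifier encoding sketched earlier, which is an assumption of these sketches) mapped back to object identifiers.

```python
def collect_target_color_ids(cube_faces):
    """Traverse the six 2D texture maps of a cube map and collect the colors found.

    `cube_faces` is a list of six faces, each a row-major list of rows of
    (R, G, B) pixel tuples. The returned set contains every distinct color that
    appears, i.e. the target color identifiers of the objects visible from the
    detection point (background colors would be filtered out in practice).
    """
    target_colors = set()
    for face in cube_faces:
        for row in face:
            target_colors.update(row)
    return target_colors


# Two tiny 2x2 faces; the remaining four faces are omitted for brevity.
faces = [
    [[(0, 0, 7), (0, 0, 7)], [(0, 0, 42), (0, 0, 7)]],
    [[(0, 3, 233), (0, 0, 42)], [(0, 0, 42), (0, 0, 42)]],
]
visible_colors = collect_target_color_ids(faces)
visible_object_ids = {(r << 16) | (g << 8) | b for r, g, b in visible_colors}  # inverse mapping
```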
  • the first adding module 1850 is used to add the three-dimensional object corresponding to the target color identifier to the PVS of the detection point area.
  • the first adding module 1850 is used to add the three-dimensional object corresponding to the target color identifier to the first PVS of the detection point area when the target color identifier belongs to the cube map corresponding to the low-area detection point, and to add the three-dimensional object corresponding to the target color identifier to the second PVS of the detection point area when the target color identifier belongs to the cube map corresponding to the high-area detection point.
  • In addition to the first adding module 1850, the device further includes:
  • the setting module 1860 is used to set the semi-transparent three-dimensional object in the detection point area as a hidden attribute.
  • the setting module 1860 is used to reset the translucent three-dimensional objects in the detection point area to display attributes, and set the three-dimensional objects other than the translucent three-dimensional objects to hidden attributes.
  • the first replacement module 1820 is used to replace the texture material of the translucent three-dimensional object with a single color material, and the color identification of the single color material corresponding to each translucent three-dimensional object is different.
  • the first determination module 1830 is configured to determine at least one detection point in the detection point area.
  • the first rendering module 1840 is used to render a cube map corresponding to the detection point, and determine the target color identifier appearing on the cube map.
  • the first adding module 1850 is used to add the translucent three-dimensional object corresponding to the target color identifier to the PVS of the detection point area.
  • FIG. 22 shows a schematic structural diagram of a three-dimensional scene rendering device provided by an exemplary embodiment of the present application, which is applied to a terminal storing a detection point area and a PVS.
  • the PVS is obtained by using the device described above.
  • the detection module 2110 is used to detect whether the detection point area where the camera model is located in the current frame is the same as the detection point area where the previous frame is located.
  • the reading module 2120 is configured to read the PVS of the detection point area where the current frame is located when the detection point area where the camera model is located in the current frame is different from the detection point area where the previous frame is located.
  • the second rendering module 2130 is configured to render the lens image of the camera model according to the PVS of the detection point area where the current frame is located.
  • the second rendering module 2130 is configured to render the first lens picture of the camera model according to the first PVS of the detection point area where the current frame is located.
  • the second rendering module 2130 is configured to render a second lens image of the camera model according to the second PVS of the detection point area where the current frame is located.
  • the device is applied to a 3D racing game.
  • the 3D racing game includes a track area located in a virtual environment.
  • the device includes:
  • the second dividing module 2210 is used to divide the track area into multiple detection point areas.
  • the second replacement module 2220 is configured to replace the texture material of the three-dimensional object in the detection point area with a single-color material, and the color identification of the single-color material corresponding to each three-dimensional object is different.
  • the second determination module 2230 is configured to determine at least one detection point in the detection point area.
  • the third rendering module 2240 is configured to render a cube map corresponding to the detection point, and determine the target color identifier appearing on the cube map.
  • the second adding module 2250 is used to add the three-dimensional object corresponding to the target color identifier to the track PVS of the detection point area.
  • The PVS determination device and the three-dimensional scene rendering device provided in the above embodiments are described only by way of example with the division of the functional modules described above.
  • In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • The PVS determination device and the three-dimensional scene rendering device provided in the above embodiments belong to the same concept as the corresponding method embodiments; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
  • FIG. 24 shows a schematic structural diagram of a server provided by an embodiment of the present application.
  • the server may be the computer device mentioned in the above embodiment, and is used to implement the method for determining the potential visual set and/or the method for rendering the three-dimensional scene provided in the above embodiment. Specifically:
  • the server 2300 includes a central processing unit (CPU) 2301, a system memory 2304 including a random access memory (RAM) 2302 and a read only memory (ROM) 2303, and a system bus 2305 connecting the system memory 2304 and the central processing unit 2301.
  • the server 2300 also includes a basic input/output system (I/O system) 2306 that helps transfer information between the various components within the computer, and a mass storage device 2307 for storing an operating system 2313, application programs 2314, and other program modules 2315.
  • the basic input/output system 2306 includes a display 2308 for displaying information and an input device 2309 for a user to input information such as a mouse and a keyboard.
  • the display 2308 and the input device 2309 are both connected to the central processing unit 2301 through an input and output controller 2310 connected to the system bus 2305.
  • the basic input/output system 2306 may further include an input-output controller 2310 for receiving and processing input from a number of other devices such as a keyboard, mouse, or electronic stylus.
  • the input output controller 2310 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 2307 is connected to the central processing unit 2301 through a mass storage controller (not shown) connected to the system bus 2305.
  • the mass storage device 2307 and its associated computer-readable medium provide non-volatile storage for the server 2300. That is, the mass storage device 2307 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
  • the computer-readable medium may include a computer storage medium and a communication medium.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory, or other solid-state storage technologies, CD-ROM, DVD, or other optical storage, tape cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices.
  • the above-mentioned system memory 2304 and mass storage device 2307 may be collectively referred to as a memory.
  • the server 2300 may also be operated by connecting to a remote computer on the network through a network such as the Internet. That is, the server 2300 can be connected to the network 2312 through the network interface unit 2311 connected to the system bus 2305, or the network interface unit 2311 can also be used to connect to other types of networks or remote computer systems (not shown) .
  • the memory also includes one or more programs that are stored in the memory and configured to be executed by one or more processors.
  • The one or more programs contain instructions for performing the operations of the methods described in the above embodiments.
  • FIG. 25 shows a structural block diagram of a terminal 2400 provided by an exemplary embodiment of the present invention.
  • the terminal 2400 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the terminal 2400 may also be called other names such as user equipment, portable terminal, laptop terminal, and desktop terminal.
  • the terminal 2400 includes a processor 2401 and a memory 2402.
  • the processor 2401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 2401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 2401 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the awake state, also known as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 2401 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used to render and draw content that needs to be displayed on the display screen.
  • the processor 2401 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 2402 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 2402 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 2402 is used to store at least one instruction, which is executed by the processor 2401 to implement the method for observing the virtual environment and/or the method for rendering a three-dimensional scene provided in the method embodiments of the present application.
  • the terminal 2400 may optionally further include: a peripheral device interface 2403 and at least one peripheral device.
  • the processor 2401, the memory 2402, and the peripheral device interface 2403 may be connected by a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 2403 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 2404, a touch display screen 2405, a camera 2406, an audio circuit 2407, a positioning component 2408, and a power supply 2409.
  • the peripheral device interface 2403 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 2401 and the memory 2402.
  • In some embodiments, the processor 2401, the memory 2402, and the peripheral device interface 2403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2401, the memory 2402, and the peripheral device interface 2403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 2404 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 2404 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 2404 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 2404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 2404 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 2404 may further include a circuit related to NFC (Near Field Communication), which is not limited in this application.
  • the display screen 2405 is used to display a UI (User Interface, user interface).
  • the UI may include graphics, text, icons, video, and any combination thereof.
  • the display screen 2405 also has the ability to collect touch signals on or above the surface of the display screen 2405.
  • the touch signal can be input to the processor 2401 as a control signal for processing.
  • the display screen 2405 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 2405, provided on the front panel of the terminal 2400; in other embodiments, there may be at least two display screens 2405, respectively provided on different surfaces of the terminal 2400 or in a folded design; in still other embodiments, the display screen 2405 may be a flexible display screen disposed on a curved or folded surface of the terminal 2400. The display screen 2405 may even be set to a non-rectangular irregular shape, that is, a shaped screen.
  • the display screen 2405 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
  • the camera assembly 2406 is used to collect images or videos.
  • the camera assembly 2406 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blur function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions.
  • the camera assembly 2406 may also include a flash.
  • the flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
  • the audio circuit 2407 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 2401 for processing, or input them to the radio frequency circuit 2404 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 2401 or the radio frequency circuit 2404 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 2407 may also include a headphone jack.
  • the positioning component 2408 is used to locate the current geographic location of the terminal 2400 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 2408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of the European Union.
  • the power supply 2409 is used to supply power to each component in the terminal 2400.
  • the power source 2409 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • the wired rechargeable battery is a battery charged through a wired line
  • the wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 2400 further includes one or more sensors 2410.
  • the one or more sensors 2410 include, but are not limited to, an acceleration sensor 2411, a gyro sensor 2412, a pressure sensor 2413, a fingerprint sensor 2414, an optical sensor 2415, and a proximity sensor 2416.
  • the acceleration sensor 2411 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 2400.
  • the acceleration sensor 2411 can be used to detect the components of gravity acceleration on three coordinate axes.
  • the processor 2401 may control the touch display screen 2405 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 2411.
  • the acceleration sensor 2411 can also be used for game or user movement data collection.
  • the gyro sensor 2412 can detect the body direction and rotation angle of the terminal 2400, and the gyro sensor 2412 can cooperate with the acceleration sensor 2411 to collect the user's 3D action on the terminal 2400. Based on the data collected by the gyro sensor 2412, the processor 2401 can realize the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 2413 may be disposed on the side frame of the terminal 2400 and/or the lower layer of the touch display 2405.
  • When the pressure sensor 2413 is disposed on the side frame of the terminal 2400, it can detect the user's grip signal on the terminal 2400, and the processor 2401 can perform left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 2413.
  • the processor 2401 controls the operability control on the UI interface according to the user's pressure operation on the touch display 2405.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 2414 is used to collect the user's fingerprint.
  • the processor 2401 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 2414, or the fingerprint sensor 2414 identifies the user's identity based on the collected fingerprint.
  • the processor 2401 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 2414 may be provided on the front, back, or side of the terminal 2400. When a physical button or manufacturer logo is provided on the terminal 2400, the fingerprint sensor 2414 may be integrated with the physical button or manufacturer logo.
  • the optical sensor 2415 is used to collect the ambient light intensity.
  • the processor 2401 may control the display brightness of the touch display 2405 according to the ambient light intensity collected by the optical sensor 2415. Specifically, when the ambient light intensity is high, the display brightness of the touch display 2405 is increased; when the ambient light intensity is low, the display brightness of the touch display 2405 is decreased.
  • the processor 2401 may also dynamically adjust the shooting parameters of the camera assembly 2406 according to the ambient light intensity collected by the optical sensor 2415.
  • the proximity sensor 2416 also called a distance sensor, is usually provided on the front panel of the terminal 2400.
  • the proximity sensor 2416 is used to collect the distance between the user and the front of the terminal 2400.
  • When the proximity sensor 2416 detects that the distance between the user and the front of the terminal 2400 gradually decreases, the processor 2401 controls the touch display 2405 to switch from the screen-on state to the screen-off state; when the proximity sensor 2416 detects that the distance between the user and the front of the terminal 2400 gradually increases, the processor 2401 controls the touch display 2405 to switch from the screen-off state to the screen-on state.
  • The structure shown in FIG. 25 does not constitute a limitation on the terminal 2400, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • a computer-readable storage medium is also provided.
  • the computer-readable storage medium is a non-volatile computer-readable storage medium, and the computer-readable storage medium stores computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the method for determining the PVS provided by the foregoing embodiments of the present disclosure is implemented.
  • a computer-readable storage medium is also provided.
  • the computer-readable storage medium is a non-volatile computer-readable storage medium, and the computer-readable storage medium stores computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the method for rendering a three-dimensional scene provided by the foregoing embodiments of the present disclosure is implemented.
  • A computer program product is also provided, which stores at least one instruction, at least one program, a code set, or an instruction set.
  • The at least one instruction, at least one program, code set, or instruction set is loaded and executed by a processor to implement the PVS determination method performed by the terminal as shown in the above method embodiments.
  • A computer program product is also provided, which stores at least one instruction, at least one program, a code set, or an instruction set.
  • The at least one instruction, at least one program, code set, or instruction set is loaded and executed by a processor to implement the three-dimensional scene rendering method performed by the terminal as shown in the above method embodiments.
  • the program may be stored in a computer-readable storage medium.
  • the mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for determining a potentially visible set, comprising: dividing a map area into a plurality of detection point areas; replacing the texture materials of the three-dimensional objects in a detection point area with single-color materials; determining at least one detection point in the detection point area; rendering a cube map corresponding to the detection point and determining the target color identifiers appearing on the cube map; and adding the three-dimensional objects corresponding to the target color identifiers to the potentially visible set (PVS) of the detection point area.

Description

潜在可视集合的确定方法、装置、设备及存储介质
本申请要求于2018年12月07日提交中国专利局,申请号为201811493375.0、发明名称为“潜在可视集合的确定方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及数据处理技术领域,特别涉及一种潜在可视集合的确定方法、装置、设备及存储介质。
背景技术
渲染性能是三维虚拟环境的应用程序在运行过程中的重要因素,渲染性能的高低决定了三维虚拟环境的应用程序运行时的流畅程度,而限制渲染性能的瓶颈则是中央处理器(Central Processing Unit,CPU)。比如,在3D竞速游戏中,CPU频繁地通过Draw Call向图形处理器(Graphics Processing Unit,GPU)发送执行渲染操作的命令,使得Draw Call占3D竞速游戏对CPU消耗的一半。因此,CPU通过基于预计算的方式,对3D竞速游戏的地图场景进行可视检测,能够减少Draw Call对CPU的消耗。
当CPU在预计算时,以摄像机模型自身为原点向四周随机打射线,判断该摄像机模型四周的可视物体,当打出的射线与物体存在交集,CPU判定该物体为可视物体;当打出的射线与物体没有存在交集,CPU判定该物体为不可视物体。
然而,CPU采用随机打射线的方式进行预计算,当打出的射线的数量不够多时,可能会存在一些物体由于未被射线打中而导致误判成不可视,使得CPU没有通过Draw Call向GPU发送渲染该物体的命令,虽然Draw Call在CPU上的消耗减少了,但是最终显示在潜在可视集合(Potentially Visible Set,PVS)中的结果有误。
发明内容
本申请实施例提供了一种潜在可视集合的确定方法、装置、设备及存储介质,三维场景的渲染方法装置、设备及存储介质。
一种潜在可视集合的确定方法,由计算机设备执行,所述方法包括:
将地图区域划分为多个检测点区域;
将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
在所述检测点区域中确定至少一个检测点;
渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
将所述目标颜色标识对应的三维物体添加至所述检测点区域的PVS中。
一种三维场景的渲染方法,应用于存储有检测点区域和PVS的计算机设备中,所述PVS是采用如上所述的方法生成的,所述方法包括:
检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同;
当所述摄像机模型在所述当前帧所在的检测点区域与所述上一帧所在的检测点区域不相同时,读取所述当前帧所在的检测点区域的PVS;及
根据所述当前帧所在的检测点区域的PVS,渲染得到所述摄像机模型的镜头画面。
一种潜在可视集合的确定方法,由计算机设备执行,所述方法应用于3D竞速游戏中,所述3D竞速游戏包括位于虚拟环境中的赛道区域,所述方法包括:
将赛道区域划分为多个检测点区域;
将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
在所述检测点区域中确定至少一个检测点;
渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
将所述目标颜色标识对应的三维物体添加至所述检测点区域的赛道PVS 中。
一种潜在可视集合的确定装置,所述装置包括:
第一划分模块,用于将地图区域划分为多个检测点区域;
第一替换模块,用于将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
第一确定模块,用于在所述检测点区域中确定至少一个检测点;
第一渲染模块,用于渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
第一添加模块,用于将所述目标颜色标识对应的三维物体添加至所述检测点区域的潜在可视集合PVS中。
一种三维场景的渲染装置,应用于存储有检测点区域和PVS的终端中,所述PVS是采用如上所述的方法生成的,所述装置包括:
检测模块,用于检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同;
读取模块,用于当所述摄像机模型在所述当前帧所在的检测点区域与所述上一帧所在的检测点区域不相同时,读取所述当前帧所在的检测点区域的PVS;及
第二渲染模块,用于根据所述当前帧所在的检测点区域的PVS,渲染得到所述摄像机模型的镜头画面。
一种潜在可视集合的确定装置,所述装置应用于3D竞速游戏中,所述3D竞速游戏包括位于虚拟环境中的赛道区域,所述装置包括:
第二划分模块,用于将赛道区域划分为多个检测点区域;
第二替换模块,用于将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
第二确定模块,用于在所述检测点区域中确定至少一个检测点;
第三渲染模块,用于渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
第二添加模块,用于将所述目标颜色标识对应的三维物体添加至所述检测点区域的赛道PVS中。
一种计算机设备,所述终端包括处理器和存储器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如上所述的潜在可视集合的确定方法的步骤。
一种计算机设备,所述终端包括处理器和存储器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如上所述的三维场景的渲染方法的步骤。
一种计算机可读存储介质,所述存储介质中存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如上所述的潜在可视集合的确定方法的步骤。
一种计算机可读存储介质,所述存储介质中存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如上所述的三维场景的渲染方法的步骤。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个示例性实施例提供的计算机系统的结构框图;
图2是本申请一个示例性实施例提供的PVS的确定方法的流程图;
图3是本申请另一个示例性实施例提供的PVS的确定方法的流程图;
图4是本申请另一个示例性实施例提供的虚拟环境中的三维物体的示意图;
图5是本申请另一个示例性实施例提供的虚拟环境中的三维物体映射后的示意图;
图6是本申请另一个示例性实施例提供的在平面检测点处路面高度的示意图;
图7是本申请另一个示例性实施例提供的在检测点处的二维纹理贴图的示意图;
图8是本申请另一个示例性实施例提供的半透明三维物体的可视检测方法的流程图;
图9是本申请另一个示例性实施例提供的PVS的确定方法的流程图;
图10是本申请另一个示例性实施例提供的第一对话框的示意图;
图11是本申请另一个示例性实施例提供的第二对话框的示意图;
图12是本申请另一个示例性实施例提供的第三对话框的示意图;
图13是本申请一个示例性实施例提供的三维场景的渲染方法的流程图;
图14是本申请一个示例性实施例提供的沿赛道划分的检测点区域的示意图;
图15是本申请一个示例性实施例提供的判断摄像机模型是否在检测点区域的示意图;
图16是本申请一个示例性实施例提供的摄像机模型在室内场景的示意图;
图17是本申请一个示例性实施例提供的摄像机模型在室内场景站的镜头画面的示意图;
图18是本申请一个示例性实施例提供的摄像机模型在室外山脚场景的示意图;
图19是本申请一个示例性实施例提供的PVS的确定装置的结构示意图;
图20是本申请一个示例性实施例提供的第一替换模块的结构示意图;
图21是本申请一个示例性实施例提供的第一渲染模块的结构示意图;
图22是本申请一个示例性实施例提供的三维场景的渲染装置的结构示意图;
图23是本申请另一个示例性实施例提供的PVS的确定装置的结构示意图;
图24是本申请一个实施例提供的服务器的结构示意图;
图25是本发明一个示例性实施例提供的终端的结构框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申 请实施方式作进一步地详细描述。
首先,对本申请实施例涉及的若干个名词进行解释:
虚拟环境:是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的三维环境,还可以是纯虚构的三维环境。虚拟环境可以是二维虚拟环境、2.5维虚拟环境和三维虚拟环境中的任意一种,下述实施例以虚拟环境是三维虚拟环境来举例说明,但对此不加以限定。可选地,该虚拟环境包括至少一个虚拟角色,该虚拟角色在虚拟环境中活动。可选地,该虚拟环境还用于至少两个虚拟角色之间的虚拟环境竞速。可选地,该虚拟环境还用于至少两个虚拟角色之间使用虚拟载具进行竞速。
虚拟对象:是指在虚拟环境中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物、虚拟载具中的至少一种。可选地,当虚拟环境为三维虚拟环境时,虚拟对象是三维立体模型。每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
GPU:又称显示核心、视觉处理器、显示芯片,是一种在终端上专门对图像进行图像运算工作的微处理器。GPU用于根据CPU的命令,对检测点区域中的三维物体进行渲染,使得渲染后的三维物体具有三维效果。
Draw Call:是CPU调用图形编程的函数接口。CPU通过Draw Call向GPU下渲染命令,GPU根据渲染命令执行渲染操作。Draw Call用于实现CPU与GPU之间渲染命令的参数的传递。
视锥体(Frustum):是指类似于切去顶部,且顶部与底部平行的金字塔形状的立方体。视锥体具有上、下、左、右、近、远,共六个面。视锥体是以摄像机模型为原点,根据摄像机模型的摄像镜头的角度确定的视觉范围,即根据摄像机模型的摄像镜头的角度,形成类似金字塔形状的视觉范围。而在类似金字塔形状的视觉范围内距离摄像机模型过近的物体或距离摄像机模型过远的物体,都不会显示在摄像镜头中,能够显示在摄像镜头中的视觉范围即为视锥体,故视锥体是用于在摄像镜头中显示物体的立方体空间,且位于视锥体内的物体即为可见物体,位于视锥体外的物体即为不可见物体。而 位于视锥体内的三维物体通过投影显示在摄像镜头的二维平面上,该投影显示的图像即为人眼在地图场景中看到的图像。
检测点区域:是在虚拟环境中,按照划分规则划分出的凸四边形区域。划分规则包括沿虚拟对象行进的预定路线进行划分,按照虚拟对象活动范围进行划分中的至少一种。可选地,在3D竞速游戏中,检测点区域是在3D竞速游戏的虚拟环境中,根据赛道路线划分的凸四边形区域,该检测点区域在划分好之后,不会变化。
遮挡剔除:指在摄像镜头中,一个物体被其他物体遮挡住,使得该物体不可见,CPU对该物体不渲染。CPU通过遮挡剔除的方法,减少CPU对地图场景中的物体的渲染,从而减少Draw Call在CPU上的消耗。而且,为了减少重复渲染的操作,采用测量物体与摄像机之间的距离的方法,根据距离的远近,由近及远的对物体进行遮挡剔除操作。
立方体贴图(Cubemap):是由六张二维纹理贴图拼合而成的立方体。立方体贴图是以摄像机模型为中心点,摄像机模型通过向六个方向分别进行一次渲染(即每次将摄像镜头的角度旋转90°),形成的具有六张二维纹理贴图的立方体。立方体贴图用于对天空等远景进行渲染,使得远景能够与移动的近景(比如人物)保持相对静止,从而实现远景的效果。
PVS:是在视点位置,或视点所在区域,经过遮挡剔除后,剩余的能看见的物体集合。
顶点渲染器(VertexShader):用于对具有顶点属性的顶点数据进行各种运算。
像素渲染器(PixelShader):用于对具有像素属性的像素数据进行各种运算。
本申请实施例提供了一种潜在可视集合的确定方法、装置、设备及存储介质,可以解决当CPU采用随机打射线的方式进行预计算,且打出的射线的数量不够多时,可能会存在摄像镜头中的可视物体由于未被射线打中而导致误判成不可视,导致最终显示在摄像镜头中的渲染结果出现缺失的问题。
图1示出了本申请一个示例性实施例提供的计算机系统的结构框图,该计算机系统100包括:第一终端110、第二终端130以及服务器120。
第一终端110安装和运行有支持虚拟环境的应用程序,当第一终端110运行应用程序时,第一终端110的屏幕上显示应用程序的用户界面111。该应用程序可以是体育游戏、载具模拟游戏、动作游戏中的任意一种。第一终端110是第一用户101使用的终端,第一用户101使用第一终端110控制位于虚拟环境中的第一虚拟对象进行竞速,该第一虚拟对象包括但不限于:赛车、越野车、卡丁车、飞车、飞机、摩托车、山地车、和虚拟人物中的至少一种。示意性的,第一虚拟对象是第一虚拟载具,比如仿真飞车或仿真赛车。
第二终端130安装和运行有支持虚拟环境的应用程序。该应用程序可以是体育游戏、载具模拟游戏、动作游戏中的任意一种,当第二终端130运行应用程序时,第二终端130的屏幕上显示应用程序的用户界面131。第二终端130是第二用户102使用的终端,第二用户102使用第二终端130控制位于虚拟环境中的第二虚拟对象进行竞速,该第二虚拟对象包括但不限于:赛车、越野车、卡丁车、飞车、飞机、摩托车、山地车、和虚拟人物中的至少一种。示意性的,第二虚拟对象是第二虚拟载具,比如仿真飞车或仿真赛车。
可选地,第一虚拟对象和第二虚拟对象具处于同一虚拟环境中。可选地,第一虚拟对象和第二虚拟对象可以属于同一个阵营、同一个队伍、同一个组织、具有好友关系或具有临时性的通讯权限。可选的,第一虚拟对象和第二虚拟对象可以属于不同的阵营、不同的队伍、不同的组织或具有敌对关系。
可选地,第一终端110和第二终端130上安装的应用程序是相同的,或两个终端上安装的应用程序是不同控制系统平台的同一类型应用程序。第一终端110可以泛指多个终端中的一个,第二终端130可以泛指多个终端中的一个,本实施例仅以第一终端110和第二终端130来举例说明。第一终端110和第二终端130的设备类型相同或不同,该设备类型包括:台式计算机、膝上型便携计算机、手机、平板电脑、电子书阅读器、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)播放器、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器中的至少一种。
其它终端140可以是开发者对应的终端,在终端140上安装有虚拟环境的应用程序的开发和编辑平台,开发者可在终端140上对应用程序进行编辑,并将编辑后的应用程序文件通过有线或无线网络传输至服务器120,第一终端110和第二终端130可从服务器120下载应用程序对应的更新包实现对应用程序的更新。
第一终端110、第二终端130以及其它终端140通过无线网络或有线网络与服务器120相连。
服务器120包括一台服务器、多台服务器、云计算平台和虚拟化中心中的至少一种。服务器120用于为支持三维虚拟环境的应用程序提供后台服务。可选地,服务器120承担主要计算工作,终端承担次要计算工作;或者,服务器120承担次要计算工作,终端承担主要计算工作;或者,服务器120和终端之间采用分布式计算架构进行协同计算。
可以理解,本申请实施例所提及的终端和服务器均可单独用于执行本申请实施例中提供的潜在可视集合的确定方法和/或三维场景的渲染方法。终端和服务器也可协同用于执行本申请实施例中提供的潜在可视集合的确定方法和/或三维场景的渲染方法。
服务器120包括至少一个服务器模组121,服务器模组121包括处理器122、用户数据库123、应用程序数据库124、面向用户的输入/输出接口(Input/Output Interface,I/O接口)125以及面向开发者的输出/输出接口126。其中,处理器122用于加载服务器模组121中存储的指令,处理用户数据库123和应用程序数据库124中的数据;用户数据库123用于存储第一终端110和/或第二终端130通过无线网络或有线网络上传的用户数据;应用程序数据库124用于存储2.5维虚拟环境的应用程序中的数据;面向用户的I/O接口125用于通过无线网络或有线网络和第一终端110和/或第二终端130建立通信交换数据;面向开发者的I/O接口126用于通过无线网络或有线网络和其它终端140建立通信交换数据。
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量。本申请实施例对终端的数量和设备类型不加以限定。
图2示出了本申请一个示例性实施例提供的PVS的确定方法的流程图,该方法可以应用于计算机设备,该计算机设备具体可以是如图1所示的计算机系统中中的第一终端110、第二终端130、其它终端140或者服务器120,该方法包括:
步骤S201,将地图区域划分为多个检测点区域。
其中,地图区域是虚拟对象在虚拟环境中的活动范围。在地图区域中包括不透明物体和半透明物体。虚拟对象在地图区域中活动,不透明物体和半透明物体皆是可视物体,计算机设备的处理器通过Draw Call调用GPU对可视物体进行渲染,使得可视物体显示时具有三维效果。其中,不透明物体在本实施例中称为三维物体,半透明物体在本实施例中称为半透明三维物体。
具体地,计算机设备的处理器根据划分规则将地图区域划分为多个检测点区域,检测点区域是根据划分规则划分而成的凸四边形区域。划分规则包括沿虚拟对象行进的预定路线进行划分,和,按照虚拟对象活动范围进行划分中的至少一种。
可选地,多个检测点区域的形状可以是相同的形状,也可以是不相同的;多个检测点区域的面积可以是相等的,也可以是不相等的;不同的地图区域划分出的检测点区域的数量可以是相同的,也可以是不相同的。
步骤S202,将检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同。
计算机设备的处理器对检测点区域中的每个三维物体进行唯一标识,即不同的三维物体对应一个唯一物体标识。唯一物体标识用于标记检测点区域中的三维物体,处理器根据唯一物体标识在三维虚拟世界中确定对应的三维物体。
在一个实施例中,计算机设备的处理器对唯一物体标识进行映射,将检测点区域中的三维物体的贴图材质替换为单颜色材质。由于唯一物体标识的唯一性,每个三维物体对应的单颜色材质的颜色标识不同。颜色标识用于标识三维物体的贴图材质替换后的单颜色材质对应的单颜色。
步骤S203,在检测点区域中确定至少一个检测点。
具体地,计算机设备的处理器在检测点区域中设置多个检测点。检测点是用于监测可视物体的位置点。多个检测点是分散位于检测点区域中的不同位置点。
可选地,本实施例对检测点区域中的检测点设置的位置和数量不做限定。
步骤S204,渲染出检测点对应的立方体贴图,确定立方体贴图上出现的目标颜色标识。
具体地,计算机设备的可以确定的检测点作为摄像机模型的所在位置,摄像镜头在检测点处通过每次旋转90°对检测点的六个方向面进行一次渲染,得到二维纹理贴图。计算机设备的处理器根据六个方向面渲染得到的二维纹理贴图,得到检测点对应的立方体贴图。处理器根据立方体贴图做三维物体的可视检测,通过检测在立方体贴图上的目标颜色标识,确定在该立方体贴图上出现的单颜色。
步骤S205,将目标颜色标识对应的三维物体添加至检测点区域的潜在可视集合PVS中。
具体地,计算机设备的处理器根据确定出的立方体贴图上的目标颜色标识,确定目标颜色标识对应的三维物体,从而确定当摄像机模型在形成该立方体贴图的检测点时,摄像镜头能够看到的三维物体。目标颜色标识用于识别立方体贴图上存在的三维物体。计算机设备的处理器将确定的三维物体添加至检测点区域的PVS中,使得在用户侧运行时,根据检测出的用户当前所在的检测点区域,渲染该检测点区域中的所有可视物体,故PVS是该检测点区域中所有可视物体的集合。
综上所述,本实施例提供的方法,通过将地图区域划分为多个检测点区域,并将检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同,使得检测点区域中的三维物体具有唯一标识,再在检测点区域中确定的检测点对应的立方体贴图上,确定目标颜色标识,将目标颜色标识对应的三维物体添加至检测点区域的PVS中,从而确定出在立方体贴图上的可视物体。区别于相关技术中使用立方体贴图对天空等远景进行渲染,达到远景的效果,本申请创造性地利用立方体贴图做检测点区域内的可视物体的检测,通过检测立方体贴图的六个方向面上的目 标颜色标识,使得检测点区域内任何角度的可视物体都得以检测到,与相关技术相比,避免了随机打射线的随机性与不稳定性,检测可视物体时的准确性得以保证,从而使得最终显示在PVS中的结果无误。而检测点区域内的三维物体替换为二维的单颜色,减少了检测时的计算量。
图3示出了本申请另一个示例性实施例提供的PVS的确定方法的流程图,该方法可以应用于如图1所示的计算机系统中,该方法包括:
步骤S301,将地图区域划分为多个检测点区域。
其中,地图区域是虚拟对象在虚拟环境中的活动范围。在地图区域中包括不透明物体和半透明物体。虚拟对象在地图区域中活动,不透明物体和半透明物体皆是可视物体,处理器通过Draw Call调用GPU对可视物体进行渲染,使得可视物体显示时具有三维效果。其中,不透明物体在本实施例中称为三维物体,半透明物体在本实施例中称为半透明三维物体。
具体地,计算机设备的处理器根据划分规则将地图区域划分为多个检测点区域,检测点区域是根据划分规则划分而成的凸四边形区域。划分规则包括沿虚拟对象行进的预定路线进行划分,和,按照虚拟对象活动范围进行划分中的至少一种。
可选地,多个检测点区域的形状可以是相同的形状,也可以是不相同的;多个检测点区域的面积可以是相等的,也可以是不相等的;不同的地图区域划分出的检测点区域的数量可以是相同的,也可以是不相同的。
步骤S302,将检测点区域中的三维物体的唯一物体标识映射为颜色标识。
其中,唯一物体标识是检测点区域中的每个三维物体的唯一标记。计算机设备的处理器对检测点区域中的三维物体使用唯一物体标识进行唯一标识,即检测点区域中的每个三维物体对应一个唯一物体标识,且唯一物体标识不重复。该唯一物体标识用于在三维场景中唯一地标识三维物体。
参见图4,示出了虚拟环境中的三维物体,图4中的三维物体由GPU进行渲染后,显示出三维效果,每个三维物体皆是可视物体。处理器读取每个三维物体对应的唯一物体标识。
在一种可选的实施方式中,当图4所示的地图区域中的每个检测点区域 中的可视物体的数量不超过255个时,处理器将根据读取到的检测点区域中的三维物体的唯一物体标识的最后三位,分别映射为红绿蓝颜色空间中的红色通道值、绿色通道值和蓝色通道值,图4所示的地图区域经过映射后,处理器得到图5所示的映射图。结合图4与图5中的可视物体分析,发现每个三维物体对应唯一一个单颜色,即每个三维物体经过映射后得到不同的单颜色,每个三维物体根据不同的单颜色获得唯一的颜色标识。
在一个示意性的例子中,实现将三维物体的唯一物体标识的最后三位分别映射为红色通道值、绿色通道值和蓝色通道值的代码如下:
Figure PCTCN2019120864-appb-000001
计算机设备的处理器根据红色通道值、绿色通道值和蓝色通道值,确定出三维物体对应的颜色标识。颜色标识用于根据颜色标识的唯一性,识别检测点区域中的三维物体。
步骤S303,将三维物体贴图材质替换为与颜色标识对应的单颜色材质。
其中,颜色标识是对三维物体的唯一物体标识映射后得到的唯一标识。在一个实施例中,颜色标识包括红色通道值、绿色通道值、以及蓝色通道值。单颜色材质是根据红色通道值、绿色通道值和蓝色通道值合成的单颜色对应的材质。
步骤S304,在检测点区域上确定出离散的多个平面检测点。
在一种可选的实施方式中,计算机设备的处理器通过在检测点区域的平面上插值,将插值所在的位置确定为离散的多个平面检测点。
在一个实施例中,计算机设备的处理器通过在检测点区域的平面上均等插值,得到在检测点区域的平面上均等分布的多个平面检测点。
其中,本实施例对如何在检测点区域上确定出多个平面检测点,以及在检测点区域上确定出的平面检测点的数量不做具体限定。
步骤S305,对于多个平面检测点中的每个平面检测点,通过物理模型射线检测对平面检测点进行检测,获得平面检测点处的路面高度。
具体地,对于多个平面检测点中的每个平面检测点,计算机设备的处理器通过物理模型射线检测,对每个平面检测点进行碰撞检测。在检测点区域中的多个平面检测点处于同一水平线上,计算机设备的处理器确定至少一个平面检测点,在该平面检测点处沿垂直向下方向发射射线,根据发射射线碰撞到路面的碰撞点,确定该平面检测点处的路面高度。
示意性的,参见图6,示出了对平面检测点进行碰撞检测,获取路面高度的示意图。图6中包括第一平面检测点601和第二平面检测点602,第一平面检测点601和第二平面检测点602处于同一水平线上。
计算机设备的处理器在第一平面检测点601处通过物理模型射线检测进行碰撞检测,射线从第一平面检测点601处沿垂直向下方向发射射线,处理器获得第一平面检测点601的碰撞点603,根据第一平面检测点601的碰撞点603,确定第一平面检测点601的路面高度h2。
计算机设备的处理器在第二平面检测点602处通过物理模型射线检测进行碰撞检测,射线从第二平面检测点602处沿垂直向下方向发射射线,计算机设备的处理器获得第二平面检测点602的碰撞点604,根据第二平面检测点602的碰撞点604,确定第二平面检测点602的路面高度h4(图中未示出)。在图6中,第二平面检测点602处的路面高度h4=0,即第二平面检测点602处的路面是平路。
在一个实施例中,平面检测点是若干个均匀分布在检测点区域中的检测点。
步骤S306,根据路面高度与第一高度相加后的第一和值,确定出平面检测点对应的低区检测点。
其中,第一高度是虚拟对象处在活动状态下自身的高度。在一个实施例中,计算机设备的处理器将检测到的平面检测点处的路面高度与第一高度相加,得到第一和值,第一和值是虚拟对象在活动时紧贴在路面上的状态的高度值。计算机设备的处理器根据第一和值,将第一和值对应的平面检测点确定为低区检测点,低区检测点是虚拟对象紧贴在路面上活动的检测点。
参见图6,虚拟对象605活动时自身的高度为h1,第二平面检测点602处的路面高度h4=0。当虚拟对象605竞速至第二平面检测点602对应的路面处,虚拟对象601紧贴在路面上,故第一高度是虚拟对象601自身高度h1。处理器将h1与h4相加得到第一和值,即h1+h4,根据第一和值,确定第二平面检测点602对应的低区检测点606。
在一个实施例中,低区检测点是若干个均匀分布在检测点区域中的检测点。
步骤S307,根据路面高度与第二高度相加后的第二和值,确定出平面检测点对应的高区检测点。
其中,第二高度是虚拟对象处在活动状态下自身的高度,与虚拟对象在活动时自身距离路面的腾空高度的高度之和。在一个实施例中,计算机设备的处理器将检测到的平面检测点处的路面高度与第二高度相加,得到第二和值,第二和值是虚拟对象在活动时腾空在路面上空的状态的高度值。计算机设备的处理器根据第二和值,将第二和值对应的平面检测点确定为高区检测点,高区检测点是虚拟对象腾空在路面上空的检测点。
参见图6,虚拟对象605活动时自身的高度为h1,第一平面检测点601处的路面高度h2。当虚拟对象605竞速至第一平面检测点601对应的路面处,虚拟对象601腾空在路面上空,腾空高度为h3,故第二高度是虚拟对象601自身高度h1和腾空高度h3之和。处理器将第二高度与h2相加得到第二和值,即h1+h3+h2,根据第二和值,确定第一平面检测点601对应的高区检测点605。
在一个实施例中,高区检测点是若干个均匀分布在检测点区域中的检测点。
步骤S308,对检测点处的六个方向面分别进行渲染,得到每个方向面上对应的二维纹理贴图。
在一个实施例中,计算机设备的处理器对低区检测点处的前面、后面、左面、右面、底面、以及顶面共六个方向面分别进行二维渲染,得到六个二维纹理贴图,六个二维纹理贴图对应低区检测点处的六个方向面。计算机设备的处理器对高区检测点处的六个方向面分别进行二维渲染,得到六个二维纹理贴图,六个二维纹理贴图对应高区检测点处的六个方向面。
示意性的,参见图7,示出了计算机设备的处理器在确定的一个检测点处二维渲染后得到的二维纹理贴图。该检测点可以是低区检测点,也可以是高区检测点。计算机设备的处理器以检测点作为摄像机模型,通过摄像镜头分别对检测点处的前面、后面、左面、右面、底面、以及顶面共六个方向面进行二维渲染,得到了该检测点六个方向面上的二维纹理贴图。
步骤S309,将六个方向面上的二维纹理贴图进行拼合,得到检测点对应的立方体贴图。
具体地,计算机设备的处理器将在低区检测点处得到的六个方向面上的六个二维纹理贴图进行拼合,得到低区检测点对应的立方体贴图。计算机设备的处理器将在高区检测点处得到的六个方向面上的六个二维纹理贴图进行拼合,得到高区检测点对应的立方体贴图。
参见图7,计算机设备的处理器将检测点的六个方向面上的二维纹理贴图进行拼合,得到该检测点对应的立方体贴图。
步骤S310,遍历立方体贴图的六个方向面上的二维纹理贴图的像素值,根据二维纹理贴图上出现的像素值确定立方体贴图上出现的目标颜色标识。
其中,目标颜色标识是在检测点对应的立方体贴图上出现的颜色标识。
在一个实施例中,计算机设备的处理器遍历低区检测点对应的立方体贴图的六个方向面上的二维纹理贴图的像素值,根据二维纹理贴图上出现的像素值,确定该像素值属于的单颜色,从而确定低区检测点对应的立方体贴图上出现的目标颜色标识。
在一个实施例中,计算机设备的处理器遍历高区检测点对应的立方体贴图的六个方向面上的二维纹理贴图的像素值,根据二维纹理贴图上出现的像 素值,确定该像素值属于的单颜色,从而确定高区检测点对应的立方体贴图上出现的目标颜色标识。
步骤S311,将目标颜色标识对应的三维物体添加至检测点区域的PVS中。
其中,目标颜色标识用于确定在检测点对应的立方体贴图上存在的三维物体。
在一个实施例中,检测点区域的PVS包括第一PVS和第二PVS。
计算机设备的处理器根据低区检测点对应的立方体贴图上的目标颜色标识,确定目标颜色标识对应的三维物体,并将三维物体添加至检测点区域的第一PVS中。计算机设备的处理器根据高区检测点对应的立方体贴图上的目标颜色标识,确定目标颜色标识对应的三维物体,并将三维物体添加至检测点区域的第二PVS中。
在一个实施例中,第一PVS和第二PVS可以合并为一个PVS。
综上所述,本实施例提供的方法,通过将地图区域划分为多个检测点区域,并将检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同,使得检测点区域中的三维物体具有唯一标识,再在检测点区域中确定的检测点对应的立方体贴图上,确定目标颜色标识,将目标颜色标识对应的三维物体添加至检测点区域的PVS中,从而确定出在立方体贴图上的可视物体。区别于相关技术中使用立方体贴图对天空等远景进行渲染,达到远景的效果,本申请创造性地利用立方体贴图做检测点区域内的可视物体的检测,通过检测立方体贴图的六个方向面上的目标颜色标识,使得检测点区域内任何角度的可视物体都得以检测到,与相关技术相比,避免了随机打射线的随机性与不稳定性,检测可视物体时的准确性得以保证,从而使得最终显示在PVS中的结果无误。而检测点区域内的三维物体替换为二维的单颜色,减少了检测时的计算量。
本实施例提供的方法,通过对检测点处的六个方向面分别进行二维渲染,得到每个方向面上对应的二维纹理贴图,将六个二维纹理贴图进行拼合后得到检测点对应的立方体贴图,相比于三维渲染,GPU对二维渲染的消耗较小,通过确定立方体贴图上的目标颜色标识,实现对立方体贴图上的三维物体的 可视检测。
本实施例提供的方法,根据三维物体的唯一物体标识的最后三位分别映射为红绿蓝颜色空间中的红色通道值、绿色通道值和蓝色通道值,得到三维物体对应的颜色标识,且每个三维物体的颜色标识不相同,保证了每个三维物体的唯一性,使得在做可视检测时,不会出现相同的颜色标识,导致三维物体的可视检测的结果有误。
本实施例提供的方法,将检测点区域中的检测点分为低区检测点和高区检测点,并将在低区检测点处的三维物体添加进第一PVS中,将高区检测点处的三维物体添加进第二PVS中,使得用户在操作虚拟对象活动时,具有活动在路面和腾空于路面两种不同状态下的真实感受。
开发人员在做可视检测的过程中，会先对不透明物体（即三维物体）进行可视检测，再对半透明物体（即半透明三维物体）进行可视检测，以防止半透明三维物体的存在使得三维物体的部分颜色发生变化。故开发人员先将半透明三维物体隐藏，待三维物体的可视检测结束后，再重新检测一遍半透明三维物体。图8示出了基于图3所示的方法，示例性地说明半透明三维物体的可视检测方法的流程图，该方法可以应用于如图1所示的计算机系统中，该方法包括：
步骤S701:将检测点区域中的半透明三维物体设置为隐藏属性。
其中，半透明三维物体是相对于三维物体而言具备透光性的三维物体，即透过半透明三维物体可以看到其后的三维物体，也即半透明三维物体不会遮挡三维物体，但可被三维物体遮挡。
在一个实施例中，在图3所示的步骤S301至步骤S303之后，计算机设备的处理器将检测点区域中的半透明三维物体的属性设置为隐藏属性，保持检测点区域中的三维物体的显示属性不变，使得半透明三维物体在检测点区域中不可见，三维物体在检测点区域中可见。
步骤S702,将检测点区域中的半透明三维物体重新设置为显示属性,将除半透明三维物体之外的三维物体设置为隐藏属性。
在图3所示的步骤S304至步骤S310之后，计算机设备的处理器对三维物体进行了可视检测，确定了在检测点区域中的三维物体。计算机设备的处理器将检测点区域中的半透明三维物体的属性从隐藏属性重新设置为显示属性，将三维物体的属性从显示属性重新设置为隐藏属性，使得半透明三维物体在检测点区域中可见，三维物体在检测点区域中不可见。
步骤S703,将半透明三维物体的贴图材质替换为单颜色材质,每个半透明三维物体对应的单颜色材质的颜色标识不同。
每个半透明三维物体具有唯一物体标识,颜色标识是对半透明三维物体的唯一物体标识映射后得到的唯一标识。单颜色材质是根据红色通道值、绿色通道值和蓝色通道值合成的单颜色对应的材质。每个半透明三维物体的唯一物体标识不同,使得根据唯一物体标识映射后得到的单颜色材质的颜色标识不同。
步骤S704,在检测点区域中确定至少一个检测点。
步骤S705,渲染出检测点对应的立方体贴图,确定立方体贴图上出现的目标颜色标识。
本实施例中对如何实现步骤S703至步骤S705的内容，在图3所示的方法中有说明，这里不再赘述。
步骤S706,将目标颜色标识对应的三维物体和半透明三维物体合并添加至检测点区域的PVS中。
其中,目标颜色标识用于确定在检测点对应的立方体贴图上存在的半透明三维物体。检测点区域的PVS包括第一PVS和第二PVS。
在一个实施例中,计算机设备的处理器根据低区检测点对应的立方体贴图上的目标颜色标识,确定目标颜色标识对应的半透明三维物体,将半透明三维物体添加至检测点区域的第一PVS中。计算机设备的处理器根据高区检测点对应的立方体贴图上的目标颜色标识,确定目标颜色标识对应的半透明三维物体,将半透明三维物体添加至检测点区域的第二PVS中。
在一个实施例中,计算机设备的处理器将在低区检测点处,根据目标颜色标识确定的三维物体和半透明三维物体合并添加至检测点区域的第一PVS中;将在高区检测点处,根据目标颜色标识确定的三维物体和半透明三维物体合并添加至检测点区域的第二PVS中。
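示意性的，以下给出一段Python示例代码，用于说明“先对三维物体做可视检测、再对半透明三维物体做可视检测、最后合并结果”的两遍流程；其中物体以字典表示、detect_visible为对前文可视检测流程的抽象，均为示例性假设：

```python
# 示例代码：先对不透明三维物体做可视检测，再对半透明三维物体做可视检测，最后合并结果。
def region_pvs_with_translucent(opaque_objects, translucent_objects, detect_visible):
    """物体以带有"hidden"属性的字典表示；detect_visible为前文可视检测流程
    （渲染立方体贴图并收集目标颜色标识）的抽象，返回可视物体标识的集合，均为示例性假设。"""
    for obj in translucent_objects:
        obj["hidden"] = True                        # 第一遍：隐藏半透明三维物体，只检测三维物体
    visible_opaque = detect_visible(opaque_objects)

    for obj in translucent_objects:
        obj["hidden"] = False                       # 第二遍：重新显示半透明三维物体
    for obj in opaque_objects:
        obj["hidden"] = True                        # 并隐藏除半透明三维物体之外的三维物体
    visible_translucent = detect_visible(translucent_objects)

    for obj in opaque_objects:
        obj["hidden"] = False                       # 恢复显示属性
    return visible_opaque | visible_translucent    # 合并添加至检测点区域的PVS
```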
本实施例提供的方法中,计算机设备的处理器通过先渲染三维物体再渲染半透明三维物体,避免了因先渲染半透明三维物体后渲染三维物体导致的重复渲染,同时在美工人员处理图像时,美工人员可以发现因半透明三维物体的存在导致三维物体的部分材质发生变化的问题,美工人员可以针对该问题进行修改,保证了最终显示在PVS中的结果符合三维效果。
在本实施例中,以提供的虚拟环境是应用于3D竞速游戏中为例,在3D竞速游戏中,确定PVS。3D竞速游戏可以是体育游戏、载具模拟游戏、动作游戏中的任意一种,3D竞速游戏中包括多个赛道,多个虚拟对象沿赛道竞速,虚拟对象即为摄像机模型所在的位置,在摄像机模型的镜头画面中显示有山、海、花草树木、房子、隧道等可视物体中的至少一种,故多个赛道以及沿赛道分布的可视物体构成赛道区域。
图9示出了本申请另一个示例性实施例提供的PVS的确定方法的流程图,该方法应用于3D竞速游戏中,3D竞速游戏包括位于虚拟环境中的赛道区域,该方法可以应用于如图1所示的计算机系统中,该方法包括:
步骤S801,将赛道区域划分为多个检测点区域。
其中,赛道区域是虚拟对象在虚拟环境中的竞速范围,在赛道区域中包括赛道路线、不透明物体和半透明物体,赛道路线是虚拟对象进行竞速的预定路线,不透明物体和半透明物体沿赛道路线分布,不透明物体和半透明物体皆是可视物体,计算机设备的处理器通过Draw Call调用GPU对可视物体进行渲染,使得可视物体显示时具有三维效果。其中,不透明物体在本实施例中称为三维物体,半透明物体在本实施例中称为半透明三维物体。
可选地,三维物体可以是花草树木、山、房子、海、或卡通人物等可视物体,半透明三维物体可以是烟雾、氮气喷气、或溅起的水珠等可视物体。
在一个实施例中,计算机设备的处理器根据沿赛道区域中的赛道路线将地图区域划分为多个检测点区域,检测点区域是沿赛道区域中的赛道路线划分而成的凸四边形区域。
可选地，多个检测点区域的形状可以是相同的形状，也可以是不相同的；多个检测点区域的面积可以是相等的，也可以是不相等的；不同的地图区域划分出的检测点区域的数量可以是相同的，也可以是不相同的。
步骤S802,将检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同。
步骤S803,在检测点区域中确定至少一个检测点。
步骤S804,渲染出检测点对应的立方体贴图,确定立方体贴图上出现的目标颜色标识。
步骤S802至步骤S804的内容与图2、图3所示的内容相同，这里不再赘述。
步骤S805,将目标颜色标识对应的三维物体添加至检测点区域的赛道PVS中。
在一个实施例中,计算机设备的处理器根据确定出的目标颜色标识,确定出目标颜色标识对应的三维物体,处理器将确定的三维物体添加至检测点区域的赛道PVS中,使得在用户侧运行时,根据检测出的用户当前所在的检测点区域,渲染该检测点区域中的所有可视物体,故赛道PVS是沿赛道路线分布的,位于检测点区域中的可视物体的集合。
在一个实施例中,在步骤S802之后,处理器隐藏半透明三维物体;在步骤S805之后,处理器显示半透明三维物体,之后重复步骤S803至步骤S805。计算机设备的处理器通过先渲染三维物体再渲染半透明三维物体,避免了因先渲染半透明三维物体后渲染三维物体导致的重复渲染,同时在美工人员处理图像时,美工人员可以发现因半透明三维物体的存在导致三维物体的部分材质发生变化的问题,美工人员可以针对该问题进行修改,保证了最终显示在赛道PVS中的结果符合三维效果。
在开发人员对三维物体和半透明三维物体做过可视检测后,确定地图区域中的每个检测点区域的PVS。美工人员需要对开发人员得到的检测点区域的PVS进行美工处理,以保证最终PVS显示的结果的准确性。
处理器运行程序,在终端的界面上跳出如图10所示的第一对话框901,该第一对话框901是PVS遮挡剪裁。处理器在确定PVS遮挡剪裁901中的运行场景902的按键被点击后,自动运行地图区域,并对地图区域中的每个检测点区域的PVS进行计算。处理器将计算得到的检测点区域的PVS进行保存。
处理器运行赛道编辑器的程序,在终端的界面上跳出如图11所示的第二对话框,第二对话框是赛道编辑器。处理器将计算得到的检测点区域的PVS导入到该赛道编辑器中。赛道编辑器分为地图区域1001和手工调整区域1002。处理器在接收到美工人员在地图区域1001中选中目标检测点区域1003后,还接收到美工人员在手工调整区域1002中选中显示区域1004的复选框,确定美工人员需要对目标检测点区域1003的PVS进行手工调整,处理器显示目标检测点区域1003的PVS。美工人员在美术接受的效果范围内,加大目标检测点区域的PVS中可视物体的剔除力度,并根据美术的效果,修改因半透明三维物体的存在导致三维物体的部分材质发生变化的现象。
计算机设备的处理器运行Draw Call监视器的程序,在终端的界面上跳出如图12所示的第三对话框1101,第三对话框1101是Draw Call监视器。处理器在接收到美工人员选中Draw Call监视器1101中的场景1102的复选框后,确定对地图区域进行自动跑图。在自动跑图过程中,计算机设备的处理器监测自身在通过Draw Call让GPU渲染检测点区域的PVS上的消耗,对消耗过大的检测点区域发出警报,提醒美工人员对消耗过大的检测点区域重新进行手工剔除。
美工人员通过重复进行上述步骤的方式,使得Draw Call监视器1101在自动跑图时不再进行报警,从而确定手工调整部分完成。
综上所述,本实施例提供的方法中,通过先渲染三维物体再渲染半透明三维物体,避免了因先渲染半透明三维物体后渲染三维物体导致的重复渲染,同时在美工人员处理图像时,美工人员可以发现因半透明三维物体的存在导致三维物体的部分材质发生变化的问题,美工人员可以针对该问题进行修改,保证了最终显示在赛道PVS中的结果符合三维效果。
开发人员基于图9所示的方法获取了3D竞速游戏的检测点区域的赛道PVS，并将每个赛道区域中的检测点区域的赛道PVS打包成压缩包，保存于服务器中。玩该3D竞速游戏的用户从服务器下载压缩包；运行时，通过检测用户终端的摄像机模型当前帧所在的位置，读取当前帧所在的检测点区域的赛道PVS，并对读取的赛道PVS进行三维渲染。
图13示出了本申请一个示例性实施例提供的三维场景的渲染方法的流程图,以提供的虚拟环境是应用于3D竞速游戏中为例,该方法应用于存储有检测点区域和PVS的终端中,PVS是采用如上所述的方法来得到的,该方法可以应用于图1所示的计算机系统中,该方法包括:
步骤S1201,检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同。
地图区域中包括多个检测点区域,检测点区域是沿赛道划分的凸四边形区域。参见图14,示例性的示出了沿赛道划分的检测点区域,每个检测点区域分配有唯一区域标识,图14中的灰色部分1301是地图区域中的一个检测点区域。
参见图15，图15中以点A、点B、点C、点D构成的检测点区域，点P作为摄像机模型所在位置为例，判断摄像机模型是否在检测点区域中的算法如下：
1)判断点P在向量AC的左边还是右边。当点P在向量AC的左边时，判断点P是否在三角形ACD内；当点P在向量AC的右边时，判断点P是否在三角形ABC内。
2)判断点P是否在三角形ACD内，需要同时符合如下3个条件：
a.点P在向量AC的左边；
b.点P在向量CD的左边；
c.点P在向量DA的左边。
3)判断点P是否在三角形ABC内，需要同时符合如下3个条件：
a.点P在向量AC的右边；
b.点P在向量CB的右边；
c.点P在向量BA的右边。
判断一个点在向量的左边还是右边，以判断点P是在向量AC的左边还是右边为例，算法如下：
1)计算出向量AP和向量AC；
2)向量AP叉乘向量AC，得到一个垂直于平面APC的法向量N；
3)根据左手坐标系原则：
a.当法向量N垂直平面APC向上(法向量N的Y分量大于0)时，向量AP在向量AC的左边，即点P在向量AC的左边；
b.当法向量N垂直平面APC向下(法向量N的Y分量小于0)时，向量AP在向量AC的右边，即点P在向量AC的右边。
上述向量AP叉乘向量AC的计算公式如下：设向量AP=(x1, y1, z1)，向量AC=(x2, y2, z2)，则法向量N=向量AP×向量AC=(y1·z2−z1·y2, z1·x2−x1·z2, x1·y2−y1·x2)，其Y分量为z1·x2−x1·z2。
当z1·x2−x1·z2大于0时，判断出向量AP在向量AC的左边，即点P在向量AC的左边；当z1·x2−x1·z2小于0时，判断出向量AP在向量AC的右边，即点P在向量AC的右边。
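示意性的，以下给出一段Python示例代码，用于说明上述判断点P是否位于检测点区域ABCD内的算法；其中点以(x, z)二维坐标表示（Y轴为高度方向），并假设点D位于向量AC的左侧（即顶点按图15的方式排列），cross_y即叉乘结果的Y分量：

```python
# 示例代码：判断摄像机模型所在点P是否位于检测点区域ABCD内（对应上文算法）。
def cross_y(origin, u_end, v_end):
    """返回向量origin->u_end叉乘向量origin->v_end的Y分量（点以(x, z)表示，Y轴为高度方向）。"""
    x1, z1 = u_end[0] - origin[0], u_end[1] - origin[1]
    x2, z2 = v_end[0] - origin[0], v_end[1] - origin[1]
    return z1 * x2 - x1 * z2

def is_left(p, start, end):
    # Y分量大于0时，点p在向量start->end的左边
    return cross_y(start, p, end) > 0

def is_right(p, start, end):
    # Y分量小于0时，点p在向量start->end的右边
    return cross_y(start, p, end) < 0

def point_in_region(p, a, b, c, d):
    if is_left(p, a, c):
        # 点P在向量AC左边：判断是否在三角形ACD内
        return is_left(p, c, d) and is_left(p, d, a)
    # 点P在向量AC右边：判断是否在三角形ABC内
    return is_right(p, c, b) and is_right(p, b, a)

if __name__ == "__main__":
    A, B, C, D = (0, 0), (5, 5), (0, 10), (-5, 5)
    print(point_in_region((-1, 5), A, B, C, D))   # True：点P位于区域内（三角形ACD一侧）
    print(point_in_region((8, 5), A, B, C, D))    # False：点P位于区域外
```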
终端的处理器根据上述说明的算法，检测摄像机模型当前帧所在的检测点区域与上一帧所在的检测点区域是否相同。当处理器检测出摄像机模型当前帧所在的检测点区域与上一帧所在的检测点区域相同时，处理器已经读取当前帧所在的检测点区域中的PVS，并渲染出当前帧所在的检测点区域中的PVS中的可视物体；当处理器检测出摄像机模型当前帧所在的检测点区域与上一帧所在的检测点区域不相同时，转到步骤S1202。
其中,当前帧所在的检测点区域中的PVS是根据图2、图3、图8中任意一种方法得到的,处理器将得到的检测点区域中的PVS进行保存,在进行三维场景的渲染时直接读取保存的检测点区域中的PVS。
步骤S1202,当摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域不相同时,读取当前帧所在的检测点区域的赛道PVS。
当计算机设备的处理器检测出摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域不相同时,重新读取当前帧所在的检测点区域的赛道PVS,并渲染出当前帧所在的检测点区域中的赛道PVS中的可视物体。
步骤S1203,根据当前帧所在的检测点区域的赛道PVS,渲染得到摄像机模型的镜头画面。
具体地，计算机设备的处理器读取当前帧所在的检测点区域的赛道PVS，将该赛道PVS中的可视物体进行渲染，得到摄像机模型的镜头画面。
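示意性的，以下给出一段Python示例代码，用于说明每帧检测摄像机模型所在检测点区域、仅在区域变化时重新读取赛道PVS并渲染的流程；其中pvs_table、locate_region、render_objects等均为示例性假设，locate_region可由上文的点与区域判断算法实现：

```python
# 示例代码：每帧检测摄像机模型所在的检测点区域，仅当与上一帧所在区域不同时重新读取赛道PVS。
class PvsRenderer:
    def __init__(self, pvs_table, locate_region, render_objects):
        self.pvs_table = pvs_table            # 区域标识 -> 该检测点区域的赛道PVS（可视物体集合）
        self.locate_region = locate_region    # 摄像机位置 -> 区域标识，可由上文的点与区域判断算法实现
        self.render_objects = render_objects  # 渲染接口（示例性假设）
        self.current_region = None            # 上一帧所在的检测点区域

    def on_frame(self, camera_position):
        region_id = self.locate_region(camera_position)
        if region_id is not None and region_id != self.current_region:
            self.current_region = region_id   # 区域发生变化时，才重新读取该区域的赛道PVS
        if self.current_region is not None:
            self.render_objects(self.pvs_table[self.current_region])  # 只渲染当前区域PVS中的可视物体

if __name__ == "__main__":
    table = {"R1": {"山", "房子"}, "R2": {"隧道"}}
    renderer = PvsRenderer(table, lambda p: "R1" if p[0] < 50 else "R2", print)
    renderer.on_frame((10, 0, 0))   # 输出R1区域的可视物体集合
    renderer.on_frame((80, 0, 0))   # 切换到R2区域并输出其可视物体集合
```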
示意性的，计算机设备的处理器检测出当前帧所在的区域是如图16所示的区域，图16中黑色点包围的区域1501是当前帧所在的检测点区域，该黑色点包围的区域1501是室内场景。计算机设备的处理器读取黑色点包围的区域1501的赛道PVS，并对黑色点包围的区域1501的赛道PVS中的可视物体进行渲染，得到如图17所示的摄像机模型的镜头画面。
在图17中所示的摄像机模型的镜头画面即为用户玩3D竞速游戏时的镜头画面,在该镜头画面中显示有左按键1601、右按键1602和虚拟对象1603,左按键1601用于控制虚拟对象1603在竞速时向左行驶,右按键1602用于控制虚拟对象1603在竞速时向右行驶,虚拟对象1603行驶在室内场景中。用户玩3D竞速游戏时,用户看到的镜头画面与平时无差异,但室内场景外面的其他检测点区域中的可视物体未进行渲染,从而减少了处理器的消耗。
相同的，在图18所示的情况中，计算机设备的处理器检测出当前帧所在的区域是如图18所示的区域，图18中黑色点包围的区域1701是当前帧所在的检测点区域，该黑色点包围的区域1701是室外山脚的一处场景。计算机设备的处理器读取黑色点包围的区域1701的赛道PVS，并对黑色点包围的区域1701的赛道PVS中的可视物体进行渲染，而该黑色点包围的区域1701以外的区域中的可视物体则被剔除，减少了处理器的消耗。
可选地,计算机设备的处理器在确定当前帧所在的检测点区域后,确定虚拟对象处于紧贴在赛道路面上竞速的状态中,确定摄像机模型位于低区检测点处,处理器读取低区检测点对应的第一赛道PVS。计算机设备的处理器根据当前帧所在的检测点区域的第一赛道PVS,渲染得到摄像机模型的第一镜头画面。第一赛道PVS是检测点区域中在低区检测点处,沿赛道路线分布的可视物体的集合。第一镜头画面是摄像机模型在检测点区域的低区检测点处渲染可视物体后的显示的镜头画面,即第一镜头画面是虚拟对象在竞速时紧贴赛道路面状态下显示的镜头画面。
计算机设备的处理器在确定当前帧所在的检测点区域后，确定虚拟对象处于腾空于赛道路面上空竞速的状态中，确定摄像机模型位于高区检测点处，处理器读取高区检测点对应的第二赛道PVS。计算机设备的处理器根据当前帧所在的检测点区域的第二赛道PVS，渲染得到摄像机模型的第二镜头画面。第二赛道PVS是检测点区域中在高区检测点处，沿赛道路线分布的可视物体的集合。第二镜头画面是摄像机模型在检测点区域的高区检测点处渲染可视物体后的显示的镜头画面，即第二镜头画面是虚拟对象在竞速时腾空于赛道路面状态下显示的镜头画面。
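示意性的，以下给出一段Python示例代码，用于说明根据虚拟对象处于紧贴路面还是腾空状态，选择读取第一赛道PVS或第二赛道PVS的逻辑；其中region_pvs的数据结构为示例性假设：

```python
# 示例代码：根据虚拟对象处于紧贴路面还是腾空状态，选择第一赛道PVS或第二赛道PVS。
def select_track_pvs(region_pvs, is_airborne):
    """region_pvs为(第一赛道PVS, 第二赛道PVS)的二元组，属示例性数据结构。"""
    first_pvs, second_pvs = region_pvs
    return second_pvs if is_airborne else first_pvs   # 腾空时读取高区检测点对应的第二赛道PVS

if __name__ == "__main__":
    region_pvs = ({"山", "房子"}, {"山", "房子", "远处的海"})
    print(select_track_pvs(region_pvs, is_airborne=False))   # 紧贴路面：第一赛道PVS
    print(select_track_pvs(region_pvs, is_airborne=True))    # 腾空状态：第二赛道PVS
```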
在一个实施例中,计算机设备的处理器在确定当前帧所在的检测点区域后,先使用顶点渲染器对检测点区域中的可视物体的顶点进行渲染,得到每个可视物体的轮廓,再使用像素渲染器对每个可视物体的像素进行渲染,得到每个可视物体显示在镜头画面中的三维效果。
综上所述，本实施例提供的方法中，通过检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同，决定GPU当前需要渲染的检测点区域，不需要进行多余的渲染，从而减少了处理器的消耗。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
图19示出了本申请一个示例性实施例提供的PVS的确定装置的结构示意图,该装置包括第一划分模块1810、第一替换模块1820、第一确定模块1830、第一渲染模块1840和第一添加模块1850,其中:
第一划分模块1810,用于将地图区域划分为多个检测点区域。
第一替换模块1820,用于将检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同。
在一个实施例中,如图20所示,第一替换模块1820,包括:
第一映射单元1821,用于将检测点区域中的三维物体的唯一物体标识映射为颜色标识。
在一个实施例中,第一映射单元1821,用于将检测点区域中的三维物体的唯一物体标识的最后三位,分别映射为红绿蓝颜色空间中的红色通道值、绿色通道值和蓝色通道值;根据红色通道值、绿色通道值和蓝色通道值,确定出三维物体对应的颜色标识。
第一替换单元1822,用于将三维物体的贴图材质替换为与颜色标识对应的单颜色材质。
第一确定模块1830,用于在检测点区域中确定至少一个检测点。
在一个实施例中,第一确定模块1830,用于在检测点区域上确定出离散的多个平面检测点;对于多个平面检测点中的每个平面检测点,通过物理模型射线检测对平面检测点进行检测,获得平面检测点处的路面高度;根据路面高度与第一高度相加后的第一和值,确定出平面检测点对应的低区检测点;根据路面高度与第二高度相加后的第二和值,确定出平面检测点对应的高区检测点。
第一渲染模块1840,用于渲染出检测点对应的立方体贴图,确定立方体贴图上出现的目标颜色标识。
在一个实施例中，如图21所示，第一渲染模块1840，包括：
渲染单元1841,用于对检测点处的六个方向面分别进行渲染,得到每个方向面上对应的二维纹理贴图。
拼合单元1842,用于将六个方向面上的二维纹理贴图进行拼合,得到检测点对应的立方体贴图。
遍历单元1843,用于遍历Cubemap(立方体贴图)的六个方向面上的二维纹理贴图的像素值,根据二维纹理贴图上出现的像素值确定Cubemap上出现的目标颜色标识。
第一添加模块1850,用于将目标颜色标识对应的三维物体添加至检测点区域的PVS中。
在一个实施例中，第一添加模块1850，用于当目标颜色标识属于与低区检测点对应的立方体贴图时，将目标颜色标识对应的三维物体添加至检测点区域的第一PVS中；当目标颜色标识属于与高区检测点对应的立方体贴图时，将目标颜色标识对应的三维物体添加至检测点区域的第二PVS中。
在一个实施例中，该装置还包括：
设置模块1860,用于将检测点区域中的半透明三维物体设置为隐藏属性。
设置模块1860,用于将检测点区域中的半透明三维物体重新设置为显示属性,将除半透明三维物体之外的三维物体设置为隐藏属性。
第一替换模块1820,用于将半透明三维物体的贴图材质替换为单颜色材质,每个半透明三维物体对应的单颜色材质的颜色标识不同。
第一确定模块1830,用于在检测点区域中确定至少一个检测点。
第一渲染模块1840,用于渲染出检测点对应的立方体贴图,确定立方体贴图上出现的目标颜色标识。
第一添加模块1850,用于将目标颜色标识对应的半透明三维物体添加至检测点区域的PVS中。
图22示出了本申请一个示例性实施例提供的三维场景的渲染装置的结构示意图,应用于存储有检测点区域和PVS的终端中,PVS是采用如上所述的装置来得到的,该装置包括:
检测模块2110,用于检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同。
读取模块2120,用于当摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域不相同时,读取当前帧所在的检测点区域的PVS。
第二渲染模块2130,用于根据当前帧所在的检测点区域的PVS,渲染得到摄像机模型的镜头画面。
在一个实施例中，第二渲染模块2130，用于根据当前帧所在的检测点区域的第一PVS，渲染得到摄像机模型的第一镜头画面。
第二渲染模块2130,用于根据当前帧所在的检测点区域的第二PVS,渲染得到摄像机模型的第二镜头画面。
图23示出了本申请一个示例性实施例提供的PVS的确定装置的结构示意图,该装置应用于3D竞速游戏中,3D竞速游戏包括位于虚拟环境中的赛道区域,该装置包括:
第二划分模块2210,用于将赛道区域划分为多个检测点区域。
第二替换模块2220,用于将检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同。
第二确定模块2230,用于在检测点区域中确定至少一个检测点。
第三渲染模块2240,用于渲染出检测点对应的立方体贴图,确定立方体贴图上出现的目标颜色标识。
第二添加模块2250,用于将目标颜色标识对应的赛道物体添加至检测点区域的赛道PVS中。
需要说明的是:上述实施例提供的PVS的确定装置和三维场景的渲染装置,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的PVS的确定装置和三维场景的渲染装置与PVS的确定方法和三维场景的渲染方法的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图24示出了本申请一个实施例提供的服务器的结构示意图。该服务器可以是上述实施例所提及的计算机设备,用于实施上述实施例中提供的潜在可视集合的确定方法和/或三维场景的渲染方法。具体来讲:
所述服务器2300包括中央处理单元(CPU)2301、包括随机存取存储器(RAM)2302和只读存储器(ROM)2303的系统存储器2304,以及连接系统存储器2304和中央处理单元2301的系统总线2305。所述服务器2300还包括帮助计算机内的各个器件之间传输信息的基本输入/输出系统(I/O系统)2306,和用于存储操作系统2313、应用程序2314和其他程序模块2315的大容量存储设备2307。
所述基本输入/输出系统2306包括有用于显示信息的显示器2308和用于用户输入信息的诸如鼠标、键盘之类的输入设备2309。其中所述显示器2308和输入设备2309都通过连接到系统总线2305的输入输出控制器2310连接到中央处理单元2301。所述基本输入/输出系统2306还可以包括输入输出控制器2310以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入输出控制器2310还提供输出到显示屏、打印机或其他类型的输出设备。
所述大容量存储设备2307通过连接到系统总线2305的大容量存储控制器(未示出)连接到中央处理单元2301。所述大容量存储设备2307及其相关联的计算机可读介质为服务器2300提供非易失性存储。也就是说，所述大容量存储设备2307可以包括诸如硬盘或者CD-ROM驱动器之类的计算机可读介质(未示出)。
其中，所述计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、EPROM、EEPROM、闪存或其他固态存储技术，CD-ROM、DVD或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然，本领域技术人员可知所述计算机存储介质不局限于上述几种。上述的系统存储器2304和大容量存储设备2307可以统称为存储器。
根据本申请的各种实施例,所述服务器2300还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即服务器2300可以通过连接在所述系统总线2305上的网络接口单元2311连接到网络2312,或者说,也可以使用网络接口单元2311来连接到其他类型的网络或远程计算机系统(未示出)。
所述存储器还包括一个或者一个以上的程序,所述一个或者一个以上程序存储于存储器中,且经配置以由一个或者一个以上处理器执行。上述一个或者一个以上程序包含用于进行以下操作的指令:
检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同;当摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域不相同时,读取当前帧所在的检测点区域的PVS;及根据当前帧所在的检测点区域的PVS,渲染得到摄像机模型的镜头画面。
图25示出了本发明一个示例性实施例提供的终端2400的结构框图。该终端2400可以是:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端2400还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,终端2400包括有:处理器2401和存储器2402。
处理器2401可以包括一个或多个处理核心，比如4核心处理器、8核心处理器等。处理器2401可以采用DSP（Digital Signal Processing，数字信号处理）、FPGA（Field-Programmable Gate Array，现场可编程门阵列）、PLA（Programmable Logic Array，可编程逻辑阵列）中的至少一种硬件形式来实现。处理器2401也可以包括主处理器和协处理器，主处理器是用于对在唤醒状态下的数据进行处理的处理器，也称CPU（Central Processing Unit，中央处理器）；协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中，处理器2401可以集成有GPU（Graphics Processing Unit，图像处理器），GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中，处理器2401还可以包括AI（Artificial Intelligence，人工智能）处理器，该AI处理器用于处理有关机器学习的计算操作。
存储器2402可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器2402还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器2402中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器2401所执行以实现本申请中方法实施例提供的对虚拟环境进行观察的方法和/或三维场景的渲染方法。
在一些实施例中,终端2400还可选包括有:外围设备接口2403和至少一个外围设备。处理器2401、存储器2402和外围设备接口2403之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口2403相连。具体地,外围设备包括:射频电路2404、触摸显示屏2405、摄像头2406、音频电路2407、定位组件2408和电源2409中的至少一种。
外围设备接口2403可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器2401和存储器2402。在一些实施例中,处理器2401、存储器2402和外围设备接口2403被集成在同一芯片或电路板上;在一些其他实施例中,处理器2401、存储器2402和外围设备接口2403中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路2404用于接收和发射RF(Radio Frequency,射频)信号,也 称电磁信号。射频电路2404通过电磁信号与通信网络以及其他通信设备进行通信。射频电路2404将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路2404包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路2404可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:万维网、城域网、内联网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路2404还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。
显示屏2405用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏2405是触摸显示屏时,显示屏2405还具有采集在显示屏2405的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器2401进行处理。此时,显示屏2405还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏2405可以为一个,设置终端2400的前面板;在另一些实施例中,显示屏2405可以为至少两个,分别设置在终端2400的不同表面或呈折叠设计;在再一些实施例中,显示屏2405可以是柔性显示屏,设置在终端2400的弯曲表面上或折叠面上。甚至,显示屏2405还可以设置成非矩形的不规则图形,也即异形屏。显示屏2405可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件2406用于采集图像或视频。可选地,摄像头组件2406包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件2406还可以包括闪光灯。闪光灯可以是 单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路2407可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器2401进行处理,或者输入至射频电路2404以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在终端2400的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器2401或射频电路2404的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路2407还可以包括耳机插孔。
定位组件2408用于定位终端2400的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件2408可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统或俄罗斯的伽利略系统的定位组件。
电源2409用于为终端2400中的各个组件进行供电。电源2409可以是交流电、直流电、一次性电池或可充电电池。当电源2409包括可充电电池时,该可充电电池可以是有线充电电池或无线充电电池。有线充电电池是通过有线线路充电的电池,无线充电电池是通过无线线圈充电的电池。该可充电电池还可以用于支持快充技术。
在一些实施例中,终端2400还包括有一个或多个传感器2410。该一个或多个传感器2410包括但不限于:加速度传感器2411、陀螺仪传感器2412、压力传感器2413、指纹传感器2414、光学传感器2415以及接近传感器2416。
加速度传感器2411可以检测以终端2400建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器2411可以用于检测重力加速度在三个坐标轴上的分量。处理器2401可以根据加速度传感器2411采集的重力加速度信号,控制触摸显示屏2405以横向视图或纵向视图进行用户界面的显示。加速度传感器2411还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器2412可以检测终端2400的机体方向及转动角度,陀螺仪 传感器2412可以与加速度传感器2411协同采集用户对终端2400的3D动作。处理器2401根据陀螺仪传感器2412采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器2413可以设置在终端2400的侧边框和/或触摸显示屏2405的下层。当压力传感器2413设置在终端2400的侧边框时,可以检测用户对终端2400的握持信号,由处理器2401根据压力传感器2413采集的握持信号进行左右手识别或快捷操作。当压力传感器2413设置在触摸显示屏2405的下层时,由处理器2401根据用户对触摸显示屏2405的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器2414用于采集用户的指纹,由处理器2401根据指纹传感器2414采集到的指纹识别用户的身份,或者,由指纹传感器2414根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器2401授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器2414可以被设置终端2400的正面、背面或侧面。当终端2400上设置有物理按键或厂商Logo时,指纹传感器2414可以与物理按键或厂商Logo集成在一起。
光学传感器2415用于采集环境光强度。在一个实施例中,处理器2401可以根据光学传感器2415采集的环境光强度,控制触摸显示屏2405的显示亮度。具体地,当环境光强度较高时,调高触摸显示屏2405的显示亮度;当环境光强度较低时,调低触摸显示屏2405的显示亮度。在另一个实施例中,处理器2401还可以根据光学传感器2415采集的环境光强度,动态调整摄像头组件2406的拍摄参数。
接近传感器2416,也称距离传感器,通常设置在终端2400的前面板。接近传感器2416用于采集用户与终端2400的正面之间的距离。在一个实施例中,当接近传感器2416检测到用户与终端2400的正面之间的距离逐渐变小时,由处理器2401控制触摸显示屏2405从亮屏状态切换为息屏状态;当接近传感器2416检测到用户与终端2400的正面之间的距离逐渐变大时,由 处理器2401控制触摸显示屏2405从息屏状态切换为亮屏状态。
本领域技术人员可以理解,图24中示出的结构并不构成对终端2400的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在示例性实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质为非易失性的计算机可读存储介质,该计算机可读存储介质中存储有计算机可读指令,存储的计算机可读指令被处理组件执行时能够实现本公开上述实施例提供的PVS的确定方法。
在示例性实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质为非易失性的计算机可读存储介质,该计算机可读存储介质中存储有计算机可读指令,存储的计算机可读指令被处理组件执行时能够实现本公开上述实施例提供的三维场景的渲染方法。
在示例性实施例中,还提供了一种计算机程序产品,该程序产品中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如上述方法实施例中所示的由终端执行的PVS的确定方法。
在示例性实施例中,还提供了一种计算机程序产品,该程序产品中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如上述方法实施例中所示的由终端执行的三维场景的渲染方法。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种潜在可视集合的确定方法,由计算机设备执行,其特征在于,所述方法包括:
    将地图区域划分为多个检测点区域;
    将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
    在所述检测点区域中确定至少一个检测点;
    渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
    将所述目标颜色标识对应的三维物体添加至所述检测点区域的潜在可视集合PVS中。
  2. 根据权利要求1所述的方法,其特征在于,所述渲染出所述检测点对应的立方体贴图,包括:
    对所述检测点处的六个方向面分别进行渲染,得到每个方向面上对应的二维纹理贴图;及
    将所述六个方向面上的二维纹理贴图进行拼合,得到所述检测点对应的立方体贴图。
  3. 根据权利要求2所述的方法,其特征在于,所述确定所述立方体贴图上出现的目标颜色标识,包括:
    遍历所述立方体贴图的六个方向面上的二维纹理贴图的像素值,根据所述二维纹理贴图上出现的像素值确定所述立方体贴图上出现的目标颜色标识。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,包括:
    将所述检测点区域中的三维物体的唯一物体标识映射为颜色标识;及
    将所述三维物体的贴图材质替换为与所述颜色标识对应的单颜色材质。
  5. 根据权利要求4所述的方法,其特征在于,所述将所述检测点区域中的三维物体的唯一物体标识映射为颜色标识,包括:
    将所述检测点区域中的三维物体的唯一物体标识的最后三位，分别映射为红绿蓝颜色空间中的红色通道值、绿色通道值和蓝色通道值；
    根据所述红色通道值、所述绿色通道值和所述蓝色通道值,确定出所述三维物体对应的颜色标识。
  6. 根据权利要求1至3任一所述的方法，其特征在于，所述检测点包括低区检测点和高区检测点，所述在所述检测点区域中确定至少一个检测点，包括：
    在所述检测点区域上确定出离散的多个平面检测点;
    对于所述多个平面检测点中的每个平面检测点,通过物理模型射线检测对所述平面检测点进行检测,获得所述平面检测点处的路面高度;
    根据所述路面高度与第一高度相加后的第一和值,确定出所述平面检测点对应的低区检测点;及
    根据所述路面高度与第二高度相加后的第二和值,确定出所述平面检测点对应的高区检测点。
  7. 根据权利要求6所述的方法,其特征在于,所述将所述目标颜色标识对应的三维物体添加至所述检测点区域的潜在可视集合PVS中,包括:
    当所述目标颜色标识属于与所述低区检测点对应的立方体贴图时,将所述目标颜色标识对应的三维物体添加至所述检测点区域的第一PVS中;及
    当所述目标颜色标识属于与所述高区检测点对应的立方体贴图时,将所述目标颜色标识对应的三维物体添加至所述检测点区域的第二PVS中。
  8. 根据权利要求1至3任一所述的方法,其特征在于,所述将所述检测点区域中的三维物体的贴图材质替换为单颜色材质之后,还包括:
    将所述检测点区域中的半透明三维物体设置为隐藏属性;
    所述方法还包括:
    将所述检测点区域中的半透明三维物体重新设置为显示属性,将除所述半透明三维物体之外的三维物体设置为隐藏属性;
    将所述半透明三维物体的贴图材质替换为单颜色材质,每个半透明三维物体对应的单颜色材质的颜色标识不同;
    在所述检测点区域中确定至少一个检测点;
    渲染出所述检测点对应的立方体贴图，确定所述立方体贴图上出现的目标颜色标识；及
    将所述目标颜色标识对应的半透明三维物体添加至所述检测点区域的PVS中。
  9. 一种三维场景的渲染方法,其特征在于,应用于存储有检测点区域和潜在可视集合PVS的计算机设备中,所述PVS是采用如上权利要求1至8任一所述的方法生成的,所述方法包括:
    检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同;
    当所述摄像机模型在所述当前帧所在的检测点区域与所述上一帧所在的检测点区域不相同时,读取所述当前帧所在的检测点区域的PVS;及
    根据所述当前帧所在的检测点区域的PVS,渲染得到所述摄像机模型的镜头画面。
  10. 根据权利要求9所述的方法,其特征在于,所述根据所述当前帧所在的检测点区域的PVS,渲染得到所述摄像机模型的镜头画面,包括:
    根据所述当前帧所在的检测点区域的第一PVS,渲染得到所述摄像机模型的第一镜头画面;及
    根据所述当前帧所在的检测点区域的第二PVS,渲染得到所述摄像机模型的第二镜头画面。
  11. 一种潜在可视集合的确定方法,由计算机设备执行,其特征在于,所述方法应用于3D竞速游戏中,所述3D竞速游戏包括位于虚拟环境中的赛道区域,所述方法包括:
    将赛道区域划分为多个检测点区域;
    将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
    在所述检测点区域中确定至少一个检测点;
    渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
    将所述目标颜色标识对应的三维物体添加至所述检测点区域的赛道潜在可视集合PVS中。
  12. 一种潜在可视集合的确定装置,其特征在于,所述装置包括:
    第一划分模块,用于将地图区域划分为多个检测点区域;
    第一替换模块,用于将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
    第一确定模块,用于在所述检测点区域中确定至少一个检测点;
    第一渲染模块,用于渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
    第一添加模块,用于将所述目标颜色标识对应的三维物体添加至所述检测点区域的潜在可视集合PVS中。
  13. 根据权利要求12所述的装置,其特征在于,所述第一渲染模块包括渲染单元和拼合单元,其中,所述渲染单元,用于对所述检测点处的六个方向面分别进行渲染,得到每个方向面上对应的二维纹理贴图;所述拼合单元,用于将所述六个方向面上的二维纹理贴图进行拼合,得到所述检测点对应的立方体贴图。
  14. 根据权利要求13所述的装置,其特征在于,所述第一渲染模块还包括遍历单元,用于遍历所述立方体贴图的六个方向面上的二维纹理贴图的像素值,根据所述二维纹理贴图上出现的像素值确定所述立方体贴图上出现的目标颜色标识。
  15. 一种三维场景的渲染装置,其特征在于,应用于存储有检测点区域和潜在可视集合PVS的终端中,所述PVS是采用如上权利要求1至8任一所述的方法生成的,所述装置包括:
    检测模块,用于检测摄像机模型在当前帧所在的检测点区域与上一帧所在的检测点区域是否相同;
    读取模块,用于当所述摄像机模型在所述当前帧所在的检测点区域与所述上一帧所在的检测点区域不相同时,读取所述当前帧所在的检测点区域的 PVS;及
    第二渲染模块,用于根据所述当前帧所在的检测点区域的PVS,渲染得到所述摄像机模型的镜头画面。
  16. 一种潜在可视集合的确定装置,其特征在于,所述装置应用于3D竞速游戏中,所述3D竞速游戏包括位于虚拟环境中的赛道区域,所述装置包括:
    第二划分模块,用于将赛道区域划分为多个检测点区域;
    第二替换模块,用于将所述检测点区域中的三维物体的贴图材质替换为单颜色材质,每个三维物体对应的单颜色材质的颜色标识不同;
    第二确定模块,用于在所述检测点区域中确定至少一个检测点;
    第三渲染模块,用于渲染出所述检测点对应的立方体贴图,确定所述立方体贴图上出现的目标颜色标识;及
    第二添加模块,用于将所述目标颜色标识对应的三维物体添加至所述检测点区域的赛道潜在可视集合PVS中。
  17. 一种计算机设备，其特征在于，所述计算机设备包括处理器和存储器，所述存储器中存储有计算机可读指令，所述计算机可读指令被所述处理器执行时，使得所述处理器执行如权利要求1至8任一所述的潜在可视集合的确定方法的步骤，或，如权利要求11所述的潜在可视集合的确定方法的步骤。
  18. 一种计算机设备，其特征在于，所述计算机设备包括处理器和存储器，所述存储器中存储有计算机可读指令，所述计算机可读指令被所述处理器执行时，使得所述处理器执行如权利要求9至10任一所述的三维场景的渲染方法的步骤。
  19. 一种非易失性的计算机可读存储介质，存储有计算机可读指令，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如权利要求1至8任一所述的潜在可视集合的确定方法的步骤，或，如权利要求11所述的潜在可视集合的确定方法的步骤。
  20. 一种计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如权利要求9至10任一所述的三维场景的渲染方法的步骤。
PCT/CN2019/120864 2018-12-07 2019-11-26 潜在可视集合的确定方法、装置、设备及存储介质 WO2020114274A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19893835.9A EP3832605B1 (en) 2018-12-07 2019-11-26 Method and device for determining potentially visible set, apparatus, and storage medium
US17/185,328 US11798223B2 (en) 2018-12-07 2021-02-25 Potentially visible set determining method and apparatus, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811493375.0A CN109615686B (zh) 2018-12-07 2018-12-07 潜在可视集合的确定方法、装置、设备及存储介质
CN201811493375.0 2018-12-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/185,328 Continuation US11798223B2 (en) 2018-12-07 2021-02-25 Potentially visible set determining method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020114274A1 true WO2020114274A1 (zh) 2020-06-11

Family

ID=66007752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120864 WO2020114274A1 (zh) 2018-12-07 2019-11-26 潜在可视集合的确定方法、装置、设备及存储介质

Country Status (4)

Country Link
US (1) US11798223B2 (zh)
EP (1) EP3832605B1 (zh)
CN (1) CN109615686B (zh)
WO (1) WO2020114274A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615686B (zh) 2018-12-07 2022-11-29 腾讯科技(深圳)有限公司 潜在可视集合的确定方法、装置、设备及存储介质
CN110223589A (zh) * 2019-05-17 2019-09-10 上海蜂雀网络科技有限公司 一种基于3d绘画协议的汽车模型展示方法
CN113298918B (zh) * 2020-02-24 2022-12-27 广东博智林机器人有限公司 一种重叠区域的异色显示方法及装置
CN111583398B (zh) * 2020-05-15 2023-06-13 网易(杭州)网络有限公司 图像显示的方法、装置、电子设备及计算机可读存储介质
CN113763545A (zh) * 2021-09-22 2021-12-07 拉扎斯网络科技(上海)有限公司 图像确定方法、装置、电子设备和计算机可读存储介质
CN114522420A (zh) * 2022-02-16 2022-05-24 网易(杭州)网络有限公司 游戏数据处理方法、装置、计算机设备及存储介质


Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2112464C (en) * 1991-06-28 2002-05-14 Lim Hong Lip Improvements in visibility calculations for 3d computer graphics
GB0027133D0 (en) * 2000-11-07 2000-12-20 Secr Defence Improved method of producing a computer generated hologram
US6862025B2 (en) * 2002-02-28 2005-03-01 David B. Buehler Recursive ray casting method and apparatus
US8619078B2 (en) * 2010-05-21 2013-12-31 International Business Machines Corporation Parallelized ray tracing
US8692825B2 (en) * 2010-06-24 2014-04-08 International Business Machines Corporation Parallelized streaming accelerated data structure generation
US10109103B2 (en) * 2010-06-30 2018-10-23 Barry L. Jenkins Method of determining occluded ingress and egress routes using nav-cell to nav-cell visibility pre-computation
US8847965B2 (en) * 2010-12-03 2014-09-30 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for fast geometric sound propagation using visibility computations
US10157495B2 (en) * 2011-03-04 2018-12-18 General Electric Company Method and device for displaying a two-dimensional image of a viewed object simultaneously with an image depicting the three-dimensional geometry of the viewed object
US12008707B2 (en) * 2013-02-14 2024-06-11 David Todd Kaplan Highly scalable cluster engine for hosting simulations of objects interacting within a space
CN106683155B (zh) * 2015-11-04 2020-03-10 南京地心坐标信息科技有限公司 一种三维模型综合动态调度方法
CN107886552B (zh) * 2016-09-29 2021-04-27 网易(杭州)网络有限公司 贴图处理方法和装置
CN108876931B (zh) * 2017-05-12 2021-04-16 腾讯科技(深圳)有限公司 三维物体颜色调整方法、装置、计算机设备及计算机可读存储介质
CN108257103B (zh) * 2018-01-25 2020-08-25 网易(杭州)网络有限公司 游戏场景的遮挡剔除方法、装置、处理器及终端
CN108810538B (zh) * 2018-06-08 2022-04-05 腾讯科技(深圳)有限公司 视频编码方法、装置、终端及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020190989A1 (en) * 2001-06-07 2002-12-19 Fujitsu Limited Program and apparatus for displaying graphical objects
CN106780693A (zh) * 2016-11-15 2017-05-31 广州视源电子科技股份有限公司 一种通过绘制方式选择三维场景中物体的方法及系统
CN108888954A (zh) * 2018-06-20 2018-11-27 苏州玩友时代科技股份有限公司 一种拾取坐标的方法、装置、设备及存储介质
CN109615686A (zh) * 2018-12-07 2019-04-12 腾讯科技(深圳)有限公司 潜在可视集合的确定方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SYDDF_SHADOW: "The Implementation Principle and Details of SkyBox", 26 July 2018 (2018-07-26), pages 1 - 5, XP055713348, Retrieved from the Internet <URL:https://blog.csdn.net/yjr3426619/article/details/81224101> *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927365A (zh) * 2021-04-13 2021-06-08 网易(杭州)网络有限公司 在应用程序的三维虚拟场景中渲染山体的方法及装置
CN112927365B (zh) * 2021-04-13 2023-05-26 网易(杭州)网络有限公司 在应用程序的三维虚拟场景中渲染山体的方法及装置

Also Published As

Publication number Publication date
US11798223B2 (en) 2023-10-24
EP3832605A1 (en) 2021-06-09
EP3832605A4 (en) 2021-10-20
CN109615686B (zh) 2022-11-29
EP3832605B1 (en) 2024-04-17
CN109615686A (zh) 2019-04-12
EP3832605C0 (en) 2024-04-17
US20210183137A1 (en) 2021-06-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19893835; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019893835; Country of ref document: EP; Effective date: 20210303)
NENP Non-entry into the national phase (Ref country code: DE)