CN115120973A - Model rendering method and device, nonvolatile storage medium and terminal equipment


Info

Publication number: CN115120973A
Application number: CN202210676311.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: data, model, target, spatial data, point
Inventor: 梁哲
Current Assignee: Shanghai Neteasy Brilliant Network Technology Co., Ltd.
Original Assignee: Netease (Hangzhou) Network Co., Ltd.
Application filed by Netease (Hangzhou) Network Co., Ltd.
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30: Features of games characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Processing Or Creating Images

Abstract

The invention discloses a model rendering method and device, a nonvolatile storage medium, and a terminal device. In the method, a graphical user interface is provided through a terminal device, and the content displayed by the graphical user interface includes at least a virtual scene. The method comprises: acquiring model data of a target model in the virtual scene; determining first spatial data of a first point on the target model based on the model data; determining second spatial data of a second point on the target model based on the first spatial data; determining the reflection color of a virtual plane based on the color information corresponding to the second spatial data; and rendering the virtual plane according to the reflection color. The invention solves the technical problem in the prior art that rendering precision and cost overhead are difficult to balance when rendering light reflection in real time.

Description

Model rendering method and device, nonvolatile storage medium and terminal equipment
Technical Field
The invention relates to the field of computer vision, in particular to a model rendering method and device, a nonvolatile storage medium and terminal equipment.
Background
At present, most of the game industry adopts real-time rendering to present game scenes and improve the user's interactive experience. In games, the quality of light reflection, such as the clarity of a house reflected in a body of water, or how quickly the reflection updates at different angles as the game camera rotates, reflects the production quality of a game to a great extent. Although light reflection technology has gradually matured with the rapid development of the game industry, rendering delicate reflection images in a game scene in real time is still accompanied by high cost and overhead, which affects the user's game experience.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present invention provide a model rendering method and device, a nonvolatile storage medium, and a terminal device, so as to at least solve the technical problem in the prior art that rendering precision and cost overhead are difficult to balance when rendering light reflection in real time.
According to an embodiment of the present invention, a model rendering method is provided, in which a graphical user interface is provided through a terminal device, and the content displayed by the graphical user interface includes at least a virtual scene. The method comprises: obtaining model data of a target model in the virtual scene, wherein part of the sub-models of the target model are located below a virtual plane, and the virtual plane is used for presenting a reflection effect; determining first spatial data of a first point on the target model based on the model data, wherein the first point is a point on a sub-model of the target model located below the virtual plane; determining second spatial data of a second point on the target model based on the first spatial data, wherein the second point is a point on a sub-model of the target model located above the virtual plane; determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data; and rendering the virtual plane according to the reflection color.
Optionally, determining, based on the model data, first spatial data of a first point on the target model comprises: determining original space data of a virtual camera in a virtual scene and target space data from the virtual camera to the lower left corner of a far clipping plane; determining first location data of a first point in a graphical user interface; the first spatial data is determined based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data.
Optionally, determining the first spatial data based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data comprises: mapping the original depth data to a graphical user interface to obtain target depth data; obtaining a product of the target depth data and a first spatial component in the target spatial data to obtain a first product; obtaining a product of a second space component in the target space data and a first coordinate component in the target position coordinates to obtain a second product; obtaining a product of a third space component in the target space data and a second coordinate component in the target position coordinate to obtain a third product; and acquiring the sum of the original spatial data, the first product, the second product and the third product to obtain first spatial data.
Optionally, determining second spatial data for a second point on the target model based on the first spatial data comprises: acquiring third spatial data of the virtual plane; based on the first spatial data and the third spatial data, second spatial data is determined.
Optionally, determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data includes: mapping the second spatial data to a graphical user interface to obtain second position data; and acquiring color information corresponding to the second position data to obtain the reflection color.
Optionally, mapping the second spatial data to the graphical user interface to obtain the second position data includes: acquiring the camera position of a virtual camera in the virtual scene, a first distance from the virtual camera to the far clipping plane, a second distance from the virtual camera to the near clipping plane, and a field of view angle; determining a projection matrix based on the camera position, the first distance, the second distance, and the field of view angle; and obtaining the product of the second spatial data and the projection matrix to obtain the second position data.
Optionally, before obtaining model data of the target model in the virtual scene, the method further includes: determining first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane; and acquiring model data of the target model in the virtual scene in response to the difference between the first terrain data and the second terrain data being smaller than a preset threshold value.
According to an embodiment of the present invention, there is further provided a model rendering apparatus, which provides a graphical user interface through a terminal device, where contents displayed by the graphical user interface at least include: virtual scene, the device includes: the data acquisition module is used for acquiring model data of a target model in a virtual scene, wherein part of sub models of the target model are positioned below a virtual plane, and the virtual plane is used for presenting a reflection effect; the first data determining module is used for determining first space data of a first point on the target model based on the model data, wherein the first point is a point on a sub-model which is positioned below the virtual plane on the target model; the second data determining module is used for determining second spatial data of a second point on the target model based on the first spatial data, wherein the second point is a point on a sub-model positioned above the virtual plane on the target model; the color determining module is used for determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data; and the rendering module is used for rendering the virtual plane according to the reflection color.
Optionally, the first data determination module includes: the first data determining unit is used for determining original space data of a virtual camera in a virtual scene and target space data from the virtual camera to the lower left corner of a far clipping plane; a second data determination unit for determining first position data of the first point in the graphical user interface; a third data determination unit for determining the first spatial data based on the original spatial data, the target spatial data, the first position data and the original depth data in the model data.
Optionally, the third data determination unit is further configured to: mapping the original depth data to a graphical user interface to obtain target depth data; obtaining a product of the target depth data and a first spatial component in the target spatial data to obtain a first product; obtaining a product of a second space component in the target space data and a first coordinate component in the target position coordinates to obtain a second product; obtaining a product of a third space component in the target space data and a second coordinate component in the target position coordinate to obtain a third product; and acquiring the sum of the original spatial data, the first product, the second product and the third product to obtain first spatial data.
Optionally, the second data determination module comprises: a data acquisition unit configured to acquire third spatial data of the virtual plane; a fourth data determination unit for determining the second spatial data based on the first spatial data and the third spatial data.
Optionally, the color determination module comprises: the mapping unit is used for mapping the second spatial data to the graphical user interface to obtain second position data; and the color acquisition unit is used for acquiring color information corresponding to the second position data to obtain the reflection color.
Optionally, the mapping unit is further configured to: acquire the camera position of a virtual camera in the virtual scene, a first distance from the virtual camera to the far clipping plane, a second distance from the virtual camera to the near clipping plane, and a field of view angle; determine a projection matrix based on the camera position, the first distance, the second distance, and the field of view angle; and obtain the product of the second spatial data and the projection matrix to obtain the second position data.
Optionally, the apparatus further comprises: the terrain obtaining module is used for determining first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane; the data acquisition module is further used for responding to the fact that the difference between the first terrain data and the second terrain data is smaller than a preset threshold value, and acquiring model data of a target model in the virtual scene.
According to an embodiment of the present invention, there is further provided a non-volatile storage medium including a stored program, where the program controls a device in which the non-volatile storage medium is located to execute any one of the above model rendering methods when the program is executed.
According to an embodiment of the present invention, there is also provided a terminal device, including a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to execute any one of the model rendering methods described above.
In at least some embodiments of the present invention, model data of a target model in a virtual scene is obtained; first spatial data of a first point on the target model is determined based on the model data; second spatial data of a second point on the target model is determined based on the first spatial data; the reflection color of the virtual plane is determined based on the color information corresponding to the second spatial data; and the virtual plane is rendered according to the reflection color. In this method, the position at which the first spatial data maps to the graphical user interface is taken as the reflection position, and the position at which the second spatial data maps to the graphical user interface is taken as the reflected position; the color at the reflected position is then directly taken as the reflection color and rendered at the reflection position. There is no need to repeatedly sample the color at the position on the user interface to which the second spatial data maps, nor to repeatedly check, for each candidate reflection position, whether the second spatial data corresponds to the target model. This avoids the high cost and overhead caused in the prior art by repeatedly sampling model data to determine the reflection position, and improves the efficiency of rendering the virtual plane. Because the virtual plane is rendered with colors already present in the user interface, a high rendering quality is preserved, the user obtains a good visual result, and the game experience is improved, thereby solving the technical problem that rendering precision and cost overhead are difficult to balance when rendering light reflection in real time in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a screen space reflection principle in the prior art;
FIG. 2 is a block diagram of a mobile terminal hardware architecture illustrating a model rendering method according to an embodiment of the present invention;
FIG. 3 is a flow diagram illustrating a method of model rendering according to an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating a spatial coordinate determination according to an embodiment of the present invention;
FIG. 5 is a block diagram illustrating an architecture of a model rendering apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, technical names or technical terms appearing in the embodiments of the present invention are explained as follows:
RT: Render Target, a block of memory of a GPU (Graphics Processing Unit) used as an image cache to record rendering results.
Depth Buffer: a depth cache, i.e., the RT that holds the depth information (the distance of each scene element to the screen) of the 3D scene rendered onto the screen.
Sampling: acquiring data from various RTs for calculation and other purposes. Sampling generates read/write operations (IO) on the CPU or GPU, and therefore consumes bandwidth and power.
Bandwidth: an important index for performance evaluation. In the rendering process, bandwidth may refer to the number of reads and writes (the data volume) of the Depth Buffer and the Color Buffer, which increases the power consumption and heat generation of a mobile phone.
Screen Space Reflection algorithm (SSR): simulates the reflection color at a position on the screen using the depth buffer and the color buffer. Computing the reflection position corresponding to a position on the screen requires repeated reads of the Depth Buffer.
Ray-traced reflection: a virtual ray is emitted from the virtual camera, the surfaces the ray contacts and the directions in which it bounces are computed in the 3D scene, and the finally obtained color is the reflection result.
Reflection probe: an image component placed in a 3D scene that takes a direction as input and returns the color in that direction. It can be used to describe the scene around a location and to simulate reflections, but with very low accuracy.
Real-time rendering: mainly refers to rendering 3D scene data onto a 2D image, usually a block of memory within the GPU. After rendering completes, the engine copies the image data from that memory to the display.
Screen space techniques: in general rendering, different 2D images are obtained by modifying the scene data of the 3D space, but some complex effects and optimizations must be achieved with screen space techniques. Besides the color of the 2D image, these require rendering a 2D depth image (depth map), a surface orientation image (normal map), material information (metallicity, roughness), and so on. More complex effects can be built from this set of 2D information; such algorithms rely on repeatedly sampling the various 2D buffers for their computation, which reduces the dependence on the complexity of the 3D scene.
Projection matrix: a matrix that can project points in 3D space into 2D image space.
Common types of light reflection include specular reflection, ambient reflection, and the like. To give users a good in-game experience, practitioners have successively proposed various light reflection technologies, such as reflection probes, mirror rendering, ray tracing, and Screen Space Reflection (SSR).
Reflection probe technology selects a single color from a fixed image as the reflection result according to the direction of a point on the surface of the reflective object. Although it is computationally simple and undemanding of hardware, it is inflexible and its reflection accuracy is low.
Mirror rendering renders the scene again below the water according to the position of the reflection plane, so the user sees an exquisite mirror reflection; but re-rendering the scene models is too costly and places high demands on the device, and a user without a high-performance gaming device can hardly have a good game experience.
Ray tracing is an advanced real-time rendering technology that can accurately compute the reflection, refraction, and scattering of all rays in a game scene, and it is not limited to rendering planar reflections; however, it places very high demands on device performance, and even generally high-performance devices cannot run ray tracing well.
The screen space reflection algorithm is a screen-space post-processing technique that simulates the reflection color at a position on the screen using the depth cache and the color cache. Its cost overhead is fixed and does not depend on the complexity of the scene, it achieves a good rendering effect, and it is therefore widely used on different game devices such as mobile, PC, and console.
FIG. 1 is a schematic diagram of the screen space reflection principle in the prior art. As shown in FIG. 1, A represents a virtual camera, i.e., the viewing lens that captures the user's point of view; the image acquired by the virtual camera can be regarded as the image the user observes on the display screen. B represents a reflected object, which may be any game model in the game scene, such as an island or a ship. C represents a reflection plane, i.e., a plane that reflects the reflected object, such as a water surface or a mirror. n represents the reflected point; m represents the reflection point; D represents the normal of the reflection plane at the reflection point; and alpha represents the angle of incidence and the angle of reflection.
The principle of the screen space reflection algorithm is similar to how the human eye observes a reflected object in a reflection plane. The reflection direction is computed from the normal and position information of the water surface as seen from the virtual camera, and the algorithm searches by stepping along the reflection direction to determine whether a reflected object exists; that is, given the positions of A, m, and D in FIG. 1, it determines whether a point n exists. If point n does not exist, the search moves to a new position m and then checks again for a point n based on A and the new m. However, each time the candidate reflection position changes, the system must resample the depth information and normal information once, read the Depth Buffer to compute the corresponding two-dimensional screen position of the reflection position, and perform a check to decide whether the search is complete. This large number of repeated samples and checks can drive GPU bandwidth usage very high, degrading device performance.
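For illustration only, the following is a minimal sketch of this prior-art marching loop, written in Python rather than a fragment shader and assuming hypothetical helper callbacks (project_to_screen, sample_depth); every iteration performs another depth-buffer read, which is the repeated sampling cost described above.

```python
import numpy as np

# Minimal sketch of SSR ray marching (hypothetical helpers; real engines
# run this per pixel on the GPU). Each iteration re-reads the depth
# buffer, which drives up bandwidth usage.
def ssr_march(origin, reflect_dir, sample_depth, project_to_screen,
              max_steps=64, step_size=0.1, thickness=0.05):
    """March from reflection point m along the reflected ray; return the
    screen position (u, v) of the reflected point n, or None if no hit."""
    pos = np.asarray(origin, dtype=float)
    step = np.asarray(reflect_dir, dtype=float) * step_size
    for _ in range(max_steps):
        pos = pos + step                          # move along the reflection direction
        u, v, ray_depth = project_to_screen(pos)  # 3D point -> screen position + depth
        if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
            return None                           # ray marched off the screen
        scene_depth = sample_depth(u, v)          # one more Depth Buffer read
        if 0.0 < ray_depth - scene_depth < thickness:
            return (u, v)                         # hit: the reflected point n
    return None                                   # no reflected object found
```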
To enhance the player's gaming experience, in accordance with one embodiment of the present invention, an embodiment of a model rendering method is provided, it being noted that the steps illustrated in the flowchart of the figures may be performed in a computer system such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (MID), a PAD, or a game machine. FIG. 2 is a block diagram of the hardware structure of a mobile terminal for a model rendering method according to an embodiment of the present invention. As shown in FIG. 2, the mobile terminal may include one or more processors 202 (only one is shown in FIG. 2; the processors 202 may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a programmable logic device (FPGA), a Neural Network Processor (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory 204 for storing data. Optionally, the mobile terminal may further include a transmission device 206 for communication functions, an input/output device 208, and a display device 210. It will be understood by those skilled in the art that the structure shown in FIG. 2 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in FIG. 2, or have a different configuration than shown in FIG. 2.
The memory 204 may be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the model rendering method in the embodiment of the present invention, and the processor 202 executes various functional applications and data processing by running the computer program stored in the memory 204, that is, implements the model rendering method described above. Memory 204 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 204 may further include memory located remotely from the processor 202, which may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 206 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 206 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmitting device 206 may be a Radio Frequency (RF) module, which is used to communicate with the internet via wireless.
The inputs in the input output Device 208 may come from a plurality of Human Interface Devices (HIDs). For example: keyboard and mouse, game pad, other special game controller (such as steering wheel, fishing rod, dance mat, remote controller, etc.). Some human interface devices may provide output functions in addition to input functions, such as: force feedback and vibration of the gamepad, audio output of the controller, etc.
The display device 210 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
The model rendering method in one embodiment of the present disclosure may be executed on a local terminal device or a server. When the model rendering method is run on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and a client device.
In an optional embodiment, various cloud applications may be run under the cloud interaction system, for example: and (5) cloud games. Taking a cloud game as an example, the cloud game refers to a game mode based on cloud computing. In the cloud game operation mode, the game program operation main body and the game picture presentation main body are separated, the storage and the operation of the model rendering method are completed on the cloud game server, and the client device is used for receiving and sending data and presenting the game picture, for example, the client device can be a display device with a data transmission function close to a user side, such as a mobile terminal, a television, a computer, a palm computer and the like; but the cloud game server which performs information processing is a cloud. When a game is played, a player operates the client device to send an operation instruction to the cloud game server, the cloud game server runs the game according to the operation instruction, data such as game pictures and the like are encoded and compressed, the data are returned to the client device through a network, and finally the data are decoded through the client device and the game pictures are output.
In an optional implementation manner, taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with the player through a graphical user interface, namely, a game program is downloaded and installed and operated through the electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In a possible implementation, an embodiment of the present invention provides a model rendering method in which a graphical user interface is provided through a terminal device; the terminal device may be the aforementioned local terminal device or the aforementioned client device in a cloud interaction system. FIG. 3 is a flowchart of a model rendering method according to an embodiment of the present invention. A graphical user interface is provided through the terminal device, and the content displayed by the graphical user interface includes at least a virtual scene. As shown in FIG. 3, the method includes the following steps:
step S302, model data of a target model in a virtual scene is obtained.
Wherein, the partial submodel of the target model is positioned below the virtual plane, and the virtual plane is used for presenting the reflection effect.
The virtual scene can be a 3D game scene with light reflection; the target model may be a 3D model that is reflected, such as an iceberg, an island, etc.; the virtual plane may be a plane for reflecting light, such as a water surface, a mirror, etc.; the model data may be RT data of the target model in the virtual scene, including but not limited to normal data, depth data, color data, etc. of the target model in the virtual scene.
The virtual scene may be the content displayed in a graphical user interface presented to the user through the terminal device, for example, a game image viewed through a mobile phone screen or a computer display. The target model may intersect the virtual plane; that is, the virtual plane may divide the target model into two parts. For ease of understanding, take the virtual plane to be horizontal, with one part of the target model above it and the other below. In this case, if the virtual camera is above the virtual plane, the reflected part is the part of the model above the plane; if the virtual camera is below the virtual plane, the reflected part is the part of the model below the plane.
It should be noted that the model rendering methods corresponding to the above two cases are the same, and for the convenience of understanding, the model rendering method when the virtual camera is located above the virtual plane is illustrated herein as an example.
In an alternative embodiment, after the virtual scene to be rendered in real time is determined, model data of the object model reflected in the scene may be obtained.
Step S304, based on the model data, first spatial data of a first point on the target model is determined.
Wherein the first point is a point on the sub-model on the target model that is located below the virtual plane.
The first point may be a point on the model portion located below the virtual plane; the first spatial data may be 3D world coordinates of the first point in the virtual scene.
In an alternative embodiment, after obtaining the model data of the target model, any point on the model portion below the virtual plane and the first spatial data corresponding to the point may be determined first.
Step S306, second space data of a second point on the target model is determined based on the first space data.
Wherein the second point is a point on the sub-model on the target model above the virtual plane.
The second point may be a point on the model portion located above the virtual plane; the second spatial data may be 3D world coordinates of the second point in the virtual scene.
It should be noted that the second point is not arbitrary: it is the point axially symmetric to the first point about the virtual plane. That is, the 3D world coordinates of the second point can be determined from the 3D world coordinates of the first point and the coordinates of the virtual plane. Since the position of the virtual plane in the game scene generally does not change, the virtual plane can be set to the plane whose vertical coordinate is 0 in order to improve the efficiency of determining the second spatial data. Thus, once the first spatial data is acquired, the second spatial data of the second point can be obtained quickly from it.
Step S308, determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data.
The color information may be color data of a point corresponding to the second spatial data in the virtual scene, and the reflection color may be a color of a reflection point in a virtual plane displayed in the graphic user interface.
In an alternative embodiment, after the second spatial data is obtained, the reflection color may be determined according to the corresponding relationship between the second spatial data and the corresponding reflection point on the virtual plane, and a specific determination method is described below.
And step S310, rendering the virtual plane according to the reflection color.
In an alternative embodiment, after a reflection color is determined, the color of the reflection point corresponding to the second point on the virtual plane may be rendered immediately according to that reflection color. When all reflection colors have been determined and rendered onto the virtual plane, the resulting image is the reflection image, which the user can view through the virtual camera. A compact sketch of the whole loop is given below.
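The following is a minimal sketch of steps S302 to S310 as a single per-pixel loop, assuming hypothetical helper callbacks (world_pos_from_depth, mirror_about_plane, project_to_screen, sample_color); a real implementation would run per fragment on the GPU, and versions of these helpers are sketched in the sections that follow.

```python
def render_reflection(reflection_pixels, world_pos_from_depth,
                      mirror_about_plane, project_to_screen, sample_color):
    """Return {(u, v): reflection color} for each pixel of the virtual plane."""
    out = {}
    for (u, v) in reflection_pixels:             # pixels covering the virtual plane
        p1 = world_pos_from_depth(u, v)          # S304: first point (below the plane)
        p2 = mirror_about_plane(p1)              # S306: mirrored second point (above)
        u2, v2 = project_to_screen(p2)           # S308: map the second point to screen
        out[(u, v)] = sample_color(u2, v2)       # reflection color, a single sample
    return out                                   # S310: render these onto the plane
```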
In at least some embodiments of the present invention, model data of a target model in a virtual scene is obtained; first spatial data of a first point on the target model is determined based on the model data; second spatial data of a second point on the target model is determined based on the first spatial data; the reflection color of the virtual plane is determined based on the color information corresponding to the second spatial data; and the virtual plane is rendered according to the reflection color. In this method, the position at which the first spatial data maps to the graphical user interface is taken as the reflection position, and the position at which the second spatial data maps to the graphical user interface is taken as the reflected position; the color at the reflected position is then directly taken as the reflection color and rendered at the reflection position. There is no need to repeatedly sample the color at the position on the user interface to which the second spatial data maps, nor to repeatedly check, for each candidate reflection position, whether the second spatial data corresponds to the target model. This avoids the high cost and overhead caused in the prior art by repeatedly sampling model data to determine the reflection position, and improves the efficiency of rendering the virtual plane. Because the virtual plane is rendered with colors already present in the user interface, a high rendering quality is preserved, the user obtains a good visual result, and the game experience is improved, thereby solving the technical problem that rendering precision and cost overhead are difficult to balance when rendering light reflection in real time in the prior art.
In the above embodiments of the present invention, determining the first spatial data of the first point on the target model based on the model data comprises: determining original space data of a virtual camera in a virtual scene and target space data from the virtual camera to the lower left corner of a far clipping plane; determining first location data of a first point in a graphical user interface; the first spatial data is determined based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data.
The original spatial data may be the 3D world coordinates of the virtual camera, denoted CameraPosition; the method of acquiring CameraPosition can be found in the relevant literature and is not repeated here. The target space data may be the X, Y, Z vector components, in camera space, from the virtual camera to the lower-left corner of the far clipping plane. The first position data may be the position coordinates of the pixel to which the first point maps in the 2D image, denoted Texture[u][v]. The raw depth data may be the distance of the target model from the virtual camera.
Since the virtual camera, the first point, and the corresponding reflection point on the virtual plane are on the same straight line, the first position data may be position coordinates of the reflection point.
In an optional embodiment, after the original spatial data, the target spatial data, the first position data, and the original depth data are obtained, the 3D world coordinates corresponding to the first point may be calculated using a preset formula for the first spatial data; the specific calculation is shown below.
In the above embodiments of the present invention, determining the first spatial data based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data comprises: mapping the original depth data to a graphical user interface to obtain target depth data; obtaining a product of the target depth data and a first spatial component in the target spatial data to obtain a first product; obtaining a product of a second space component in the target space data and a first coordinate component in the target position coordinates to obtain a second product; obtaining a product of a third space component in the target space data and a second coordinate component in the target position coordinate to obtain a third product; and acquiring the sum of the original spatial data, the first product, the second product and the third product to obtain first spatial data.
The first spatial component may be the vector component Z; the second spatial component may be the vector component X; the first coordinate component may be an abscissa u of the first point mapped to a corresponding pixel point in the 2D image; the third spatial component may be the vector component Y; the second coordinate component generally refers to a vertical coordinate v of a corresponding pixel point in the 2D image mapped by the first point.
The first product may be a vertical coordinate value z corresponding to the first spatial data; the second product may be an abscissa value x corresponding to the first spatial data; the third product may be an ordinate value y corresponding to the first spatial data.
Since the process of presenting a graphical user interface to a user using a virtual camera is equivalent to the process of mapping a 3D virtual scene to a 2D image, the target depth data may be the result of mapping the original depth data in the 2D image.
In an optional embodiment, the mapping of the target model data from the 3D virtual scene to the graphical user interface is completed through the GPU rasterization process; that is, RT data such as depth data, color data, and normal data of the target model in the 3D virtual scene can be obtained by applying mapping steps such as depth testing, lighting calculation, and coordinate transformation to each point. To distinguish these different types of data, each type can be represented in the 2D image as RGBA (Red-Green-Blue-Alpha) data; the specific representation and mapping relationships can be found in the relevant literature and are not repeated here. Optionally, the colors of the mapped depth image indicate how far the target model is from the virtual camera: the whiter, the farther; the blacker, the closer.
That is, the above target depth data may be determined from the original depth data and the first position data according to the mapping step, and may be represented by DepthTexture [ u ] [ v ], representing the depth data of the target model at the first position coordinate Texture [ u ] [ v ].
After the target depth data is determined, the first spatial data, i.e. the 3D world coordinates of the first point with respect to the virtual camera, may be determined based on the target depth data, the target spatial data, the first position data and the raw spatial data.
The vertical coordinate value z of the first spatial data is calculated as:
z = DepthTexture[u][v] * Z + CameraPosition(z);
the abscissa value x of the first spatial data is calculated as:
x = u * X + CameraPosition(x);
the ordinate value y of the first spatial data is calculated as:
y = v * Y + CameraPosition(y);
and the final formula for the first spatial data WorldPosition is:
WorldPosition = DepthTexture[u][v] * Z + u * X + v * Y + CameraPosition.
It should be noted that the different letters and abbreviations above are used merely as shorthand to make the formulas easier to read; for example, CameraPosition denotes the 3D world coordinates of the virtual camera. The names themselves carry no special meaning and are not limiting.
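As an illustration only, the WorldPosition formula above can be written out as follows, a minimal sketch assuming that u and v are the normalized screen coordinates of the first position data and that sample_depth is a hypothetical lookup into DepthTexture:

```python
import numpy as np

def world_position(u, v, sample_depth, X, Y, Z, camera_position):
    """Reconstruct the first spatial data (3D world coordinates of the
    first point) from its screen position (u, v) and the sampled depth."""
    d = sample_depth(u, v)                      # target depth data DepthTexture[u][v]
    return (d * np.asarray(Z, dtype=float)      # first product:  depth * Z
            + u * np.asarray(X, dtype=float)    # second product: u * X
            + v * np.asarray(Y, dtype=float)    # third product:  v * Y
            + np.asarray(camera_position, dtype=float))  # plus CameraPosition
```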
In the above embodiments of the present invention, determining second spatial data of a second point on the target model based on the first spatial data comprises: acquiring third spatial data of the virtual plane; based on the first spatial data and the third spatial data, second spatial data is determined.
The third spatial data may be 3D world coordinates where a virtual plane is located, for example, the virtual plane may be a horizontal plane with a vertical coordinate value of 0.
FIG. 4 is a schematic diagram of determining spatial coordinates according to an embodiment of the present invention. FIG. 4 is similar to FIG. 1, except that point o in FIG. 4 is the first point and point n is the second point, i.e., the reflected point. When the data of reflection point m on virtual plane C is acquired with virtual camera A, the data of point o on target model B can be acquired directly, and by the principles of light reflection and triangle symmetry, point n and point o are symmetric about virtual plane C. That is, the vertical distance from the first point o to the virtual plane can be obtained directly, and the second spatial coordinates of the second point n can then be determined directly from that distance and the third spatial data.
For example, if the first spatial data corresponding to the first point o is (3, 5, -8) and the virtual plane C is a horizontal plane with a vertical coordinate value of 0, it can be directly determined that the second spatial data corresponding to the second point n is (3, 5, 8).
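A minimal sketch of this symmetry step, assuming a horizontal virtual plane whose vertical coordinate is plane_height (0 by default, matching the text):

```python
import numpy as np

def mirror_about_plane(first_spatial_data, plane_height=0.0):
    """Mirror the first point about the virtual plane to obtain the
    second spatial data of the second point."""
    x, y, z = first_spatial_data                  # z is the vertical coordinate here
    return np.array([x, y, 2.0 * plane_height - z])

# Worked example from the text: (3, 5, -8) mirrors to (3, 5, 8).
print(mirror_about_plane((3, 5, -8)))             # -> [3. 5. 8.]
```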
In the above embodiment of the present invention, determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data includes: mapping the second spatial data to a graphical user interface to obtain second position data; and acquiring color information corresponding to the second position data to obtain the reflection color.
In an optional embodiment, similar to the scheme for determining the first position data from the first spatial data, after the second spatial data is obtained it may be mapped to a pixel in the graphical user interface according to the projection matrix; the color of that pixel is the reflection color.
Using the color displayed in the graphical user interface as the reflection color, rather than the color of the target model at the second point, effectively avoids determining an incorrect reflection color when the second point is occluded by the target model.
In the above embodiment of the present invention, mapping the second spatial data to the graphical user interface to obtain the second position data includes: acquiring the camera position of the virtual camera in the virtual scene, a first distance from the virtual camera to the far clipping plane, a second distance from the virtual camera to the near clipping plane, and a field of view angle; determining a projection matrix based on the camera position, the first distance, the second distance, and the field of view angle; and obtaining the product of the second spatial data and the projection matrix to obtain the second position data.
In an alternative embodiment, the projection matrix P is as follows:
[The projection matrix P is given as an image in the original publication and is not reproduced here.]
wherein f represents the distance from the virtual camera to the far clipping plane; c represents the distance from the virtual camera to the near clipping plane; t-I represents the length of the graphical user interface; and r represents the width of the graphical user interface.
After the projection matrix is determined, the second spatial data W can be multiplied directly by the projection matrix P to obtain the corresponding second position data H, where H = PW, i.e., the coordinates of the pixel in the graphical user interface.
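For illustration, the following is a minimal sketch of this mapping, assuming a standard perspective projection matrix built from the field of view angle and the near/far clipping distances (the patent's own matrix is given only as an image and may differ in convention), and assuming the second spatial data has already been transformed into camera space:

```python
import numpy as np

def perspective_matrix(fov_y_deg, aspect, near, far):
    """A standard perspective projection matrix (an assumed stand-in for
    the matrix P shown as an image in the original publication)."""
    t = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [t / aspect, 0.0, 0.0, 0.0],
        [0.0, t, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def project_to_screen(point_camera_space, P):
    """H = PW: project the second spatial data W to second position data H."""
    h = P @ np.append(np.asarray(point_camera_space, dtype=float), 1.0)
    ndc = h[:3] / h[3]                                   # perspective divide
    return (ndc[0] + 1.0) / 2.0, (ndc[1] + 1.0) / 2.0    # screen coords (u, v) in [0, 1]
```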
In the above embodiment of the present invention, before obtaining the model data of the target model in the virtual scene, the method further includes: determining first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane; and acquiring the model data of the target model in the virtual scene in response to the difference between the first terrain data and the second terrain data being smaller than a preset threshold.
The first terrain data may be the terrain, near the virtual plane, of the portion of the target model above the virtual plane, such as the part of an iceberg above the water surface; the second terrain data may be the terrain, near the virtual plane, of the portion of the target model below the virtual plane, such as the part of an iceberg below the water surface.
It should be noted that, to improve the efficiency of rendering light reflection for a model, the target model described here is preferably one whose terrain trends are consistent above and below the virtual plane, with little deformation, such as an iceberg, beach, or island. For such target models, a smaller difference threshold H and a larger difference threshold I may be set. Optionally, after the first terrain data and the second terrain data are determined, the difference between them is computed. If the difference is smaller than H, the model data may be obtained directly and the model rendered by the method described above; if the difference is larger than H but smaller than I, the target model may be adjusted in the game, and its model data obtained for rendering after the adjustment is complete; if the difference is larger than I, another rendering mode, such as the screen space reflection algorithm, may be selected to render the model.
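A minimal sketch of this pre-check follows, with the threshold names H and I carried over from the text; the difference metric is a hypothetical stand-in, since the text does not specify how the terrain difference is measured:

```python
import numpy as np

def terrain_difference(first_terrain, second_terrain):
    # Hypothetical metric: mean absolute height difference between the
    # terrain just above and just below the virtual plane.
    a = np.asarray(first_terrain, dtype=float)
    b = np.asarray(second_terrain, dtype=float)
    return float(np.mean(np.abs(a - b)))

def choose_rendering_path(first_terrain, second_terrain, h_thresh, i_thresh):
    """Pick a rendering mode from the terrain difference (h_thresh < i_thresh)."""
    diff = terrain_difference(first_terrain, second_terrain)
    if diff < h_thresh:
        return "mirror"        # render directly with the method of this embodiment
    elif diff < i_thresh:
        return "adjust_model"  # adjust the model first, then use this method
    else:
        return "ssr"           # fall back to the screen space reflection algorithm
```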
Different rendering modes are set for different models, so that the model rendering precision can be better improved, and a user has better game experience.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a model rendering apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations, and details that have already been described are not repeated. As used hereinafter, the terms "unit" and "module" may refer to a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram illustrating a model rendering apparatus according to an embodiment of the present invention, which provides a graphical user interface using a terminal device, where the content displayed by the graphical user interface at least includes: a virtual scene, the apparatus comprising: a data obtaining module 502, configured to obtain model data of a target model in a virtual scene, where a part of sub models of the target model is located below a virtual plane, and the virtual plane is used for presenting a reflection effect; a first data determining module 504, configured to determine, based on the model data, first spatial data of a first point on the target model, where the first point is a point on a sub-model on the target model located below the virtual plane; a second data determining module 506, configured to determine second spatial data of a second point on the target model based on the first spatial data, where the second point is a point on a submodel on the target model above the virtual plane; a color determining module 508, configured to determine a reflection color of the virtual plane based on color information corresponding to the second spatial data; and a rendering module 510, configured to render the virtual plane according to the reflection color.
Optionally, the first data determining module 504 includes: the first determining unit is used for determining original space data of the virtual camera in the virtual scene and target space data from the virtual camera to the lower left corner of the far clipping plane; a second determination unit for determining first position data of the first point in the graphical user interface; a third determining unit for determining the first spatial data based on the original spatial data, the target spatial data, the first position data and the original depth data in the model data.
Optionally, the third determining unit includes: the target depth data determining subunit is used for mapping the original depth data to a graphical user interface to obtain target depth data; the first acquiring subunit is configured to acquire a product of the target depth data and a first spatial component in the target spatial data, so as to obtain a first product; the second acquiring subunit is used for acquiring a product of a second spatial component in the target spatial data and a first coordinate component in the target position coordinates to obtain a second product; the third acquiring subunit is used for acquiring a product of a third space component in the target space data and a second coordinate component in the target position coordinate to obtain a third product; and the first data determining subunit is used for acquiring the sum of the original spatial data, the first product, the second product and the third product to obtain first spatial data.
Optionally, the second data determination module 506 includes: a first acquisition unit configured to acquire third spatial data of a virtual plane; a fourth determining unit configured to determine the second spatial data based on the first spatial data and the third spatial data.
Optionally, the color determining module 508 includes: a fifth determining unit, configured to map the second spatial data to the graphical user interface to obtain second position data; and a second acquiring unit, configured to acquire color information corresponding to the second position data to obtain the reflection color.
Optionally, the second acquiring unit includes: a fourth acquiring subunit, configured to acquire the camera position of the virtual camera in the virtual scene, a first distance from the virtual camera to the far clipping plane, a second distance from the virtual camera to the near clipping plane, and a view angle; a projection matrix determining unit, configured to determine a projection matrix based on the camera position, the first distance, the second distance, and the view angle; and a fifth acquiring subunit, configured to acquire a product of the second spatial data and the projection matrix to obtain the second position data.
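A standard perspective matrix can be assembled from exactly the quantities listed here; the camera position normally enters through a view transform, which this description folds into the projection step. The sketch below assumes an OpenGL-style, column-vector convention and is illustrative rather than the patent's implementation:

```python
import numpy as np

def projection_matrix(fov_y_deg, aspect, near, far):
    # Perspective projection from the view angle and the distances to the
    # near and far clipping planes (OpenGL-style clip space assumed).
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def second_position_data(second_spatial, view_matrix, proj_matrix):
    # "Product of the second spatial data and the projection matrix": in
    # practice the view transform derived from the camera position is
    # applied first, then the perspective divide maps to interface UVs.
    clip = proj_matrix @ view_matrix @ np.append(second_spatial, 1.0)
    ndc = clip[:3] / clip[3]       # perspective divide
    return ndc[:2] * 0.5 + 0.5     # NDC [-1, 1] -> UV [0, 1]
```

Sampling the interface colour at the resulting UV then yields the reflection colour used by the rendering module.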
Optionally, the apparatus further includes: a terrain data determining module, configured to determine first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane; and a model data determining module, configured to obtain the model data of the target model in the virtual scene in response to the difference between the first terrain data and the second terrain data being smaller than a preset threshold.
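The patent does not define "terrain data" beyond this comparison against a threshold; one illustrative reading is an aggregate height offset from the plane on each side, gating whether the reflection pipeline runs at all:

```python
def should_obtain_model_data(heights_above, heights_below, plane_height, threshold):
    # Hypothetical measure of the first and second terrain data: the mean
    # height offset from the virtual plane on each side. Model data is only
    # fetched when the two sides are sufficiently symmetric.
    first_terrain = sum(h - plane_height for h in heights_above) / len(heights_above)
    second_terrain = sum(plane_height - h for h in heights_below) / len(heights_below)
    return abs(first_terrain - second_terrain) < threshold
```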
It should be noted that the above units and modules may be implemented by software or by hardware. In the latter case, the following implementations are possible, but not limiting: the units and modules are all located in the same processor; alternatively, the units and modules are distributed among different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the non-volatile storage medium may be located in any computer terminal of a computer terminal group in a computer network, or in any mobile terminal of a mobile terminal group.
Alternatively, in the present embodiment, the above-mentioned nonvolatile storage medium may be configured to store a computer program for executing the steps of:
S31, obtaining model data of the target model in the virtual scene;
S32, determining first spatial data of a first point on the target model based on the model data;
S33, determining second spatial data of a second point on the target model based on the first spatial data;
S34, determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data;
and S35, rendering the virtual plane according to the reflection color (an end-to-end sketch of these steps follows below).
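Tying steps S31 to S35 together, the sketch below traces one reflective pixel through the pipeline. It reuses the assumptions made above (mirror reflection, simplified projection) and invents its own scene constants; it is a minimal sketch, not the patented implementation:

```python
import numpy as np

# Illustrative constants, not values from the patent.
CAM_POS = np.array([0.0, 2.0, -5.0])   # virtual camera position
PLANE_H = 0.0                          # height of the reflective virtual plane
FRAME = np.zeros((720, 1280, 3))       # stand-in for the interface colour buffer

def reflect_pixel(first_spatial):
    # S31/S32 are assumed done: first_spatial is the world position of a
    # below-plane point on the target model, reconstructed from model data.
    # S33: mirror across the virtual plane to obtain the second spatial data.
    second_spatial = np.array([first_spatial[0],
                               2.0 * PLANE_H - first_spatial[1],
                               first_spatial[2]])
    # S34: project to the interface and sample its colour; a bare pinhole
    # camera stands in for a real engine's view-projection matrices.
    rel = second_spatial - CAM_POS
    u = 0.5 + 0.5 * rel[0] / max(rel[2], 1e-6)
    v = 0.5 - 0.5 * rel[1] / max(rel[2], 1e-6)
    x = int(np.clip(u, 0.0, 1.0) * (FRAME.shape[1] - 1))
    y = int(np.clip(v, 0.0, 1.0) * (FRAME.shape[0] - 1))
    reflection_color = FRAME[y, x]
    # S35: the caller writes reflection_color to the reflective pixel of
    # the virtual plane.
    return reflection_color
```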
Optionally, the above-mentioned non-volatile storage medium is further configured to store program code for performing the following steps: based on the model data, determining first spatial data for a first point on the target model comprises: determining original space data of a virtual camera in a virtual scene and target space data from the virtual camera to the lower left corner of a far clipping plane; determining first location data of a first point in a graphical user interface; the first spatial data is determined based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data.
Optionally, the non-volatile storage medium is further configured to store program code for performing the following steps: determining the first spatial data based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data includes: mapping the original depth data to the graphical user interface to obtain target depth data; obtaining a product of the target depth data and a first spatial component in the target spatial data to obtain a first product; obtaining a product of a second spatial component in the target spatial data and a first coordinate component in the target position coordinates to obtain a second product; obtaining a product of a third spatial component in the target spatial data and a second coordinate component in the target position coordinates to obtain a third product; and obtaining the sum of the original spatial data, the first product, the second product and the third product to obtain the first spatial data.
Optionally, the non-volatile storage medium is further configured to store program code for performing the following steps: determining second spatial data for a second point on the target model based on the first spatial data comprises: acquiring third spatial data of the virtual plane; based on the first spatial data and the third spatial data, second spatial data is determined.
Optionally, the non-volatile storage medium is further configured to store program code for performing the following steps: determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data comprises: mapping the second spatial data to a graphical user interface to obtain second position data; and acquiring color information corresponding to the second position data to obtain the reflection color.
Optionally, the non-volatile storage medium is further configured to store program code for performing the following steps: mapping the second spatial data to the graphical user interface to obtain the second position data includes: obtaining a camera position of the virtual camera in the virtual scene, a first distance from the virtual camera to the far clipping plane, a second distance from the virtual camera to the near clipping plane, and a view angle; determining a projection matrix based on the camera position, the first distance, the second distance, and the view angle; and obtaining a product of the second spatial data and the projection matrix to obtain the second position data.
Optionally, the non-volatile storage medium is further configured to store program code for performing the following steps: before obtaining model data of the target model in the virtual scene, the method further includes: determining first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane; and obtaining the model data of the target model in the virtual scene in response to the difference between the first terrain data and the second terrain data being smaller than a preset threshold.
Optionally, in this embodiment, the non-volatile storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
In at least some embodiments of the present invention, model data of a target model in a virtual scene is obtained; first spatial data of a first point on the target model is determined based on the model data; second spatial data of a second point on the target model is determined based on the first spatial data; the reflection color of the virtual plane is determined based on the color information corresponding to the second spatial data; and the virtual plane is rendered according to the reflection color. The position to which the first spatial data is mapped in the graphical user interface is determined as the reflecting position, the position to which the second spatial data is mapped is determined as the reflected position, and the color at the reflected position is directly determined as the reflection color and rendered at the reflecting position. There is no need to sample the color at the reflected position repeatedly, nor to repeatedly determine from the reflecting position whether the second spatial data corresponds to the target model. This avoids the high overhead caused in the prior art by repeatedly sampling model data to determine the reflecting position, and improves the efficiency of rendering the virtual plane. Moreover, because the virtual plane is rendered with colors taken from the user interface itself, a high-quality rendering effect is preserved, the user obtains a good visual and game experience, and the technical problem that rendering precision and cost overhead are difficult to balance when rendering light reflection in real time is solved.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. The technical solution according to the embodiments of the present invention can therefore be embodied in the form of a software product, which can be stored in a computer-readable storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions for causing a computing device (such as a personal computer, a server, a terminal device, or a network device) to execute the method according to the embodiments of the present invention.
In an exemplary embodiment of the present application, a computer-readable storage medium has stored thereon a program product capable of implementing the above-described method of the present embodiment. In some possible implementations, various aspects of the embodiments of the present invention may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary implementations of the present invention described in the above section "exemplary method" of this embodiment, when the program product is run on the terminal device.
The program product for implementing the above method may employ a portable compact disc read-only memory (CD-ROM) containing the program code, and may be run on a terminal device such as a personal computer. However, the program product of the embodiments of the invention is not limited thereto; in the embodiments of the invention, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to perform, by means of the computer program, the following steps:
S41, obtaining model data of the target model in the virtual scene;
S42, determining first spatial data of a first point on the target model based on the model data;
S43, determining second spatial data of a second point on the target model based on the first spatial data;
S44, determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data;
and S45, rendering the virtual plane according to the reflection color.
Optionally, the processor may be further configured to perform, by means of the computer program, the following steps: based on the model data, determining first spatial data of a first point on the target model includes: determining original spatial data of a virtual camera in the virtual scene and target spatial data from the virtual camera to the lower left corner of the far clipping plane; determining first position data of the first point in the graphical user interface; and determining the first spatial data based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data.
Optionally, the processor may be further configured to perform, by means of the computer program, the following steps: determining the first spatial data based on the original spatial data, the target spatial data, the first position data, and the original depth data in the model data includes: mapping the original depth data to the graphical user interface to obtain target depth data; obtaining a product of the target depth data and a first spatial component in the target spatial data to obtain a first product; obtaining a product of a second spatial component in the target spatial data and a first coordinate component in the target position coordinates to obtain a second product; obtaining a product of a third spatial component in the target spatial data and a second coordinate component in the target position coordinates to obtain a third product; and obtaining the sum of the original spatial data, the first product, the second product and the third product to obtain the first spatial data.
Optionally, the processor may be further configured to perform, by means of the computer program, the following steps: determining second spatial data of a second point on the target model based on the first spatial data includes: obtaining third spatial data of the virtual plane; and determining the second spatial data based on the first spatial data and the third spatial data.
Optionally, the processor may be further configured to perform, by means of the computer program, the following steps: determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data includes: mapping the second spatial data to the graphical user interface to obtain second position data; and obtaining the color information corresponding to the second position data to obtain the reflection color.
Optionally, the processor may be further configured to perform, by means of the computer program, the following steps: mapping the second spatial data to the graphical user interface to obtain the second position data includes: obtaining a camera position of the virtual camera in the virtual scene, a first distance from the virtual camera to the far clipping plane, a second distance from the virtual camera to the near clipping plane, and a view angle; determining a projection matrix based on the camera position, the first distance, the second distance, and the view angle; and obtaining a product of the second spatial data and the projection matrix to obtain the second position data.
Optionally, the processor may be further configured to perform, by means of the computer program, the following steps: before obtaining model data of the target model in the virtual scene, the method further includes: determining first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane; and obtaining the model data of the target model in the virtual scene in response to the difference between the first terrain data and the second terrain data being smaller than a preset threshold.
The technical effects of this embodiment are the same as those described above for the non-volatile storage medium embodiment, and are not repeated here.
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 6, the electronic device 600 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 takes the form of a general-purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processor 610, at least one memory 620, a bus 630 connecting the various system components (including the memory 620 and the processor 610), and a display 640.
The memory 620 stores program code that can be executed by the processor 610 to cause the processor 610 to perform the steps according to the various exemplary embodiments of the present invention described in the method section above.
The memory 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 6201 and/or a cache memory unit 6202, may further include a read-only memory unit (ROM) 6203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some examples, memory 620 may also include program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The memory 620 may further include memory located remotely from the processor 610, which may be connected to the electronic device 600 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller bus, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures.
Display 640 may, for example, be a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 600.
Optionally, the electronic device 600 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) through the network adapter 660. As shown in FIG. 6, the network adapter 660 communicates with the other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with the electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The electronic device 600 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in fig. 6 is only an illustration and does not limit the structure of the electronic device. For example, the electronic device 600 may include more or fewer components than shown in fig. 6, or have a different configuration from that shown in fig. 6. The memory 620 may be used for storing computer programs and corresponding data, such as the computer programs and data corresponding to the model rendering method and apparatus, non-volatile storage medium, and terminal device in the embodiments of the present invention. The processor 610 executes various functional applications and data processing by running the computer programs stored in the memory 620, that is, implements the model rendering method described above.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (10)

1. A model rendering method, wherein a terminal device provides a graphical user interface, and the content displayed by the graphical user interface at least includes: a virtual scene, the method comprising:
obtaining model data of a target model in the virtual scene, wherein part of sub-models of the target model are positioned below a virtual plane, and the virtual plane is used for presenting a reflection effect;
determining first spatial data of a first point on the target model based on the model data, wherein the first point is a point on a sub-model on the target model that is located below the virtual plane;
determining second spatial data of a second point on the target model based on the first spatial data, wherein the second point is a point on a sub-model on the target model above the virtual plane;
determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data;
and rendering the virtual plane according to the reflection color.
2. The method of claim 1, wherein determining, based on the model data, first spatial data for a first point on the target model comprises:
determining original space data of a virtual camera in the virtual scene and target space data from the virtual camera to the lower left corner of a far clipping plane;
determining first position data of the first point in the graphical user interface;
determining the first spatial data based on the original spatial data, the target spatial data, the first position data, and original depth data in the model data.
3. The method of claim 2, wherein determining the first spatial data based on the raw spatial data, the target spatial data, the first location data, and raw depth data in the model data comprises:
mapping the original depth data to the graphical user interface to obtain target depth data;
obtaining a product of the target depth data and a first spatial component in the target spatial data to obtain a first product;
obtaining a product of a second spatial component in the target spatial data and a first coordinate component in the target position coordinates to obtain a second product;
obtaining a product of a third spatial component in the target spatial data and a second coordinate component in the target position coordinates to obtain a third product;
and acquiring the sum of the original spatial data, the first product, the second product and the third product to obtain the first spatial data.
4. The method of claim 1, wherein determining second spatial data for a second point on the target model based on the first spatial data comprises:
acquiring third spatial data of the virtual plane;
determining the second spatial data based on the first spatial data and the third spatial data.
5. The method of claim 1, wherein determining the reflection color of the virtual plane based on the color information corresponding to the second spatial data comprises:
mapping the second spatial data to a graphical user interface to obtain second position data;
and acquiring color information corresponding to the second position data to obtain the reflection color.
6. The method of claim 5, wherein mapping the second spatial data to the graphical user interface to obtain the second position data comprises:
acquiring a camera position of a virtual camera in the virtual scene, a first distance from the virtual camera to a far clipping plane, a second distance from the virtual camera to a near clipping plane and a view angle;
determining a projection matrix based on the camera position, the first distance, the second distance, and the view angle;
and acquiring a product of the second spatial data and the projection matrix to obtain the second position data.
7. The method of claim 1, wherein prior to obtaining model data of the target model in the virtual scene, the method further comprises:
determining first terrain data of the target model above the virtual plane and second terrain data of the target model below the virtual plane;
and obtaining the model data of the target model in the virtual scene in response to the difference between the first terrain data and the second terrain data being smaller than a preset threshold.
8. A model rendering device is characterized in that a graphical user interface is provided through a terminal device, and the content displayed by the graphical user interface at least comprises: a virtual scene, the apparatus comprising:
the data acquisition module is used for acquiring model data of a target model in the virtual scene, wherein part of sub-models of the target model are positioned below a virtual plane, and the virtual plane is used for presenting a reflection effect;
a first data determining module, configured to determine first spatial data of a first point on the target model based on the model data, where the first point is a point on a sub-model on the target model located below the virtual plane;
a second data determining module, configured to determine second spatial data of a second point on the target model based on the first spatial data, where the second point is a point on a sub-model on the target model that is located above the virtual plane;
a color determining module, configured to determine a reflection color of the virtual plane based on color information corresponding to the second spatial data;
and the rendering module is used for rendering the virtual plane according to the reflection color.
9. A non-volatile storage medium, comprising a stored program, wherein a device on which the non-volatile storage medium is located is controlled to perform the model rendering method of any one of claims 1 to 7 when the program is run.
10. A terminal device, comprising: a memory having a computer program stored therein and a processor for executing the computer program to perform the model rendering method of any one of claims 1 to 7.
CN202210676311.4A 2022-06-15 2022-06-15 Model rendering method and device, nonvolatile storage medium and terminal equipment Pending CN115120973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210676311.4A CN115120973A (en) 2022-06-15 2022-06-15 Model rendering method and device, nonvolatile storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210676311.4A CN115120973A (en) 2022-06-15 2022-06-15 Model rendering method and device, nonvolatile storage medium and terminal equipment

Publications (1)

Publication Number Publication Date
CN115120973A true CN115120973A (en) 2022-09-30

Family

ID=83377890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210676311.4A Pending CN115120973A (en) 2022-06-15 2022-06-15 Model rendering method and device, nonvolatile storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN115120973A (en)

Similar Documents

Publication Publication Date Title
US10754531B2 (en) Displaying a three dimensional user interface
US11119564B2 (en) Information processing apparatus, method for information processing, and game apparatus for performing different operations based on a movement of inputs
JP4698893B2 (en) Method, graphics system, and program for providing improved fog effects
US6700586B1 (en) Low cost graphics with stitching processing hardware support for skeletal animation
US7905779B2 (en) Video game including effects for providing different first person experiences of the same video game world and a storage medium storing software for the video game
US9483873B2 (en) Easy selection threshold
CN115375822A (en) Cloud model rendering method and device, storage medium and electronic device
US6717575B2 (en) Image drawing method, image drawing apparatus, recording medium, and program
US8400445B2 (en) Image processing program and image processing apparatus
CN115082607A (en) Virtual character hair rendering method and device, electronic equipment and storage medium
CN117252982A (en) Material attribute generation method and device for virtual three-dimensional model and storage medium
CN115120973A (en) Model rendering method and device, nonvolatile storage medium and terminal equipment
JP4096710B2 (en) Image generation device
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN114299203A (en) Processing method and device of virtual model
WO2022127852A1 (en) Finger touch operation display method and apparatus
JP2012155731A (en) Retrieval system
US20220392152A1 (en) Method for operating component, electronic device, storage medium and program product
CN117496033A (en) Mapping processing method and device, computer readable storage medium and electronic device
CN115089964A (en) Method and device for rendering virtual fog model, storage medium and electronic device
CN116889723A (en) Picture generation method and device of virtual scene, storage medium and electronic device
CN117911600A (en) Method and device for generating stylized hand-drawing effect, storage medium and electronic device
CN115131489A (en) Cloud layer rendering method and device, storage medium and electronic device
CN117496035A (en) Texture map generation method and device, storage medium and electronic device
CN116468839A (en) Model rendering method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230831

Address after: Room 3040, 3rd floor, 2879 Longteng Avenue, Xuhui District, Shanghai, 2002

Applicant after: Shanghai NetEasy Brilliant Network Technology Co.,Ltd.

Address before: 310000 7 storeys, Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: NETEASE (HANGZHOU) NETWORK Co.,Ltd.

TA01 Transfer of patent application right