CN114742970A - Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device - Google Patents


Info

Publication number
CN114742970A
Authority
CN
China
Prior art keywords
virtual
view
map
dimensional model
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210346588.0A
Other languages
Chinese (zh)
Inventor
文涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210346588.0A priority Critical patent/CN114742970A/en
Publication of CN114742970A publication Critical patent/CN114742970A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/532 Query formulation, e.g. graphical querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for processing a virtual three-dimensional model, a non-volatile storage medium, and an electronic device. The method comprises the following steps: determining a plurality of viewing angles corresponding to the virtual three-dimensional model; generating a view-information map array corresponding to the plurality of viewing angles based on the rendering information corresponding to each viewing angle; generating a lookup map from a virtual mesh volume corresponding to the virtual three-dimensional model and the view-information map array; and obtaining the target view-information map corresponding to a target viewing angle from the view-information map array by means of the lookup map, and performing a rendering operation with the target view-information map, where the target viewing angle is any one of the plurality of viewing angles. The invention solves the technical problem that the pseudo-3D model processing approach provided in the related art easily wastes storage resources.

Description

Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to a processing method of a virtual three-dimensional model, a nonvolatile storage medium and an electronic device.
Background
Currently, virtual objects in a game scene are mostly built from virtual three-dimensional (3D) models. For 3D games, the fineness of the game scene is often an important factor affecting the player's experience. However, because the computing power and storage resources of a game device are usually limited, more computing and storage resources need to be allocated to close-up virtual 3D models, which players notice easily, so that they have a fine visual appearance. For a distant-view virtual 3D model, its visual detail can be appropriately reduced so that its appearance stays within the player's acceptable range while saving as much storage as possible. In the related art, distant-view virtual 3D models are usually handled by polygon-reduction preprocessing of the original model to generate versions at multiple levels of fineness, so that lower-fineness versions replace the original model at different distances. However, for virtual 3D models with richer detail (for example, a virtual tree model), generating a low-fineness version by polygon reduction is not suitable. For this reason, the related art also provides a way of pre-generating a pseudo-3D model: in a preprocessing stage, the information of the virtual 3D model at different viewing angles is baked into maps; at runtime a patch replaces the model and is rendered with the maps recording that information, thereby achieving a pseudo-3D effect.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present invention provide a processing method for a virtual three-dimensional model, a non-volatile storage medium, and an electronic device, so as to at least solve the technical problem that a pseudo 3D model processing method provided in the related art is prone to cause waste of storage resources.
According to an embodiment of the present invention, a method for processing a virtual three-dimensional model is provided, including:
determining a plurality of viewing angles corresponding to the virtual three-dimensional model; generating a view-information map array corresponding to the plurality of viewing angles based on the rendering information corresponding to each viewing angle; generating a lookup map from a virtual mesh volume corresponding to the virtual three-dimensional model and the view-information map array; and obtaining the target view-information map corresponding to a target viewing angle from the view-information map array by means of the lookup map, and performing a rendering operation with the target view-information map, where the target viewing angle is any one of the plurality of viewing angles.
Optionally, the viewing direction of each of the plurality of viewing angles points from a corresponding vertex of the virtual mesh volume to the center of the virtual mesh volume.
Optionally, the lookup map is used to store a mapping relationship between a first position and a second position, where the first position is a vertex position of the virtual mesh volume, and the second position is the storage position, in the view-information map array, of the view-information map corresponding to each viewing angle.
Optionally, generating the view-information map array based on the rendering information corresponding to each of the plurality of viewing angles includes: capturing the virtual three-dimensional model from the position of each of the plurality of viewing angles in turn to obtain the rendering information corresponding to each viewing angle; generating a view-information map for each information type, based on the number of information types contained in the rendering information; and storing the view-information maps corresponding to each viewing angle into the view-information map array.
Optionally, the rendering information comprises: the color, normal, and depth of the virtual three-dimensional model at each viewing angle.
Optionally, generating the lookup map from the virtual mesh volume and the view-information map array includes: determining the number of pixels contained in the lookup map from the number of vertices of the virtual mesh volume; obtaining from the virtual mesh volume a first position coinciding with the position of each of the plurality of viewing angles, and obtaining from the view-information map array a second position of the view-information map corresponding to each viewing angle; establishing a mapping relationship between the first position and the second position; and generating the lookup map based on the number of pixels and the mapping relationship.
Optionally, obtaining the target view-information map from the view-information map array by means of the lookup map and performing the rendering operation with it includes: obtaining the first position corresponding to the target viewing angle from the virtual mesh volume; sampling the lookup map with the first position to determine the second position corresponding to the target viewing angle in the view-information map array; and performing the rendering operation with the target view-information map stored at the second position.
According to an embodiment of the present invention, there is also provided an apparatus for processing a virtual three-dimensional model, including:
a determining module, configured to determine a plurality of viewing angles corresponding to the virtual three-dimensional model; a first generating module, configured to generate a view-information map array corresponding to the plurality of viewing angles based on the rendering information corresponding to each viewing angle; a second generating module, configured to generate a lookup map from a virtual mesh volume corresponding to the virtual three-dimensional model and the view-information map array; and a processing module, configured to obtain the target view-information map corresponding to a target viewing angle from the view-information map array by means of the lookup map and perform a rendering operation with the target view-information map, where the target viewing angle is any one of the plurality of viewing angles.
Optionally, the viewing direction of each of the plurality of viewing angles points from a corresponding vertex of the virtual mesh volume to the center of the virtual mesh volume.
Optionally, the lookup map is used to store a mapping relationship between a first position and a second position, where the first position is a vertex position of the virtual mesh volume, and the second position is the storage position, in the view-information map array, of the view-information map corresponding to each viewing angle.
Optionally, the first generating module is configured to capture the virtual three-dimensional model from the position of each of the plurality of viewing angles in turn to obtain the rendering information corresponding to each viewing angle; generate a view-information map for each information type, based on the number of information types contained in the rendering information; and store the view-information maps corresponding to each viewing angle into the view-information map array.
Optionally, the rendering information comprises: the color, normal, and depth of the virtual three-dimensional model at each viewing angle.
Optionally, the second generating module is configured to determine the number of pixels contained in the lookup map from the number of vertices of the virtual mesh volume; obtain from the virtual mesh volume a first position coinciding with the position of each of the plurality of viewing angles, and obtain from the view-information map array a second position of the view-information map corresponding to each viewing angle; establish a mapping relationship between the first position and the second position; and generate the lookup map based on the number of pixels and the mapping relationship.
Optionally, the processing module is configured to obtain the first position corresponding to the target viewing angle from the virtual mesh volume; sample the lookup map with the first position to determine the second position corresponding to the target viewing angle in the view-information map array; and perform the rendering operation with the target view-information map stored at the second position.
According to an embodiment of the present invention, there is further provided a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to execute the processing method of the virtual three-dimensional model in any one of the above items when running.
There is further provided, according to an embodiment of the present invention, an electronic apparatus including a memory and a processor, the memory storing a computer program therein, the processor being configured to execute the computer program to perform the processing method of the virtual three-dimensional model in any one of the above.
In at least some embodiments of the present invention, a plurality of viewing angles corresponding to the virtual three-dimensional model are determined; a view-information map array corresponding to the plurality of viewing angles is generated based on the rendering information corresponding to each viewing angle; a lookup map is generated from a virtual mesh volume corresponding to the virtual three-dimensional model and the view-information map array; and the target view-information map corresponding to a target viewing angle is obtained from the view-information map array by means of the lookup map and used to perform the rendering operation, where the target viewing angle is any one of the plurality of viewing angles. This achieves the aim of removing, from the full set of view-information maps occupying storage resources, those maps that are never used and those that are duplicates, realizes the technical effect of effectively reducing storage-resource consumption, and thereby solves the technical problem that the pseudo-3D model processing approach provided in the related art easily wastes storage resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic view illustrating a processing manner of a pseudo 3D model according to the related art;
FIG. 2 is a schematic view showing a processing manner of another pseudo 3D model according to the related art;
FIG. 3 is a block diagram of the hardware configuration of a mobile terminal for a processing method of a virtual three-dimensional model according to an embodiment of the present invention;
FIG. 4 is a flow diagram of a method of processing a virtual three-dimensional model according to one embodiment of the invention;
FIG. 5 is a block diagram of a processing apparatus for a virtual three-dimensional model according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The related art provides two processing modes of the pseudo-3D model as follows:
First, FIG. 1 is a schematic diagram of a pseudo-3D model processing approach in the related art. As shown in FIG. 1, this is the patch approach, in which several static patches directly replace the original model. Its advantage is that the processing is simple; its obvious drawbacks are that the visual effect is relatively poor and the illusion is easily exposed.
Second, the impostor approach: in the preprocessing stage, model information at different specific viewing angles is obtained and stored into maps; at runtime, the angle of a single patch is adjusted according to the viewing angle, and the corresponding map is selected for rendering. It can be seen that this approach merges all the view-information maps into one map, i.e., a texture atlas. FIG. 2 is a schematic diagram of this other pseudo-3D model processing approach in the related art. As shown in FIG. 2, a mapping from a viewing angle to a position on the corresponding virtual mesh volume can be determined, where each vertex of the virtual mesh volume corresponds to a separate viewing angle. However, the obvious drawbacks of this approach are:
(1) All view-angle information needs to be saved. In some game scenes only part of the viewing angles of a distant-view virtual 3D model are ever visible to the player, and in that case saving all the view-angle information wastes storage resources.
(2) Some vertices coincide when the virtual mesh volume is transformed into the virtual sphere, and to keep the view-angle-to-map-coordinate mapping accurate, the corresponding maps must be exactly identical (for example, the viewing angles represented by the coincident points at the seam of the lower hemisphere), which also wastes storage resources.
In accordance with one embodiment of the present invention, an embodiment of a method for processing a virtual three-dimensional model is provided. The steps illustrated in the flowchart of the figure may be performed in a computer system, such as one executing a set of computer-executable instructions, and, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from the one shown here.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, or a game console. FIG. 3 is a block diagram of the hardware structure of a mobile terminal for a processing method of a virtual three-dimensional model according to an embodiment of the present invention. As shown in FIG. 3, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 3; the processors 102 may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller (MCU), a field-programmable gate array (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108, and a display device 110 for communication functions. It will be understood by those skilled in the art that the structure shown in FIG. 3 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 3, or have a different configuration than shown in FIG. 3.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the processing method of the virtual three-dimensional model in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the processing method of the virtual three-dimensional model. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The inputs in the input output Device 108 may come from a plurality of Human Interface Devices (HIDs). For example: keyboard and mouse, game pad, other special game controller (such as steering wheel, fishing rod, dance mat, remote controller, etc.). Some human interface devices may provide output functions in addition to input functions, such as: force feedback and vibration of the gamepad, audio output of the controller, etc.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
In this embodiment, a processing method of a virtual three-dimensional model running on the mobile terminal is provided, and fig. 4 is a flowchart of a processing method of a virtual three-dimensional model according to an embodiment of the present invention, as shown in fig. 4, the method includes the following steps:
step S40, determining a plurality of viewing angles corresponding to the virtual three-dimensional model;
the virtual three-dimensional model may be a perspective virtual 3D model in a game scene, for example: a virtual tree model, a virtual rockery model, a virtual waterfall model, a virtual grass model, etc. In an alternative embodiment, the viewing direction of each of the plurality of perspectives points from a corresponding vertex of the virtual mesh volume to the center of the virtual mesh volume (which is typically the origin of the virtual 3D model). Thus, a plurality of horizontal and vertical viewing angles are obtained.
step S41, generating a view-information map array corresponding to the plurality of viewing angles based on the rendering information corresponding to each viewing angle;
The rendering information may include, but is not limited to: the color, normal, depth, custom static illumination, metallicity, roughness, etc. of the virtual three-dimensional model at each viewing angle. After an artist has made the virtual 3D model, it can be imported into the editor; the resolution and the numbers of horizontal and vertical viewing angles are then set (the more viewing angles there are, the more vivid the pseudo-3D model becomes, but the more images must be baked); and which contents to bake for each viewing angle (including the color, normal, depth information, etc. of the virtual 3D model) is determined according to the viewing angles the artist selects in the editor.
step S42, generating a lookup map from a virtual mesh volume corresponding to the virtual three-dimensional model and the view-information map array;
The lookup map stores the mapping relationship between a first position and a second position. The first position is a vertex position of the virtual mesh volume. The second position is the storage position, in the view-information map array, of the view-information map corresponding to each of the plurality of viewing angles.
step S43, obtaining the target view-information map corresponding to the target viewing angle from the view-information map array by means of the lookup map, and performing a rendering operation with the target view-information map, where the target viewing angle is any one of the plurality of viewing angles.
Through the above steps, a plurality of viewing angles corresponding to the virtual three-dimensional model are determined; a view-information map array corresponding to the plurality of viewing angles is generated based on the rendering information corresponding to each viewing angle; a lookup map is generated from a virtual mesh volume corresponding to the virtual three-dimensional model and the view-information map array; and the target view-information map corresponding to a target viewing angle is obtained from the view-information map array by means of the lookup map and used to perform the rendering operation, where the target viewing angle is any one of the plurality of viewing angles. This removes, from the full set of view-information maps occupying storage resources, those maps that are never used and those that are duplicates, effectively reduces storage-resource consumption, and thereby solves the technical problem that the pseudo-3D model processing approach provided in the related art easily wastes storage resources.
Optionally, in step S41, generating the view-information map array based on the rendering information corresponding to each of the plurality of viewing angles may include performing the following steps:
step S410, capturing the virtual three-dimensional model from the position of each of the plurality of viewing angles in turn to obtain the rendering information corresponding to each viewing angle;
step S411, generating a view-information map for each information type, based on the number of information types contained in the rendering information;
step S412, storing the view-information maps corresponding to each viewing angle into the view-information map array.
In the process of generating the view-information map array, the virtual three-dimensional model is captured in turn from the position of each of the plurality of viewing angles to obtain the rendering information corresponding to that viewing angle (which may include the color, normal, depth, and other information used for rendering, such as custom static illumination, metallicity, and roughness, selected according to actual needs). Next, based on the number of information types contained in the rendering information, a view-information map is generated for each information type: as many types of rendering information need to be stored for each viewing angle as view-information maps need to be baked. For example: view-information map 1 is generated from the color of the virtual three-dimensional model at the current viewing angle, view-information map 2 from its normal, view-information map 3 from its depth, and view-information map 4 from its custom static illumination. The view-information maps corresponding to each viewing angle are then stored in turn into the view-information map array. For example: view-information maps 1, 2, 3, and 4 are stored at the storage position corresponding to the current viewing angle in the view-information map array.
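The baking flow of steps S410 to S412 might be sketched as follows. This is a minimal illustration only: `capture` is a placeholder standing in for the actual renderer, and the map resolution and set of information types are assumed values, not taken from the patent.

```python
import numpy as np

INFO_TYPES = ("color", "normal", "depth")  # one map baked per information type
RES = 64  # per-view map resolution (illustrative)

def capture(model, view_dir, info_type):
    """Placeholder for rendering the model from one viewing angle and
    reading back one kind of rendering information as a RES x RES image."""
    channels = 1 if info_type == "depth" else 3
    return np.zeros((RES, RES, channels))  # stand-in for the real render output

def bake_view_info_array(model, view_dirs):
    """Steps S410-S412: capture each viewing angle in turn, build one map
    per information type, and store them at that view's slot in the array."""
    array = []
    for view_dir in view_dirs:
        maps = {t: capture(model, view_dir, t) for t in INFO_TYPES}
        array.append(maps)  # the slot index is the view's storage position
    return array

atlas = bake_view_info_array(model=None, view_dirs=[(0, 0, 1), (0, 0, -1)])
```

Adding a type such as custom static illumination to `INFO_TYPES` would bake one more map per viewing angle, matching the "as many types, as many maps" rule above.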
Optionally, in step S42, generating the lookup map from the virtual mesh volume and the view information map array may include performing the following steps:
step S420, determining the number of pixels contained in the lookup map from the number of vertices of the virtual mesh volume;
step S421, obtaining, from the virtual mesh volume, a first position consistent with the view position of each of the multiple views, and obtaining, from the view information map array, a second position of the view information map corresponding to each of the multiple views;
step S422, establishing a mapping relationship between the first position and the second position;
in step S423, a lookup map is generated based on the number of pixels and the mapping relationship.
In the process of generating the lookup map from the virtual mesh volume and the view information map array, the number of pixels contained in the lookup map can be determined from the number of vertices of the virtual mesh volume. Then, a first position corresponding to the view position of each of the multiple views is obtained from the virtual mesh volume, and a second position of the view information map corresponding to each of the multiple views is obtained from the view information map array, so that a mapping relationship is established between the first position and the second position. Finally, the lookup map is generated based on the number of pixels and the mapping relationship.
In the conventional rendering process, the graphics processing unit (GPU) computes, in the virtual mesh volume, the vertex coordinate value corresponding to the camera's position and view direction, and samples a single map containing all the view information using that vertex coordinate value as an offset. In contrast, in an alternative embodiment of the present invention, the view information maps are stored in a view information map array, and therefore a lookup map is additionally required to store the correspondence between the vertex positions of the virtual mesh volume and the storage positions of the view information maps in the view information map array.
The number of pixels in the lookup map is consistent with the number of vertices of the virtual mesh volume. Specifically, while the views are generated, it may be recorded which vertices of the virtual mesh volume correspond to the same view position; then, when a view information map is stored into the view information map array, all the vertex positions corresponding to the current view can be obtained, and the mapping relationship between those vertex positions and the storage position of the view information map in the view information map array can be stored.
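The lookup-map construction just described can be sketched as follows. The vertex-to-view assignment and all names here are hypothetical illustrations, not the patent's implementation; the point is only that the lookup map has one pixel per vertex, and shared views need only one storage slot.

```python
def build_lookup_map(vertex_view_ids, storage_pos_of_view):
    """One lookup-map pixel per mesh vertex (first position); each pixel
    stores the storage position (second position) of the view information
    map baked for the view associated with that vertex."""
    lookup = [storage_pos_of_view[view] for view in vertex_view_ids]
    assert len(lookup) == len(vertex_view_ids)  # pixel count == vertex count
    return lookup

# Hypothetical example: 6 vertices; vertices 0 and 3 share view "a", so the
# view "a" maps are stored only once in the array and referenced twice.
vertex_view_ids = ["a", "b", "c", "a", "d", "e"]
storage_pos_of_view = {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4}
lookup_map = build_lookup_map(vertex_view_ids, storage_pos_of_view)
```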
Optionally, in step S43, obtaining the target view information map from the view information map array by using the lookup map and performing the rendering operation using the target view information map may include performing the following steps:
step S430, acquiring a first position corresponding to the target view from the virtual mesh volume;
step S431, sampling the lookup map using the first position, and determining a second position corresponding to the target view from the view information map array;
and step S432, performing the rendering operation using the target view information map stored at the second position.
In the process of obtaining the target view information map from the view information map array using the lookup map and performing the rendering operation with it, a first position corresponding to the target view may first be obtained from the virtual mesh volume. Next, the lookup map is sampled using the first position, and a second position corresponding to the target view is determined from the view information map array. Then, the target view information map stored at the second position is used to perform the rendering operation. That is, after the GPU computes the vertex coordinate value of the target view in the virtual mesh volume, it may sample the lookup map with that vertex coordinate value to obtain the storage position of the corresponding view information map in the view information map array, and then perform the rendering operation using the view information map stored at that position.
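The runtime path of step S43 can be sketched as follows; a self-contained Python illustration under assumed data shapes, where plain lists stand in for the GPU-side lookup map and view information map array.

```python
def fetch_target_view_maps(target_vertex_index, lookup_map, view_info_map_array):
    """Step S43: sample the lookup map at the first position (the vertex
    matching the target view) to get the second position, then read the
    view information maps stored there for the rendering operation."""
    second_position = lookup_map[target_vertex_index]  # S431: sample lookup map
    return view_info_map_array[second_position]        # S432: maps for rendering

# Hypothetical data: 3 stored views, 4 mesh vertices (vertex 3 reuses view 0).
view_info_map_array = [{"color": "c0"}, {"color": "c1"}, {"color": "c2"}]
lookup_map = [0, 1, 2, 0]
target_maps = fetch_target_view_maps(3, lookup_map, view_info_map_array)
```

On the GPU the same two reads would typically be a texture fetch into the lookup map followed by an indexed fetch into a texture array, but the indirection is identical.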
In this way, an artist only needs to select the necessary number of views, and the corresponding view information map array and lookup map are generated automatically, avoiding the memory consumption caused by packing all view information into a single map. Because the number of views in the horizontal and vertical directions is small, the size of the lookup map is limited. Moreover, the artist can select exactly the views required: a pseudo-3D model is typically used as a stand-in for a virtual three-dimensional model viewed from a limited range of directions, so the views that actually appear in the game are likely to cover only half a sphere or even less. The existing single-map approach cannot remove the view information maps that are never used, whereas the approach of using a map array plus a lookup map can completely remove unused and duplicated view information maps, thereby reducing the consumption of storage resources.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method according to the foregoing embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, although the former is the better implementation in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a processing apparatus for a virtual three-dimensional model is further provided. The apparatus is used to implement the foregoing embodiments and preferred embodiments, and details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 5 is a block diagram of a processing apparatus for a virtual three-dimensional model according to an embodiment of the present invention. As shown in Fig. 5, the apparatus includes: a determining module 10, configured to determine multiple views corresponding to the virtual three-dimensional model; a first generating module 20, configured to generate a view information map array corresponding to the multiple views based on rendering information corresponding to each of the multiple views; a second generating module 30, configured to generate a lookup map through the virtual mesh volume corresponding to the virtual three-dimensional model and the view information map array; and a processing module 40, configured to obtain a target view information map corresponding to a target view from the view information map array by using the lookup map, and perform a rendering operation using the target view information map, where the target view is any one of the multiple views.
Optionally, the gaze direction of each of the plurality of perspectives points from a corresponding vertex of the virtual mesh volume to a center of the virtual mesh volume.
Optionally, the lookup map is used to store a mapping relationship between a first location and a second location, where the first location is a vertex location of the virtual grid, and the second location is a storage location of the view information map corresponding to each view in the view information map array.
Optionally, the first generating module 20 is configured to photograph the virtual three-dimensional model in turn at the position of each of the multiple views to obtain the rendering information corresponding to each view; generate a view information map for each information type contained in the rendering information; and store the view information map corresponding to each view into the view information map array.
Optionally, the rendering information comprises: the color, normal and depth of the virtual three-dimensional model at each view.
Optionally, the second generating module 30 is configured to determine the number of pixels contained in the lookup map from the number of vertices of the virtual mesh volume; obtain, from the virtual mesh volume, a first position consistent with the view position of each of the multiple views, and obtain, from the view information map array, a second position of the view information map corresponding to each of the multiple views; establish a mapping relationship between the first position and the second position; and generate the lookup map based on the number of pixels and the mapping relationship.
Optionally, the processing module 40 is configured to obtain a first position corresponding to the target view from the virtual mesh volume; sample the lookup map using the first position, and determine a second position corresponding to the target view from the view information map array; and perform the rendering operation using the target view information map stored at the second position.
It should be noted that the above modules may be implemented by software or hardware; for the latter, this may be achieved in, but is not limited to, the following ways: the modules are all located in the same processor; alternatively, the modules are located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned nonvolatile storage medium may be configured to store a computer program for executing the steps of:
S1, determining a plurality of views corresponding to the virtual three-dimensional model;
S2, generating a view information map array corresponding to the plurality of views based on the rendering information corresponding to each of the plurality of views;
S3, generating a lookup map through the virtual mesh volume corresponding to the virtual three-dimensional model and the view information map array;
and S4, obtaining a target view information map corresponding to a target view from the view information map array by using the lookup map, and performing a rendering operation using the target view information map, wherein the target view is any one of the plurality of views.
Optionally, in this embodiment, the nonvolatile storage medium may include but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, determining a plurality of views corresponding to the virtual three-dimensional model;
S2, generating a view information map array corresponding to the plurality of views based on the rendering information corresponding to each of the plurality of views;
S3, generating a lookup map through the virtual mesh volume corresponding to the virtual three-dimensional model and the view information map array;
and S4, obtaining a target view information map corresponding to a target view from the view information map array by using the lookup map, and performing a rendering operation using the target view information map, wherein the target view is any one of the plurality of views.
Optionally, for a specific example in this embodiment, reference may be made to the examples described in the above embodiment and optional implementation, and this embodiment is not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, which is substantially or partly contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for processing a virtual three-dimensional model, comprising:
determining a plurality of visual angles corresponding to the virtual three-dimensional model;
generating a view information mapping array corresponding to the plurality of views based on rendering information corresponding to each view in the plurality of views;
generating a lookup map through a virtual grid body corresponding to the virtual three-dimensional model and the view information map array;
and acquiring a target visual angle information map corresponding to a target visual angle from the visual angle information map array by using the lookup map, and performing a rendering operation using the target visual angle information map, wherein the target visual angle is any one of the multiple visual angles.
2. The method of processing a virtual three-dimensional model according to claim 1, wherein the direction of the line of sight of each of the plurality of perspectives points from a corresponding vertex of the virtual mesh volume to the center of the virtual mesh volume.
3. The method of processing a virtual three-dimensional model according to claim 1, wherein the lookup map is used to store a mapping relationship between a first location and a second location, the first location is a vertex location of the virtual grid, and the second location is a storage location of a view information map corresponding to each of the plurality of views in the view information map array.
4. The method of processing the virtual three-dimensional model according to claim 1, wherein generating the set of perspective information maps based on rendering information corresponding to each of the plurality of perspectives comprises:
shooting the virtual three-dimensional model at the position of each visual angle in the plurality of visual angles in sequence to obtain rendering information corresponding to each visual angle;
respectively generating visual angle information maps corresponding to each information type based on the number of the information types contained in the rendering information;
and storing the view information map corresponding to each view into the view information map array.
5. The method of processing a virtual three-dimensional model according to claim 4, wherein the rendering information comprises: the color, normal, depth of the virtual three-dimensional model at each viewing angle.
6. The method of processing a virtual three-dimensional model according to claim 3, wherein generating the lookup map by the virtual grid volume and the array of perspective information maps comprises:
determining the number of pixels contained in the lookup map according to the number of the vertexes of the virtual grid body;
obtaining, from the virtual grid volume, the first position that coincides with the view position of each of the plurality of views, and obtaining, from the view information map array, the second position of the view information map corresponding to each of the plurality of views;
establishing the mapping relationship between the first location and the second location;
and generating the lookup map based on the number of pixels and the mapping relation.
7. The method of processing a virtual three-dimensional model according to claim 3, wherein obtaining the target perspective information map from the perspective information map array using the lookup map, and performing a rendering operation using the target perspective information map comprises:
acquiring the first position corresponding to the target view angle from the virtual grid body;
sampling the lookup map using the first position, and determining the second position corresponding to the target view angle from the view angle information map array;
and performing a rendering operation using the target perspective information map stored at the second location.
8. An apparatus for processing a virtual three-dimensional model, comprising:
the determining module is used for determining a plurality of visual angles corresponding to the virtual three-dimensional model;
a first generating module, configured to generate a view information map array corresponding to the multiple views based on rendering information corresponding to each of the multiple views;
the second generating module is used for generating a lookup map through the virtual grid body corresponding to the virtual three-dimensional model and the view information map array;
and the processing module is used for acquiring a target visual angle information map corresponding to a target visual angle from the visual angle information map array by using the lookup map, and executing a rendering operation by using the target visual angle information map, wherein the target visual angle is any one of the multiple visual angles.
9. A non-volatile storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is arranged to execute the method of processing a virtual three-dimensional model according to any of claims 1 to 7 when running.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of processing the virtual three-dimensional model according to any one of claims 1 to 7.
CN202210346588.0A 2022-04-02 2022-04-02 Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device Pending CN114742970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210346588.0A CN114742970A (en) 2022-04-02 2022-04-02 Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210346588.0A CN114742970A (en) 2022-04-02 2022-04-02 Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN114742970A true CN114742970A (en) 2022-07-12

Family

ID=82278734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210346588.0A Pending CN114742970A (en) 2022-04-02 2022-04-02 Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114742970A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761123A (en) * 2022-11-11 2023-03-07 北京百度网讯科技有限公司 Three-dimensional model processing method and device, electronic device and storage medium
CN115761123B (en) * 2022-11-11 2024-03-12 北京百度网讯科技有限公司 Three-dimensional model processing method, three-dimensional model processing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109754454B (en) Object model rendering method and device, storage medium and equipment
KR20220083839A (en) A method and apparatus for displaying a virtual scene, and an apparatus and storage medium
CN111957040B (en) Detection method and device for shielding position, processor and electronic device
CN111729307B (en) Virtual scene display method, device, equipment and storage medium
CN112138386A (en) Volume rendering method and device, storage medium and computer equipment
CN109725956B (en) Scene rendering method and related device
CN112370784B (en) Virtual scene display method, device, equipment and storage medium
CN108404414B (en) Picture fusion method and device, storage medium, processor and terminal
CN113318428B (en) Game display control method, nonvolatile storage medium, and electronic device
CN111450529B (en) Game map acquisition method and device, storage medium and electronic device
CN111710020B (en) Animation rendering method and device and storage medium
CN112245926A (en) Virtual terrain rendering method, device, equipment and medium
CN115082607B (en) Virtual character hair rendering method, device, electronic equipment and storage medium
CN115063518A (en) Track rendering method and device, electronic equipment and storage medium
CN114820915A (en) Method and device for rendering shading light, storage medium and electronic device
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN112950753A (en) Virtual plant display method, device, equipment and storage medium
CN116452704A (en) Method and device for generating lens halation special effect, storage medium and electronic device
CN115713589A (en) Image generation method and device for virtual building group, storage medium and electronic device
CN116233532A (en) Video playing method, device, computer equipment and computer readable storage medium
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN112348965A (en) Imaging method, imaging device, electronic equipment and readable storage medium
CN116617658B (en) Image rendering method and related device
CN114053704B (en) Information display method, device, terminal and storage medium
US11875445B2 (en) Seamless image processing of a tiled image region

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination