CN110689626A - Game model rendering method and device - Google Patents

Game model rendering method and device

Info

Publication number
CN110689626A
Authority
CN
China
Prior art date: 2019-09-25
Legal status
Pending
Application number
CN201910911907.6A
Other languages
Chinese (zh)
Inventor
许飞 (Xu Fei)
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date: 2019-09-25
Filing date: 2019-09-25
Publication date: 2020-01-14
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN201910911907.6A
Publication of CN110689626A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor

Abstract

The invention discloses a game model rendering method and device. The method comprises: in response to a game trigger event, acquiring pre-stored pixel information of a virtual model at different angles, wherein the pixel information comprises color information and depth information, and the virtual model corresponds to the game trigger event; reconstructing a three-dimensional model corresponding to the virtual model from the pixel information at the different angles; acquiring current view-angle information, and obtaining from the three-dimensional model a model to be rendered corresponding to the current view-angle information; and rendering the model to be rendered for display. The invention solves the prior-art technical problem that rendering a complex model three-dimensionally in a two-dimensional game incurs high rendering overhead because the rendering is constrained by many factors.

Description

Game model rendering method and device
Technical Field
The invention relates to the field of game graphics rendering, and in particular to a game model rendering method and device.
Background
In order to give users a better game experience, the models in a game need to be rendered during development; a model in a game may be two-dimensional or three-dimensional. Two-dimensional models are used in two-dimensional games, whose resources are mainly pictures. A two-dimensional picture is only the projection of a three-dimensional model at a certain angle and, relative to the three-dimensional model, loses the spatial dimension information. Consequently, a two-dimensional game can easily achieve effects such as translation, scaling and rotation perpendicular to the screen, but has difficulty achieving a three-dimensional rotation effect.
At present, the following two approaches are mainly used to achieve a rotation effect in a two-dimensional game:
the first method is as follows: a traditional two-dimensional game production method. The method can disperse the possible rotation directions of the two-dimensional model to a limited number of directions, art resources in each direction are manufactured in advance, and the two-dimensional model can only be displayed in the directions. When the two-dimensional model is rotated, the resource display in the closest direction is selected, for example, the rotation direction of the two-dimensional model is 4 or 8. In this method, the rotation of the two-dimensional model is not smooth, and the two-dimensional model is likely to jump by 90 degrees or 45 degrees when rotated. In addition, although the jump of the two-dimensional model rotation can be reduced by increasing the direction, the amount of data corresponding to game resources increases with the increase of the discrete direction, and complete seamless rotation cannot be realized.
Method two: a two-dimensional/three-dimensional hybrid rendering method. The two-dimensional model that needs to rotate is represented directly with three-dimensional data and rendered with a three-dimensional engine; the rendering result is a two-dimensional resource (such as a two-dimensional picture), which is finally composited into the overall rendering result. With this method, the quality of the final rendering result is limited by the three-dimensional real-time rendering effect, which can hardly reach the level of two-dimensional offline rendering. In addition, the three-dimensional real-time rendering style differs greatly from the two-dimensional game's rendering style, and a large amount of work is needed to match the two. In general, two-dimensional resources are rendered offline using a relatively complex renderer, and offline rendering places loose requirements on the face count of the model, the number of bones of the skeletal animation, the material complexity, the number of light sources, and so on, whereas three-dimensional real-time rendering places strict requirements on all of these. In other words, the two-dimensional/three-dimensional hybrid rendering method has to render a fine effect with a simplified model; this is accompanied by a large amount of art and programming work, and the rendering result may still not be good enough.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a game model rendering method and device, which at least solve the prior-art technical problem that rendering a complex model three-dimensionally in a two-dimensional game incurs high rendering overhead because the rendering is constrained by many factors.
According to an aspect of an embodiment of the present invention, there is provided a game model rendering method comprising: responding to a game trigger event, and acquiring pre-stored pixel information of a virtual model at different angles, wherein the pixel information comprises color information and depth information, and the virtual model corresponds to the game trigger event; reconstructing a three-dimensional model corresponding to the virtual model from the pixel information of the virtual model at the different angles; acquiring current view-angle information, and obtaining from the three-dimensional model a model to be rendered corresponding to the current view-angle information; and rendering the model to be rendered for display.
Further, the pre-stored pixel information of the virtual model at different angles is obtained by the following step: shooting the virtual model in advance with a plurality of cameras to obtain the pixel information at the different angles, wherein the plurality of cameras are distributed on the surface of a preset model, and the preset model surrounds the virtual model.
Further, the preset model is a centrally symmetric geometric body, and the plurality of cameras are uniformly arranged on the surface of the geometric body.
Further, the game model rendering method further comprises the following steps: before acquiring the pre-stored pixel information of the virtual model at different angles, detecting model information corresponding to the virtual model, wherein the model information at least comprises the size of the virtual model; determining the number of cameras of the plurality of cameras based on the model information; and mapping the positions of the plurality of cameras on the surface of the geometric body onto an inscribed octahedron based on the number of cameras.
Further, the game model rendering method further comprises the following steps: determining a first partial model to be rotated in the virtual model according to the game scene; determining the second partial model onto which the first partial model is mapped in the inscribed octahedron; and mapping the positions of the plurality of cameras on the surface of the geometric body onto the inscribed octahedron based on the second partial model and the number of cameras.
Further, the game model rendering method further comprises the following steps: acquiring orientation information and camera parameters of each camera; converting the virtual model into camera space based on the orientation information, the camera parameters and perspective projection to obtain a conversion result; and calculating, based on the conversion result, the distance between each pixel and the camera corresponding to that pixel to obtain the depth information.
Further, the game model rendering method further comprises the following steps: acquiring the projection image shot by each camera of the virtual model; determining first position information of each pixel in camera space according to the depth information of each pixel in the projection image of each camera; converting the first position information into second position information in the three-dimensional space in which the cameras are located, obtaining the three-dimensional position in three-dimensional space of the pixel corresponding to each angle; and reconstructing the three-dimensional model based on the three-dimensional positions.
Further, the game model rendering method further comprises the following steps: determining position information and orientation information of each camera according to the distribution information of each camera on the preset model; obtaining a preset matrix according to the position information and orientation information of each camera; and obtaining the first position information of each pixel in camera space based on the preset matrix and the depth information of each pixel in the projection image.
Further, the game model rendering method further comprises the following steps: determining a current camera corresponding to the current view-angle information in the three-dimensional model; determining at least one preset camera from the plurality of cameras, wherein the at least one preset camera is a camera, among the plurality of cameras, whose distance from the current camera meets a preset condition; and performing color interpolation on the three-dimensional model based on the shooting angle of the at least one preset camera to obtain the model to be rendered.
According to another aspect of the embodiments of the present invention, there is also provided a game model rendering apparatus comprising: a first acquisition module, configured to respond to a game trigger event and acquire pre-stored pixel information of a virtual model at different angles, wherein the pixel information comprises color information and depth information, and the virtual model corresponds to the game trigger event; a reconstruction module, configured to reconstruct a three-dimensional model corresponding to the virtual model from the pixel information of the virtual model at the different angles; a second acquisition module, configured to acquire current view-angle information and obtain from the three-dimensional model a model to be rendered corresponding to the current view-angle information; and a display module, configured to render the model to be rendered for display.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the above-mentioned game model rendering method.
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, where the program executes the above-mentioned game model rendering method.
In the embodiments of the invention, a two-dimensional model in a two-dimensional game is rendered with a three-dimensional effect: after a game trigger event is responded to, the pre-stored pixel information of the virtual model at different angles is acquired; a three-dimensional model corresponding to the virtual model is reconstructed from that pixel information; current view-angle information is then acquired and the model to be rendered corresponding to the current view-angle information is obtained from the three-dimensional model; and the model to be rendered is rendered for display.
In the above process, the pixel information used to reconstruct the three-dimensional model is produced by an offline rendering process; that process involves no run-time system overhead and places no limits on the precision of the virtual model, the number of bones, the material complexity, the lighting complexity, and so on. In addition, after the three-dimensional model is reconstructed, the model to be rendered corresponding to the current view-angle information is determined directly from the three-dimensional model without complex calculation, so that, without increasing the amount of resources, the rendering result is not limited by model complexity, lighting complexity and the like, the three-dimensional rendering overhead is reduced, and the rendering precision is improved.
Therefore, the scheme provided by the present application achieves the purpose of rendering the model while reducing three-dimensional rendering overhead, thereby solving the prior-art technical problem that rendering a complex model three-dimensionally in a two-dimensional game incurs high rendering overhead because the rendering is constrained by many factors.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method for rendering a game model according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an alternative camera distribution according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an alternative sphere-to-octahedron mapping according to embodiments of the present invention;
FIG. 4 is a schematic diagram of an alternative projected image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an alternative point cloud reconstruction map in accordance with an embodiment of the present invention; and
FIG. 6 is a diagram of a game model rendering apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided a game model rendering method embodiment, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Fig. 1 is a flowchart of a game model rendering method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
Step S102: responding to a game trigger event, and acquiring pre-stored pixel information of a virtual model at different angles, wherein the pixel information comprises color information and depth information, and the virtual model corresponds to the game trigger event.
It should be noted that a rendering terminal capable of performing image rendering may serve as the execution subject of this embodiment; for example, the rendering terminal may be a computer terminal. In step S102, the game trigger event is an event that triggers reconstruction of a three-dimensional model; for example, in a two-dimensional game, the game trigger event fires when the player needs to rotate the virtual model. In addition, the depth information is used to reconstruct the three-dimensional model, and the color information is used to color the three-dimensional model.
In an alternative embodiment, the pre-stored pixel information of the virtual model at different angles is obtained as follows: a plurality of cameras are set up to shoot the virtual model, and the virtual model is shot by the plurality of cameras in advance to obtain the pixel information at the different angles, wherein the plurality of cameras are distributed on the surface of a preset model and the preset model surrounds the virtual model. The cameras are virtual cameras, preferably virtual depth cameras that output both pixel color and depth. The preset model is a centrally symmetric geometric body on whose surface the plurality of cameras are uniformly arranged; the geometric body may be, but is not limited to, a sphere, an octahedron, a cube, and the like.
Optionally, as shown in the camera-distribution diagram of fig. 2, the plurality of cameras face the center of the virtual model, so the virtual model can be rendered from every camera direction; in fig. 2 the preset model is a hemisphere.
It should be noted that arranging the cameras uniformly on the preset model ensures that any point of the virtual model can be shot by as many cameras as possible, so that when three-dimensional reconstruction is performed, the geometric and lighting information of the virtual model can be restored as faithfully as possible, improving rendering accuracy.
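Purely as an illustration of this arrangement (the embodiment does not prescribe a distribution algorithm), the following Python sketch places virtual cameras roughly uniformly on a sphere around the model, each facing the model center; the Fibonacci-spiral spacing, the camera count and the radius are assumptions made for the example.

```python
import numpy as np

def fibonacci_sphere_cameras(n_cameras, radius, center):
    """Place n_cameras roughly uniformly on a sphere around the model,
    each looking at the model center; returns (position, forward) pairs."""
    cams = []
    golden = np.pi * (3.0 - np.sqrt(5.0))          # golden angle
    for i in range(n_cameras):
        y = 1.0 - 2.0 * (i + 0.5) / n_cameras      # height in (-1, 1)
        r = np.sqrt(1.0 - y * y)                   # ring radius at that height
        theta = golden * i
        d = np.array([r * np.cos(theta), y, r * np.sin(theta)])
        cams.append((center + radius * d, -d))     # forward = toward the center
    return cams

# e.g. 64 cameras on a sphere of radius 5 centered on the origin
cameras = fibonacci_sphere_cameras(64, 5.0, np.zeros(3))
```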
Step S104: reconstructing a three-dimensional model corresponding to the virtual model from the pixel information of the virtual model at the different angles.
In the present application, since each camera is a virtual depth camera, when a pixel is captured by a camera, the three-dimensional position of that pixel can be determined from the pixel information the camera records. The pixel information at each angle describes each pixel in camera space, while the three-dimensional position of a pixel in three-dimensional space describes it in world space; therefore, determining the three-dimensional position in three-dimensional space of the pixel corresponding to each angle requires a spatial conversion from the pixel's position in camera space to its position in three-dimensional space.
Further, after the three-dimensional position of each pixel in three-dimensional space is obtained, the three-dimensional model can be reconstructed directly in point-cloud form from those positions, or the positions can be connected into a mesh and the three-dimensional model generated from the mesh.
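As a concrete (non-limiting) sketch of the point-cloud branch, assuming each camera record carries a depth map, a color map, a vertical field of view, an aspect ratio and a camera-to-world matrix (all field names here are illustrative):

```python
import numpy as np

def reconstruct_point_cloud(cameras):
    """Fuse every camera's color/depth image into one colored point cloud."""
    points, colors = [], []
    for cam in cameras:
        h, w = cam.depth.shape
        tan_half = np.tan(cam.fov / 2.0)
        for v in range(h):
            for u in range(w):
                d = cam.depth[v, u]
                if d <= 0.0:                     # this pixel saw no geometry
                    continue
                ndc_x = 2.0 * u / (w - 1) - 1.0
                ndc_y = 1.0 - 2.0 * v / (h - 1)
                # first position information: the point in camera space
                p_cam = np.array([ndc_x * tan_half * cam.aspect * d,
                                  ndc_y * tan_half * d,
                                  -d])            # camera looks down -z
                # second position information: the point in world space
                p_world = (cam.cam_to_world @ np.append(p_cam, 1.0))[:3]
                points.append(p_world)
                colors.append(cam.color[v, u])
    return np.array(points), np.array(colors)
```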
It should be noted that whether to reconstruct the three-dimensional model in point-cloud form or in mesh form may be decided according to the program requirements of the actual application; the present application does not limit the specific reconstruction method.
Step S106: acquiring current view-angle information, and obtaining from the three-dimensional model a model to be rendered corresponding to the current view-angle information.
In step S106, the current view-angle information is the view angle at which the three-dimensional model is shown to the player in the current game scene. Optionally, the rendering terminal determines, according to the current game scene, the current view-angle information under which the three-dimensional model is displayed to the player, and then determines from the three-dimensional model the model to be rendered corresponding to that view-angle information, where the model to be rendered is the two-dimensional model obtained from the three-dimensional model under the current view-angle information.
Step S108: rendering the model to be rendered for display.
Further, after the model to be rendered is obtained, the rendering terminal may display it directly, or push it to a game terminal, which then displays it.
Based on the scheme defined in steps S102 to S108 above, a two-dimensional model in a two-dimensional game is rendered with a three-dimensional effect: after a game trigger event is responded to, the pre-stored pixel information of the virtual model at different angles is acquired; a three-dimensional model corresponding to the virtual model is reconstructed from that pixel information; current view-angle information is then acquired, the model to be rendered corresponding to the current view-angle information is obtained from the three-dimensional model, and the model to be rendered is rendered for display.
It is easy to note that the pixel information used to reconstruct the three-dimensional model is produced by an offline rendering process; that process involves no run-time system overhead and places no limits on the precision of the virtual model, the number of bones, the material complexity, the lighting complexity, and so on. In addition, after the three-dimensional model is reconstructed, the model to be rendered corresponding to the current view-angle information is determined directly from the three-dimensional model without complex calculation, so that, without increasing the amount of resources, the rendering result is not limited by model complexity, lighting complexity and the like, the three-dimensional rendering overhead is reduced, and the rendering precision is improved.
Therefore, the scheme provided by the present application achieves the purpose of rendering the model while reducing three-dimensional rendering overhead, thereby solving the prior-art technical problem that rendering a complex model three-dimensionally in a two-dimensional game incurs high rendering overhead because the rendering is constrained by many factors.
In an alternative embodiment, before the pre-stored pixel information of the virtual model at different angles is acquired, the position distribution of the plurality of cameras around the virtual model needs to be determined. Specifically, model information corresponding to the virtual model is detected, the number of cameras of the plurality of cameras is determined based on the model information, and the positions of the plurality of cameras on the surface of the geometric body are then mapped onto the inscribed octahedron based on the number of cameras.
In the above process, the model information at least includes the size of the virtual model. In this scenario the number of cameras is related to that size: optionally, if the size of the virtual model is smaller than a preset size, the number of cameras corresponding to the virtual model is determined to be a first number (for example, 32); if the size of the virtual model is larger than or equal to the preset size, the number of cameras is determined to be a second number (for example, 64), where the first number is smaller than the second number.
In addition, the model information may further include the complexity of the virtual model: the greater the complexity, the more cameras correspond to the virtual model. Optionally, the complexity of the virtual model is related to the number of partial models to be rotated in the virtual model; the more partial models need to rotate, the greater the complexity value of the virtual model.
It should be noted that the preset model may be a sphere, in which case the cameras can be distributed perfectly uniformly over its spherical surface. However, the mathematical expression of a spherical surface is complex, computation on it is inefficient, and mapping the sphere onto a two-dimensional picture introduces distortion. To simplify the calculation, the points on the spherical surface may be mapped onto an inscribed octahedron, as shown in the sphere-to-octahedron mapping diagram of fig. 3. As fig. 3 shows, mapping points on the sphere onto the octahedron both keeps the camera positions on the preset model distributed relatively uniformly and reduces the distortion when the octahedron is unfolded onto a two-dimensional picture.
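The sphere-to-octahedron mapping of fig. 3 can be realized with the standard octahedral parameterization used in graphics; the following sketch (one common formulation, not quoted from the patent) maps a unit direction to a point in the unit square so that camera directions can be recorded in a two-dimensional map:

```python
import numpy as np

def octahedral_encode(v):
    """Map a unit direction to the unit square [0,1]^2 via the inscribed
    octahedron |x|+|y|+|z| = 1 (sign(0) edge cases ignored in this sketch)."""
    v = np.asarray(v, dtype=float)
    v = v / np.sum(np.abs(v))                  # project onto the octahedron
    if v[2] < 0.0:                             # fold the lower half upward
        v[0], v[1] = ((1.0 - abs(v[1])) * np.sign(v[0]),
                      (1.0 - abs(v[0])) * np.sign(v[1]))
    return np.array([v[0], v[1]]) * 0.5 + 0.5  # [-1,1]^2 -> [0,1]^2

print(octahedral_encode([0.0, 0.0, 1.0]))      # sphere pole -> square center
```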
In addition, it should be noted that in a two-dimensional game with a fixed view angle, it may be unnecessary to rotate the entire virtual model in three-dimensional space; only part of the virtual model needs to rotate. In this scenario, the positions of the plurality of cameras on the surface of the geometric body can be determined based on the inscribed octahedron and the number of cameras.
Specifically, a first partial model to be rotated in the virtual model is determined according to the game scene, the second partial model onto which the first partial model maps in the inscribed octahedron is determined, and the positions of the plurality of cameras on the surface of the geometric body are then mapped onto the inscribed octahedron based on the second partial model and the number of cameras. Taking the tree model in fig. 2 as an example, the part that needs to rotate is the leaf part of the tree model (i.e., the first partial model); mapping the first partial model corresponding to the leaves into the inscribed octahedron yields a second partial model that is the upper half of the inscribed octahedron.
It should be noted that, because an octahedron unfolds easily into a two-dimensional picture, mapping the camera positions on the surface of the geometric body onto the inscribed octahedron makes it convenient to record the camera position information in a two-dimensional texture map and reduces the overhead of three-dimensional rendering.
Further, after the positions of the plurality of cameras on the preset model (i.e., the distribution information) are determined, the cameras shoot the virtual model from their respective positions, and the projection image shot by each camera can be obtained. As shown in fig. 4, where the number of cameras is 64, each tree image is the picture shot by the corresponding camera at its position. The depth information can then be obtained by processing the projection images.
Specifically, the orientation information and camera parameters of each camera are first acquired; the virtual model is then converted into camera space based on the orientation information, the camera parameters and perspective projection to obtain a conversion result; finally, the distance between each pixel and the camera corresponding to that pixel is calculated from the conversion result to obtain the depth information.
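A minimal sketch of this depth pass, assuming the orientation information is encoded as a look-at view matrix and "distance" is taken as camera-space depth (the embodiment leaves both representations open):

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """View matrix built from a camera position and the point it faces."""
    f = target - eye
    f = f / np.linalg.norm(f)                       # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)                       # right
    u = np.cross(s, f)                              # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f         # camera looks down -z
    m[:3, 3] = -m[:3, :3] @ eye                     # move world into camera space
    return m

def pixel_depth(p_world, view):
    """Depth of a model point: its distance along the view direction after
    conversion into camera space (the conversion result of the text)."""
    p_cam = (view @ np.append(p_world, 1.0))[:3]
    return -p_cam[2]                                # positive in front of camera
```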
Optionally, fig. 5 shows a point-cloud reconstruction diagram: a pixel of a projection image in two-dimensional space (i.e., a pixel of the virtual model) is converted, using the depth information recorded for it, into a point in camera space, and the position in camera space is then converted into world space (i.e., three-dimensional space), yielding a point in three-dimensional space.
Specifically, the projection image shot by each camera is first acquired; the first position information of each pixel in camera space is then determined from the depth information of each pixel in each camera's projection image; the first position information is converted into second position information in the three-dimensional space in which the cameras are located, giving the three-dimensional position of each pixel in three-dimensional space; and the three-dimensional model is then reconstructed based on those three-dimensional positions. After the projection information corresponding to each camera and the depth information of each pixel are obtained, the position information and orientation information of each camera can be determined from the distribution information of each camera on the preset model, a preset matrix is obtained from each camera's position and orientation information, and the first position information of each pixel in camera space is finally obtained based on the preset matrix and the depth information of each pixel in the projection image.
Optionally, the distribution information of a camera on the preset model may include, but is not limited to, the distance between the camera and the center of the preset model and the position of the camera on the preset model. Once the mapping between the preset model and the inscribed octahedron is established, the distribution information may further include an identifier of the face of the inscribed octahedron onto which the camera maps.
It should be noted that determining the first position information from the depth information of each pixel in each camera's projection image is a linear conversion; the parameters required for the conversion are obtained from the camera parameters and expressed in matrix form. That is, the preset matrix of the conversion is determined by the camera parameters, which include, but are not limited to, the camera's position information and angle information. Once the preset matrix is obtained, multiplying the depth information of each pixel by the preset matrix yields the first position information of that pixel in camera space.
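For illustration, one way to realize the "preset matrix" is the inverse of the camera's perspective projection (an assumption; the patent does not spell the matrix out). A point's first position information in camera space is then recovered from its pixel coordinates and stored depth:

```python
import numpy as np

def perspective(fov, aspect, near, far):
    """Standard OpenGL-style perspective projection (camera parameters)."""
    f = 1.0 / np.tan(fov / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def unproject(ndc_xy, depth, inv_proj):
    """First position information: camera-space point recovered from a
    pixel's normalized device coordinates and its stored depth."""
    ray = inv_proj @ np.array([ndc_xy[0], ndc_xy[1], -1.0, 1.0])
    ray = ray[:3] / ray[3]          # point on the near plane, camera space
    ray = ray / -ray[2]             # rescale to unit depth along -z
    return ray * depth              # camera-space position at this depth

inv_proj = np.linalg.inv(perspective(np.pi / 3, 1.0, 0.1, 100.0))
print(unproject((0.0, 0.0), 5.0, inv_proj))   # pixel at image center, depth 5
```

Multiplying the recovered camera-space point by the camera-to-world matrix (the inverse of a look-at view matrix such as the one sketched above) then converts this first position information into the second position information in three-dimensional space.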
Further, after the first position information is obtained, it can be converted to give the three-dimensional position of each pixel in three-dimensional space; three-dimensional reconstruction is then performed from those positions to obtain the three-dimensional model; finally, the three-dimensional model is rendered, producing a model that presents the two-dimensional model with a three-dimensional effect.
Specifically, the rendering terminal first determines the current camera corresponding to the current view-angle information in the three-dimensional model, and then determines at least one preset camera from the plurality of cameras, where the at least one preset camera is a camera, among the plurality of cameras, whose distance from the current camera meets a preset condition. Finally, color interpolation is performed on the three-dimensional model based on the shooting angle of the at least one preset camera to obtain the model to be rendered.
In the above process, the cameras meeting the preset condition may be those, among the cameras able to shoot a preset pixel point, whose distance to the current camera is smaller than a preset distance, or a preset number (e.g., three) of the cameras that can shoot the preset pixel point and are closest to the current camera.
In addition, in the above process, the preset pixel point is any pixel point of the three-dimensional model that can be captured by the current camera. For example, in fig. 2, camera 1, camera 2, camera 3 and camera 4 can all shoot pixel point A, and any one of them may serve as the current camera.
It should be noted that, since one pixel point can be captured by several cameras simultaneously, the reconstructed three-dimensional model can be rendered by selecting at least one preset camera that satisfies the preset condition with respect to the current camera. Different cameras shooting the same pixel point may record different colors for it; in this scenario, the color of the reconstructed three-dimensional model can be interpolated according to the camera angles to obtain the final rendering effect. The interpolation only affects the color of a pixel point, not its position.
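As a sketch of this interpolation (the inverse-angle weighting is an assumption; the embodiment only requires interpolation by camera angle), the colors a point received from the cameras that saw it can be blended, favoring the k cameras whose shooting directions best align with the current view direction:

```python
import numpy as np

def view_dependent_color(view_dir, cam_dirs, cam_colors, k=3):
    """Blend the colors one point received from several cameras, weighting
    the k cameras best aligned with the current view direction; only the
    color is interpolated, the point's position is untouched."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos = cam_dirs @ view_dir                 # alignment with each camera
    nearest = np.argsort(-cos)[:k]            # k best-aligned cameras
    w = np.clip(cos[nearest], 1e-6, None)
    w = w / w.sum()                           # normalized angular weights
    return w @ cam_colors[nearest]

# a point seen by four cameras (unit directions) with slightly different colors
dirs = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0.6, 0.8]], dtype=float)
cols = np.array([[0.80, 0.20, 0.20], [0.70, 0.30, 0.20],
                 [0.90, 0.10, 0.10], [0.80, 0.25, 0.15]])
print(view_dependent_color(np.array([0.0, 0.3, 0.95]), dirs, cols))
```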
In addition, it should be noted that the vertex spatial information of the reconstructed three-dimensional model is complete, so the reconstructed three-dimensional model can be rotated arbitrarily.
In addition, the process of rendering the model to be rendered is similar to an ordinary three-dimensional rendering process, except that the three-dimensional model can be colored directly from its vertices (i.e., the point cloud) without a texture-mapping step. Moreover, because the vertex colors of the three-dimensional model already contain all the lighting and material information, lighting and materials need not be computed again at render time; the color can be obtained directly by interpolating the images of the cameras at the nearest angles. The rendering process of the three-dimensional model in the present application is therefore very simple and can achieve extremely high efficiency.
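Because the vertex colors already bake in lighting and material, the final pass can be as small as a depth-tested point splat; a minimal sketch (matrix conventions and image size are assumptions of the example):

```python
import numpy as np

def render_point_cloud(points, colors, view, proj, w, h):
    """Splat a colored point cloud into an image with a depth test; no
    texture lookup and no lighting computation are needed at this stage."""
    img = np.zeros((h, w, 3), dtype=np.float32)
    zbuf = np.full((h, w), np.inf)
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    clip = pts_h @ (proj @ view).T                # world -> clip space
    ndc = clip[:, :3] / clip[:, 3:4]              # perspective divide
    xs = np.round((ndc[:, 0] + 1.0) * 0.5 * (w - 1)).astype(int)
    ys = np.round((1.0 - (ndc[:, 1] + 1.0) * 0.5) * (h - 1)).astype(int)
    for x, y, z, c in zip(xs, ys, ndc[:, 2], colors):
        if 0 <= x < w and 0 <= y < h and z < zbuf[y, x]:
            zbuf[y, x] = z                        # nearest point wins
            img[y, x] = c
    return img
```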
Therefore, with the scheme provided by the present application, the position information of the virtual model in three-dimensional space (i.e., the three-dimensional position of each pixel in three-dimensional space) is available and can be computed in real time, so the application can achieve seamless three-dimensional rotation of the model. In addition, shooting the virtual model with the plurality of cameras is an offline rendering process that involves no run-time system overhead and places few limits on the model's precision, bone count, material complexity, lighting complexity and the like, so rendering results under high precision and complex lighting can be obtained, the three-dimensional rendering overhead is reduced, and the rendering precision is improved.
Example 2
According to an embodiment of the present invention, there is further provided an embodiment of a game model rendering apparatus, where fig. 6 is a schematic diagram of a game model rendering apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus includes: a first obtaining module 601, a reconstruction module 603, a second obtaining module 605, and a display module 607.
The first obtaining module 601 is configured to respond to a game trigger event and acquire pre-stored pixel information of a virtual model at different angles, where the pixel information includes color information and depth information and the virtual model corresponds to the game trigger event; the reconstruction module 603 is configured to reconstruct a three-dimensional model corresponding to the virtual model from the pixel information of the virtual model at the different angles; the second obtaining module 605 is configured to acquire current view-angle information and obtain from the three-dimensional model a model to be rendered corresponding to the current view-angle information; and the display module 607 is configured to render the model to be rendered for display.
It should be noted that the first obtaining module 601, the reconstruction module 603, the second obtaining module 605 and the display module 607 correspond to steps S102 to S108 of the above embodiment; the four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of the above embodiment.
In an alternative embodiment, the first obtaining module includes a third obtaining module, configured to shoot the virtual model with a plurality of cameras in advance to obtain the pixel information at different angles, wherein the plurality of cameras are distributed on the surface of the preset model and the preset model surrounds the virtual model.
Optionally, the preset model is a centrally symmetric geometric body, and the plurality of cameras are uniformly arranged on the surface of the geometric body.
In an alternative embodiment, the game model rendering apparatus further includes: a detection module, a first determination module and a first mapping module. The detection module is configured to detect model information corresponding to the virtual model before the pre-stored pixel information of the virtual model at different angles is acquired, wherein the model information at least comprises the size of the virtual model; the first determination module is configured to determine the number of cameras of the plurality of cameras based on the model information; and the first mapping module is configured to map the positions of the plurality of cameras on the surface of the geometric body onto an inscribed octahedron based on the number of cameras.
In an alternative embodiment, the first mapping module includes: a second determining module, a third determining module and a second mapping module. The second determining module is configured to determine a first partial model to be rotated in the virtual model according to the game scene; the third determining module is configured to determine the second partial model onto which the first partial model is mapped in the inscribed octahedron; and the second mapping module is configured to map the positions of the plurality of cameras on the surface of the geometric body onto the inscribed octahedron based on the second partial model and the number of cameras.
In an alternative embodiment, the third obtaining module includes: a fourth obtaining module, a conversion module and a calculation module. The fourth obtaining module is configured to acquire the orientation information and camera parameters of each camera; the conversion module is configured to convert the virtual model into camera space based on the orientation information, the camera parameters and perspective projection to obtain a conversion result; and the calculation module is configured to calculate, based on the conversion result, the distance between each pixel and the camera corresponding to that pixel to obtain the depth information.
In an alternative embodiment, the reconstruction module includes: a fifth acquisition module, a fourth determining module, a conversion module and a first reconstruction module. The fifth acquisition module is configured to acquire the projection image shot by each camera of the virtual model; the fourth determining module is configured to determine the first position information of each pixel in camera space according to the depth information of each pixel in the projection image of each camera; the conversion module is configured to convert the first position information into second position information in the three-dimensional space in which the cameras are located, obtaining the three-dimensional position in three-dimensional space of the pixel corresponding to each angle; and the first reconstruction module is configured to reconstruct the three-dimensional model based on the three-dimensional positions.
In an alternative embodiment, the fourth determining module includes: a fifth determining module, a first processing module and a second processing module. The fifth determining module is configured to determine the position information and orientation information of each camera according to the distribution information of each camera on the preset model; the first processing module is configured to obtain a preset matrix according to the position information and orientation information of each camera; and the second processing module is configured to obtain the first position information of each pixel in camera space based on the preset matrix and the depth information of each pixel in the projection image.
In an alternative embodiment, the second obtaining module includes: a sixth determining module, a seventh determining module and an interpolation module. The sixth determining module is configured to determine the current camera corresponding to the current view-angle information in the three-dimensional model; the seventh determining module is configured to determine at least one preset camera from the plurality of cameras, wherein the at least one preset camera is a camera, among the plurality of cameras, whose distance from the current camera meets a preset condition; and the interpolation module is configured to perform color interpolation on the three-dimensional model based on the shooting angle of the at least one preset camera to obtain the model to be rendered.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the above-mentioned game model rendering method.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, where the program executes the above-mentioned game model rendering method.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (12)

1. A game model rendering method, comprising:
responding to a game trigger event, and acquiring pixel information of different angles of a pre-stored virtual model, wherein the pixel information comprises color information and depth information, and the virtual model corresponds to the game trigger event;
reconstructing a three-dimensional model corresponding to the virtual model according to the pixel information of the virtual model at different angles;
acquiring current view-angle information, and obtaining, from the three-dimensional model, a model to be rendered corresponding to the current view-angle information;
rendering the model to be rendered for display.
2. The method according to claim 1, wherein the pre-stored pixel information of the virtual model at different angles is obtained by the following step:
shooting the virtual model in advance with a plurality of cameras to obtain the pixel information at the different angles, wherein the plurality of cameras are distributed on the surface of a preset model, and the preset model surrounds the virtual model.
3. The method of claim 2, wherein the preset model is a centrally symmetric geometric body, and the plurality of cameras are uniformly disposed on the surface of the geometric body.
4. The method of claim 3, wherein prior to obtaining pre-stored pixel information for different angles of the virtual model, the method further comprises:
detecting model information corresponding to the virtual model, wherein the model information at least comprises: a size of the virtual model;
determining a number of cameras of the plurality of cameras based on the model information;
mapping the positions of the plurality of cameras on the surface of the geometric body onto an inscribed octahedron based on the number of cameras.
5. The method of claim 4, wherein mapping the positions of the plurality of cameras on the surface of the geometric body onto an inscribed octahedron based on the number of cameras comprises:
determining a first partial model to be rotated in the virtual model according to a game scene;
determining a second partial model onto which the first partial model is mapped in the inscribed octahedron;
mapping the positions of the plurality of cameras on the surface of the geometric body onto the inscribed octahedron based on the second partial model and the number of cameras.
6. The method of claim 2, wherein shooting the virtual model with the plurality of cameras to obtain the pixel information at the different angles comprises:
acquiring orientation information and camera parameters of each camera;
converting the virtual model into camera space based on the orientation information, the camera parameters and perspective projection to obtain a conversion result;
and calculating, based on the conversion result, the distance between each pixel and the camera corresponding to that pixel to obtain the depth information.
7. The method of claim 6, wherein reconstructing the three-dimensional model corresponding to the virtual model from the pixel information of the virtual model at different angles comprises:
acquiring a projected image shot by each camera to the virtual model;
determining first position information of each pixel in the camera space according to the depth information of each pixel in the projection image of each camera;
converting the first position information into second position information in a three-dimensional space where each camera is located to obtain the three-dimensional position of the pixel corresponding to each angle in the three-dimensional space;
reconstructing the three-dimensional model based on the three-dimensional locations.
8. The method of claim 7, wherein determining the first position information of each pixel in the camera space according to the depth information of each pixel in the projection image of each camera comprises:
determining position information and orientation information of each camera according to distribution information of each camera on the preset model;
obtaining a preset matrix according to the position information and the orientation information of each camera;
and obtaining the first position information of each pixel in the camera space based on the preset matrix and the depth information of each pixel in the projection image.
9. The method of claim 2, wherein obtaining from the three-dimensional model the model to be rendered corresponding to the current view-angle information comprises:
determining a current camera corresponding to the current view-angle information in the three-dimensional model;
determining at least one preset camera from the plurality of cameras, wherein the at least one preset camera is a camera, among the plurality of cameras, whose distance from the current camera meets a preset condition;
and performing color interpolation on the three-dimensional model based on the shooting angle of the at least one preset camera to obtain the model to be rendered.
10. A game model rendering apparatus, comprising:
a first acquisition module, configured to respond to a game trigger event and acquire pre-stored pixel information of a virtual model at different angles, wherein the pixel information comprises color information and depth information, and the virtual model corresponds to the game trigger event;
a reconstruction module, configured to reconstruct a three-dimensional model corresponding to the virtual model from the pixel information of the virtual model at the different angles;
a second acquisition module, configured to acquire current view-angle information and obtain from the three-dimensional model a model to be rendered corresponding to the current view-angle information;
and a display module, configured to render the model to be rendered for display.
11. A storage medium comprising a stored program, wherein the program, when executed, controls a device on which the storage medium is located to perform a game model rendering method according to any one of claims 1 to 9.
12. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the game model rendering method according to any one of claims 1 to 9 when the program is run.
Priority Applications (1)

CN201910911907.6A, priority and filing date 2019-09-25: Game model rendering method and device

Publications (1)

CN110689626A, published 2020-01-14

Family ID: 69110138



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-01-14)