CN111773719A - Rendering method and device of virtual object, storage medium and electronic device


Info

Publication number
CN111773719A
Authority
CN
China
Prior art keywords
vertex
model
target
patch
positions
Prior art date
Legal status
Pending
Application number
CN202010583263.5A
Other languages
Chinese (zh)
Inventor
杨文聪
何文峰
Current Assignee
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202010583263.5A priority Critical patent/CN111773719A/en
Publication of CN111773719A publication Critical patent/CN111773719A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G06T15/06 - Ray-tracing
    • G06T15/08 - Volume rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a rendering method and device for a virtual object, a storage medium, and an electronic device. The method comprises the following steps: acquiring a first model and a second model, wherein both models represent a target object in a virtual scene and the patch precision of the first model is lower than that of the second model; simulating the first model through a physics engine to determine a target posture of the target object; determining the vertex positions of the patches in the second model from the vertex positions of the patches in the first model, wherein the latter are the vertex positions of the first model in the target posture; and rendering the target object in the target posture using the second model. The method and device thereby solve the technical problem of low data-processing efficiency in the related art.

Description

Rendering method and device of virtual object, storage medium and electronic device
Technical Field
The application relates to the field of games, in particular to a rendering method and device of a virtual object, a storage medium and an electronic device.
Background
Currently, as players and developers pursue ever higher game quality, realistic and natural animation effects have become a goal of many games. Real-time cloth simulation is a key feature that can be widely applied to scenes such as character clothing and flexible scene objects. It overcomes the stiff appearance and coarse detail of traditional skeletal animation, and it avoids the problems of vertex animation, which cannot interact with the scene and requires a large amount of storage, thereby greatly improving the realism of the game, as shown in fig. 1.
Commonly used physics engines such as PhysX, Bullet, and Havok all provide a complete cloth simulation system. Nvidia's PhysX is the most widely adopted: the two current mainstream 3D engines, Unity and Unreal, use the cloth solution in PhysX by default. The mainstream algorithms for cloth simulation are the force-based mass-spring model and the position-based constraint-solving model (Position Based Dynamics).
In the mass-spring model, as shown in fig. 2, the vertices of the model are treated as point masses and the relationships between the masses are represented by springs, which may be of different types, such as stretch springs, shear springs, and bend springs. During simulation, the force on each mass is computed from the external forces (such as gravity and wind) and the spring elasticity; the acceleration is then derived, from which the velocity and position of the mass over a short time step are integrated.
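The force-and-integration loop described above can be sketched in a few lines. The following Python snippet is an illustrative, simplified sketch rather than the patent's or PhysX's implementation; the function name, the damping factor, and the choice of semi-implicit Euler integration are assumptions made for the example:

```python
import numpy as np

def step_mass_spring(pos, vel, springs, masses, dt,
                     gravity=np.array([0.0, -9.8, 0.0]), damping=0.99):
    """One semi-implicit Euler step of a mass-spring cloth.

    pos, vel : (N, 3) arrays of particle positions / velocities
    springs  : list of (i, j, rest_length, stiffness) tuples
    masses   : (N,) array of particle masses
    """
    forces = masses[:, None] * gravity          # external force: gravity
    for i, j, rest, k in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            # Hooke's law along the spring direction
            f = k * (length - rest) * (d / length)
            forces[i] += f
            forces[j] -= f
    acc = forces / masses[:, None]              # a = F / m
    vel = damping * (vel + dt * acc)            # integrate velocity first...
    pos = pos + dt * vel                        # ...then position (semi-implicit)
    return pos, vel
```

A stretched spring pulls its two endpoints toward each other on the next step, which is the basic behaviour the paragraph describes.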
The position-based constraint-solving model is similar to the mass-spring model, except that the relationships between vertices are represented by constraints, and the model is solved not by integrating forces but by directly solving constraint functions on positions. Compared with the mass-spring model, the position-based constraint-solving model is more stable, but it requires iterative computation during simulation and still involves a large amount of calculation. In terms of algorithms, therefore, the runtime cost of cloth simulation is high.
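The constraint-projection idea behind Position Based Dynamics can likewise be sketched. The snippet below is a minimal illustration of iteratively solving distance constraints directly on positions; the function name, the fixed iteration count, and the inverse-mass weighting follow standard PBD convention and are assumptions for this example, not details taken from the patent:

```python
import numpy as np

def solve_distance_constraints(pos, inv_mass, constraints, iterations=10):
    """Project position-based distance constraints (Position Based Dynamics).

    pos         : (N, 3) predicted positions, modified in place
    inv_mass    : (N,) inverse masses (0 = pinned particle)
    constraints : list of (i, j, rest_length)
    """
    for _ in range(iterations):
        for i, j, rest in constraints:
            d = pos[j] - pos[i]
            length = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if length < 1e-9 or w == 0.0:
                continue
            # Move each endpoint toward satisfying the constraint,
            # proportionally to its inverse mass
            corr = (length - rest) * (d / length) / w
            pos[i] += inv_mass[i] * corr
            pos[j] -= inv_mass[j] * corr
    return pos
```

Note that no forces or accelerations appear; stability comes from projecting positions directly, at the cost of the iteration loop the paragraph mentions.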
Considering the wide use of the Unity engine in mobile games, Unity's cloth system is implemented on top of Nvidia PhysX, with a substantial amount of wrapping that hides the internal details and simplifies the use of cloth. Unity's cloth is packaged as a Component, and its usage is simple: first add a Cloth component to a GameObject that has a SkinnedMeshRenderer; then edit the attributes of each cloth vertex and adjust the other attributes and collider information in the Cloth component. The achieved effect is shown in fig. 3.
In addition to Unity's Cloth system, other plug-ins in the Asset Store can achieve or approximate a dynamic cloth effect, such as Dynamic Bone and Obi Cloth. Obi Cloth is a cloth-simulation plug-in whose core cloth solver is implemented in C++, with the remaining parts implemented in C#. Obi Cloth provides rich cloth features and a friendly cloth-editing experience, is not overly complex to use, and yields good simulation results, as shown in fig. 4. However, its efficiency is much lower than that of Unity Cloth, and it has compatibility problems with different versions of Unity, so its application is limited.
In addition, the Dynamic Bone plug-in, commonly used on mobile platforms, can approximate a cloth effect under certain conditions. It is a bone-based physical simulation method: bones are treated as chain structures connected by joints, the motion of a bone chain is simulated through physical calculation, and the result is then applied to the model vertices through skinning.
Compared with Unity Cloth, Dynamic Bone has a great advantage in efficiency, since the number of bones is far smaller than the number of model vertices, and the simulation effect is good for slender models such as ribbons. However, because Dynamic Bone is bone-based, its simulation is clearly not as fine as vertex-based cloth, and extra bones and bindings must be created for the model to give it a dynamic effect. Moreover, due to its chain-like nature, it is usually suitable only for strip-shaped cloth. For large pieces of cloth the Dynamic Bone scheme is impractical: as shown in fig. 5, producing the dynamic effect of a skirt requires multiple bone chains, which greatly increases the amount of computation.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a rendering method and device for a virtual object, a storage medium, and an electronic device, so as to at least solve the technical problem of low data-processing efficiency in the related art.
According to one aspect of the embodiments of the present application, a method for rendering a virtual object is provided, including: acquiring a first model and a second model, wherein both models represent a target object in a virtual scene and the patch precision of the first model is lower than that of the second model; simulating the first model through a physics engine to determine a target posture of the target object; determining the vertex positions of the patches in the second model from the vertex positions of the patches in the first model, wherein the latter are the vertex positions of the first model in the target posture; and rendering the target object in the target posture using the second model.
According to another aspect of the embodiments of the present application, an apparatus for rendering a virtual object is also provided, including: an obtaining unit, configured to obtain a first model and a second model, wherein both models represent a target object in a virtual scene and the patch precision of the first model is lower than that of the second model; a simulation unit, configured to simulate the first model through a physics engine to determine the target posture of the target object; a determining unit, configured to determine the vertex positions of the patches in the second model from the vertex positions of the patches in the first model, wherein the latter are the vertex positions of the first model in the target posture; and a rendering unit, configured to render the target object in the target posture using the second model.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiments of the application, the low-resolution first model is used in the simulation stage and the high-resolution second model is used in the rendering stage. For patch meshes that are structurally complex and difficult to simulate, a simple model is simulated while a complex model is rendered, instead of using the complex model for both simulation and rendering throughout. This reduces the amount of computation in the physical simulation stage, solves the technical problem of low data-processing efficiency in the related art, and achieves the technical effect of improved processing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a diagram illustrating rendering effects of an alternative virtual object in the related art;
FIG. 2 is a schematic diagram of an alternative spring mass model of the related art;
FIG. 3 is a diagram illustrating rendering effects of an alternative virtual object in the related art;
FIG. 4 is a diagram illustrating rendering effects of an alternative virtual object in the related art;
FIG. 5 is a diagram illustrating rendering effects of an alternative virtual object in the related art;
FIG. 6 is a diagram illustrating test results of an alternative virtual object rendering in the related art;
FIG. 7 is a diagram illustrating test results of an alternative virtual object rendering in the related art;
FIG. 8 is a diagram illustrating test results of an alternative virtual object rendering in the related art;
FIG. 9 is a schematic diagram of a hardware environment for a method of rendering virtual objects according to an embodiment of the present application;
FIG. 10 is a flow chart of an alternative method of rendering virtual objects according to an embodiment of the present application;
FIG. 11 is a schematic diagram of rendering effects of virtual objects according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a patch of a virtual object according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a rendering scheme of virtual objects according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a rendering scheme of virtual objects according to an embodiment of the present application;
FIG. 15 is a schematic diagram of rendering effects of virtual objects according to an embodiment of the application;
FIG. 16 is a schematic illustration of test results of virtual object rendering according to an embodiment of the application;
FIG. 17 is a schematic illustration of test results of virtual object rendering according to an embodiment of the application;
FIG. 18 is a schematic diagram of an alternative virtual object rendering apparatus according to an embodiment of the present application; and
fig. 19 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Through research on the related art, the inventors found that although the cloth simulation effect in the related art is good, efficiency in a game is critical, especially on mobile terminals, whose hardware performance is inferior to desktop devices due to size and battery constraints. The inventors tested the running efficiency of cloth with different vertex counts on various platforms, as shown in figs. 6 to 8 (fig. 6 shows the test results on a Snapdragon 835 platform, fig. 7 on an Apple A11 processor platform, and fig. 8 on an Intel i5-6500 platform). The total time for simulating 9 pieces of cloth with 1000 vertices each on a PC is under 1 ms, but on a mobile terminal, simulating cloth of the same scale already takes about 5 ms, which is a large expense for mobile hardware. Of course, if the vertex count of each cloth is limited to 100 on the mobile platform, the cost of 9 pieces of cloth is only about 1 ms, which means the cloth can be used safely, but the display effect is poor.
Based on the above performance test conclusions, and in order to enable cloth simulation on the mobile terminal, a method embodiment of a rendering method for a virtual object is provided according to one aspect of the embodiments of the present application. The application provides a scheme that balances visual effect and efficiency: a low-precision mesh is used for the physical simulation calculation, and the resulting dynamics are then applied to the fine mesh model, so that a cloth simulation effect can be added while reducing performance consumption.
Optionally, in this embodiment, the rendering method of the virtual object may be applied to a hardware environment formed by the terminal 901 and the server 903 as shown in fig. 9. As shown in fig. 9, the server 903 is connected to the terminal 901 via a network and may be used to provide services (such as game services) for the terminal or for a client installed on the terminal. A database 905 may be provided on the server, or separately from it, to provide data storage services for the server 903. The network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the terminal 901 is not limited to a PC, a mobile phone, a tablet computer, or the like.
The rendering method of the virtual object in the embodiments of the present application may be executed by the terminal 901, or jointly by the server 903 and the terminal 901. It may also be executed by a client installed on the terminal 901. Fig. 10 is a flowchart of an alternative method for rendering a virtual object according to an embodiment of the present application; as shown in fig. 10, the method may include the following steps:
step S1002, the terminal obtains a first model and a second model, the first model and the second model are used for representing a target object in a virtual scene, and the patch precision of the first model is lower than that of the second model. The terminal is referred to hereinafter as the target terminal of the mobile device.
The first model and the second model both describe the target object; the only difference between them is patch precision. For example, the first model uses N patches and the second model uses M patches, with M greater than N.
Step S1004, the terminal performs simulation on the first model through the physics engine to determine a target posture where the target object is located.
All objects in the real world follow the laws of nature, and a physics engine serves as an aid to simulate real-world physical effects in the game. A game engine usually has a built-in physics engine; through its processing, objects in the game move according to physical laws, for example under natural constraints such as gravity, torque, and impulse, and objects can exert collision forces and friction on one another.
In step S1006, the terminal determines the vertex positions of the patches in the second model by using the vertex positions of the patches in the first model, where the latter are the vertex positions of the first model when it is in the target posture.
And step S1008, rendering the target object in the target posture by the terminal through the second model.
Rendering on the terminal GPU comprises vertex processing, rasterization, texture mapping, and pixel processing: the GPU generates the 3D graphics, maps them to the corresponding pixels, and computes each pixel to determine its final color before completing output.
Vertex processing: at this stage, the GPU reads the vertex data (including vertex positions) describing the appearance of the 3D graphics, determines the shape and positional relationships of the 3D graphics from these data, and establishes the skeleton of the 3D graphics. These tasks are done by hardware-implemented vertex shaders.
Rasterization: the image actually shown on the display is composed of pixels, and the points and lines of the generated graphics must be converted into the corresponding pixels by an algorithm. This process of converting a vector graphic into a series of pixels is known as rasterization. For example, a mathematically represented diagonal line segment is ultimately converted into a staircase of contiguous pixels.
Texture mapping: the polygons generated by the vertex units form only the outline of the 3D object; texture mapping completes the work on the polygon surfaces, i.e., the corresponding image is mapped onto each polygon surface to produce a "realistic" graphic. The TMU (texture mapping unit) accomplishes this task.
Pixel processing: at this stage (during rasterization of each pixel), the GPU computes and processes the pixels to determine their final attributes; these tasks are done by hardware-implemented pixel shaders. Final output: pixel output is completed by the ROP (raster operations unit); after a frame is rendered, it is sent to the frame buffer in video memory.
In the scheme of the application, the low-resolution first model is used for simulation in the simulation stage, and the high-resolution second model is used in the rendering stage. Patch meshes that are structurally complex and difficult to simulate are simulated with the simple model while the complex model is rendered, instead of using the complex model for both simulation and rendering throughout. This reduces the amount of computation in the physical simulation stage, solves the technical problem of low data-processing efficiency in the related art, and achieves the technical effect of improved processing efficiency.
In an alternative embodiment, determining the vertex positions of the patches in the second model using the vertex positions of the patches in the first model includes the following two implementation schemes. The first can be realized through steps 11 to 12:
Step 11, searching for a target vertex in the second model, where the target vertex is a vertex of the second model lying within the current display area of the target terminal, and the target terminal is used to control a player object (which may be the target object or another object) in the virtual scene.
And step 12, calling a graphics processor, and processing the vertex position of the patch in the first model to obtain the vertex position of the target vertex in the second model.
Optionally, invoking a graphics processor, and processing the vertex position of the patch in the first model to obtain the vertex position of the target vertex in the second model includes the following steps 121 to 122:
step 121, storing the vertex position of the patch in the first model to a target cache, for example, in a computer Buffer, where the target cache is a cache configured for the graphics processor and used for storing data to be processed.
Step 122, the graphics processor reads the vertex position of the patch in the first model from the target cache, and determines the vertex position of the target vertex in the second model by using the vertex position of the patch in the first model.
In another alternative embodiment, the second method for determining the vertex position of a patch in a second model by using the vertex position of a patch in a first model can be implemented by the following steps 21-22:
and step 21, searching a plurality of target vertexes in the second model, wherein the target vertexes are vertexes, located in the current display area of the target terminal, in the second model.
And step 22, calling a graphics processor, and performing parallel processing on the vertex positions of the patches in the first model to obtain the vertex positions of a plurality of target vertices in the second model.
Optionally, invoking a graphics processor, and performing parallel processing on vertex positions of patches in the first model to obtain vertex positions of a plurality of target vertices in the second model includes the following steps 221 to 222:
step 221, a plurality of first threads are created in the graphics processor, wherein the number of the threads of the plurality of first threads is the same as the number of the vertices of the plurality of target vertices.
Step 222, performing parallel processing on the vertex positions of the patches in the first model through the plurality of first threads to obtain the vertex positions of the plurality of target vertices in the second model, where each of the first threads is used to obtain the vertex position of one target vertex and no two first threads obtain the vertex position of the same target vertex; in other words, the first threads and the target vertices are in one-to-one correspondence.
Optionally, considering that dedicating one thread to the vertex position of each target vertex may occupy a large amount of GPU resources in a short time, an optimization is possible: invoking the graphics processor to process the vertex positions of the patches in the first model in parallel to obtain the vertex positions of the plurality of target vertices in the second model may include the following steps 223 to 224:
at step 223, a plurality of second threads are created in the graphics processor, the number of threads of the plurality of second threads being less than the number of vertices of the plurality of target vertices.
Step 224, performing parallel processing on the vertex positions of the patches in the first model through the plurality of second threads to obtain the vertex positions of the plurality of target vertices in the second model, where each of the second threads is used to obtain the vertex position of at least one target vertex, and some or all of the second threads continue processing after obtaining the vertex position of one target vertex so as to obtain the vertex position of another; in other words, the second threads run in parallel, but each second thread may process several target vertices in series.
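Steps 223 to 224 amount to a grid-stride style mapping of many vertices onto fewer workers. The following Python sketch emulates this scheduling on the CPU (the real scheme would run as GPU compute threads); all function names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def assign_vertices_to_threads(num_vertices, num_threads):
    """Grid-stride assignment: thread t handles vertices t, t+T, t+2T, ...
    so a pool smaller than the vertex count still covers every vertex."""
    return [list(range(t, num_vertices, num_threads)) for t in range(num_threads)]

def process_vertices(vertex_data, num_threads, fn):
    """Apply fn to every vertex using a fixed pool of workers,
    mirroring steps 223-224: workers run in parallel, and each worker
    processes its assigned vertices in series."""
    chunks = assign_vertices_to_threads(len(vertex_data), num_threads)
    out = [None] * len(vertex_data)

    def work(indices):
        for i in indices:
            out[i] = fn(vertex_data[i])

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(work, chunks))
    return out
```

The same stride pattern is what a GPU kernel would use when the dispatched thread count is smaller than the vertex count.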
In the above implementation, determining the vertex positions of the patches in the second model by using the vertex positions of the patches in the first model includes determining the vertex position of each vertex to be confirmed in the second model as shown in the following steps 31 to 32:
and step 31, searching a target patch associated with a target vertex in the second model from the first model, wherein the target vertex is a vertex of the current vertex position to be determined in the second model.
Step 32, determining the vertex position of the target vertex according to the association relation between the target vertex and the target patch and the vertex positions of all the vertices of the target patch.
In the above embodiment, the target patch is a triangular patch having three vertices. The vertex position of the target vertex is determined from the association relation between the target vertex and the target patch and from the vertex positions of all vertices of the target patch, via the coordinate calculation in step 321 and the normal calculation in step 322:
step 322, determining the vertex coordinates P of the target vertex according to the following formula for describing the first relationship
Figure BDA0002553242690000101
Wherein A is the coordinate of the first vertex of the triangular patch, B is the coordinate of the second vertex of the triangular patch, C is the coordinate of the third vertex of the triangular patch, α, β and gamma are set parameters,
Figure BDA0002553242690000102
representing a vector from a first vertex to a second vertex,
Figure BDA0002553242690000103
representing a vector from the first vertex to the third vertex,
Figure BDA0002553242690000104
presentation pair
Figure BDA0002553242690000105
Taking a norm, wherein the association relation comprises a first relation.
Step 322, determining the vertex normal $\vec{n}_P$ of the target vertex according to the following formula for describing the second relation:

$$\vec{n}_P = (1-\alpha-\beta)\,\vec{n}_A + \alpha\,\vec{n}_B + \beta\,\vec{n}_C$$

wherein $\vec{n}_A$ represents the vertex normal of the first vertex in the triangular patch, $\vec{n}_B$ represents the vertex normal of the second vertex, $\vec{n}_C$ represents the vertex normal of the third vertex, and α and β are the set parameters above; the association relation includes the second relation.
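The two relations above can be sketched directly in code. The following is a minimal NumPy illustration (Python standing in for the engine-side implementation; the function names are hypothetical), computing a target vertex's position and normal from one triangle of the first model:

```python
import numpy as np

def target_vertex_position(A, B, C, alpha, beta, gamma):
    """First relation: barycentric point on triangle ABC plus a
    signed offset gamma along the normalized face normal."""
    A, B, C = map(np.asarray, (A, B, C))
    n = np.cross(B - A, C - A)          # AB x AC
    n = n / np.linalg.norm(n)           # divide by the norm
    return (1 - alpha - beta) * A + alpha * B + beta * C + gamma * n

def target_vertex_normal(nA, nB, nC, alpha, beta):
    """Second relation: interpolate the stored vertex normals with the
    same barycentric weights (renormalize afterwards if an exact unit
    length is required for shading)."""
    nA, nB, nC = map(np.asarray, (nA, nB, nC))
    return (1 - alpha - beta) * nA + alpha * nB + beta * nC
```

Because the barycentric weights sum to one, interpolating unit-length vertex normals stays close to unit length for reasonably flat patches.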
As an alternative embodiment, the technical solution of the present application is described below using the Unity engine as an example. The scheme performs the physical simulation on a low-precision mesh and then applies the resulting motion to a fine mesh model, so that a cloth simulation effect can be added while reducing performance consumption.
The scheme may be called AdhereCloth; its function is to make one mesh follow the simulated Cloth. AdhereCloth can be used to simulate with a low-resolution model and render with a high-resolution model; for a mesh whose structure is complex and difficult to simulate, a simple model is used for simulation and the complex model is used for rendering.
As shown in fig. 11, the cloth used for simulation may be called the base Mesh (i.e., the first model), and the driven mesh may be called the Adhere Mesh (i.e., the second model). As shown in fig. 12, in the initial pose each point P in the Adhere Mesh has a corresponding triangle in the base Mesh (the one closest to P), and P can be represented by its barycentric coordinates within triangle ABC plus the distance from the point to the triangle plane (e.g., Oh1, Oh2, Oh3). By storing this information, the vertices of the Adhere Mesh can be calculated from the vertices of the base Mesh at runtime, and the normal of P is interpolated from the normals of the vertices of the corresponding triangle.
During the precomputation, for each point P in the Adhere Mesh, all triangles in the base Mesh are traversed to find the triangle ABC closest to P; a perpendicular is dropped from P onto the plane of triangle ABC, intersecting it at a point O. O can then be represented as:
$$O = (1-\alpha-\beta)\,A + \alpha B + \beta C$$

Denote

$$\vec{n} = \frac{\overrightarrow{AB}\times\overrightarrow{AC}}{\lVert \overrightarrow{AB}\times\overrightarrow{AC} \rVert},\qquad \gamma = (P-O)\cdot\vec{n}$$

Then P can be expressed as:

$$P = (1-\alpha-\beta)\,A + \alpha B + \beta C + \gamma\,\vec{n}$$
For each point, the indices id1, id2, id3 of the three vertices of its corresponding triangle are stored together with the parameters 1-α-β, α, β and γ for use at runtime.
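The precomputation step can be sketched as follows. This is a hedged illustration rather than the patent's implementation: it brute-forces the triangle traversal, picks the triangle by perpendicular distance to its plane (a simplification of "closest to P"), and solves for the barycentric parameters by least squares; all names are hypothetical.

```python
import numpy as np

def bind_point(P, vertices, triangles):
    """For one Adhere Mesh point P, find a closest base-Mesh triangle,
    project P onto its plane at O, and solve O = (1-a-b)A + aB + bC.
    Returns the triangle's vertex indices and (1-a-b, a, b, gamma)."""
    P = np.asarray(P, dtype=float)
    best = None
    for tri in triangles:                      # brute-force traversal
        A, B, C = (np.asarray(vertices[i], dtype=float) for i in tri)
        n = np.cross(B - A, C - A)
        n = n / np.linalg.norm(n)
        gamma = np.dot(P - A, n)               # signed distance to the plane
        O = P - gamma * n                      # foot of the perpendicular
        # solve O - A = a*(B - A) + b*(C - A) in the least-squares sense
        M = np.stack([B - A, C - A], axis=1)   # 3x2 system
        (a, b), *_ = np.linalg.lstsq(M, O - A, rcond=None)
        if best is None or abs(gamma) < best[0]:
            best = (abs(gamma), tri, (1 - a - b, a, b, gamma))
    _, ids, params = best
    return ids, params
```

A production version would measure the true point-to-triangle distance (clamping to edges and corners) when choosing the triangle, but the stored parameters are the same.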
After the physical simulation completes, the vertex positions and normals of the base Mesh are obtained, and the vertices and normals of the Adhere Mesh can be calculated from the base Mesh:

$$P = (1-\alpha-\beta)\,A + \alpha B + \beta C + \gamma\,\frac{\overrightarrow{AB}\times\overrightarrow{AC}}{\lVert \overrightarrow{AB}\times\overrightarrow{AC} \rVert}$$

$$\vec{n}_P = (1-\alpha-\beta)\,\vec{n}_A + \alpha\,\vec{n}_B + \beta\,\vec{n}_C$$

wherein A, B and C are the vertex coordinates of the triangle corresponding to P.
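The per-frame runtime formulas can be evaluated for all Adhere Mesh vertices at once. Below is a vectorized NumPy sketch (standing in for the compute-shader or Burst-job implementation; array layouts and names are assumptions):

```python
import numpy as np

def skin_adhere_mesh(base_pos, base_nrm, ids, weights, gamma):
    """Evaluate all Adhere Mesh vertices from the simulated base Mesh.
    base_pos, base_nrm: (M,3) base Mesh positions and normals;
    ids: (N,3) triangle vertex indices; weights: (N,3) barycentric
    weights (1-a-b, a, b); gamma: (N,) signed plane offsets."""
    A, B, C = base_pos[ids[:, 0]], base_pos[ids[:, 1]], base_pos[ids[:, 2]]
    n = np.cross(B - A, C - A)                       # per-triangle normals
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    w = weights[:, :, None]                          # broadcast weights
    P = w[:, 0] * A + w[:, 1] * B + w[:, 2] * C + gamma[:, None] * n
    nA, nB, nC = base_nrm[ids[:, 0]], base_nrm[ids[:, 1]], base_nrm[ids[:, 2]]
    N = w[:, 0] * nA + w[:, 1] * nB + w[:, 2] * nC   # interpolated normals
    return P, N
```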
In the AdhereClothGPU implementation, every vertex of the Adhere Mesh must be recomputed each frame, which is a large amount of computation, so a compute shader is used to accelerate the calculation.
As shown in fig. 13, all the calculations can be placed in the OnWillRenderObject function, so that no per-frame calculation is needed when the cloth is out of view. First, the vertices are acquired from the cloth in OnWillRenderObject and the ComputeBuffer (i.e., the target cache) is updated; then a compute shader is dispatched to calculate the vertex positions and normals of the Adhere Mesh; finally, the resulting vertex buffer (i.e., the coordinates) and normal buffer (i.e., the normals) are set on the material of the Adhere Mesh to replace the original vertices and normals. Using the compute shader to calculate an Adhere Mesh of 4000 vertices, the total CPU and GPU time is about 0.1 ms, a large performance improvement.
Although the computation is faster with the compute-shader implementation, rendering from a ComputeBuffer requires additional shader modifications: the vertex shader stage must read the vertices and normals from a StructuredBuffer, and some mobile devices do not support this feature. A Job System version of AdhereCloth is therefore also implemented.
AdhereClothJob uses the Job System and the Burst Compiler to improve performance; each job computes one Adhere Mesh, and when there are multiple Adhere Meshes in a scene they can be calculated in parallel. As shown in fig. 14, in the Update stage the vertices and normals of the base mesh are obtained and jobs (e.g., Job1-Job3) are scheduled for calculation; in the LateUpdate stage the jobs are waited on to finish, the calculated vertices and normals are retrieved, and the corresponding Graphics draw call is issued.
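The schedule-in-Update / join-in-LateUpdate pattern can be sketched with a thread pool (a loose Python analogy for the Job System and Burst; the class and function names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

class AdhereMeshJob:
    """One 'job' per Adhere Mesh, mirroring AdhereClothJob's scheme of
    scheduling jobs in Update and joining them in LateUpdate."""
    def __init__(self, name, vertex_count):
        self.name, self.vertex_count = name, vertex_count

    def compute(self):
        # placeholder for the per-vertex barycentric evaluation
        return (self.name, self.vertex_count)

def frame(meshes):
    """Run one frame: schedule all jobs, then wait for the results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(m.compute) for m in meshes]  # Update: schedule
        return [f.result() for f in futures]                # LateUpdate: join
```

When several Adhere Meshes exist in a scene, their jobs run concurrently, which is the source of the speedup described above.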
Fig. 15 compares applying AdhereCloth against directly simulating with the original model (i.e., Cloth). Although the fineness of the present scheme is slightly reduced compared with the original model, AdhereCloth still maintains a good simulation effect.
The present application compares in detail the runtime efficiency of using Cloth directly and using AdhereCloth. As shown in figs. 16 and 17, the cloth has 2501 vertices and the low-resolution cloth has 121 vertices; the times for AdhereClothGPU and AdhereClothJob include both the Cloth simulation and the computation of the AdhereCloth plug-in. On the PC platform, AdhereClothGPU and AdhereClothJob are greatly improved relative to Cloth, taking only about 1/5 of its time; on the mobile platform (taking the Qualcomm Snapdragon 835 processor as an example), AdhereClothGPU cannot be used and only AdhereClothJob is available, which is about 50% faster than using Cloth directly.
In the technical scheme of the present application, the performance problems of Unity Cloth in use are tested and optimized, and the AdhereCloth plug-in for Unity is implemented. The AdhereCloth plug-in realizes an LOD function for Cloth and improves its performance in practical applications, with very good results in actual projects. The Job System version of AdhereCloth still has some overhead at present; considering efficiency, a low-resolution cloth model is preferred on mobile platforms, and if a high-resolution model is needed, AdhereCloth can be adopted to improve efficiency.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a virtual object rendering apparatus for implementing the virtual object rendering method. Fig. 18 is a schematic diagram of an alternative virtual object rendering apparatus according to an embodiment of the present application, and as shown in fig. 18, the apparatus may include:
an obtaining unit 1801, configured to obtain a first model and a second model, where the first model and the second model are used to represent a target object in a virtual scene, and a patch precision of the first model is lower than a patch precision of the second model;
a simulation unit 1803, configured to determine, through simulation of the first model by a physics engine, a target pose where the target object is located;
a determining unit 1805, configured to determine vertex positions of patches in the second model by using vertex positions of patches in the first model, where the vertex positions of patches in the first model are vertex positions of the first model in the target pose;
a rendering unit 1807, configured to render the target object in the target pose by using the second model.
It should be noted that the obtaining unit 1801 in this embodiment may be configured to execute step S1002 in this embodiment, the simulating unit 1803 in this embodiment may be configured to execute step S1004 in this embodiment, the determining unit 1805 in this embodiment may be configured to execute step S1006 in this embodiment, and the rendering unit 1807 in this embodiment may be configured to execute step S1008 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 9, and may be implemented by software or hardware.
Through the above modules, the low-resolution first model is used for simulation in the simulation stage and the high-resolution second model is used for rendering in the rendering stage; a patch mesh that is difficult to simulate because of its complex structure is simulated with the simple model and rendered with the complex model, instead of using the complex model for both simulation and rendering. This reduces the amount of computation in the physical simulation stage, solves the technical problem of low data processing efficiency in the related art, and achieves the technical effect of improving processing efficiency.
Optionally, the determining unit includes: a first searching module, configured to search the first model for a target patch associated with a target vertex in the second model, where the target vertex is a vertex in the second model whose position is currently to be determined; and a first determining module, configured to determine the vertex position of the target vertex according to the association relation between the target vertex and the target patch and the vertex positions of all the vertices of the target patch.
Optionally, the target patch is a triangular patch having three vertices, wherein the determining module is further configured to:
determining the vertex coordinates P of the target vertex according to the following formula for describing the first relation:

$$P = (1-\alpha-\beta)\,A + \alpha B + \beta C + \gamma\,\frac{\overrightarrow{AB}\times\overrightarrow{AC}}{\lVert \overrightarrow{AB}\times\overrightarrow{AC} \rVert}$$

wherein A is the coordinate of the first vertex of the triangular patch, B is the coordinate of the second vertex of the triangular patch, C is the coordinate of the third vertex of the triangular patch, α, β and γ are set parameters, $\overrightarrow{AB}$ represents the vector from the first vertex to the second vertex, $\overrightarrow{AC}$ represents the vector from the first vertex to the third vertex, and $\lVert\cdot\rVert$ represents taking the norm; the association relation includes the first relation.
Optionally, the target patch is a triangular patch having three vertices, wherein the determining module is further configured to:
determining the vertex normal $\vec{n}_P$ of the target vertex according to the following formula for describing the second relation:

$$\vec{n}_P = (1-\alpha-\beta)\,\vec{n}_A + \alpha\,\vec{n}_B + \beta\,\vec{n}_C$$

wherein $\vec{n}_A$ represents the vertex normal of the first vertex in the triangular patch, $\vec{n}_B$ represents the vertex normal of the second vertex, $\vec{n}_C$ represents the vertex normal of the third vertex, and α and β are the set parameters; the association relation includes the second relation.
Optionally, the determining unit includes: the second searching module is used for searching a target vertex in the second model, wherein the target vertex is a vertex in a current display area of a target terminal in the second model, and the target terminal is used for controlling an object in the virtual scene; and the second determining module is used for calling a graphics processor to process the vertex position of the patch in the first model to obtain the vertex position of the target vertex in the second model.
Optionally, the second determining module is further configured to: storing vertex positions of patches in the first model to a target cache, wherein the target cache is configured for the graphics processor and used for storing data to be processed; and the graphics processor reads the vertex positions of the patches in the first model from the target cache, and determines the vertex positions of the target vertices in the second model by using the vertex positions of the patches in the first model.
Optionally, the determining unit includes: the third searching module is used for searching a plurality of target vertexes in the second model, wherein the target vertexes are vertexes positioned in a current display area of a target terminal in the second model; and the third determining module is used for calling a graphics processor to perform parallel processing on the vertex positions of the patches in the first model to obtain the vertex positions of the target vertices in the second model.
Optionally, the third determining module is further configured to: creating a plurality of first threads in the graphics processor, wherein the number of threads of the plurality of first threads is the same as the number of vertices of the plurality of target vertices; and performing parallel processing on vertex positions of patches in the first model through the first threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the first threads is used for obtaining the vertex position of one target vertex, and the vertex positions of the target vertices obtained by any two threads in the first threads are different.
Optionally, the third determining module is further configured to: creating a plurality of second threads in the graphics processor, wherein a number of threads of the plurality of second threads is less than a number of vertices of the plurality of target vertices; and processing vertex positions of patches in the first model in parallel through the second threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the second threads is used for obtaining a vertex position of at least one target vertex, and each thread in the second threads is used for continuing processing to obtain a vertex position of another target vertex after obtaining the vertex position of one target vertex.
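The second thread strategy above — fewer threads than target vertices, with each thread continuing with another vertex after finishing one — is essentially a shared work queue. A hypothetical Python sketch (thread pools standing in for GPU threads; all names are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def process_vertices(vertex_ids, num_threads, solve):
    """Fewer threads than vertices: each worker pops the next vertex id
    from a shared queue, so a thread that finishes one vertex position
    continues with another until no vertices remain."""
    work, out = Queue(), {}
    for v in vertex_ids:
        work.put(v)

    def worker():
        while True:
            try:
                v = work.get_nowait()
            except Empty:          # no vertices left for this thread
                return
            out[v] = solve(v)      # compute this vertex's position

    with ThreadPoolExecutor(num_threads) as pool:
        for _ in range(num_threads):
            pool.submit(worker)    # pool joins all workers on exit
    return out
```

The first strategy (one thread per vertex) corresponds to setting `num_threads` equal to the number of vertices, in which case each worker dequeues exactly one vertex.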
Although the simulation effect of cloth in the related art is good, efficiency is critical in games, especially on mobile terminals (which are limited by volume, battery and other factors, so their hardware performance is not as good as that of desktop devices). To make cloth simulation usable on mobile terminals, according to an aspect of the embodiments of the present application, a method embodiment of a rendering method of a virtual object is provided. The present application provides a scheme that combines realization effect with realization efficiency: the physical simulation is calculated on a low-precision mesh, and the resulting motion is then applied to the fine mesh model, so that a cloth simulation effect can be added while reducing performance consumption.
In the technical scheme of the present application, the performance problems of Unity Cloth in use are tested and optimized, and the AdhereCloth plug-in for Unity is implemented. The AdhereCloth plug-in realizes an LOD function for Cloth and improves its performance in practical applications, with very good results in actual projects. The Job System version of AdhereCloth still has some overhead at present; considering efficiency, a low-resolution cloth model is preferred on mobile platforms, and if a high-resolution model is needed, AdhereCloth can be adopted to improve efficiency.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 9, may be implemented by software, and may also be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the present application, there is also provided a server or a terminal for implementing the rendering method of the virtual object.
Fig. 19 is a block diagram of a terminal according to an embodiment of the present application. As shown in fig. 19, the terminal may include: one or more processors 1901 (only one of which is shown in fig. 19), a memory 1903, and a transmission device 1905. As shown in fig. 19, the terminal may further include an input-output device 1907.
The memory 1903 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for rendering a virtual object in the embodiment of the present application, and the processor 1901 executes various functional applications and data processing by running the software programs and modules stored in the memory 1903, so as to implement the above-mentioned method for rendering a virtual object. The memory 1903 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1903 may further include memory located remotely from the processor 1901, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmitting device 1905 is used for receiving or sending data via a network, and can also be used for data transmission between the processor and the memory. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1905 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices so as to communicate with the internet or a local area Network. In one example, the transmission device 1905 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The memory 1903 is used for storing an application program, among others.
The processor 1901 may call the application stored in the memory 1903 through the transmission 1905 to perform the following steps:
obtaining a first model and a second model, wherein the first model and the second model are used for representing a target object in a virtual scene, and the patch precision of the first model is lower than that of the second model;
simulating the first model through a physical engine to determine a target posture of the target object;
determining vertex positions of patches in the second model by using the vertex positions of the patches in the first model, wherein the vertex positions of the patches in the first model are the vertex positions of the first model in the target posture;
rendering the target object in the target pose using the second model.
The processor 1901 is further configured to perform the following steps:
creating a plurality of first threads in the graphics processor, wherein the number of threads of the plurality of first threads is the same as the number of vertices of the plurality of target vertices;
and performing parallel processing on vertex positions of patches in the first model through the first threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the first threads is used for obtaining the vertex position of one target vertex, and the vertex positions of the target vertices obtained by any two threads in the first threads are different.
The processor 1901 is further configured to perform the following steps:
creating a plurality of second threads in the graphics processor, wherein a number of threads of the plurality of second threads is less than a number of vertices of the plurality of target vertices;
and processing vertex positions of patches in the first model in parallel through the second threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the second threads is used for obtaining a vertex position of at least one target vertex, and each thread in the second threads is used for continuing processing to obtain a vertex position of another target vertex after obtaining the vertex position of one target vertex.
By adopting the embodiments of the present application, the solution of "obtaining a first model and a second model, wherein the first model and the second model are used for representing a target object in a virtual scene, and the patch precision of the first model is lower than that of the second model; simulating the first model through a physics engine to determine a target pose of the target object; determining vertex positions of patches in the second model by using the vertex positions of the patches in the first model, wherein the vertex positions of the patches in the first model are the vertex positions of the first model in the target pose; and rendering the target object in the target pose using the second model" is adopted. The low-resolution first model is used for simulation in the simulation stage and the high-resolution second model is used for rendering in the rendering stage; a patch mesh whose structure is complex and difficult to simulate is simulated with the simple model and rendered with the complex model, instead of using the complex model for both simulation and rendering. This reduces the amount of computation in the physical simulation stage, solves the technical problem of low data processing efficiency in the related art, and achieves the technical effect of improving processing efficiency.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 19 is only illustrative; the terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 19 does not limit the structure of the above electronic device. For example, the terminal may include more or fewer components (e.g., a network interface, a display device, etc.) than shown in fig. 19, or have a configuration different from that shown in fig. 19.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for executing the rendering method of a virtual object.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
obtaining a first model and a second model, wherein the first model and the second model are used for representing a target object in a virtual scene, and the patch precision of the first model is lower than that of the second model;
simulating the first model through a physical engine to determine a target posture of the target object;
determining vertex positions of patches in the second model by using the vertex positions of the patches in the first model, wherein the vertex positions of the patches in the first model are the vertex positions of the first model in the target posture;
rendering the target object in the target pose using the second model.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
creating a plurality of first threads in the graphics processor, wherein the number of threads of the plurality of first threads is the same as the number of vertices of the plurality of target vertices;
and performing parallel processing on vertex positions of patches in the first model through the first threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the first threads is used for obtaining the vertex position of one target vertex, and the vertex positions of the target vertices obtained by any two threads in the first threads are different.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
creating a plurality of second threads in the graphics processor, wherein a number of threads of the plurality of second threads is less than a number of vertices of the plurality of target vertices;
and processing vertex positions of patches in the first model in parallel through the second threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the second threads is used for obtaining a vertex position of at least one target vertex, and each thread in the second threads is used for continuing processing to obtain a vertex position of another target vertex after obtaining the vertex position of one target vertex.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (15)

1. A method for rendering a virtual object, comprising:
obtaining a first model and a second model, wherein the first model and the second model are used for representing a target object in a virtual scene, and the patch precision of the first model is lower than that of the second model;
simulating the first model through a physical engine to determine a target posture of the target object;
determining vertex positions of patches in the second model by using the vertex positions of the patches in the first model, wherein the vertex positions of the patches in the first model are the vertex positions of the first model in the target posture;
rendering the target object in the target pose using the second model.
2. The method of claim 1, wherein determining vertex positions for patches in the second model using vertex positions for patches in the first model comprises determining vertex positions for each vertex to be identified in the second model as follows:
searching a target patch associated with a target vertex in the second model from the first model, wherein the target vertex is a vertex of a current vertex position to be determined in the second model;
and determining the vertex position of the target vertex according to the incidence relation between the target vertex and the target patch and the vertex positions of all the vertices of the target patch.
3. The method of claim 2, wherein the target patch is a triangular patch having three vertices, and wherein determining the vertex positions of the target vertices according to the association relationship between the target vertices and the target patch and the vertex positions of all the vertices of the target patch comprises:
determining the vertex coordinates P' of the target vertex according to the following formula for describing the first relationship:
P′ = A + α·(B − A) + β·(C − A) + γ·((B − A) × (C − A)) / ‖(B − A) × (C − A)‖
wherein A is the coordinate of the first vertex of the triangular patch, B is the coordinate of the second vertex of the triangular patch, C is the coordinate of the third vertex of the triangular patch, α, β and γ are set parameters, (B − A) represents the vector from the first vertex to the second vertex, (C − A) represents the vector from the first vertex to the third vertex, and ‖·‖ represents taking the norm of the enclosed vector, wherein the association relation comprises the first relation.
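The first relation of claim 3 is rendered from an image in the original, but its listed ingredients (the three patch vertices, the two edge vectors from the first vertex, a norm, and parameters α, β, γ) suggest barycentric-style interpolation over the low-model triangle plus an offset along its unit face normal. A pure-Python sketch under that assumption, with all names hypothetical:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def norm(u):
    return math.sqrt(u[0] ** 2 + u[1] ** 2 + u[2] ** 2)

def reproject_vertex(A, B, C, alpha, beta, gamma):
    """P' = A + alpha*(B-A) + beta*(C-A) + gamma * unit patch normal.
    alpha/beta/gamma are the per-vertex parameters baked when the
    high-precision vertex was associated with this low-precision patch."""
    ab, ac = sub(B, A), sub(C, A)
    n = cross(ab, ac)          # face normal of the patch
    ln = norm(n)               # the norm taken in the relation
    return tuple(A[i] + alpha * ab[i] + beta * ac[i] + gamma * n[i] / ln
                 for i in range(3))
```

Because α, β, γ are fixed at bind time, re-evaluating this per frame lets the detail vertex follow the deforming patch, including out-of-plane displacement via the γ term.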
4. The method of claim 2, wherein the target patch is a triangular patch having three vertices, and wherein determining the vertex positions of the target vertices according to the association relationship between the target vertices and the target patch and the vertex positions of all the vertices of the target patch comprises:
determining a vertex normal N′ of the target vertex according to the following formula for describing the second relation:
N′ = α·N_A + β·N_B + γ·N_C
wherein N_A represents the vertex normal of the first vertex in the triangular patch, N_B represents the vertex normal of the second vertex in the triangular patch, N_C represents the vertex normal of the third vertex in the triangular patch, α, β and γ are set parameters, and the association relation comprises the second relation.
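Reading claim 4's second relation as a blend of the three patch-vertex normals with the same set parameters, a sketch follows; the final normalization is an assumption for rendering use, not stated in the claim:

```python
import math

def interpolate_normal(nA, nB, nC, alpha, beta, gamma):
    """Blend the three patch-vertex normals with the set parameters.
    The renormalization at the end is an assumption (interpolated
    normals generally lose unit length)."""
    n = [alpha * nA[i] + beta * nB[i] + gamma * nC[i] for i in range(3)]
    ln = math.sqrt(sum(c * c for c in n))
    return tuple(c / ln for c in n)
```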
5. The method of any of claims 1-4, wherein determining vertex positions of patches in the second model using vertex positions of patches in the first model comprises:
searching a target vertex in the second model, wherein the target vertex is a vertex in a current display area of a target terminal in the second model, and the target terminal is used for controlling an object in the virtual scene;
and calling a graphics processor to process the vertex position of the patch in the first model to obtain the vertex position of the target vertex in the second model.
6. The method of claim 5, wherein invoking a graphics processor to process vertex positions of patches in the first model to obtain vertex positions of target vertices in the second model comprises:
storing vertex positions of patches in the first model to a target cache, wherein the target cache is configured for the graphics processor and used for storing data to be processed;
and the graphics processor reads the vertex positions of the patches in the first model from the target cache, and determines the vertex positions of the target vertices in the second model by using the vertex positions of the patches in the first model.
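The target cache of claim 6 behaves like a GPU-visible buffer: the CPU side stores the posed low-model vertex positions into it, and the compute pass reads them back. A minimal CPU-only stand-in, assuming a packed little-endian float32 layout (`TargetCache` and the layout are illustrative, not the claimed format):

```python
import struct

class TargetCache:
    """Stand-in for a GPU structured buffer: packed float32 triples."""
    def __init__(self):
        self._data = b""

    def store(self, positions):
        # CPU side: pack each (x, y, z) as three little-endian float32s.
        self._data = b"".join(struct.pack("<3f", *p) for p in positions)

    def read(self):
        # "GPU" side: unpack every 12-byte record back into a triple.
        count = len(self._data) // 12
        return [struct.unpack_from("<3f", self._data, i * 12)
                for i in range(count)]

cache = TargetCache()
cache.store([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])   # upload posed proxy vertices
low_positions = cache.read()                       # what the compute pass reads
```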
7. The method of any of claims 1-4, wherein determining vertex positions of patches in the second model using vertex positions of patches in the first model comprises:
searching a plurality of target vertexes in the second model, wherein the target vertexes are vertexes, located in a current display area of a target terminal, in the second model;
and calling a graphics processor, and performing parallel processing on the vertex positions of the patches in the first model to obtain the vertex positions of the target vertices in the second model.
8. The method of claim 7, wherein invoking a graphics processor to perform parallel processing on vertex positions of patches in the first model to obtain vertex positions of the target vertices in the second model comprises:
creating a plurality of first threads in the graphics processor, wherein the number of threads of the plurality of first threads is the same as the number of vertices of the plurality of target vertices;
and performing parallel processing on vertex positions of patches in the first model through the first threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the first threads is used for obtaining the vertex position of one target vertex, and the vertex positions of the target vertices obtained by any two threads in the first threads are different.
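Claim 8's scheme — thread count equal to the number of target vertices, each thread producing exactly one vertex position — can be sketched with a CPU thread pool standing in for GPU threads; `map_one_vertex` and the toy offset mapping are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def map_one_vertex(args):
    # Each worker handles exactly one target vertex (toy mapping:
    # posed proxy position plus a per-vertex offset).
    low_pos, offset = args
    return tuple(l + o for l, o in zip(low_pos, offset))

low_pos = (0.0, 1.0, 0.0)                      # posed proxy vertex
offsets = [(0.5, 0.0, 0.0), (0.0, 0.25, 0.0), (0.0, 0.0, 2.0)]
work = [(low_pos, off) for off in offsets]

# Worker count equals the number of target vertices: one vertex per thread.
with ThreadPoolExecutor(max_workers=len(work)) as pool:
    high_positions = list(pool.map(map_one_vertex, work))
```

`ThreadPoolExecutor.map` preserves input order, so each result lands at the index of its target vertex, mirroring the claim's one-thread-one-vertex assignment.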
9. The method of claim 7, wherein invoking a graphics processor to perform parallel processing on vertex positions of patches in the first model to obtain vertex positions of the target vertices in the second model comprises:
creating a plurality of second threads in the graphics processor, wherein a number of threads of the plurality of second threads is less than a number of vertices of the plurality of target vertices;
and processing vertex positions of patches in the first model in parallel through the second threads to obtain vertex positions of the target vertices in the second model, wherein each thread in the second threads is used for obtaining a vertex position of at least one target vertex, and the threads in the second threads are used for continuing processing to obtain a vertex position of another target vertex after obtaining the vertex position of one target vertex.
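Claim 9's variant — fewer threads than target vertices, each thread moving on to the next unprocessed vertex after finishing one — maps naturally onto a shared work queue. A sketch with hypothetical names, again using CPU threads in place of GPU threads:

```python
import queue
import threading

low_pos = (0.0, 1.0, 0.0)
offsets = [(0.5, 0.0, 0.0), (0.0, 0.25, 0.0),
           (0.0, 0.0, 2.0), (1.0, 0.0, 0.0)]

tasks = queue.Queue()
for idx, off in enumerate(offsets):
    tasks.put((idx, off))

results = [None] * len(offsets)

def worker():
    # Keep pulling vertices until the queue is drained, so a small
    # number of threads covers an arbitrary number of vertices.
    while True:
        try:
            idx, off = tasks.get_nowait()
        except queue.Empty:
            return
        results[idx] = tuple(l + o for l, o in zip(low_pos, off))

threads = [threading.Thread(target=worker) for _ in range(2)]  # 2 threads < 4 vertices
for t in threads:
    t.start()
for t in threads:
    t.join()
```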
10. An apparatus for rendering a virtual object, comprising:
an obtaining unit, configured to obtain a first model and a second model, wherein the first model and the second model are used for representing a target object in a virtual scene, and the patch precision of the first model is lower than that of the second model;
the simulation unit is used for simulating the first model through a physical engine to determine a target posture of the target object;
a determining unit, configured to determine vertex positions of patches in the second model by using vertex positions of patches in the first model, where the vertex positions of the patches in the first model are vertex positions of the first model in the target pose;
and the rendering unit is used for rendering the target object in the target posture by utilizing the second model.
11. The apparatus of claim 10, wherein the determining unit comprises:
the first searching module is used for searching a target patch associated with a target vertex in the second model from the first model, wherein the target vertex is a vertex of a current vertex position to be determined in the second model;
and the first determining module is used for determining the vertex position of the target vertex according to the association relation between the target vertex and the target patch and the vertex positions of all the vertices of the target patch.
12. The apparatus of claim 11, wherein the target patch is a triangular patch having three vertices, and wherein the determining module is further configured to:
determining the vertex coordinates P' of the target vertex according to the following formula for describing the first relationship:
P′ = A + α·(B − A) + β·(C − A) + γ·((B − A) × (C − A)) / ‖(B − A) × (C − A)‖
wherein A is the coordinate of the first vertex of the triangular patch, B is the coordinate of the second vertex of the triangular patch, C is the coordinate of the third vertex of the triangular patch, α, β and γ are set parameters, (B − A) represents the vector from the first vertex to the second vertex, (C − A) represents the vector from the first vertex to the third vertex, and ‖·‖ represents taking the norm of the enclosed vector, wherein the association relation comprises the first relation.
13. The apparatus of claim 11, wherein the target patch is a triangular patch having three vertices, and wherein the determining module is further configured to:
determining a vertex normal N′ of the target vertex according to the following formula for describing the second relation:
N′ = α·N_A + β·N_B + γ·N_C
wherein N_A represents the vertex normal of the first vertex in the triangular patch, N_B represents the vertex normal of the second vertex in the triangular patch, N_C represents the vertex normal of the third vertex in the triangular patch, α, β and γ are set parameters, and the association relation comprises the second relation.
14. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 9.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 9 by means of the computer program.
CN202010583263.5A 2020-06-23 2020-06-23 Rendering method and device of virtual object, storage medium and electronic device Pending CN111773719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010583263.5A CN111773719A (en) 2020-06-23 2020-06-23 Rendering method and device of virtual object, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN111773719A true CN111773719A (en) 2020-10-16

Family

ID=72757072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010583263.5A Pending CN111773719A (en) 2020-06-23 2020-06-23 Rendering method and device of virtual object, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111773719A (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140663A (en) * 2007-10-16 2008-03-12 中国科学院计算技术研究所 Clothing cartoon computation method
CN107393019A (en) * 2017-07-31 2017-11-24 天堃众联科技(深圳)有限公司 A kind of cloth simulation method and apparatus based on particle
US10022628B1 (en) * 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
CN109087369A (en) * 2018-06-22 2018-12-25 腾讯科技(深圳)有限公司 Virtual objects display methods, device, electronic device and storage medium
CN109377542A (en) * 2018-09-28 2019-02-22 国网辽宁省电力有限公司锦州供电公司 Threedimensional model rendering method, device and electronic equipment
CN109448099A (en) * 2018-09-21 2019-03-08 腾讯科技(深圳)有限公司 Rendering method, device, storage medium and the electronic device of picture
CN109976827A (en) * 2019-03-08 2019-07-05 北京邮电大学 Loading method, server and the terminal of model
CN110570507A (en) * 2019-09-11 2019-12-13 珠海金山网络游戏科技有限公司 Image rendering method and device
CN110694276A (en) * 2019-10-14 2020-01-17 北京代码乾坤科技有限公司 Physical effect simulation method, physical effect simulation device, storage medium, processor, and electronic device
CN111028320A (en) * 2019-12-11 2020-04-17 腾讯科技(深圳)有限公司 Cloth animation generation method and device and computer readable storage medium
CN111145326A (en) * 2019-12-26 2020-05-12 网易(杭州)网络有限公司 Processing method of three-dimensional virtual cloud model, storage medium, processor and electronic device
CN111167120A (en) * 2019-12-31 2020-05-19 网易(杭州)网络有限公司 Method and device for processing virtual model in game


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200895A (en) * 2020-12-07 2021-01-08 江苏原力数字科技股份有限公司 Digital human cloth real-time resolving method based on deep learning
CN112200895B (en) * 2020-12-07 2021-04-16 江苏原力数字科技股份有限公司 Digital human cloth real-time resolving method based on deep learning
CN113706681A (en) * 2021-07-30 2021-11-26 华为技术有限公司 Image processing method and electronic device
CN113706683A (en) * 2021-08-06 2021-11-26 网易(杭州)网络有限公司 Shadow processing method and device of virtual three-dimensional model and electronic device
CN113706683B (en) * 2021-08-06 2023-09-26 网易(杭州)网络有限公司 Shadow processing method and device for virtual three-dimensional model and electronic device
CN114053696A (en) * 2021-11-15 2022-02-18 完美世界(北京)软件科技发展有限公司 Image rendering processing method and device and electronic equipment
CN114053696B (en) * 2021-11-15 2023-01-10 完美世界(北京)软件科技发展有限公司 Image rendering processing method and device and electronic equipment
CN114401423A (en) * 2022-01-13 2022-04-26 上海哔哩哔哩科技有限公司 Data processing method and device
CN114401423B (en) * 2022-01-13 2023-12-12 上海哔哩哔哩科技有限公司 Data processing method and device

Similar Documents

Publication Publication Date Title
CN112652044B (en) Particle special effect rendering method, device, equipment and storage medium
CN111773719A (en) Rendering method and device of virtual object, storage medium and electronic device
US7209139B1 (en) Efficient rendering of similar objects in a three-dimensional graphics engine
CN107358649B (en) Processing method and device of terrain file
US6268861B1 (en) Volumetric three-dimensional fog rendering technique
CN112241993B (en) Game image processing method and device and electronic equipment
CN105913471B (en) The method and apparatus of picture processing
CN101473351A (en) Musculo-skeletal shape skinning
CN106296778A (en) Virtual objects motion control method and device
CN111773688B (en) Flexible object rendering method and device, storage medium and electronic device
CN113076152B (en) Rendering method and device, electronic equipment and computer readable storage medium
EP4394713A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
Ripolles et al. Real-time tessellation of terrain on graphics hardware
CN115641375A (en) Method, device, equipment, medium and program product for processing hair of virtual object
CN110930484B (en) Animation configuration method and device, storage medium and electronic device
CN116012507A (en) Rendering data processing method and device, electronic equipment and storage medium
JP2009020874A (en) Hair simulation method, and device therefor
US8952968B1 (en) Wave modeling for computer-generated imagery using intersection prevention on water surfaces
Dong et al. Real‐Time Large Crowd Rendering with Efficient Character and Instance Management on GPU
Wang et al. Dynamic modeling and rendering of grass wagging in wind
CN115099025A (en) Method for calculating fluid flow speed in fluid model, electronic device and storage medium
CN114882153A (en) Animation generation method and device
US20070115279A1 (en) Program, information storage medium, and image generation system
Bao et al. Billboards for tree simplification and real-time forest rendering
US7724255B2 (en) Program, information storage medium, and image generation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination