CN117237509A - Virtual model display control method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117237509A
Authority
CN
China
Prior art keywords
virtual model
virtual
determining
area
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311043342.7A
Other languages
Chinese (zh)
Inventor
张美琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311043342.7A priority Critical patent/CN117237509A/en
Publication of CN117237509A publication Critical patent/CN117237509A/en
Pending legal-status Critical Current

Landscapes

  • Image Generation (AREA)

Abstract

The present disclosure provides a display control method and apparatus for a virtual model, an electronic device, and a computer-readable storage medium, and relates to the field of computer technology. The method includes the following steps: determining an area ratio of the UV area to the surface area of a first virtual model observed by a virtual camera; determining a texel scale of the first virtual model according to the distance between the first virtual model and the virtual camera; determining texture tiling parameters for the first virtual model according to the area ratio and the texel scale; and displaying the first virtual model according to the texture tiling parameters. The display control method provided by the embodiments of the present disclosure combines the area of the virtual model with its distance from the virtual camera to determine the model's texel scale, so that the texture appearance of the virtual model adapts to the texel scale, matches the look and feel of the real world, and improves the texture expressiveness of the virtual model.

Description

Virtual model display control method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a display control method and device of a virtual model, electronic equipment and a computer readable storage medium.
Background
At present, once the position and UV coordinates of a virtual model's texture map have been manually adjusted, the map remains essentially unchanged throughout the production of the virtual model, and texture painting is carried out on that basis.
As the morphology of the virtual model changes, or as the conditions under which it is observed change, it is desirable for the virtual model to exhibit different texture appearances so as to match the look and feel of the real world. At present, this effect can be achieved either by producing separate maps adapted to each condition, or by optimizing the texture at render time. However, producing multiple maps in advance for a single virtual model incurs high labor and time costs, while optimizing the texture at render time requires a separate optimization scheme for each virtual model, which further increases rendering difficulty and time. In short, it is difficult with current techniques to make a virtual model exhibit different textures in response to changes in its morphology or observation conditions.
Disclosure of Invention
The present disclosure provides a display control method and apparatus for a virtual model, an electronic device, and a computer-readable storage medium, so as to solve, or at least partially solve, the problems described above, specifically as follows.
In a first aspect, the present disclosure provides a display control method of a virtual model, the method including:
determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera;
determining a texel scaling of the first virtual model according to a distance between the first virtual model and the virtual camera;
determining texture tiling parameters for the first virtual model according to the area ratio and the texel scaling;
and displaying the first virtual model according to the texture tiling parameters.
In a second aspect, an embodiment of the present disclosure further provides a display control apparatus for a virtual model, including:
a first determination module for determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera;
a second determining module, configured to determine a texel scaling of the first virtual model according to a distance between the first virtual model and the virtual camera;
a third determining module, configured to determine texture tiling parameters for the first virtual model according to the area ratio and the texel scaling;
and a display module, configured to display the first virtual model according to the texture tiling parameters.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory, and computer program instructions stored on the memory and executable on the processor;
the processor, when executing the computer program instructions, implements the display control method of the virtual model as described in the first aspect above.
In a fourth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium having stored therein computer program instructions, which when executed by a processor, are configured to implement a display control method for a virtual model as described in the first aspect above.
Compared with the prior art, the method has the following beneficial effects:
in an embodiment of the present disclosure, the area ratio of the UV area to the surface area of a first virtual model observed by a virtual camera may first be determined; the texel scale of the first virtual model may then be determined according to the distance between the first virtual model and the virtual camera; texture tiling parameters for the first virtual model may be determined according to the area ratio and the texel scale; and the first virtual model may be displayed according to the texture tiling parameters. The display control method provided by the embodiments of the present disclosure thus combines the area of the virtual model with its distance from the virtual camera to determine the model's texel scale, so that the texture appearance of the virtual model adapts to the texel scale, matches the look and feel of the real world, and improves the texture expressiveness of the virtual model. In addition, the method does not require producing multiple maps of different sizes for different distances in advance for a single virtual model, which reduces the labor and time cost of map production; nor does it require optimizing model textures at render time, since the texture appearance is regulated by adjusting the texture tiling parameters before rendering, which reduces rendering difficulty and time.
Drawings
FIG. 1 is a flowchart of a method for controlling display of a virtual model according to an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating another method of controlling the display of a virtual model provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram showing the comparison of effects of a first virtual model before and after movement according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram showing comparison of effects of a first virtual model before and after scaling down according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a display control apparatus of a virtual model provided by an embodiment of the present disclosure;
fig. 6 illustrates a schematic logical structure of an electronic device for implementing virtual model display control according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings; it is apparent that the described embodiments are only some, not all, embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure as claimed, but is merely representative of selected embodiments of the disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of this disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
It should be understood that in the embodiments of the present disclosure, "at least one" means one or more, and "a plurality" means two or more. "And/or" merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. "Comprising A, B and/or C" means comprising any one of A, B, and C, any two of them, or all three.
It should be understood that in the disclosed embodiments, "B corresponding to a", "a corresponding to B", or "B corresponding to a", means that B is associated with a from which B may be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information.
The display control method of the virtual model provided in the embodiment of the present disclosure may be run on a terminal device or a server, where the terminal device includes, for example, a desktop computer, a notebook computer, and the server includes, for example, a server device, a cloud server, and the like.
Fig. 1 shows a flowchart of a display control method of a virtual model according to an embodiment of the present disclosure, and as shown in fig. 1, the display control method of a virtual model may include the following steps S101 to S104.
Step S101: an area ratio of UV area to surface area of the first virtual model viewed by the virtual camera is determined.
In the disclosed embodiment, the first virtual model is in a virtual space, and a virtual camera is also placed in the virtual space, through which the first virtual model in the virtual space can be observed. Wherein the first virtual model may be any one of a character virtual model, an object virtual model, etc., and the embodiments of the present disclosure are not intended to be limited thereto.
By performing a UV unwrapping operation on the first virtual model, the first virtual model can be unfolded in UV space; the area occupied in UV space by all of the faces that make up the first virtual model is the UV area of the first virtual model.
The surface area of the first virtual model is the actual surface area of the model.
In this step, the ratio of the UV area to the surface area of the first virtual model may be computed to obtain the area ratio.
Step S102: and determining the texel scaling of the first virtual model according to the distance between the first virtual model and the virtual camera.
A texel, i.e., a texture pixel, is a single pixel of a texture map, as distinct from a screen pixel. The texel scale of the first virtual model is the texel scale of the first virtual model's texture map.
When the distance between the first virtual model and the virtual camera changes, the texel scale of the first virtual model may differ. When the first virtual model is close to the virtual camera, its texture appears finer; when it is far from the virtual camera, its texture fineness decreases accordingly.
Step S103: texture tiling parameters for the first virtual model are determined based on the area ratio and the texel scale.
In this step, the number of repetitions of the texture of the first virtual model in the U-direction and the V-direction, respectively, may be adjusted according to the above-described area ratio and texel scaling. The different number of repetitions of the texture of the first virtual model in the U-direction and the V-direction, respectively, may result in the first virtual model exhibiting different texture accuracy.
The U direction and the V direction are two coordinate axis directions of the UV space, and the above-mentioned UV unfolding operation is to unfold the virtual model in the UV space.
Step S104: and displaying the first virtual model according to the texture tiling parameters.
In this step, the first virtual model may be displayed according to the adjusted numbers of texture repetitions in the U direction and the V direction, so that the first virtual model exhibits different levels of texture fineness, that is, different texture precision, when its morphology or observation conditions change.
In an embodiment of the present disclosure, the area ratio of the UV area to the surface area of a first virtual model observed by a virtual camera may first be determined; the texel scale of the first virtual model may then be determined according to the distance between the first virtual model and the virtual camera; texture tiling parameters for the first virtual model may be determined according to the area ratio and the texel scale; and the first virtual model may be displayed according to the texture tiling parameters. The display control method provided by the embodiments of the present disclosure thus combines the area of the virtual model with its distance from the virtual camera to determine the model's texel scale, so that the texture appearance of the virtual model adapts to the texel scale, matches the look and feel of the real world, and improves the texture expressiveness of the virtual model. In addition, the method does not require producing multiple maps of different sizes for different distances in advance for a single virtual model, which reduces the labor and time cost of map production; nor does it require optimizing model textures at render time, since the texture appearance is regulated by adjusting the texture tiling parameters before rendering, which reduces rendering difficulty and time.
Fig. 2 is a flowchart illustrating another method for controlling display of a virtual model according to an embodiment of the present disclosure, and as shown in fig. 2, the method for controlling display of a virtual model may include the following steps S201 to S207.
Step S201: an area ratio of UV area to surface area of the first virtual model viewed by the virtual camera is determined.
In this step, first, a UV space of the first virtual model observed by the virtual camera may be acquired, wherein the first virtual model may be expanded in the UV space.
Each face of the first virtual model may then be traversed and the UV area of each face determined from its UV coordinates in UV space. Wherein the virtual model may be composed of faces (Polygon, polygons), and in three-dimensional computer graphics, the basic units constructing the 3D model include points, lines, faces.
In a specific application, each face of the first virtual model may be traversed using an iterator.
Each surface of the first virtual model is a polygon, each vertex of the polygon corresponds to a UV coordinate in UV space, and the area of each surface in UV space, that is, the UV area, can be calculated according to the UV coordinate of each vertex of each surface of the first virtual model.
Then, the UV areas of each surface may be accumulated to obtain the UV area of the first virtual model.
For the surface area of the first virtual model, a similar manner of determining the UV area as described above may also be used.
Specifically, each face of the first virtual model is traversed first, and the surface area of each face is determined; and then, accumulating the surface areas of the surfaces to obtain the surface area of the first virtual model.
After the UV area and the surface area of the first virtual model have been determined, the ratio of the UV area to the surface area may be taken as the desired area ratio.
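The per-face traversal and accumulation described above can be sketched in plain Python outside any DCC tool. This is a minimal illustration assuming triangular faces and hypothetical data structures (`uv_face_area`, `area_ratio`, etc. are illustrative names, not the patent's actual implementation):

```python
import math

def uv_face_area(uv_coords):
    # Shoelace formula: area of a simple polygon from its UV vertex coordinates.
    area = 0.0
    n = len(uv_coords)
    for i in range(n):
        u1, v1 = uv_coords[i]
        u2, v2 = uv_coords[(i + 1) % n]
        area += u1 * v2 - u2 * v1
    return abs(area) / 2.0

def triangle_surface_area(p0, p1, p2):
    # Half the magnitude of the cross product of two edge vectors.
    ax, ay, az = (p1[i] - p0[i] for i in range(3))
    bx, by, bz = (p2[i] - p0[i] for i in range(3))
    cx = ay * bz - az * by
    cy = az * bx - ax * bz
    cz = ax * by - ay * bx
    return math.sqrt(cx * cx + cy * cy + cz * cz) / 2.0

def area_ratio(faces):
    # faces: list of (uv_coords, world_coords) per triangular face.
    # Accumulate UV areas and surface areas over all faces, then take the ratio.
    uv_total = sum(uv_face_area(uv) for uv, _ in faces)
    surf_total = sum(triangle_surface_area(*pts) for _, pts in faces)
    return uv_total / surf_total
```

For example, a single triangle occupying half a unit square in UV space but spanning 2 square units in world space yields an area ratio of 0.25.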
Step S202: a distance of the first virtual model to the virtual camera is determined.
In one embodiment of the present disclosure, this step may be specifically realized by the following steps S2021 to S2023.
Step S2021: the method comprises the steps of obtaining a direction vector from a virtual camera to a first virtual model and position coordinates of the virtual camera.
A world coordinate system is provided in the virtual space where the virtual camera is located, and the origin of the world coordinate system is the center of the virtual space. In this embodiment, the direction vector of the virtual camera to the first virtual model may be represented under the world coordinate system.
Step S2022: a first vector of the first virtual model relative to the virtual camera is determined based on the position coordinates of the virtual camera.
Illustratively, in the virtual space provided by Maya software (a 3D content-creation application), an object has at least two position attributes: world coordinates (worldMatrix), representing the position of the object in the world coordinate system, and parent node coordinates (parentMatrix), representing the position of the object's direct parent in the world coordinate system. In the virtual space provided by Maya, both virtual models and virtual cameras are treated as objects.
In this step, the position coordinates of the virtual camera are its world coordinates in the world coordinate system.
Then, a first vector of the first virtual model relative to the virtual camera in a world coordinate system is determined based on the parent node coordinates of the first virtual model, the offset of the first virtual model relative to its immediate parent object, and the position coordinates of the virtual camera.
Specifically, the parent node coordinates of the first virtual model and the offset of the first virtual model relative to the direct parent object thereof are added, and the position coordinates of the virtual camera are subtracted, so that a first vector of the first virtual model relative to the virtual camera can be obtained.
Step S2023: the dot product of the first vector and the direction vector is determined as the distance from the first virtual model to the virtual camera.
In this step, the dot product operation may be performed on the first vector and the direction vector, so as to obtain the distance d from the first virtual model to the virtual camera.
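Steps S2022 and S2023 can be sketched as a short Python function. This is an illustrative sketch (the function and parameter names are assumptions), which presumes the camera's direction vector is a unit vector so that the dot product projects the first vector onto the view direction:

```python
def camera_distance(model_parent_coords, model_offset, camera_pos, camera_dir):
    # Step S2022: first vector = parent node coordinates + offset from the
    # direct parent, minus the camera's world position.
    first = [model_parent_coords[i] + model_offset[i] - camera_pos[i]
             for i in range(3)]
    # Step S2023: dot product with the (unit) view direction gives distance d.
    return sum(first[i] * camera_dir[i] for i in range(3))
```

For a camera at the origin looking along +Z, a model whose parent sits at z = 3 with a local offset of z = 2 is at distance 5.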
Step S203: and determining the rendering pixel density of the first virtual model according to the distance from the first virtual model to the virtual camera, the view angle size of the virtual camera and the rendering resolution.
In this step, the rendered pixel density of the first virtual model may be determined by the following formula.
ppi = 1 × 2.54 / (p × 2 × d × tan(π/180 × a/2)) (equation 1)
where ppi represents the rendering pixel density of the first virtual model, p represents the aspect ratio of the rendered image at the rendering resolution, d represents the distance from the first virtual model to the virtual camera, tan represents the tangent function, π represents pi, and a represents the field-of-view angle of the virtual camera.
The π/180 × a operation converts the angle from degrees to radians.
The above formula 1 references the calculation mode of the screen pixel density.
Illustratively, taking as an example computing the rendering pixel density of the first virtual model with a script written in Maya, the field-of-view angle a of the virtual camera in Maya may be 60°, the rendering resolution may be 1920×1280, and the corresponding rendered-image aspect ratio is p = 1920/1280 = 1.5.
Accordingly, the above formula 1 becomes the following formula 2.
ppi = 1 × 2.54 / (1.5 × 2 × d × tan(π/180 × 60/2)) (equation 2)
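The formula above can be evaluated directly in Python. This is a sketch, assuming the π/180 factor converts the half field-of-view angle a/2 from degrees to radians, as the surrounding text indicates (the function name and default arguments are illustrative):

```python
import math

def render_pixel_density(d, fov_deg=60.0, aspect=1.5):
    # Rendering pixel density of the model at distance d from the camera:
    # ppi = 1 * 2.54 / (p * 2 * d * tan(pi/180 * a/2))
    return 1 * 2.54 / (aspect * 2 * d * math.tan(math.pi / 180 * fov_deg / 2))
```

As expected, the density falls off as the model moves farther from the camera, so a distant model is assigned a coarser texel scale.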
Step S204: a texel scale of the first virtual model is determined based on the rendered pixel density.
In this step, the texel scaling of the first virtual model is determined according to the rendered pixel density ppi and the preset value.
Specifically, a ratio of the rendered pixel density to a preset value may be determined as the texel scale.
The preset value may be, for example, equal to 2.54/√2, i.e., the ratio of 2.54 to the square root of 2. Based on this example preset value, the texel scale is calculated as follows.
pix_scale = ppi / (2.54/√2) (equation 3)
where pix_scale represents the texel scale of the first virtual model and ppi represents the rendering pixel density of the first virtual model.
Step S205: texture tiling parameters for the first virtual model are determined based on the area ratio and the texel scale.
In one embodiment of the present disclosure, the ratio of the texel scale to the square root of the area ratio may be determined as a first parameter, as follows.
real_pix_scale = pix_scale / √(uv_ratio) (equation 4)
where real_pix_scale represents the first parameter, pix_scale represents the texel scale, and uv_ratio represents the area ratio.
Then, determining the ratio of the first parameter to the preset rendering pixel number as a texture tiling parameter of the first virtual model, wherein the formula is as follows.
repeat=real_pix_scale/pix_value (equation 5)
Wherein repeat represents texture tiling parameters of the first virtual model, real_pix_scale represents the first parameters, pix_value represents a preset number of rendering pixels.
In one embodiment, the preset number of rendering pixels may be related to a required rendering quality, and the higher the required rendering quality, the larger the value of the preset number of rendering pixels may be.
Illustratively, the preset number of rendering pixels may be equal to 256.
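The chain from pixel density to tiling parameter — texel scale as the ratio of ppi to the preset value 2.54/√2, the first parameter as the texel scale over the square root of the area ratio, and the tiling parameter as the first parameter over the preset number of rendering pixels — can be sketched as one small pipeline. Names and default constants are illustrative, following the example values given in the text:

```python
import math

PRESET = 2.54 / math.sqrt(2)   # example preset value: 2.54 over the square root of 2
PIX_VALUE = 256                # example preset number of rendering pixels

def texture_repeat(ppi, uv_ratio, pix_value=PIX_VALUE):
    pix_scale = ppi / PRESET                          # texel scale
    real_pix_scale = pix_scale / math.sqrt(uv_ratio)  # first parameter
    return real_pix_scale / pix_value                 # texture tiling parameter
```

Note that quadrupling the area ratio halves the tiling parameter, since the first parameter divides by the square root of the area ratio.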
Step S206: and adjusting attribute values of texture tiling frequency attributes of texture coordinate nodes contained in the first virtual model in the U direction and the V direction respectively into texture tiling parameters.
The texture coordinate node may indicate a placement location of the texture map on the virtual model. In a particular application, multiple texture maps may be connected to the same texture coordinate node, which indicates that the placement of the multiple texture maps on the virtual model is consistent. That is, one texture map would correspond to one texture coordinate node, but one texture coordinate node may correspond to at least one texture map.
Each texture coordinate node is configured with corresponding texture tiling count attributes; by adjusting the values of these attributes, the texture precision of the first virtual model can be changed.
Illustratively, in Maya software the texture coordinate node is the place2dTexture node. The tiling count attributes of a place2dTexture node are the repeatU attribute for the U direction and the repeatV attribute for the V direction. The repeatU value of a place2dTexture node indicates the number of texture repetitions in the U direction for all texture maps connected to that node, and the repeatV value indicates the number of texture repetitions in the V direction for all texture maps connected to that node.
In one embodiment of this step, the place2dTexture nodes contained in the first virtual model may be iterated over, the repeatU attribute value of each such node adjusted to repeat from equation 5 above, and the repeatV attribute value of each such node likewise adjusted to repeat from equation 5.
Step S207: and rendering and displaying the first virtual model according to the adjusted attribute value.
Rendering the first virtual model according to the repeat U attribute values and the repeat V attribute values which are adjusted by all the place2dTexture nodes contained in the first virtual model, and then displaying the rendered first virtual model in a window for observing the virtual space in Maya software.
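Applying the computed tiling parameter to every place2dTexture node can be sketched as follows. The setter is injected so the sketch runs outside Maya; inside Maya it would be `maya.cmds.setAttr` (the helper name and structure are assumptions, not the patent's actual script):

```python
def apply_tiling(place2d_nodes, repeat, set_attr):
    # Set both tiling count attributes of each texture coordinate node to the
    # computed texture tiling parameter `repeat`.
    for node in place2d_nodes:
        set_attr(node + ".repeatU", repeat)
        set_attr(node + ".repeatV", repeat)

# Inside Maya (illustrative usage, assuming the standard cmds API):
# from maya import cmds
# apply_tiling(cmds.ls(type="place2dTexture"), repeat, cmds.setAttr)
```

Injecting the setter also makes the logic easy to test by recording the attribute writes into a dictionary.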
Furthermore, in one embodiment of the present disclosure, each frame captured by the virtual camera is rendered before being displayed, and before each render, texture tiling parameters may be computed for all virtual models appearing in the current field of view of the virtual camera (whether or not a model has been moved, scaled, and so on). In this way, every virtual model in the camera's field of view is guaranteed a good, adaptive texture appearance.
In yet another embodiment of the present disclosure, before determining the area ratio of the UV area to the surface area of the first virtual model, the display control method of the virtual model may further include the steps of:
a transformation instruction for a first virtual model in a virtual model observed by a virtual camera is received.
Accordingly, in response to the transformation instruction, an area ratio of the UV area to the surface area of the first virtual model is determined.
And after the texture tiling parameters are determined, executing transformation operation corresponding to the transformation instruction on the first virtual model, and displaying the transformed first virtual model according to the texture tiling parameters.
Illustratively, the transformation instructions include at least one of: a move instruction, a zoom-out instruction, and a zoom-in instruction.
In this embodiment, there may be a plurality of virtual models in the field of view of the virtual camera. When the first virtual model is moved, scaled, or otherwise transformed, the texture tiling parameters may be recalculated only for the first virtual model; the other virtual models in the field of view that are neither moved nor occluded may be displayed with their original texture tiling parameters, without recalculation.
Because the observer pays more attention to the first virtual model while it is being transformed, and less attention to the other virtual models in the camera's field of view, only the texture tiling parameters of the transformed first virtual model need be recalculated, which reduces the computation required for the whole picture.
Fig. 3 is a schematic diagram comparing the effects of the first virtual model A before and after it is moved. In Fig. 3, the change in the virtual model's texture precision is represented by the change in the color blocks on the model: the smaller and denser the color blocks, the higher the texture precision. As shown in Fig. 3 (a) and 3 (b), after the first virtual model A moves closer to the virtual camera, its texture precision becomes higher, that is, its texture fineness increases.
Fig. 4 is a schematic diagram comparing the effects of the first virtual model A before and after it is scaled down. As in Fig. 3, the change in texture precision is represented by the change in the color blocks on the model: the smaller and denser the color blocks, the higher the texture precision. As shown in Fig. 4 (a) and 4 (b), after the first virtual model A is scaled down, its texture precision becomes lower, that is, its texture fineness decreases.
In another embodiment of the present disclosure, before determining the area ratio of the UV area to the surface area of the first virtual model, the display control method of the virtual model may further include the steps of:
a movement instruction for a virtual camera is received.
Accordingly, in response to the movement instruction, the virtual camera movement is controlled and an area ratio of the UV area to the surface area of the first virtual model currently being viewed by the virtual camera is determined.
In this embodiment, the texture tiling parameters may be recalculated for all virtual models in the current field of view of the virtual camera only when the virtual camera is moving, and may be calculated for only the virtual models transformed in the field of view when the virtual camera is not moving, which may also reduce the amount of calculation for the entire picture.
When the virtual camera moves, all virtual models currently reserved in the field of view are equivalent to scaling transformation, so that texture tiling parameters can be recalculated for all virtual models reserved in the current field of view of the virtual camera.
Of course, a virtual model newly entering the virtual camera's field of view must be displayed in the next frame, so its texture tiling parameters need to be calculated before the next frame is rendered. A virtual model leaving the field of view does not need to be displayed in the next frame, so its texture tiling parameters need not be recalculated before the next frame is rendered.
Execution of the display control method provided by the embodiments of the present disclosure is not limited to any particular production stage of the virtual model, such as the look-development stage or the verification stage for assets and illumination, most of which lie at the front end of a three-dimensional production pipeline. Since the method is not tied to the period in which the virtual model is produced, the texture performance of the virtual model can be optimized before rendering, rather than during or after rendering.
According to the display control method for a virtual model provided by the embodiments of the present disclosure, the area of the virtual model and its distance from the virtual camera can be combined to determine the texel scaling of the virtual model, so that the texture performance of the virtual model changes adaptively with the texel scaling. The texture of the virtual model thus matches its real-world look and feel, and its texture expressiveness is improved. In addition, the labor, time and other costs consumed in producing maps can be saved, and the difficulty and time consumption of rendering are reduced.
Corresponding to the display control method of the virtual model provided in the embodiment of the present disclosure, the embodiment of the present disclosure further provides a display control apparatus 700 of the virtual model. As shown in fig. 5, the apparatus 700 includes:
a first determining module 701 for determining an area ratio of UV area to surface area of the first virtual model as observed by the virtual camera;
a second determining module 702, configured to determine a texel scaling of the first virtual model according to a distance between the first virtual model and the virtual camera;
a third determining module 703, configured to determine texture tiling parameters for the first virtual model according to the area ratio and the texel scaling;
and a display module 704, configured to display the first virtual model according to the texture tiling parameter.
Optionally, the second determining module 702 includes:
a first determining unit configured to determine a distance from the first virtual model to the virtual camera;
a second determining unit, configured to determine a rendering pixel density of the first virtual model according to a distance from the first virtual model to the virtual camera, and a view angle size and a rendering resolution of the virtual camera;
and a third determining unit, configured to determine a texel scaling of the first virtual model according to the rendering pixel density.
Optionally, the third determining unit is specifically configured to:
and determining the ratio of the rendering pixel density to a preset value as the texel scaling.
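As an illustration of the second determining module, the steps might be sketched as follows. The concrete pixel-density formula (frustum height derived from the view angle at the model's distance) and the names `texel_scale` and `preset_density` are assumptions for the sketch; the embodiment only specifies that the density is derived from the distance, the view angle size and the rendering resolution, and that the texel scaling is the ratio of that density to a preset value:

```python
import math

def texel_scale(distance, fov_deg, resolution_px, preset_density=100.0):
    """Sketch: rendering pixel density as on-screen pixels per world unit at
    the model's distance, then the texel scaling as that density divided by
    a preset reference value."""
    # World-space height of the view frustum at `distance`, from the
    # camera's vertical view angle.
    frustum_height = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    pixel_density = resolution_px / frustum_height  # pixels per world unit
    return pixel_density / preset_density
```

Under this sketch, a model twice as far away receives half the pixel density and therefore half the texel scaling, which is the adaptive behavior the method aims for.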
Optionally, the first determining unit is specifically configured to:
obtaining a direction vector from the virtual camera to the first virtual model and a position coordinate of the virtual camera;
determining a first vector of the first virtual model relative to the virtual camera according to the position coordinates of the virtual camera;
a dot product result of the first vector and the direction vector is determined as a distance of the first virtual model to the virtual camera.
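The distance computation of the first determining unit can be sketched directly from the description: form the "first vector" of the model relative to the camera, then project it onto the camera's direction vector with a dot product. The function and argument names are illustrative, and the direction vector is assumed to be normalized:

```python
def camera_distance(camera_pos, camera_dir, model_pos):
    """Distance from the first virtual model to the virtual camera along the
    viewing direction. `camera_dir` is the unit direction vector from the
    camera toward the model; the first vector is the model position relative
    to the camera, and the dot product projects it onto that direction."""
    first_vector = [m - c for m, c in zip(model_pos, camera_pos)]
    return sum(f * d for f, d in zip(first_vector, camera_dir))
```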
Optionally, the first determining module 701 includes:
an acquisition unit configured to acquire a UV space of the first virtual model observed by the virtual camera;
a first traversing unit, configured to traverse each surface of the first virtual model, and determine a UV area of each surface according to UV coordinates of each surface in the UV space;
the first accumulation unit is used for accumulating the UV area of each surface to obtain the UV area of the first virtual model;
a second traversing unit, configured to traverse each of the faces of the first virtual model, and determine a surface area of each of the faces;
a second accumulating unit, configured to accumulate the surface area of each surface to obtain the surface area of the first virtual model;
and a fourth determining unit, configured to determine a ratio of a UV area to a surface area of the first virtual model as the area ratio.
Optionally, the third determining module 703 includes:
a fifth determining unit configured to determine a ratio of the texel scale to a square root of the area ratio as a first parameter;
and a sixth determining unit, configured to determine a ratio of the first parameter to a preset rendering pixel number as a texture tiling parameter of the first virtual model.
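The two-step computation of the third determining module reduces to a short formula. The interpretation of the "preset rendering pixel number" (for example, the texture's pixel size) and the default values are assumptions; the structure follows the fifth and sixth determining units directly:

```python
import math

def texture_tiling(texel_scale, area_ratio, preset_pixels=1024.0):
    """First parameter: the texel scaling divided by the square root of the
    area ratio; tiling parameter: the first parameter divided by a preset
    number of rendering pixels (an assumed interpretation, e.g. the
    texture's pixel size)."""
    first_parameter = texel_scale / math.sqrt(area_ratio)
    return first_parameter / preset_pixels
```

Note that the square root converts the area ratio (a squared quantity) back to a linear scale factor, so the tiling parameter varies linearly with the model's on-screen size.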
Optionally, the display module 704 includes:
the adjusting unit is used for adjusting attribute values of texture tiling frequency attributes of texture coordinate nodes contained in the first virtual model in the U direction and the V direction into the texture tiling parameters;
and the rendering display unit is used for rendering and displaying the first virtual model according to the adjusted attribute value.
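As a rough illustration of the adjusting unit, assuming the engine exposes the texture coordinate node as a plain attribute dictionary (a stand-in for any engine-specific material node graph; the key names are hypothetical):

```python
def apply_tiling(material, tiling):
    """Set the tiling-frequency attribute of the texture coordinate node in
    both the U and V directions to the computed texture tiling parameter;
    the model can then be rendered with the adjusted attribute values."""
    node = material["texcoord_node"]
    node["tiling_u"] = tiling
    node["tiling_v"] = tiling
    return material
```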
Optionally, the apparatus 700 further includes:
a first receiving module, configured to receive a transformation instruction for the first virtual model in a virtual model observed by the virtual camera;
the first determining module 701 is specifically configured to:
determining an area ratio of UV area to surface area of the first virtual model in response to the transformation instruction;
the display module 704 is specifically configured to:
and executing the transformation operation corresponding to the transformation instruction on the first virtual model, and displaying the transformed first virtual model according to the texture tiling parameters.
Optionally, the transformation instruction includes at least one of: a move instruction, a zoom-out instruction, and a zoom-in instruction.
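A possible way to combine the transformation handling with the recalculation can be sketched as follows, assuming simple dictionary-based models and a `compute_tiling` callback standing in for the parameter determination described in the method; all names and the instruction format are illustrative assumptions:

```python
def handle_transform(model, instruction, camera, compute_tiling):
    """Apply the transformation corresponding to a move / zoom-in / zoom-out
    instruction, recompute the texture tiling parameter for the transformed
    model, and return the model ready for display."""
    if instruction["type"] == "move":
        model["position"] = instruction["target"]
    elif instruction["type"] in ("zoom_in", "zoom_out"):
        model["scale"] *= instruction["factor"]
    # The transform changes the surface area and/or camera distance, so the
    # tiling parameter is recomputed before the model is displayed.
    model["tiling"] = compute_tiling(model, camera)
    return model
```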
Optionally, the apparatus 700 further includes:
the second receiving module is used for receiving a moving instruction aiming at the virtual camera;
the first determining module 701 is specifically configured to:
and in response to the movement instruction, controlling the virtual camera to move, and determining the area ratio of the UV area to the surface area of the first virtual model currently observed by the virtual camera.
In the embodiments of the present disclosure, the first determining module first determines the area ratio of the UV area to the surface area of the first virtual model observed by the virtual camera; the second determining module then determines the texel scaling of the first virtual model according to the distance between the first virtual model and the virtual camera; the third determining module determines the texture tiling parameters for the first virtual model according to the area ratio and the texel scaling; and the display module displays the first virtual model according to the texture tiling parameters. The display control apparatus for a virtual model provided by the embodiments of the present disclosure can combine the area of the virtual model and its distance from the virtual camera to determine the texel scaling, so that the texture performance of the virtual model changes adaptively with the texel scaling, matches its real-world look and feel, and gains expressiveness. In addition, the apparatus does not require producing multiple maps of different sizes for different distances in advance for a single virtual model, which reduces the labor and time cost of map production; nor does it require optimizing model textures at render time, since texture performance is regulated by adjusting the texture tiling parameters before rendering, which reduces the difficulty and time consumption of rendering.
Next, referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 800 may be deployed with the display control apparatus of the virtual model described in the embodiments of the present disclosure, for implementing the functions in the embodiments of the present disclosure. Specifically, the electronic device 800 includes: a receiver 801, a transmitter 802, a processor 803, and a memory 804 (where the number of processors 803 in the electronic device 800 may be one or more, with one processor taken as an example in fig. 6), and the processor 803 may include an application processor 8031 and a communication processor 8032. In some embodiments of the present disclosure, the receiver 801, transmitter 802, processor 803, and memory 804 may be connected by a bus or other means.
The memory 804 may include read-only memory and random access memory, and provides instructions and data to the processor 803. A portion of the memory 804 may also include non-volatile random access memory (NVRAM). The memory 804 stores operating instructions executable by the processor, executable modules or data structures, or a subset or extended set thereof, where the operating instructions may include various operating instructions for performing various operations.
The processor 803 controls the operation of the electronic device. In a specific application, the individual components of the electronic device are coupled together by a bus system, which may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. For clarity of illustration, however, the various buses are referred to in the figures as the bus system.
The methods disclosed in the embodiments of the present disclosure described above may be applied to the processor 803 or implemented by the processor 803. The processor 803 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry of hardware or instructions in software form in the processor 803. The processor 803 may be a general-purpose processor, a digital signal processor (digital signal processing, DSP), a microprocessor, or a microcontroller, and may further include an application specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The processor 803 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied directly in hardware, in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory 804; the processor 803 reads the information in the memory 804 and, in combination with its hardware, performs the steps of the above method.
The receiver 801 may be used to receive input numeric or character information and to generate signal inputs related to performing relevant settings and function control of the device. The transmitter 802 may be used to output numeric or character information through a first interface; the transmitter 802 may also be configured to send instructions to the disk group through the first interface to modify data in the disk group; the transmitter 802 may also include a display device such as a display screen.
In the embodiments of the present disclosure, the application processor 8031 in the processor 803 is configured to execute the display control method of the virtual model in the embodiments of the present disclosure. It should be noted that the specific manner in which the application processor 8031 performs each step is based on the same concept as the method embodiments of the present disclosure, so its technical effects are the same as those of the method embodiments; for details, refer to the description in the foregoing method embodiments, which is not repeated here.
The embodiments of the present disclosure also provide a chip for running instructions, configured to execute the technical solution of the display control method of the virtual model in the above embodiments.
The embodiments of the present disclosure further provide a computer-readable storage medium storing computer instructions which, when run on a processor, cause the processor to execute the technical solution of the display control method of the virtual model in the above embodiments.
The embodiments of the present disclosure also provide a computer program product, including a computer program which, when executed by a processor, executes the technical solution of the display control method of the virtual model in the above embodiments.
The computer readable storage medium described above may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose server.
It should be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes may be made without departing from its scope, which is limited only by the appended claims.
While the present disclosure has been described in terms of the preferred embodiments, they are not intended to limit the disclosure; any person skilled in the art can make variations and modifications without departing from the spirit and scope of the present disclosure, and the scope of protection shall therefore be defined by the claims of the present disclosure.

Claims (13)

1. A display control method of a virtual model, the method comprising:
determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera;
determining a texel scaling of the first virtual model according to a distance between the first virtual model and the virtual camera;
determining texture tiling parameters for the first virtual model according to the area ratio and the texel scaling;
and displaying the first virtual model according to the texture tiling parameters.
2. The method of claim 1, wherein the determining the texel scale of the first virtual model based on the distance between the first virtual model and the virtual camera comprises:
determining a distance from the first virtual model to the virtual camera;
determining a rendering pixel density of the first virtual model according to the distance from the first virtual model to the virtual camera, the view angle size and the rendering resolution of the virtual camera;
and determining the texel scaling of the first virtual model according to the rendering pixel density.
3. The method of claim 2, wherein the determining the texel scale of the first virtual model from the rendered pixel density comprises:
and determining the ratio of the rendering pixel density to a preset value as the texel scaling.
4. The method of claim 2, wherein the determining the distance of the first virtual model to the virtual camera comprises:
obtaining a direction vector from the virtual camera to the first virtual model and a position coordinate of the virtual camera;
determining a first vector of the first virtual model relative to the virtual camera according to the position coordinates of the virtual camera;
a dot product result of the first vector and the direction vector is determined as a distance of the first virtual model to the virtual camera.
5. The method of claim 1, wherein the determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera comprises:
acquiring a UV space of the first virtual model observed by the virtual camera;
traversing each of the faces of the first virtual model, and determining the UV area of each of the faces according to the UV coordinates of each of the faces in the UV space;
accumulating the UV area of each surface to obtain the UV area of the first virtual model;
traversing each of the faces of the first virtual model to determine a surface area of each of the faces;
accumulating the surface area of each surface to obtain the surface area of the first virtual model;
the ratio of the UV area to the surface area of the first virtual model is determined as the area ratio.
6. The method of claim 1, wherein the determining texture tiling parameters for the first virtual model from the area ratio and the texel scale comprises:
determining a ratio of the texel scale to the square root of the area ratio as a first parameter;
and determining the ratio of the first parameter to the preset rendering pixel number as a texture tiling parameter of the first virtual model.
7. The method of claim 1, wherein displaying the first virtual model according to the texture tiling parameters comprises:
adjusting attribute values of texture tiling frequency attributes of texture coordinate nodes contained in the first virtual model in the U direction and the V direction respectively to be the texture tiling parameters;
And rendering and displaying the first virtual model according to the adjusted attribute value.
8. The method of claim 1, wherein prior to said determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera, the method further comprises:
receiving a transformation instruction for the first virtual model in a virtual model observed by the virtual camera;
the determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera comprises:
determining an area ratio of UV area to surface area of the first virtual model in response to the transformation instruction;
displaying the transformed first virtual model according to the texture tiling parameters, including:
and executing the transformation operation corresponding to the transformation instruction on the first virtual model, and displaying the transformed first virtual model according to the texture tiling parameters.
9. The method of claim 8, wherein the transformation instruction comprises at least one of: a move instruction, a zoom-out instruction, and a zoom-in instruction.
10. The method of claim 1, wherein prior to said determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera, the method further comprises:
Receiving a movement instruction for the virtual camera;
the determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera comprises:
and in response to the movement instruction, controlling the virtual camera to move, and determining the area ratio of the UV area to the surface area of the first virtual model currently observed by the virtual camera.
11. A display control apparatus of a virtual model, the apparatus comprising:
a first determination module for determining an area ratio of UV area to surface area of the first virtual model as viewed by the virtual camera;
a second determining module, configured to determine a texel scaling of the first virtual model according to a distance between the first virtual model and the virtual camera;
a third determining module, configured to determine texture tiling parameters for the first virtual model according to the area ratio and the texel scaling;
and the display module is used for displaying the first virtual model according to the texture tiling parameters.
12. An electronic device, comprising: a processor, a memory, and computer program instructions stored on the memory and executable on the processor;
The processor, when executing the computer program instructions, implements a method for controlling the display of a virtual model as claimed in any one of the preceding claims 1 to 10.
13. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer program instructions, which when executed by a processor are adapted to implement a display control method of a virtual model according to any of the preceding claims 1 to 10.
CN202311043342.7A 2023-08-17 2023-08-17 Virtual model display control method and device, electronic equipment and storage medium Pending CN117237509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311043342.7A CN117237509A (en) 2023-08-17 2023-08-17 Virtual model display control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117237509A true CN117237509A (en) 2023-12-15

Family

ID=89093823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311043342.7A Pending CN117237509A (en) 2023-08-17 2023-08-17 Virtual model display control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117237509A (en)

Similar Documents

Publication Publication Date Title
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
US8970583B1 (en) Image space stylization of level of detail artifacts in a real-time rendering engine
CN115147579B (en) Block rendering mode graphic processing method and system for expanding block boundary
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN110428504B (en) Text image synthesis method, apparatus, computer device and storage medium
EP1519317A1 (en) Apparatus and method for antialiasing based on z buffer or normal vector information
CN112634414B (en) Map display method and device
CN112652046B (en) Game picture generation method, device, equipment and storage medium
US7466322B1 (en) Clipping graphics primitives to the w=0 plane
US8941660B2 (en) Image generating apparatus, image generating method, and image generating integrated circuit
CN111127590B (en) Second-order Bezier curve drawing method and device
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
US8462156B1 (en) Method and system for generating shadows in a graphics processing unit
CN114862999A (en) Dotting rendering method, dotting rendering device, dotting rendering equipment and storage medium
CN111091497A (en) Map vector line and plane thinning method, intelligent terminal and storage medium
CN113808243B (en) Drawing method and device for deformable snowfield grid
CN114663324A (en) Fusion display method of BIM (building information modeling) model and GIS (geographic information system) information and related components
CN113392246B (en) Drawing display method and device, storage medium and electronic equipment
JPWO2009104218A1 (en) Map display device
JP2000348206A (en) Image generating device and method for deciding image priority
US7525551B1 (en) Anisotropic texture prefiltering
CN117237509A (en) Virtual model display control method and device, electronic equipment and storage medium
CN107730577B (en) Line-hooking rendering method, device, equipment and medium
JP2003233836A (en) Image processor for conducting rendering shading processing by using distance component in modeling and its method
CN113674419B (en) Three-dimensional display method and device for meteorological cloud data, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination