CN116385291A - Texture rendering method and device, electronic equipment and medium - Google Patents

Texture rendering method and device, electronic equipment and medium

Info

Publication number
CN116385291A (application number CN202310286850.1A)
Authority
CN
China
Prior art keywords
texture
image
loading
image resource
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310286850.1A
Other languages
Chinese (zh)
Inventor
杨旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority application: CN202310286850.1A
Publication of CN116385291A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/90
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20212 — Image combination
    • G06T2207/20221 — Image fusion; Image merging

Abstract

The disclosure provides a texture rendering method, a texture rendering device, electronic equipment, and a medium. The texture rendering method comprises the following steps: constructing a first ray according to a first position of a virtual camera in a first sky box and a second position of a touch point; determining the loading priority of each face of the first sky box according to the first ray; loading the first image resources corresponding to each face in turn according to the loading priority of each face; each time a first image resource is loaded, generating a corresponding first texture image from that image resource; and rendering the first texture image onto the face of the first sky box corresponding to that image resource. In this way, the image resources within the user's view angle and visible range are loaded first and those outside the user's view angle afterwards, and the corresponding texture images are rendered in the same order, so that the user's waiting time is reduced and the user can enter the scene and preview it more quickly.

Description

Texture rendering method and device, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, and in particular relates to a texture rendering method, a texture rendering device, electronic equipment and a medium.
Background
A cube map is a combination of textures mapped into a single texture: it comprises six 2D (two-dimensional) textures, each 2D texture being one face of a cube; that is, a cube map is a cube with texture maps.
A skybox (sky box) is a cube that wraps the entire scene and is a technique implemented using a cube map; for example, a 3D (three-dimensional) exhibition hall, venue, or house showing can be implemented with a skybox. The skybox builds a surrounding environment from six images, giving the user the illusion of being in a scene much larger than it actually is. For example, some skybox images used in video games show mountains, clouds, or stars.
In the related art, a cube map or skybox is generated by loading six images at once, so a panoramic preview cannot be presented quickly.
Disclosure of Invention
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
The present disclosure provides a texture rendering method, apparatus, electronic device, and medium that load the image resources within the user's view angle or visible range first and the image resources outside the user's view angle afterwards, and render the corresponding texture images in the same order, so that the user's waiting time is reduced, the user can enter the scene and preview it more quickly, and the user experience is improved.
An embodiment of a first aspect of the present disclosure provides a texture rendering method, including:
constructing a first ray according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point;
determining the loading priority of each face of the first sky box according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the other faces of the first sky box;
loading the first image resources corresponding to each face in turn according to the loading priority of each face, and, in response to each first image resource being loaded, generating a corresponding first texture image from that first image resource;
and rendering the first texture image corresponding to the first image resource onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
An embodiment of a second aspect of the present disclosure provides a texture rendering apparatus, including:
a first construction module, configured to construct a first ray according to a first position of the virtual camera in the first sky box to be rendered and a second position of the touch point;
a first determining module, configured to determine the loading priority of each face of the first sky box according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the other faces of the first sky box;
a first loading module, configured to load the first image resources corresponding to each face in turn according to the loading priority of each face;
a first generation module, configured to generate, in response to each first image resource being loaded, a corresponding first texture image from that first image resource;
and a first rendering module, configured to render the first texture image corresponding to the first image resource onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
An embodiment of a third aspect of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the texture rendering method set forth in the embodiments of the first aspect of the present disclosure.
An embodiment of a fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the texture rendering method proposed by the embodiment of the first aspect of the present disclosure.
An embodiment of a fifth aspect of the present disclosure proposes a computer program product comprising a computer program which, when executed by a processor, implements the texture rendering method according to the embodiment of the first aspect of the present disclosure.
One embodiment of the present disclosure described above has at least the following advantages or benefits:
a first ray is constructed according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point; the loading priority of each face of the first sky box is determined according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the other faces of the first sky box; the first image resources corresponding to each face are loaded in turn according to the loading priority of each face, and each time a first image resource is loaded, a corresponding first texture image is generated from it; and the first texture image is rendered onto the corresponding face of the first sky box, so as to obtain a texture-rendered first sky box. In this way, the image resources within the user's view angle or visible range are loaded first and those outside the user's view angle afterwards, and the corresponding texture images are rendered in the same order, so that the user's waiting time is reduced, the user can enter the scene and preview it more quickly, and the user experience is improved.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a texture rendering method according to an embodiment of the disclosure;
FIG. 2 is a flowchart of a texture rendering method according to a second embodiment of the disclosure;
FIG. 3 is a flowchart of a texture rendering method according to a third embodiment of the disclosure;
FIG. 4 is a flowchart of a texture rendering method according to a fourth embodiment of the disclosure;
FIG. 5 is a flowchart of a texture rendering method according to a fifth embodiment of the disclosure;
FIG. 6 is a flowchart of a texture rendering method according to a sixth embodiment of the disclosure;
FIG. 7 is a schematic diagram of a pyramid structure provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the loading process of a low-definition image resource according to an embodiment of the disclosure;
FIG. 9 is a schematic illustration of the size expansion of a low-definition texture image provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the loading flow of a medium-definition image resource according to an embodiment of the disclosure;
FIG. 11 is a schematic diagram of view frustum culling provided by an embodiment of the present disclosure;
FIG. 12 is a schematic illustration of a partial replacement of a texture image provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of the loading process of a high-definition image resource according to an embodiment of the disclosure;
FIG. 14 is a schematic structural diagram of a texture rendering apparatus according to a seventh embodiment of the disclosure;
FIG. 15 illustrates a block diagram of an exemplary electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
In the related art, a cube map or a skybox is generated either by loading six images of fixed size and definition, or from a single panorama.
This approach has the following problems:
first, the panoramic preview cannot be presented quickly;
second, because of loading time, the highest-definition image materials or image resources cannot be used, so the image may appear unclear when viewed under magnification.
In view of at least one problem presented above, the present disclosure proposes a texture rendering method, apparatus, electronic device, and storage medium.
The following describes a texture rendering method, apparatus, electronic device, and storage medium of embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a texture rendering method according to an embodiment of the disclosure.
The embodiment of the disclosure is exemplified by the texture rendering method being configured in a texture rendering device, and the texture rendering device can be applied to any electronic device so that the electronic device can execute a texture rendering function.
The electronic device may be any device with computing capability, for example, a computer, a mobile terminal, a server, etc., and the mobile terminal may be, for example, a vehicle-mounted device, a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc., which has various operating systems, a touch screen, and/or a display screen.
As shown in fig. 1, the texture rendering method may include the steps of:
Step 101, constructing a first ray according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point.
In order to browse the virtual scene in the sky box interactively, the visible portion of the virtual scene may be displayed according to the user's rotation angle. Therefore, when a scene is presented using a sky box, a virtual camera may be set inside the sky box to render the portion of the virtual scene that is visible to the user.
In the embodiment of the present disclosure, the position of the virtual camera is not limited; for example, to enable the user to see the complete spatial image corresponding to the first sky box, the virtual camera may be set at the center of the first sky box, or at another position inside it.
In the embodiment of the disclosure, the first position of the virtual camera may be the position of the virtual camera's center or origin in the three-dimensional space where the first sky box is located.
In the embodiment of the present disclosure, the touch point may be a position point touched by a finger of a user, a position point clicked by a mouse, a position point where the mouse is located, and the like.
It should be noted that, based on the mapping or projection relationship between the two-dimensional coordinate system corresponding to the screen and the three-dimensional coordinate system where the first sky box is located, the two-dimensional coordinates of the touch point on the screen may be mapped into the three-dimensional coordinate system to obtain a three-dimensional position of the touch point, which is used as the second position of the touch point.
In the embodiment of the disclosure, the first ray may be constructed according to the first position of the virtual camera in the first sky box to be rendered and the second position of the touch point. For example, a first ray may be constructed that starts at the first position and points from the first position toward the second position.
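As a rough sketch of this step, the first ray can be represented by its origin and a unit direction vector. The fragment below is illustrative only — the function name `build_first_ray` and the tuple representation are assumptions, not part of the disclosure:

```python
import math

def build_first_ray(camera_pos, touch_pos):
    """Construct a ray starting at the camera's first position and pointing
    toward the touch point's second position, both given in the
    three-dimensional coordinate system of the sky box."""
    direction = tuple(t - c for c, t in zip(camera_pos, touch_pos))
    length = math.sqrt(sum(d * d for d in direction))
    if length == 0:
        raise ValueError("touch point coincides with camera position")
    # normalize so the direction is a unit vector
    return camera_pos, tuple(d / length for d in direction)

origin, direction = build_first_ray((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))
# direction is the unit vector (0.6, 0.0, 0.8)
```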
Step 102, determining the loading priority of each face of the first sky box according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the other faces of the first sky box.
In the embodiment of the disclosure, the first face through which the first ray passes may be determined from among the faces of the first sky box; that is, the face of the first sky box through which the first ray passes is taken as the first face, and its loading priority is determined to be higher than that of the other faces of the first sky box.
In other words, the first face through which the first ray passes is the face within the user's visible range, and loading this face first reduces the user's waiting time.
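One way to determine the face the first ray passes through is a ray/axis-aligned-cube intersection test. The sketch below is an assumption for illustration: it models the sky box as a cube centered at the origin with half-extent `half`, labels its faces `'+x'`, `'-x'`, etc., and finds the face through which a ray starting inside the cube exits:

```python
def face_hit_by_ray(origin, direction, half=1.0):
    """Return the sky-box face ('+x', '-x', '+y', '-y', '+z', '-z') that a
    ray starting inside an axis-aligned cube (centered at the origin, with
    half-extent `half`) passes through."""
    best_face, best_t = None, float("inf")
    for axis, name in ((0, "x"), (1, "y"), (2, "z")):
        d = direction[axis]
        if d == 0:
            continue
        # distance along the ray to the +half or -half plane of this axis
        bound = half if d > 0 else -half
        t = (bound - origin[axis]) / d
        if 0 < t < best_t:
            # keep the hit only if it lands within the face's square
            hit = [origin[i] + t * direction[i] for i in range(3)]
            if all(abs(hit[i]) <= half + 1e-9 for i in range(3) if i != axis):
                best_face, best_t = ("+" if d > 0 else "-") + name, t
    return best_face
```

For example, a ray from the center along `(0.1, 0.9, 0)` exits through the `'+y'` face, which would therefore be loaded first.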
Step 103, loading the first image resources corresponding to each face in turn according to the loading priority of each face, and, in response to each first image resource being loaded, generating a corresponding first texture image from that first image resource.
In the embodiment of the present disclosure, the first image resources corresponding to each face may be loaded in turn according to the loading priority of each face of the first sky box.
In one possible implementation of the embodiment of the present disclosure, to further reduce the user's waiting time, the definition of the first image resource may be smaller than a set first definition threshold. Thus, when the scene is constructed, image resources with relatively low definition are loaded, which improves loading efficiency, saves scene construction time, lets the user enter the scene faster, and improves the user experience.
In the embodiment of the disclosure, each time a first image resource is loaded, a first texture image corresponding to that first image resource can be generated from it.
Step 104, rendering the first texture image corresponding to the first image resource onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
In the embodiment of the present disclosure, the first texture image corresponding to the first image resource may be rendered onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
According to the texture rendering method of the embodiment of the disclosure, a first ray is constructed according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point; the loading priority of each face of the first sky box is determined according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the other faces; the first image resources corresponding to each face are loaded in turn according to the loading priority of each face, and each time a first image resource is loaded, a corresponding first texture image is generated from it; and the first texture image is rendered onto the corresponding face of the first sky box, so as to obtain a texture-rendered first sky box. In this way, the image resources within the user's view angle or visible range are loaded first and those outside the user's view angle afterwards, and the corresponding texture images are rendered in the same order, so that the user's waiting time is reduced, the user can enter the scene and preview it more quickly, and the user experience is improved.
In order to clearly illustrate how the first image resources corresponding to each face of the first sky box are loaded in the above embodiment of the present disclosure, the present disclosure further provides a texture rendering method.
Fig. 2 is a flowchart illustrating a texture rendering method according to a second embodiment of the disclosure.
As shown in fig. 2, the texture rendering method may include the steps of:
Step 201, constructing a first ray according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point.
Step 202, determining the loading priority of each face of the first sky box according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the second faces adjacent to the first face, and the loading priority of the second faces is higher than that of the third face through which a second ray opposite to the first ray passes.
In the embodiment of the disclosure, the first face through which the first ray passes may be determined from among the faces of the first sky box; that is, the face through which the first ray passes is taken as the first face, the faces adjacent to the first face are taken as the second faces, and the face through which the second ray opposite to the first ray passes is taken as the third face; in other words, the face other than the first and second faces is taken as the third face.
In the embodiment of the disclosure, it may be determined that the loading priority of the first face is higher than that of the second faces, and that the loading priority of the second faces is higher than that of the third face.
Step 203, adding the first face to a first load queue, the second faces to a second load queue, and the third face to a third load queue.
In the embodiment of the disclosure, the first face may be added to the first load queue, the second faces to the second load queue, and the third face to the third load queue, where the first load queue has a higher loading priority than the second load queue, and the second load queue has a higher loading priority than the third load queue.
Step 204, loading the first image resources corresponding to each face in the first load queue, the second load queue, and the third load queue in turn.
In the embodiment of the present disclosure, the first image resources corresponding to each face in the first load queue, the second load queue, and the third load queue may be loaded in turn.
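The three-queue strategy above can be sketched as follows. The face labels, the `OPPOSITE` table, and the function names are illustrative assumptions: on a cube, the four faces adjacent to the first face form the second queue, and the opposite face — the one the reversed (second) ray passes through — forms the third queue:

```python
OPPOSITE = {"+x": "-x", "-x": "+x", "+y": "-y",
            "-y": "+y", "+z": "-z", "-z": "+z"}
ALL_FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def build_load_queues(first_face):
    """Split the six sky-box faces into three load queues:
    queue 1: the face the first ray passes through;
    queue 2: the four faces adjacent to it;
    queue 3: the face the opposite (second) ray passes through."""
    third_face = OPPOSITE[first_face]
    first_queue = [first_face]
    second_queue = [f for f in ALL_FACES if f not in (first_face, third_face)]
    third_queue = [third_face]
    return first_queue, second_queue, third_queue

def load_order(first_face):
    """Resources are loaded queue by queue, in priority order."""
    q1, q2, q3 = build_load_queues(first_face)
    return q1 + q2 + q3
```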
Step 205, in response to each first image resource being loaded, generating a corresponding first texture image from that first image resource.
Step 206, rendering the first texture image corresponding to the first image resource onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
The explanation of steps 205 to 206 may be referred to the relevant descriptions in any embodiment of the disclosure, and is not repeated here.
According to the texture rendering method of the embodiment of the disclosure, with the hierarchical loading strategy, the image resources within the user's view angle are loaded first, the image resources adjacent to them are loaded next, and the remaining image resources are loaded last, so that the corresponding texture images can be rendered in the same order, the user's waiting time is reduced, and the user experience is improved.
To clearly illustrate how a first texture image is generated from a first image resource in any embodiment of the present disclosure, the present disclosure also proposes a texture rendering method.
Fig. 3 is a flowchart illustrating a texture rendering method according to a third embodiment of the present disclosure.
As shown in fig. 3, the texture rendering method may include the steps of:
Step 301, constructing a first ray according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point.
Step 302, determining the loading priority of each face of the first sky box according to the first ray, wherein the loading priority of the first face through which the first ray passes is higher than that of the other faces of the first sky box.
Step 303, loading the first image resources corresponding to each face in turn according to the loading priority of each face.
The explanation of steps 301 to 303 may be referred to the relevant descriptions in any embodiment of the disclosure, and are not repeated here.
Step 304, in response to each first image resource being loaded, creating an initial texture object corresponding to that first image resource.
In the disclosed embodiments, each time a first image resource is loaded, the first image resource may be stored in memory, and an empty texture object corresponding to the first image resource may be created on the GPU (graphics processing unit), denoted as the initial texture object in the present disclosure.
Step 305, performing texture parameter configuration on the initial texture object according to set texture information, so as to obtain a first texture object.
In the embodiment of the present disclosure, the texture information or texture parameters may include, but are not limited to, the mapping relationship between the two-dimensional coordinate system corresponding to the screen and the two-dimensional coordinate system used by the GPU (i.e., information for mapping GPU texture coordinates onto the screen), sampling-point information (i.e., information indicating how the texture is sampled), and so on.
In the embodiment of the present disclosure, texture parameter configuration may be performed on the initial texture object according to the set texture information, so as to obtain the first texture object.
Step 306, rendering the first image resource according to the first texture object, so as to obtain a first texture image corresponding to the first image resource.
In the embodiment of the disclosure, the first image resource may be read from memory and rendered according to the first texture object, so as to obtain the first texture image corresponding to the first image resource.
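Steps 304 to 306 can be illustrated schematically. A real implementation would use a graphics API (for example, WebGL's `createTexture`, `texParameteri`, and `texImage2D`); the Python stand-ins below only mirror the create/configure/render flow and are not actual GPU code — all names here are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TextureObject:
    """Schematic, CPU-side stand-in for a GPU texture object."""
    params: dict = field(default_factory=dict)
    pixels: Optional[bytes] = None

def create_initial_texture() -> TextureObject:
    # Step 304: create an empty (initial) texture object for the resource.
    return TextureObject()

def configure_texture(texture: TextureObject, texture_info: dict) -> TextureObject:
    # Step 305: apply the set texture information (coordinate mapping,
    # sampling-point information, ...) to obtain the first texture object.
    texture.params.update(texture_info)
    return texture

def render_to_texture(texture: TextureObject, image_resource: bytes) -> TextureObject:
    # Step 306: render the loaded image resource with the configured
    # texture object, yielding the first texture image.
    texture.pixels = image_resource
    return texture

tex = render_to_texture(
    configure_texture(create_initial_texture(),
                      {"min_filter": "linear", "wrap": "clamp_to_edge"}),
    b"\x00" * 16)  # placeholder pixel data
```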
Step 307, rendering the first texture image corresponding to the first image resource onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
The explanation of step 307 may be referred to the relevant description in any embodiment of the present disclosure, and will not be repeated here.
In a possible implementation of the embodiment of the present disclosure, in order to improve the display effect of the texture images on each face of the first sky box, the physical size of the first texture image may be adjusted according to the physical size of each face of the first sky box, so that the adjusted physical size of the first texture image matches the physical size of the corresponding face.
As an example, the first texture image corresponding to the first image resource may be resized according to the physical size of the corresponding face of the first sky box, so that the adjusted physical size of the first texture image matches the physical size of that face; the resized first texture image may then be rendered onto the corresponding face of the sky box, so as to obtain a texture-rendered first sky box.
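A minimal sketch of such a size adjustment, under the assumption that the texture is stored as a row-major pixel grid, is a nearest-neighbour stretch to the face's size (the function name is illustrative; real code would typically let the GPU's sampler do this):

```python
def resize_nearest(pixels, src_size, dst_size):
    """Nearest-neighbour resize of a row-major pixel grid, so that a
    low-definition texture image can be stretched to match the physical
    size of the sky-box face it is rendered onto."""
    sw, sh = src_size  # source width, height
    dw, dh = dst_size  # destination width, height
    return [[pixels[y * sh // dh][x * sw // dw] for x in range(dw)]
            for y in range(dh)]
```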
According to the texture rendering method of the embodiment of the disclosure, the first image resource can be rendered effectively according to a texture object configured with texture parameters, so as to obtain a texture image, and the texture image can be rendered into the first sky box, which improves the effectiveness of texture rendering.
To further clarify the above embodiments of the present disclosure, the present disclosure also proposes a texture rendering method.
Fig. 4 is a flowchart illustrating a texture rendering method according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the texture rendering method may include the steps of:
Step 401, constructing a first ray according to a first position of a virtual camera in a first sky box to be rendered and a second position of a touch point.
Step 402, determining the loading priority of each face of the first sky box according to the first ray.
The loading priority of the first face through which the first ray passes is higher than that of the other faces of the first sky box.
Step 403, loading the first image resources corresponding to each face in turn according to the loading priority of each face.
The explanation of steps 401 to 403 may be referred to the relevant description in any embodiment of the present disclosure, and will not be repeated here.
The definition of the first image resource is smaller than a set first definition threshold. Thus, when the scene is constructed, image resources with relatively low definition are loaded, which improves loading efficiency, saves scene construction time, lets the user enter the scene faster, and improves the user experience.
Step 404, in response to each first image resource being loaded, generating a corresponding first texture image from that first image resource.
Step 405, rendering the first texture image corresponding to the first image resource onto the face of the first sky box corresponding to that first image resource, so as to obtain a texture-rendered first sky box.
The explanation of steps 404 to 405 may be referred to the relevant descriptions in any embodiment of the disclosure, and are not repeated here.
Step 406, determining, from the texture-rendered first sky box, a fourth face located within the view frustum of the virtual camera, and determining a first target area from the fourth face, wherein the first target area is located within the view frustum of the virtual camera.
It should be noted that step 406 may be performed after step 403 or after step 405; the present disclosure is described, by way of example only, as performing it after step 405.
In the embodiment of the disclosure, when the rendering of the low-definition first texture images is completed, a fourth face located within the view frustum of the virtual camera may be determined from among the faces of the texture-rendered first sky box, and a first target area may be determined from the fourth face, wherein the first target area is located within the view frustum of the virtual camera; that is, the first target area is the area visible to the user.
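As a simplified illustration of this visibility test, the view frustum can be approximated by a cone whose apex angle equals the field of view; a point on a face then counts as visible when the angle between the camera's view direction and the direction to the point is within half the field of view. This approximation and the function name are assumptions for illustration, not the disclosure's exact test:

```python
import math

def in_view_cone(camera_pos, view_dir, point, fov_degrees):
    """Rough test of whether a point on a sky-box face lies inside the
    virtual camera's view frustum, approximated here as a cone."""
    to_point = [p - c for c, p in zip(camera_pos, point)]
    dot = sum(v * t for v, t in zip(view_dir, to_point))
    norm = (math.sqrt(sum(t * t for t in to_point))
            * math.sqrt(sum(v * v for v in view_dir)))
    if norm == 0:
        return True  # the point coincides with the camera
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= fov_degrees / 2
```

Sampling such a test over a face's surface would give the first target area: the visible sub-region of the fourth face.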
Step 407: at least one second image resource corresponding to the first target area is loaded, where the definition of the second image resource is greater than the first definition threshold and smaller than a set second definition threshold.
In the embodiment of the present disclosure, the second image resources in the user-visible area may be loaded first; that is, at least one second image resource corresponding to the first target area may be loaded. The definition of each second image resource is greater than the first definition threshold and smaller than the set second definition threshold. In other words, the first image resources may be low-definition image resources and the second image resources may be medium-definition image resources.
The picture content displayed by each second image resource matches part of the picture content displayed by a first image resource. For example, the physical size of a first image resource may be a first set multiple of the physical size of a second image resource; the first set multiple may be 4, meaning that the total picture content displayed by 4 second image resources matches or is identical to the picture content of one first image resource, and the physical size of each second image resource is 1/4 that of the first image resource.
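As a hedged illustration of the size relationship above (assuming the first set multiple of 4 means a 2×2 grid of medium-definition tiles covering one face, with physical size measured as area), the correspondence between one low-definition face image and its four medium-definition tiles could be sketched as:

```javascript
// Hypothetical sketch: split one sky-box face into a 2x2 grid of
// medium-definition tiles, so that 4 tiles together cover the picture
// content of one first (low-definition) image resource.
// Function and field names are illustrative, not from the disclosure.
function mediumTilesForFace(facePhysicalSize, setMultiple) {
  const perSide = Math.sqrt(setMultiple);       // multiple 4 -> 2 tiles per side
  const tileSize = facePhysicalSize / perSide;  // each tile's edge length
  const tiles = [];
  for (let row = 0; row < perSide; row++) {
    for (let col = 0; col < perSide; col++) {
      tiles.push({ x: col * tileSize, y: row * tileSize, size: tileSize });
    }
  }
  return tiles;
}

// A 2048x2048 face yields four 1024x1024 tiles, each 1/4 of the face's area.
const tiles = mediumTilesForFace(2048, 4);
```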
Step 408: a second texture image corresponding to each of the at least one second image resource is generated.
In the embodiment of the present disclosure, a second texture image corresponding to each second image resource may be generated; the generation manner is similar to that of the first texture image and is not repeated here.
Step 409: the at least one second texture image is rendered to the first target area on the fourth surface of the texture-rendered first sky box, to obtain a texture-updated first sky box.
In the embodiment of the disclosure, the at least one second texture image may be rendered to the first target area on the fourth surface of the texture-rendered first sky box, thereby obtaining the texture-updated first sky box.
In any embodiment of the disclosure, after the medium-definition second image resources are loaded and the first texture images are replaced with the corresponding second texture images, the first texture objects corresponding to the low-definition image resources may be released, reducing the occupied storage space.
In any embodiment of the present disclosure, when loading of the second image resources corresponding to the first target area is completed, the first, second, and third load queues may further be emptied; the second target area is then added to the first load queue, the fifth surfaces to the second load queue, and the sixth surface to the third load queue. Here, the second target area is the area of the fourth surface of the texture-updated first sky box outside the first target area, the fifth surfaces are the surfaces adjacent to the fourth surface, and the sixth surface is the surface of the texture-updated first sky box other than the fourth and fifth surfaces.
Then, the third image resources corresponding to each surface or area in the first, second, and third load queues can be loaded in sequence, where the physical size and definition of a third image resource match those of a second image resource.
That is, the total picture content displayed by the at least one third image resource corresponding to the second target area, together with the at least one second image resource corresponding to the first target area, matches or is identical to the picture content displayed by the first image resource corresponding to the fourth surface; and the total picture content displayed by the third image resources corresponding to the remaining surfaces of the texture-updated first sky box matches or is identical to the picture content displayed by the first image resources corresponding to those surfaces.
When loading of a third image resource is completed, a third texture image may be generated from it; the generation manner is similar to that of the first or second texture image and is not repeated here. Finally, the third texture image corresponding to the third image resource is rendered to the surface or area of the texture-updated first sky box corresponding to that resource, further updating the sky box's textures.
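The queue reset and in-order loading described above can be sketched as follows. This is a minimal illustration under assumed names (the disclosure does not give an implementation); the order of the returned list is the loading order:

```javascript
// Hypothetical sketch of the three-level load queues: the priority queue
// is drained first, then the normal queue, then the delayed queue.
// All names are illustrative.
const queues = { priority: [], normal: [], delayed: [] };

function resetQueues(secondTargetArea, fifthFaces, sixthFace) {
  queues.priority = [secondTargetArea]; // rest of the fourth surface
  queues.normal = [...fifthFaces];      // surfaces adjacent to the fourth surface
  queues.delayed = [sixthFace];         // the remaining surface
}

function loadOrder() {
  // Concatenation preserves per-queue order and overall queue priority.
  return [...queues.priority, ...queues.normal, ...queues.delayed];
}

resetQueues("area2", ["faceA", "faceB", "faceC", "faceD"], "faceF");
// loadOrder() -> ["area2", "faceA", "faceB", "faceC", "faceD", "faceF"]
```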
In this way, the medium-definition image resources of the non-user-visible areas can also be loaded and rendered as medium-definition texture images, improving the display effect and definition of the texture images on the first sky box and the user experience.
In any embodiment of the present disclosure, to improve the display effect of the texture image on each surface of the first sky box, the third texture image may be resized according to the physical size of each surface of the first sky box.
As an example, the third texture image corresponding to a third image resource may be resized according to a first ratio between the first physical size of the first image resource and the second physical size of the third image resource, and a second ratio between the third physical size of each surface of the first sky box and the first physical size; the resized third texture image is then rendered to the surface or area of the texture-updated first sky box corresponding to that third image resource.
For example, the physical size of the resized third texture image may be (second ratio)/(first ratio) times the third physical size.
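Taking the stated ratio rule literally, it could be implemented as below. This is a hedged sketch: the disclosure leaves "physical size" ambiguous, so all sizes here are assumed to be edge lengths and the numbers are illustrative only:

```javascript
// Hypothetical sketch of the resize rule quoted above:
//   firstRatio  = first physical size / third-image physical size
//   secondRatio = face physical size  / first physical size
//   resized     = face physical size * (secondRatio / firstRatio)
// Names are illustrative, not from the disclosure.
function resizedThirdTextureSize(firstSize, thirdImageSize, faceSize) {
  const firstRatio = firstSize / thirdImageSize;
  const secondRatio = faceSize / firstSize;
  return faceSize * (secondRatio / firstRatio);
}

// With a 2048 first image, 1024 medium tiles, and a 2048 face, each tile
// renders at 1024: resizedThirdTextureSize(2048, 1024, 2048) === 1024
```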
According to the texture rendering method of the present disclosure, when the scene is constructed, the image resources with relatively low definition are loaded first, which improves loading efficiency, saves scene construction time, lets the user enter the scene faster, and improves the user experience. Further, when loading of the low-definition image resources or rendering of the low-definition texture images is completed, medium-definition resources can be loaded in their place, improving the display effect and definition of the texture images on the first sky box. Loading the medium-definition image resources of the user-visible area first reduces the user's waiting time and improves the display effect of the images in that area, further improving the user experience.
In a possible implementation of the embodiment of the present disclosure, to further improve the display effect of the texture images on the first sky box, when the user zooms in to view, the texture images in the user-visible area may be replaced with high-definition texture images. This process is described in detail with reference to fig. 5.
Fig. 5 is a flowchart illustrating a texture rendering method according to a fifth embodiment of the present disclosure.
As shown in fig. 5, on the basis of the embodiment shown in fig. 4, the texture rendering method may further include the following steps:
Step 501: in response to a zoom-in operation on the texture-updated first sky box, a third ray is constructed from the third position of the virtual camera and the fourth position of the touch point.
In the embodiment of the present disclosure, when a user wants to zoom in on a texture image on the first sky box, the user may trigger a zoom-in operation on the texture-updated first sky box. In response, the execution body of the present disclosure constructs a third ray from the third position of the virtual camera in three-dimensional space and the fourth position, in three-dimensional space, of the point touched by the user. For example, the third ray may start at the third position and point from the third position toward the fourth position.
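The ray construction just described is plain vector math: origin at the camera, direction from the camera toward the touch point. A minimal sketch, with illustrative names:

```javascript
// Hypothetical sketch: construct the third ray from the camera position
// (third position) toward the touch point (fourth position).
function constructRay(cameraPos, touchPos) {
  const dir = [
    touchPos[0] - cameraPos[0],
    touchPos[1] - cameraPos[1],
    touchPos[2] - cameraPos[2],
  ];
  const len = Math.hypot(dir[0], dir[1], dir[2]);
  // The ray starts at the camera position; direction is normalized.
  return { origin: cameraPos, direction: dir.map((c) => c / len) };
}

// A camera at the origin touching a point straight ahead on -Z:
const ray = constructRay([0, 0, 0], [0, 0, -5]);
// ray.direction -> [0, 0, -1]
```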
Step 502: the third target area that the third ray passes through, on a seventh surface of the texture-updated first sky box, is determined.
In an embodiment of the present disclosure, the third target area through which the third ray passes may be determined; the third target area is located on a seventh surface of the texture-updated first sky box.
The third target area may be the same as the first target area, or its size may be determined from the view frustum of the virtual camera, i.e., the third target area is located within the view frustum.
Step 503: a fourth image resource corresponding to the third target area is loaded, where the definition of the fourth image resource is greater than the second definition threshold.
In the embodiment of the disclosure, the definition of the fourth image resource is greater than the second definition threshold; that is, the fourth image resources are high-definition image resources.
The picture content displayed by each fourth image resource matches part of the picture content displayed by a first image resource. For example, the physical size of a first image resource may be a second set multiple of the physical size of a fourth image resource; the second set multiple may be 9, meaning that the total picture content displayed by 9 fourth image resources matches or is identical to the picture content of one first image resource, and the physical size of each fourth image resource is 1/9 that of the first image resource.
In the embodiment of the present disclosure, the fourth image resources in the user-visible area may be loaded; that is, at least one fourth image resource corresponding to the third target area is loaded.
Step 504: a fourth texture image corresponding to the fourth image resource is generated.
In the embodiment of the present disclosure, a fourth texture image corresponding to each fourth image resource may be generated; the generation manner is similar to that of the first, second, or third texture image and is not repeated here.
Step 505: the fourth texture image is mapped to the third target area on the seventh surface of the texture-updated first sky box.
In an embodiment of the present disclosure, the fourth texture image may be mapped to the third target area on the seventh surface of the texture-updated first sky box.
In a possible implementation of the embodiment of the present disclosure, when the high-definition fourth image resources corresponding to the third target area are loaded, the first, second, and third load queues may further be emptied; the fourth target area is then added to the first load queue, the eighth surfaces to the second load queue, and the ninth surface to the third load queue. Here, the fourth target area is the area of the seventh surface of the texture-updated first sky box outside the third target area, the eighth surfaces are the surfaces adjacent to the seventh surface, and the ninth surface is the surface of the texture-updated first sky box other than the seventh and eighth surfaces.
Then, the fifth image resources corresponding to each surface or area in the first, second, and third load queues can be loaded in sequence, where the physical size and definition of a fifth image resource match those of a fourth image resource.
That is, the total picture content displayed by the at least one fifth image resource corresponding to the fourth target area, together with the at least one fourth image resource corresponding to the third target area, matches or is identical to the picture content displayed by the first image resource corresponding to the seventh surface; and the total picture content displayed by the fifth image resources corresponding to the remaining surfaces of the texture-updated first sky box matches or is identical to the picture content displayed by the first image resources corresponding to those surfaces.
When loading of a fifth image resource is completed, a fifth texture image may be generated from it; the generation manner is similar to that of the first, second, third, or fourth texture image and is not repeated here. Finally, the fifth texture image corresponding to the fifth image resource is rendered to the surface or area of the texture-updated first sky box corresponding to that resource, further updating the sky box's textures.
In this way, the high-definition image resources of the non-user-visible areas can also be loaded and rendered as high-definition texture images, improving the display effect and definition of the texture images on the first sky box and the user experience.
In any embodiment of the disclosure, after the high-definition fourth or fifth image resources are loaded and the corresponding medium-definition texture images are replaced with high-definition ones, the texture objects corresponding to the medium-definition image resources may be released, reducing the occupied storage space.
In any embodiment of the present disclosure, to improve the display effect of the texture image on each surface of the first sky box, the fifth texture image may be resized according to the physical size of each surface of the first sky box. The implementation principle is similar to that of the third texture image and is not repeated here.
According to the texture rendering method of the present disclosure, when the user zooms in to view, the medium-definition texture images in the user-visible area are replaced with high-definition texture images, which improves the display effect of the images in the user-visible area and the user experience.
In a possible implementation of the embodiment of the present disclosure, to further improve the user experience, sky boxes at multiple point locations (i.e., viewpoint positions) may be displayed, so that switching between point locations achieves roaming of the three-dimensional scene. This process is described in detail with reference to fig. 6.
Fig. 6 is a flowchart illustrating a texture rendering method according to a sixth embodiment of the disclosure.
As shown in fig. 6, on the basis of any of the embodiments shown in fig. 1 to 5, the texture rendering method may further include the following steps:
Step 601: a texture-rendered second sky box is obtained, where the center of the texture-rendered first sky box is located at a first viewpoint position, the center of the texture-rendered second sky box is located at a second viewpoint position, and the first viewpoint position differs from the second.
In the embodiment of the present disclosure, the surfaces of the texture-rendered second sky box may carry low-definition, medium-definition, or high-definition texture images; this is not limited by the disclosure. The texture rendering manner of the second sky box is similar to that of the first sky box and is not repeated here.
Step 602: the virtual camera is moved from the first viewpoint position to the second viewpoint position, switching the scene at the first viewpoint position to the scene at the second viewpoint position.
In the embodiments of the present disclosure, the virtual camera may be moved from the first viewpoint position to the second viewpoint position, so that the scene at the first viewpoint position is switched to the scene at the second viewpoint position.
In any embodiment of the present disclosure, during point-location switching, the texture objects corresponding to the high-definition image resources of each surface of the first sky box at the first viewpoint position may also be released, reducing system memory usage.
According to the texture rendering method of the present disclosure, point-location switching is achieved by moving the virtual camera to different viewpoint positions, realizing roaming of the three-dimensional scene.
In any embodiment of the disclosure, a pyramid structure can be used for loading the image resources: image resources with low definition, larger physical size, and smaller file size are loaded initially, which solves the problem of an overly slow initial load. After the low-definition image resources are loaded, they can gradually be replaced with medium-definition and high-definition image resources as needed, improving the user experience.
Application scenes of the present disclosure include 3D exhibition halls, venues, indoor house displays, and the like. The basic principle is to splice 6 square image resources (front, back, left, right, up, down) into a box that simulates a three-dimensional space. Like a pyramid, the image resources are divided into three levels (low-definition, medium-definition, and high-definition image resources). In the initial state, upon entering a scene, the low-definition image resources are loaded; because their file size is small, network transmission time is shorter and loading is faster.
The whole three-dimensional space can be viewed either zoomed in or not. When the user is not zoomed in, the face of the sky box that the virtual camera faces can be obtained, the corresponding medium-definition image resource downloaded, and the low-definition image resource of that face replaced. When the user zooms in, the part the virtual camera is actually viewing can be found by the ray method, and the high-definition image resource is downloaded for local replacement.
The pyramid structure is divided into three levels, with image sizes of 512×512, 1024×1024, and 2048×2048 pixels respectively; the specific structure is shown in fig. 7.
For loading of the image resources, a three-level load queue may be used to manage the loading order: a first load queue (also called the priority load queue), a second load queue (the normal load queue), and a third load queue (the delayed load queue). After a user enters a scene, and while the virtual camera is not being dollied in, a three-dimensional vector is formed from the virtual camera origin and the mouse's position in three-dimensional space (i.e., a ray is constructed). The face this vector passes through is added to the priority load queue, the four faces adjacent to it to the normal load queue, and the face passed through by the opposite (negative) of the vector to the delayed load queue. When the virtual camera is dollied in, the definition of the image resource to load is decided according to the camera's Z-axis coordinate in the left-handed coordinate system. When the mouse moves, the three-dimensional vector is recalculated, the three load queues are emptied, and they are reset according to the new vector.
As an example, the loading flow of the low-definition image resources may be as shown in fig. 8: before the scene is constructed, the low-definition image resources of the panoramic material of the first point location are loaded, and the scene is constructed after loading completes. Specifically, the loading flow of the low-definition image resources may include the following steps:
1. Using the ray method, construct a three-dimensional vector from the virtual camera origin and the mouse's coordinate point (i.e., cast a ray from the virtual camera toward the mouse). Add the face the vector (or ray) passes through to the priority load queue, the four faces adjacent to it to the normal load queue, and the face passed through in the opposite (negative) direction to the delayed load queue. Then, via HTTP (Hyper Text Transfer Protocol) requests, download the low-definition image resources of the first panoramic point in the scene (for example, 6 panoramic images of 512×512 pixels) and store them in memory.
2. Read the color channel information (the RGBA values, i.e., Red, Green, Blue, and Alpha) of the image resources from memory and store it in Uint8Array format. In the disclosure, texture objects may be created: 6 texture objects are generated and stored in an array, one per sky-box face, in the order of the face pierced by the positive X axis (positiveX) of the left-handed coordinate system, the face pierced by the negative X axis (negativeX), the face pierced by the positive Y axis (positiveY), the face pierced by the negative Y axis (negativeY), the face pierced by the positive Z axis (positiveZ), and the face pierced by the negative Z axis (negativeZ). Each face is bound to its texture object, texture parameters are set, and the corresponding image resource of each face is rendered according to the parameterized texture object to obtain a texture image.
The image size of the texture image is 512×512 pixels.
3. The image size of each of the 6 generated texture images is 512×512 pixels; the texture information is passed into the fragment shader and vertex shader to enlarge the physical size of the texture images to 2048×2048, i.e., the physical size of the low-definition texture images is expanded; the expansion is illustrated in fig. 9. After the 6 texture images are enlarged, a panoramic image can be generated from them.
4. Construct the rendering scene, construct the sky box, set up the virtual camera, map the panoramic image generated in step 3 onto the sky box, and perform scene rendering. At this point, the user can see the general appearance of the scene.
Since the definition of the texture images is low, the scene is not sharp and cannot be viewed zoomed in. In the overall loading flow shown in fig. 8, only the HTTP request is significantly time-consuming; reducing the definition and image size of the image resources greatly shortens the processing time of the entire flow, so the user can enter the scene faster.
As an example, as shown in fig. 10, the loading flow of the medium-definition image resources may be as follows: after loading of the low-definition image resources is completed and the user has just entered the scene, the image resources within the user's view angle may be downloaded preferentially and the texture images replaced. Specifically, the loading flow of the medium-definition image resources may include the following steps:
1. Obtain the part of the sky box within the view frustum of the virtual camera after clipping (i.e., determine, on each sky-box face, the area within the user's view angle) and find the corresponding medium-definition image resources. A schematic diagram of view-frustum clipping is shown in fig. 11; the cone in fig. 11 is the view frustum of the virtual camera, and the surface lying between its near and far planes may be one of the 6 faces of the sky box.
gl_Position=uPMatrix*uMVMatrix*vec4(aVertexPosition,1.0);
Here gl_Position is the coordinate of the sampled point on the screen as output by the vertex shader, i.e., the mapping of the sky box's three-dimensional scene coordinates to the two-dimensional screen coordinate system; aVertexPosition is the model (sky box) coordinate; uMVMatrix is the model-view matrix, i.e., the product of the view matrix and the model matrix; and uPMatrix is the projection matrix from the three-dimensional coordinate system to the two-dimensional coordinate system.
2. Download the medium-definition image resources via HTTP requests and store them in memory.
3. Read the color channel information (RGBA values) of the image resources from memory and save it in Uint8Array format. Create a texture object, bind it, set the texture parameters, and generate a texture image.
The image size of the texture image is 1024×1024 pixels.
4. Partially replace the texture image; a schematic diagram is shown in fig. 12. Specifically, the corresponding texture image may be partially replaced: the low-definition texture image is replaced by the medium-definition texture image, and the physical size of the replacement texture image is expanded to 2048×2048, i.e., 4 times the area. After the replacement is finished, the panoramic image is regenerated and mapped onto the sky box, completing one update.
5. According to the medium-definition resources obtained in step 1, repeat steps 2 to 4 to gradually replace the sky box's image resources with medium-definition ones. After the replacement is finished, as long as the current point location is not viewed zoomed in, the user obtains a good observation effect.
As an example, the loading flow of the high-definition image resources may be as shown in fig. 13. Note that loading the high-definition image resources is optional; the steps shown in fig. 13 are performed only when the user zooms in to view. Specifically, the loading flow of the high-definition image resources may include the following steps:
1. When the user triggers the zoom-in operation, a ray can be cast from the virtual camera toward the mouse click point; the face of the sky box that the ray passes through is the face whose high-definition image resource is to be loaded.
2. Download, via HTTP requests, the high-definition image resources corresponding to the user-visible area of that face (image size 2048×2048 pixels) and store them in memory once the download completes.
3. Read the image resources from memory, create an empty texture object, bind it, set the texture parameters, and generate a texture image.
4. Map the generated texture image onto the zoomed-in sky box; the enlarged image resources are sharper, improving the user experience.
In any of the embodiments of the present disclosure, storage resources may be dynamically released as the panoramic point location changes.
For example, a typical venue display includes multiple display point locations, and switching between them achieves the purpose of roaming the venue. New image resources are loaded on each switch, and since memory resources are limited, texture objects that are no longer needed must be released.
As an example, after the medium-definition image resources of the current point location are loaded successfully, the low-definition image resources of that point in memory can be replaced with the medium-definition ones and the texture objects corresponding to the low-definition resources released, reducing the occupied space.
As another example, when switching point locations, the texture objects corresponding to the high-definition image resources of the previous point location may be released to reduce system memory usage. On a subsequent load, the system cache can be used to increase loading speed, so loading an image resource the second time is faster than the first.
In summary, the texture rendering method of the present disclosure can manage image resources hierarchically based on a pyramid structure; segment the panoramic image and replace it partially, improving rendering efficiency; and release storage resources during point-location switching.
Corresponding to the texture rendering methods provided in the embodiments of figs. 1 to 6, the present disclosure also provides a texture rendering device. Since the texture rendering device provided in the embodiments of the present disclosure corresponds to those methods, the implementations of the texture rendering method also apply to the texture rendering device and are not described in detail here.
Fig. 14 is a schematic structural diagram of a texture rendering apparatus according to a seventh embodiment of the disclosure.
As shown in fig. 14, the texture rendering apparatus 1400 may include: a first construction module 1401, a first determination module 1402, a first loading module 1403, a first generation module 1404, and a first rendering module 1405.
The first constructing module 1401 is configured to construct a first ray according to a first position of a virtual camera within a first skybox to be rendered and a second position of a touch point.
A first determining module 1402 is configured to determine a loading priority for each face of the first skybox according to the first ray, where the first face through which the first ray passes has a higher loading priority than the other faces of the first skybox.
The first loading module 1403 is configured to load the first image resource corresponding to each face in order of the loading priority of the faces.
The first generating module 1404 is configured to generate, each time a first image resource finishes loading, a corresponding first texture image from that first image resource.
The first rendering module 1405 is configured to render the first texture image corresponding to a first image resource onto the face of the first skybox corresponding to that first image resource, to obtain a texture-rendered first skybox.
In one possible implementation of the embodiments of the present disclosure, the first determining module 1402 is configured to: determine, among the faces of the first skybox, the first face through which the first ray passes; determine a second face adjacent to the first face; determine a third face through which a second ray opposite to the first ray passes; and determine that the loading priority of the first face is higher than that of the second face, and that the loading priority of the second face is higher than that of the third face.
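By way of non-limiting illustration, the face-priority determination described above can be sketched as follows. This is an assumed sketch, not the patent's implementation: the face identifiers, the cube-centered camera, and the dominant-axis heuristic for finding the face a ray passes through are all assumptions.

```python
# Classify the six skybox faces into three priority tiers from the view ray.
# Assumption: the camera sits at the cube's center, so the face a ray passes
# through is the one whose outward normal best matches the ray direction.

def face_priorities(ray_dir):
    """Return (first_face, second_faces, third_face) for a view ray."""
    normals = {'px': (1, 0, 0), 'nx': (-1, 0, 0), 'py': (0, 1, 0),
               'ny': (0, -1, 0), 'pz': (0, 0, 1), 'nz': (0, 0, -1)}
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # first face: pierced by the first ray
    first = max(normals, key=lambda f: dot(normals[f], ray_dir))
    # third face: pierced by the second (opposite) ray
    reverse = tuple(-c for c in ray_dir)
    third = max(normals, key=lambda f: dot(normals[f], reverse))
    # second faces: the four faces adjacent to the first face
    second = [f for f in normals if f not in (first, third)]
    return first, second, third
```

For example, a ray looking along +z yields `pz` as the first face and `nz` as the third.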
In one possible implementation of the embodiments of the present disclosure, the first loading module 1403 is configured to: add the first face to a first load queue, the second face to a second load queue, and the third face to a third load queue, where the first load queue has a higher loading priority than the second load queue and the second load queue has a higher loading priority than the third load queue; and load, in sequence, the first image resources corresponding to the faces in the first load queue, the second load queue, and the third load queue.
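The three-queue scheme above admits a simple sketch; the `deque`-based queues and the `load_image` callback are illustrative assumptions, not the disclosed implementation.

```python
from collections import deque

def load_by_priority(first_face, second_faces, third_face, load_image):
    """Drain three load queues strictly in priority order."""
    q1 = deque([first_face])    # highest priority: face pierced by the ray
    q2 = deque(second_faces)    # adjacent faces
    q3 = deque([third_face])    # lowest priority: opposite face
    loaded = []
    for queue in (q1, q2, q3):
        while queue:
            loaded.append(load_image(queue.popleft()))
    return loaded
```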
In one possible implementation of the embodiments of the present disclosure, the first generating module 1404 is configured to: create, each time a first image resource finishes loading, an initial texture object corresponding to that first image resource; configure texture parameters of the initial texture object according to set texture information, to obtain a first texture object; and render the first image resource according to the first texture object, to obtain the first texture image corresponding to that first image resource.
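The create-configure-render lifecycle of a texture object might be sketched as below; the parameter names (`wrap`, `min_filter`) and the dictionary representation are assumptions for illustration only.

```python
# Hypothetical texture-object lifecycle: create an initial object for a
# loaded image resource, configure it from set texture information, and
# pair the configured object with the resulting texture image.

DEFAULT_TEXTURE_INFO = {'wrap': 'clamp_to_edge', 'min_filter': 'linear'}

def make_texture(image_resource, texture_info=None):
    info = dict(texture_info if texture_info is not None else DEFAULT_TEXTURE_INFO)
    initial = {'image': image_resource, 'params': {}}  # initial texture object
    initial['params'].update(info)                     # parameter configuration
    # "rendering" is abstracted here: the configured object backs the image
    return {'texture_object': initial, 'texture_image': image_resource}
```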
In one possible implementation of the embodiments of the present disclosure, the first rendering module 1405 is configured to: resize the first texture image according to the physical size of the face of the first skybox corresponding to the first image resource; and render the resized first texture image onto that face, to obtain a texture-rendered first skybox.
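The resize step can be illustrated with a one-rule sketch; the cover-style uniform scaling is an assumption, since the text only states that the texture is resized to the face's physical size.

```python
def fit_texture_to_face(tex_w, tex_h, face_w, face_h):
    """Scale a texture's dimensions so it fully covers a skybox face."""
    scale = max(face_w / tex_w, face_h / tex_h)  # cover, never underfill
    return round(tex_w * scale), round(tex_h * scale)
```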
In one possible implementation of the embodiments of the present disclosure, the definition of the first image resource is lower than a set first definition threshold, and the texture rendering apparatus 1400 may further include:
a second determining module, configured to determine, from the texture-rendered first skybox, a fourth face located within the view frustum of the virtual camera, and to determine a first target area within the fourth face, the first target area being located within the view frustum of the virtual camera;
a second loading module, configured to load at least one second image resource corresponding to the first target area, where the definition of the second image resource is higher than the first definition threshold and lower than a set second definition threshold;
a second generating module, configured to generate a second texture image corresponding to the at least one second image resource; and
a second rendering module, configured to render the at least one second texture image onto the first target area in the fourth face of the texture-rendered first skybox, to obtain a texture-updated first skybox.
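The tiered refinement above implies three definition bands; a sketch follows, with arbitrary threshold values chosen purely for demonstration, since the concrete values are left to the implementation.

```python
# Assumed definition thresholds (e.g. pixels per face edge); illustrative only.
FIRST_DEFINITION_THRESHOLD = 512
SECOND_DEFINITION_THRESHOLD = 2048

def definition_tier(definition):
    """Map an image resource's definition to its role in the pipeline."""
    if definition < FIRST_DEFINITION_THRESHOLD:
        return 'first'   # coarse full-face image, loaded first
    if definition < SECOND_DEFINITION_THRESHOLD:
        return 'second'  # refines the target area inside the view frustum
    return 'fourth'      # high-definition patch used when zooming in
```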
In one possible implementation of the embodiments of the present disclosure, the texture rendering apparatus 1400 may further include:
a releasing module, configured to release the first texture object.
In one possible implementation of the embodiments of the present disclosure, the texture rendering apparatus 1400 may further include:
the first emptying module is used for emptying the first loading queue, the second loading queue and the third loading queue.
The adding module is configured to add a second target area to the first load queue, add a fifth surface to the second load queue, and add a sixth surface to the third load queue, where the second target area is an area except the first target area in a fourth surface of the first empty box after texture updating, the fifth surface is a surface adjacent to the fourth surface, and the sixth surface is a surface except the fourth surface and the fifth surface in the first empty box after texture updating.
The third loading module is used for loading third image resources corresponding to each surface or region in the first loading queue, the second loading queue and the third loading queue in sequence; wherein the physical size and sharpness of the third image asset matches the second image asset.
And the third generation module is used for generating a corresponding third texture image according to a third image resource every time a third image resource is loaded.
And the third rendering module is used for rendering a third texture image corresponding to a third image resource to a surface or area corresponding to the third image resource in the first empty box after the texture update so as to obtain the first empty box after the texture update.
In one possible implementation of the embodiments of the present disclosure, the third rendering module is configured to: resize the third texture image corresponding to a third image resource according to a first ratio between the first physical size of the first image resource and the second physical size of the third image resource, and a second ratio between the third physical size of each face of the first skybox and the first physical size; and render the resized third texture image onto the face or area of the texture-updated first skybox corresponding to that third image resource.

In one possible implementation of the embodiments of the present disclosure, the texture rendering apparatus 1400 may further include:
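The two-ratio resize above reduces to multiplying the texture size by both ratios; the one-dimensional sizes in this sketch are a simplifying assumption.

```python
def resize_third_texture(tex_size, first_phys, third_phys, face_phys):
    """Resize by the first ratio (first vs. third image physical size)
    and the second ratio (face physical size vs. first image)."""
    first_ratio = first_phys / third_phys
    second_ratio = face_phys / first_phys
    # the ratios compose: the result scales the texture to the face
    return tex_size * first_ratio * second_ratio
```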
the second construction module is used for responding to the amplifying operation of the first empty box after the texture updating, and constructing a third ray according to the third position of the virtual camera and the fourth position of the touch point.
And a third determining module, configured to determine that a third ray passes through a third target area in a seventh surface of the first empty box after the texture update.
And the fourth loading module is used for loading a fourth image resource corresponding to the third target area, wherein the definition of the fourth image resource is larger than the second definition threshold.
And the fourth generation module is used for generating a fourth texture image corresponding to the fourth image resource.
And the mapping module is used for mapping the fourth texture image to a third target area of a seventh surface of the first empty box after the texture updating.
In one possible implementation of the embodiments of the present disclosure, the texture rendering apparatus 1400 may further include:
the acquisition module is used for acquiring a second space box after texture rendering, wherein the center of the first space box after texture rendering is located at a first viewpoint position, the center of the second space box after texture rendering is located at a second viewpoint position, and the first viewpoint position is different from the second viewpoint position.
And the moving module is used for moving the virtual camera from the first viewpoint position to the second viewpoint position so as to switch the scene at the first viewpoint position to the scene at the second viewpoint position.
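Viewpoint switching can be sketched as moving the camera to the second skybox's center; the dictionary-based camera and scene table below are assumptions for illustration only.

```python
def switch_viewpoint(camera, scenes, target_viewpoint):
    """Move the camera to the target viewpoint and return its scene."""
    camera['position'] = target_viewpoint  # first viewpoint to second viewpoint
    return scenes[target_viewpoint]        # scene now shown to the user
```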
With the texture rendering apparatus of the embodiments of the present disclosure, a first ray is constructed according to a first position of the virtual camera within a first skybox to be rendered and a second position of a touch point; the loading priority of each face of the first skybox is determined according to the first ray, the first face through which the first ray passes having a higher loading priority than the other faces; the first image resources corresponding to the faces are loaded in order of loading priority, and each time a first image resource finishes loading, a corresponding first texture image is generated from it and rendered onto the corresponding face of the first skybox, to obtain a texture-rendered first skybox. In this way, image resources within the user's view angle or visual range are loaded first and those outside it afterwards, and the corresponding texture images are rendered in the same order, which shortens the user's waiting time, lets the user enter and preview the scene sooner, and improves the user experience.
To implement the above embodiments, the present disclosure further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the texture rendering method proposed in any of the foregoing embodiments of the present disclosure.
To implement the above embodiments, the present disclosure further provides a non-transitory computer-readable storage medium storing a computer program that, when executed by a processor, implements the texture rendering method proposed in any of the foregoing embodiments of the present disclosure.
To implement the above embodiments, the present disclosure further provides a computer program product that, when executed by a processor, performs the texture rendering method proposed in any of the foregoing embodiments of the present disclosure.
Fig. 15 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 15 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 15, the electronic device 12 is in the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples the various system components (including the system memory 28) to the processing unit 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 15, commonly referred to as a "hard disk drive"). Although not shown in fig. 15, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a compact disk read only memory (Compact Disc Read Only Memory; hereinafter CD-ROM), digital versatile read only optical disk (Digital Video Disc Read Only Memory; hereinafter DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the electronic device 12, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks, such as a local area network (Local Area Network; hereinafter: LAN), a wide area network (Wide Area Network; hereinafter: WAN) and/or a public network, such as the Internet, via the network adapter 20. As shown in fig. 15, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the methods mentioned in the foregoing embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.

Claims (24)

1. A method of texture rendering, the method comprising:
constructing a first ray according to a first position of a virtual camera within a first skybox to be rendered and a second position of a touch point;
determining a loading priority of each face of the first skybox according to the first ray, wherein the first face through which the first ray passes has a higher loading priority than the other faces of the first skybox;
loading, in order of the loading priority of the faces, a first image resource corresponding to each face, and generating, each time a first image resource finishes loading, a corresponding first texture image from that first image resource; and
rendering the first texture image corresponding to the first image resource onto the face of the first skybox corresponding to the first image resource, to obtain a texture-rendered first skybox.
2. The method of claim 1, wherein determining the loading priority of each face of the first skybox according to the first ray comprises:
determining, among the faces of the first skybox, the first face through which the first ray passes;
determining, among the faces of the first skybox, a second face adjacent to the first face;
determining, among the faces of the first skybox, a third face through which a second ray opposite to the first ray passes; and
determining that the loading priority of the first face is higher than that of the second face, and that the loading priority of the second face is higher than that of the third face.
3. The method according to claim 2, wherein loading the first image resources corresponding to the faces in order of the loading priority of the faces comprises:
adding the first face to a first load queue, the second face to a second load queue, and the third face to a third load queue, wherein the first load queue has a higher loading priority than the second load queue, and the second load queue has a higher loading priority than the third load queue; and
loading, in sequence, the first image resources corresponding to the faces in the first load queue, the second load queue, and the third load queue.
4. The method of claim 1, wherein generating a corresponding first texture image each time a first image resource finishes loading comprises:
creating, each time a first image resource finishes loading, an initial texture object corresponding to that first image resource;
configuring texture parameters of the initial texture object according to set texture information, to obtain a first texture object; and
rendering the first image resource according to the first texture object, to obtain the first texture image corresponding to the first image resource.
5. The method of claim 4, wherein rendering the first texture image corresponding to the first image resource onto the face of the first skybox corresponding to the first image resource, to obtain a texture-rendered first skybox, comprises:
resizing the first texture image according to the physical size of the face of the first skybox corresponding to the first image resource; and
rendering the resized first texture image onto the face of the first skybox corresponding to the first image resource, to obtain a texture-rendered first skybox.
6. The method of claim 4, wherein the definition of the first image resource is lower than a set first definition threshold, and the method further comprises:
determining, from the texture-rendered first skybox, a fourth face located within a view frustum of the virtual camera, and determining a first target area within the fourth face, the first target area being located within the view frustum of the virtual camera;
loading at least one second image resource corresponding to the first target area, wherein the definition of the second image resource is higher than the first definition threshold and lower than a set second definition threshold;
generating a second texture image corresponding to the at least one second image resource; and
rendering the at least one second texture image onto the first target area in the fourth face of the texture-rendered first skybox, to obtain a texture-updated first skybox.
7. The method of claim 6, further comprising:
releasing the first texture object.
8. The method of claim 6, further comprising:
clearing a first load queue, a second load queue, and a third load queue, adding a second target area to the first load queue, a fifth face to the second load queue, and a sixth face to the third load queue, wherein the second target area is the area of the fourth face of the texture-updated first skybox other than the first target area, the fifth face is a face adjacent to the fourth face, and the sixth face is a face of the texture-updated first skybox other than the fourth face and the fifth face;
loading, in sequence, third image resources corresponding to the faces or areas in the first load queue, the second load queue, and the third load queue, wherein the physical size and definition of the third image resource match those of the second image resource;
generating, each time a third image resource finishes loading, a corresponding third texture image from that third image resource; and
rendering the third texture image corresponding to the third image resource onto the face or area of the texture-updated first skybox corresponding to the third image resource, to obtain the texture-updated first skybox.
9. The method according to claim 8, wherein rendering the third texture image corresponding to the third image resource onto the face or area of the texture-updated first skybox corresponding to the third image resource comprises:
resizing the third texture image corresponding to the third image resource according to a first ratio between a first physical size of the first image resource and a second physical size of the third image resource, and a second ratio between a third physical size of each face of the first skybox and the first physical size; and
rendering the resized third texture image onto the face or area of the texture-updated first skybox corresponding to the third image resource.
10. The method according to claim 6, further comprising:
constructing, in response to a zoom-in operation on the texture-updated first skybox, a third ray according to a third position of the virtual camera and a fourth position of the touch point;
determining that the third ray passes through a third target area in a seventh face of the texture-updated first skybox;
loading a fourth image resource corresponding to the third target area, wherein the definition of the fourth image resource is higher than the second definition threshold;
generating a fourth texture image corresponding to the fourth image resource; and
mapping the fourth texture image onto the third target area of the seventh face of the texture-updated first skybox.
11. The method according to any one of claims 1-10, further comprising:
acquiring a texture-rendered second skybox, wherein the center of the texture-rendered first skybox is located at a first viewpoint position, the center of the texture-rendered second skybox is located at a second viewpoint position, and the first viewpoint position is different from the second viewpoint position; and
moving the virtual camera from the first viewpoint position to the second viewpoint position, to switch from the scene at the first viewpoint position to the scene at the second viewpoint position.
12. A texture rendering apparatus, the apparatus comprising:
a first constructing module, configured to construct a first ray according to a first position of a virtual camera within a first skybox to be rendered and a second position of a touch point;
a first determining module, configured to determine a loading priority of each face of the first skybox according to the first ray, wherein the first face through which the first ray passes has a higher loading priority than the other faces of the first skybox;
a first loading module, configured to load the first image resource corresponding to each face in order of the loading priority of the faces;
a first generating module, configured to generate, each time a first image resource finishes loading, a corresponding first texture image from that first image resource; and
a first rendering module, configured to render the first texture image corresponding to the first image resource onto the face of the first skybox corresponding to the first image resource, to obtain a texture-rendered first skybox.
13. The apparatus of claim 12, wherein the first determining module is configured to:
determine, among the faces of the first skybox, the first face through which the first ray passes;
determine, among the faces of the first skybox, a second face adjacent to the first face;
determine, among the faces of the first skybox, a third face through which a second ray opposite to the first ray passes; and
determine that the loading priority of the first face is higher than that of the second face, and that the loading priority of the second face is higher than that of the third face.
14. The apparatus of claim 13, wherein the first loading module is configured to:
add the first face to a first load queue, the second face to a second load queue, and the third face to a third load queue, wherein the first load queue has a higher loading priority than the second load queue, and the second load queue has a higher loading priority than the third load queue; and
load, in sequence, the first image resources corresponding to the faces in the first load queue, the second load queue, and the third load queue.
15. The apparatus of claim 12, wherein the first generating module is configured to:
create, each time a first image resource finishes loading, an initial texture object corresponding to that first image resource;
configure texture parameters of the initial texture object according to set texture information, to obtain a first texture object; and
render the first image resource according to the first texture object, to obtain the first texture image corresponding to the first image resource.
16. The apparatus of claim 15, wherein the first rendering module is configured to:
performing size adjustment on the first texture image according to the physical size of the surface corresponding to the first image resource in the first sky box;
and rendering the size-adjusted first texture image to the surface corresponding to the first image resource in the first sky box, so as to obtain a texture-rendered first sky box.
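One way to realise the size adjustment of claim 16 is to derive the texture's pixel dimensions from the surface's physical extent. The texels-per-unit density below is an assumed quality setting, not something the claim specifies:

```python
def resized_texture_dimensions(face_physical_size, texels_per_unit=256):
    # face_physical_size is (width, height) in scene units; the result is
    # the pixel resolution to which the first texture image is resized so
    # that it covers its surface at a uniform texel density.
    w, h = face_physical_size
    return (round(w * texels_per_unit), round(h * texels_per_unit))
```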
17. The apparatus of claim 15, wherein the definition of the first image resource is less than a set first definition threshold, the apparatus further comprising:
a second determining module, configured to determine, from the texture-rendered first sky box, a fourth surface located within a view frustum of the virtual camera, and determine a first target area from the fourth surface, wherein the first target area is located within the view frustum of the virtual camera;
a second loading module, configured to load at least one second image resource corresponding to the first target area, wherein the definition of the second image resource is greater than the first definition threshold and less than a set second definition threshold;
a second generation module, configured to generate a second texture image corresponding to each of the at least one second image resource;
and a second rendering module, configured to render the at least one second texture image to the first target area in the fourth surface of the texture-rendered first sky box, so as to obtain a texture-updated first sky box.
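Determining the visible region of claim 17 amounts to intersecting the view frustum's footprint on a surface with that surface's tile grid. The sketch below works in normalised face coordinates and treats the footprint as an axis-aligned rectangle, both simplifying assumptions:

```python
def tiles_in_view(face_grid, view_rect):
    # face_grid: (cols, rows) of tiles on one sky-box surface.
    # view_rect: (x0, y0, x1, y1), the frustum footprint in [0, 1] face UVs.
    # Returns the tile indices overlapping the visible rectangle — a
    # stand-in for the "first target area".
    cols, rows = face_grid
    x0, y0, x1, y1 = view_rect
    visible = []
    for r in range(rows):
        for c in range(cols):
            cx0, cy0 = c / cols, r / rows
            cx1, cy1 = (c + 1) / cols, (r + 1) / rows
            if cx0 < x1 and cx1 > x0 and cy0 < y1 and cy1 > y0:
                visible.append((c, r))
    return visible
```

Only the tiles returned here need the higher-definition second image resources; the rest of the surface keeps its low-definition texture for now.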
18. The apparatus of claim 17, wherein the apparatus further comprises:
and the release module is used for releasing the first texture object.
19. The apparatus of claim 17, wherein the apparatus further comprises:
a first emptying module, configured to empty the first loading queue, the second loading queue and the third loading queue;
an adding module, configured to add a second target area to the first loading queue, add a fifth surface to the second loading queue, and add a sixth surface to the third loading queue, wherein the second target area is the area, other than the first target area, in the fourth surface of the texture-updated first sky box, the fifth surface is a surface adjacent to the fourth surface, and the sixth surface is a surface, other than the fourth surface and the fifth surface, in the texture-updated first sky box;
a third loading module, configured to sequentially load third image resources corresponding to each surface or area in the first loading queue, the second loading queue and the third loading queue, wherein the physical size and definition of the third image resource match those of the second image resource;
a third generation module, configured to generate, each time a third image resource is loaded, a third texture image corresponding to that third image resource;
and a third rendering module, configured to render the third texture image corresponding to the third image resource to the surface or area corresponding to the third image resource, so as to obtain the texture-updated first sky box.
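The requeueing step of claim 19 can be sketched as a simple rebuild of the three queues after the visible region has been refined. All names here are illustrative assumptions:

```python
def rebuild_queues(fourth_surface_areas, first_target_area, adjacent_surfaces, remaining_surfaces):
    # After the first target area is refined, the queues are emptied and
    # refilled: the not-yet-refined part of the visible surface first,
    # its neighbours next, everything else last.
    queue1 = [a for a in fourth_surface_areas if a != first_target_area]
    queue2 = list(adjacent_surfaces)
    queue3 = list(remaining_surfaces)
    return queue1, queue2, queue3
```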
20. The apparatus of claim 19, wherein the third rendering module is configured to:
performing size adjustment on the third texture image corresponding to the third image resource according to a first ratio between a first physical size of the first image resource and a second physical size of the third image resource, and a second ratio between a third physical size of each surface in the first sky box and the first physical size;
and rendering the size-adjusted third texture image to the surface or area corresponding to the third image resource in the texture-updated first sky box.
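The two ratios of claim 20 compose into a single scale factor, since the first physical size cancels out of their product. The sketch below uses scalar sizes, an illustrative simplification of the claim's physical dimensions:

```python
def third_texture_scale(first_physical_size, third_physical_size, surface_physical_size):
    # first_ratio * second_ratio reduces to surface / third: how much the
    # third texture image must be scaled to fit its surface or area.
    first_ratio = first_physical_size / third_physical_size
    second_ratio = surface_physical_size / first_physical_size
    return first_ratio * second_ratio
```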
21. The apparatus of claim 17, wherein the apparatus further comprises:
a second construction module, configured to construct, in response to a zoom-in operation on the texture-updated first sky box, a third ray according to a third position of the virtual camera and a fourth position of the touch point;
a third determining module, configured to determine a third target area, in a seventh surface of the texture-updated first sky box, through which the third ray passes;
a fourth loading module, configured to load a fourth image resource corresponding to the third target area, wherein the definition of the fourth image resource is greater than the second definition threshold;
a fourth generation module, configured to generate a fourth texture image corresponding to the fourth image resource;
and a mapping module, configured to map the fourth texture image to the third target area of the seventh surface of the texture-updated first sky box.
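Locating the third target area of claim 21 is a ray-plane intersection: the ray from the camera through the touch point is intersected with the touched surface. The axis-aligned geometry below (a face lying on the plane z = face_z, centred on the z-axis) is an assumption made to keep the sketch short:

```python
def ray_face_uv(camera, touch, face_z, face_size):
    # Intersect the camera -> touch ray with the plane z = face_z and
    # return normalised (u, v) on the face, or None if there is no hit.
    cx, cy, cz = camera
    tx, ty, tz = touch
    dx, dy, dz = tx - cx, ty - cy, tz - cz
    if dz == 0:
        return None                      # ray parallel to the face plane
    t = (face_z - cz) / dz
    if t <= 0:
        return None                      # face is behind the camera
    x, y = cx + t * dx, cy + t * dy
    half = face_size / 2
    return ((x + half) / face_size, (y + half) / face_size)
```

The (u, v) result selects which high-definition fourth image resource to fetch for the zoomed region.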
22. The apparatus according to any one of claims 12-21, wherein the apparatus further comprises:
an acquisition module, configured to acquire a texture-rendered second sky box, wherein the center of the texture-rendered first sky box is located at a first viewpoint position, the center of the texture-rendered second sky box is located at a second viewpoint position, and the first viewpoint position is different from the second viewpoint position;
and a moving module, configured to move the virtual camera from the first viewpoint position to the second viewpoint position, so as to switch from the scene at the first viewpoint position to the scene at the second viewpoint position.
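The viewpoint switch of claim 22 can be sketched as moving the camera to the centre of the sky box pre-rendered for the target viewpoint. The state-dictionary layout is an assumption for illustration:

```python
def switch_viewpoint(state, second_viewpoint):
    # Moving the virtual camera to the second viewpoint activates the
    # texture-rendered sky box centred there, switching the visible scene.
    if second_viewpoint not in state["rendered_boxes"]:
        raise KeyError("no texture-rendered sky box at this viewpoint")
    state["camera"] = second_viewpoint
    state["scene"] = state["rendered_boxes"][second_viewpoint]
    return state
```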
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-11.
CN202310286850.1A 2023-03-22 2023-03-22 Texture rendering method and device, electronic equipment and medium Pending CN116385291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310286850.1A CN116385291A (en) 2023-03-22 2023-03-22 Texture rendering method and device, electronic equipment and medium


Publications (1)

Publication Number Publication Date
CN116385291A true CN116385291A (en) 2023-07-04

Family

ID=86979934


Country Status (1)

Country Link
CN (1) CN116385291A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination