CN117475070A - Image rendering method, device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN117475070A
CN117475070A
Authority
CN
China
Prior art keywords
voxels
target
rendering
dimensional
marked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311526246.8A
Other languages
Chinese (zh)
Inventor
李宇宙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tiantian Microchip Semiconductor Technology Co ltd
Original Assignee
Beijing Tiantian Microchip Semiconductor Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tiantian Microchip Semiconductor Technology Co ltd filed Critical Beijing Tiantian Microchip Semiconductor Technology Co ltd
Priority to CN202311526246.8A
Publication of CN117475070A
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)

Abstract

The invention provides an image rendering method, an image rendering device, a terminal device and a storage medium, and relates to the technical field of image rendering. The method comprises the following steps: according to a rendering ray corresponding to a target view angle, determining, from the plurality of voxels of a target three-dimensional reconstruction model of a three-dimensional scene, the three-dimensional coordinates of the plurality of target voxels in which the sampling points on the rendering ray are located; determining voxels to be marked according to the three-dimensional coordinates of the plurality of target voxels; processing with the target three-dimensional reconstruction model according to the target view angle to obtain attribute information of the plurality of voxels under the target view angle; and replacing the attribute information of the voxels to be marked among the attribute information of the plurality of voxels with preset attribute information, and rendering to generate a two-dimensional image of the three-dimensional scene under the target view angle, wherein the voxels to be marked and the other voxels have different attribute information in the two-dimensional image. In this way, the voxels to be marked are rendered differently from the other voxels, so that the positions corresponding to the voxels to be marked are marked in the generated two-dimensional image.

Description

Image rendering method, device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of image rendering technologies, and in particular, to an image rendering method, an image rendering device, a terminal device, and a storage medium.
Background
NeRF (Neural Radiance Fields) is an advanced computer graphics technique capable of generating highly realistic three-dimensional scenes. It is in fact an implicit representation of a three-dimensional scene, because a NeRF cannot be viewed directly as an explicit three-dimensional model in the way a point cloud or a mesh can.
In the related art, photos of a scene taken from different camera views, together with the extrinsic and intrinsic camera parameters corresponding to those views, are input into a NeRF network for training, so as to update the volume density and view-dependent attribute information of points in the NeRF scene space and obtain a trained NeRF network. Rendering based on the trained NeRF then yields a two-dimensional image of the scene from a new view angle.
However, in the related art, a designated position in the scene cannot be marked during rendering so that the mark appears in the resulting two-dimensional image.
Disclosure of Invention
The present invention aims to solve the above-mentioned problems occurring in the related art, and to provide an image rendering method, an image rendering device, a terminal device, and a storage medium.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the invention is as follows:
in a first aspect, an embodiment of the present invention provides an image rendering method, including:
according to a rendering ray corresponding to a target view angle, determining three-dimensional coordinates of a plurality of target voxels of a target three-dimensional reconstruction model of a three-dimensional scene, wherein the plurality of target voxels are the voxels in which the sampling points on the rendering ray are located;
determining voxels to be marked according to the three-dimensional coordinates of the target voxels;
processing by adopting the target three-dimensional reconstruction model according to the target view angle to obtain attribute information of the voxels under the target view angle;
and replacing attribute information of the voxels to be marked in the attribute information of the plurality of voxels with preset attribute information, and rendering to generate a two-dimensional image of the three-dimensional scene under the target view angle, wherein the voxels to be marked and other voxels in the two-dimensional image have different attribute information.
Optionally, the determining the voxel to be marked according to the three-dimensional coordinates of the target voxels includes:
respectively acquiring rendering indication information of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels, wherein the rendering indication information of each target voxel is used for indicating whether each target voxel is a voxel to be marked or not;
and determining the voxels to be marked according to the rendering indication information of the target voxels.
Optionally, the respectively obtaining rendering indication information of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels includes:
generating addresses of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels;
and respectively acquiring rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to the addresses of the plurality of target voxels.
Optionally, the generating addresses of the target voxels according to the three-dimensional coordinates of the target voxels includes:
mapping by adopting a preset hash function according to the three-dimensional coordinates of the target voxels, and generating addresses of the target voxels;
the step of respectively obtaining rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to the addresses of the plurality of target voxels includes:
and respectively determining, according to the addresses of the plurality of target voxels, the rendering indication information stored in the storage areas corresponding to those addresses from a preset hash table, as the rendering indication information of the plurality of target voxels.
Optionally, the method further includes, before determining, according to the addresses of the plurality of target voxels, rendering indication information stored in the storage areas corresponding to the addresses of the plurality of target voxels from a preset hash table, as the rendering indication information of the plurality of target voxels, respectively:
acquiring three-dimensional coordinates of the plurality of voxels;
mapping by adopting the preset hash function according to the three-dimensional coordinates of the voxels, and generating addresses of the voxels;
and generating the hash table according to the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of voxels.
Optionally, the acquiring three-dimensional coordinates of the plurality of voxels includes:
according to the value range of each coordinate axis in the model space where the target three-dimensional reconstruction model is located and the preset number of voxels, carrying out voxel division on the model space to obtain a plurality of voxels and voxel side lengths;
determining the center coordinates of the voxels according to the voxel side lengths and the value ranges of the coordinate axes; the three-dimensional coordinates of the plurality of voxels are center coordinates of the plurality of voxels.
Optionally, the determining the center coordinates of the plurality of voxels according to the voxel side lengths and the value ranges of the coordinate axes includes:
determining serial numbers of the plurality of voxels according to the arrangement sequence of the plurality of voxels in the model space;
and determining the central coordinates of the voxels according to the minimum value in the value range of each coordinate axis, the side length of the voxels and the serial numbers of the voxels.
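The voxel division and center-coordinate computation described above can be sketched as follows; the cubic model space, the uniform per-axis split, and all names are illustrative assumptions rather than details fixed by the patent:

```python
import numpy as np

def divide_into_voxels(axis_min, axis_max, n_per_axis):
    """Divide the model space into voxels and return the voxel side length
    together with the center coordinate of every voxel (a sketch assuming
    the same value range and voxel count along each coordinate axis)."""
    # voxel side length from the axis value range and preset voxel count
    side = (axis_max - axis_min) / n_per_axis
    # per-axis center positions: minimum value + (serial number + 0.5) * side
    axes = axis_min + (np.arange(n_per_axis) + 0.5) * side
    # enumerate voxels in their arrangement order within the model space
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
    centers = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    return side, centers
```

The `0.5` offset places each coordinate at the voxel center rather than its corner, matching the use of center coordinates as the voxels' three-dimensional coordinates.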
In a second aspect, an embodiment of the present invention further provides an image rendering apparatus, including:
the determining module is used for determining, from the plurality of voxels of a target three-dimensional reconstruction model of the three-dimensional scene, the three-dimensional coordinates of the plurality of target voxels in which the sampling points on the rendering ray are located, according to the rendering ray corresponding to the target view angle; and determining voxels to be marked according to the three-dimensional coordinates of the plurality of target voxels;
the processing module is used for processing by adopting the target three-dimensional reconstruction model according to the target view angle to obtain attribute information of the voxels under the target view angle;
and the rendering module is used for replacing the attribute information of the voxels to be marked in the attribute information of the voxels with preset attribute information and rendering to generate a two-dimensional image of the three-dimensional scene under the target visual angle, wherein the voxels to be marked and other voxels in the two-dimensional image have different attribute information.
Optionally, the determining module is specifically configured to obtain rendering indication information of the plurality of target voxels according to three-dimensional coordinates of the plurality of target voxels, where the rendering indication information of each target voxel is used to indicate whether each target voxel is a voxel to be marked; and determining the voxels to be marked according to the rendering indication information of the target voxels.
Optionally, the determining module is specifically configured to generate addresses of the plurality of target voxels according to three-dimensional coordinates of the plurality of target voxels; and respectively acquire rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to those addresses.
Optionally, the determining module is specifically configured to map with a preset hash function according to three-dimensional coordinates of the plurality of target voxels, so as to generate addresses of the plurality of target voxels;
the determining module is specifically configured to determine, according to the addresses of the plurality of target voxels, the rendering indication information stored in the storage areas corresponding to those addresses from a preset hash table, as the rendering indication information of the plurality of target voxels, respectively.
Optionally, the apparatus further includes:
the acquisition module is used for acquiring three-dimensional coordinates of the plurality of voxels;
the generation module is used for mapping by adopting the preset hash function according to the three-dimensional coordinates of the plurality of voxels to generate addresses of the plurality of voxels; and generating the hash table according to the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of voxels.
Optionally, the acquiring module is specifically configured to divide voxels of the model space according to a value range of each coordinate axis in the model space where the target three-dimensional reconstruction model is located and a preset number of voxels, so as to obtain the plurality of voxels and a voxel side length; determining the center coordinates of the voxels according to the voxel side lengths and the value ranges of the coordinate axes; the three-dimensional coordinates of the plurality of voxels are center coordinates of the plurality of voxels.
Optionally, the acquiring module is specifically configured to determine a sequence number of the plurality of voxels according to an arrangement sequence of the plurality of voxels in the model space; and determining the central coordinates of the voxels according to the minimum value in the value range of each coordinate axis, the side length of the voxels and the serial numbers of the voxels.
In a third aspect, an embodiment of the present invention further provides a terminal device, including: a memory storing a computer program executable by the processor, and a processor implementing the image rendering method according to any one of the above first aspects when the processor executes the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when read and executed, implements the image rendering method according to any one of the first aspects.
The beneficial effects of the invention are as follows: the embodiment of the invention provides an image rendering method, in which, according to a rendering ray corresponding to a target view angle, the three-dimensional coordinates of the plurality of target voxels in which the sampling points on the rendering ray are located are determined from the plurality of voxels of a target three-dimensional reconstruction model of a three-dimensional scene; voxels to be marked are determined according to the three-dimensional coordinates of the plurality of target voxels; attribute information of the plurality of voxels under the target view angle is obtained by processing with the target three-dimensional reconstruction model according to the target view angle; and the attribute information of the voxels to be marked among the attribute information of the plurality of voxels is replaced with preset attribute information, and rendering is performed to generate a two-dimensional image of the three-dimensional scene under the target view angle, wherein the voxels to be marked and the other voxels have different attribute information in the two-dimensional image. Because the attribute information of the voxels to be marked differs from that of the other voxels, the voxels to be marked are rendered with a different effect, so that the positions corresponding to the voxels to be marked are marked in the generated two-dimensional image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image rendering method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of displaying a two-dimensional image at a first target viewing angle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of displaying a two-dimensional image at a second target viewing angle according to an embodiment of the present invention;
fig. 4 is a second flowchart of an image rendering method according to an embodiment of the present invention;
fig. 5 is a third flowchart of an image rendering method according to an embodiment of the present invention;
fig. 6 is a fourth flowchart of an image rendering method according to an embodiment of the present invention;
fig. 7 is a fifth flowchart of an image rendering method according to an embodiment of the present invention;
fig. 8 is a sixth flowchart of an image rendering method according to an embodiment of the present invention;
fig. 9 is a seventh flowchart of an image rendering method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an image rendering device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, it should be noted that terms such as "upper" and "lower" indicate orientations or positional relationships based on those shown in the drawings, or the orientations in which the product of the application is conventionally placed in use. They are used merely for convenience and simplification of the description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present application.
Furthermore, the terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, without conflict, features in embodiments of the present application may be combined with each other.
The embodiment of the application provides an image rendering method, which is applied to terminal equipment, wherein the terminal equipment can be any one of the following: desktop computers, tablet computers, notebook computers, smart phones, gaming machines, and the like.
An explanation is provided below for an image rendering method according to an embodiment of the present application.
Fig. 1 is a schematic flow chart of an image rendering method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s101, determining three-dimensional coordinates of a plurality of target voxels of each sampling point on a rendering ray from a plurality of voxels of a target three-dimensional reconstruction model of a three-dimensional scene according to the rendering ray corresponding to the target view angle.
Wherein the rendering ray comprises a plurality of sampling points.
In the embodiment of the application, the three-dimensional coordinates of each sampling point on the rendering ray are obtained, and the three-dimensional coordinates of the plurality of target voxels in which the sampling points are located are determined from the plurality of voxels according to the three-dimensional coordinates of the plurality of voxels and the three-dimensional coordinates of each sampling point.
The target voxel is a voxel where a sampling point is located among a plurality of voxels.
In addition, a voxel refers to a cube element in a three-dimensional model space, which may be a cube in the model space, for example.
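Step S101 can be sketched as follows: sampling points on the rendering ray are generated as o + t·d and mapped to the centers of the voxels that contain them. The axis-aligned uniform voxel grid and all parameter names are illustrative assumptions:

```python
import numpy as np

def target_voxel_centers(origin, direction, t_values, grid_min, voxel_side):
    """Map sampling points on a rendering ray to the center coordinates of
    the target voxels that contain them (a sketch, not the patent's exact
    procedure)."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    grid_min = np.asarray(grid_min, dtype=float)
    # 3-D coordinates of each sampling point on the rendering ray: o + t * d
    points = origin[None, :] + np.asarray(t_values, dtype=float)[:, None] * direction[None, :]
    # integer index of the voxel (cube element) containing each sampling point
    indices = np.floor((points - grid_min) / voxel_side).astype(int)
    # center coordinate of each containing voxel; duplicates removed because
    # several sampling points may fall inside the same target voxel
    centers = grid_min + (indices + 0.5) * voxel_side
    return np.unique(centers, axis=0)
```

With a unit grid starting at the origin, sampling points at t = 0.1 and t = 0.9 along the x-axis fall in the same voxel, while t = 1.1 falls in the next one, so only two target voxels are returned.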
S102, determining voxels to be marked according to the three-dimensional coordinates of the plurality of target voxels.
The plurality of target voxels comprise voxels to be marked and other voxels, and the voxels to be marked in the plurality of target voxels can be found out according to the three-dimensional coordinates of the plurality of target voxels.
In addition, the voxel to be marked refers to a voxel that needs to be marked at the time of rendering.
And S103, processing by adopting a target three-dimensional reconstruction model according to the target view angle to obtain attribute information of a plurality of voxels under the target view angle.
Wherein the attribute information of the plurality of voxels includes: the volume density and color value of the plurality of voxels.
In some embodiments, according to the target view angle, processing is performed by using a target three-dimensional reconstruction model, and attribute information of a plurality of voxels under the target view angle is calculated, where the attribute information of the plurality of voxels includes: attribute information of voxels to be marked, and attribute information of the remaining voxels.
In the embodiment of the present application, attribute information of a plurality of voxels includes: the calculated volume density and color values of the plurality of voxels.
It is worth noting that the target three-dimensional reconstruction model is a NeRF model trained for the three-dimensional scene: an initial three-dimensional reconstruction model is trained using sample two-dimensional images of the three-dimensional scene taken from different view angles and the camera parameters corresponding to each sample two-dimensional image, to obtain the target three-dimensional reconstruction model for the three-dimensional scene. Rendering based on the target three-dimensional reconstruction model can then produce a two-dimensional image of the target scene from a new view angle.
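A minimal sketch of step S103, assuming the trained model is exposed as a callable mapping a 3-D point and a view direction to a volume density and a color; that signature is an assumption for illustration, not an interface stated in the patent:

```python
import numpy as np

def query_attributes(model, voxel_centers, view_dir):
    """Query a trained NeRF-style model for per-voxel attribute information
    (volume density sigma and RGB color value) under one target view angle."""
    sigmas, colors = [], []
    d = np.asarray(view_dir, dtype=float)
    d = d / np.linalg.norm(d)  # NeRF conditions color on a unit view direction
    for x in np.asarray(voxel_centers, dtype=float):
        sigma, rgb = model(x, d)  # assumed callable: (xyz, dir) -> (sigma, rgb)
        sigmas.append(sigma)
        colors.append(rgb)
    return np.array(sigmas), np.array(colors)
```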
And S104, replacing attribute information of the voxels to be marked in the attribute information of the plurality of voxels with preset attribute information, and rendering to generate a two-dimensional image of the three-dimensional scene under the target view angle.
Wherein the voxels to be marked and other voxels in the two-dimensional image have different attribute information.
In the embodiment of the application, the attribute information of the voxel to be marked in the attribute information of the plurality of voxels is replaced by the preset attribute information, the attribute information of other voxels in the attribute information of the plurality of voxels is kept unchanged, the output results of the plurality of sampling points can be obtained, and the rendering is performed according to the output results of the plurality of sampling points to generate a two-dimensional image of the three-dimensional scene under the target view angle.
The attribute information of the plurality of voxels includes: calculating the volume density and the color value of the plurality of voxels; the preset attribute information comprises: preset bulk density and preset color value. In order to make the marks clearer, the preset volume density may be set to be greater than a preset volume density threshold, for example, the preset volume density may be 50, and the preset color value may be a color value corresponding to red.
It is noted that since the attribute information of the voxel to be marked and the remaining voxels are different, the effects rendered for the voxel to be marked and the remaining voxels are also different. The two-dimensional image of the target view angle in the three-dimensional scene may include: marking the corresponding position of the voxel to be marked under the target view angle, and a scene image of the corresponding position of other voxels under the target view angle.
In addition, even when the position of the voxel to be marked in the target three-dimensional reconstruction model is unchanged, the marked position in the generated two-dimensional image differs for different target view angles. The target view angle can be preset according to actual requirements.
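Step S104 can be sketched with the standard NeRF volume-rendering quadrature. The preset density of 50 and the red preset color follow the example values given above, while the quadrature formula itself is an assumed concrete choice rather than one the patent specifies:

```python
import numpy as np

def mark_and_composite(sigmas, colors, marked, deltas,
                       preset_sigma=50.0, preset_color=(1.0, 0.0, 0.0)):
    """Replace the attribute information of the voxels to be marked with the
    preset volume density and color, then alpha-composite the samples along
    one rendering ray to produce a pixel color."""
    # replace attribute information only where the marked flag is set
    sigmas = np.where(marked, preset_sigma, sigmas)
    colors = np.where(np.asarray(marked)[:, None], np.asarray(preset_color), colors)
    # standard volume-rendering quadrature: opacity, transmittance, weights
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)  # rendered pixel color
```

A marked sample with density 50 is nearly opaque, so the ray terminates on it and the pixel takes the preset red color, which is how the mark appears in the two-dimensional image.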
Fig. 2 is a schematic display diagram of a two-dimensional image at a first target viewing angle according to an embodiment of the present invention, where, as shown in fig. 2, the two-dimensional image includes: a flowerpot at a first target viewing angle, the area right in front of the flowerpot being marked; fig. 3 is a schematic display diagram of a two-dimensional image at a second target viewing angle according to an embodiment of the present invention, where, as shown in fig. 3, the two-dimensional image includes a flowerpot at the second target viewing angle, and a right side area of the flowerpot is marked. The first target viewing angle may be an elevation viewing angle and the second target viewing angle may be a left viewing angle.
In summary, an embodiment of the present application provides an image rendering method, including: according to the rendering ray corresponding to the target view angle, determining three-dimensional coordinates of a plurality of target voxels of the target three-dimensional reconstruction model of the three-dimensional scene, wherein the sampling points on the rendering ray are located in the plurality of target voxels; determining voxels to be marked according to the three-dimensional coordinates of the plurality of target voxels; processing by adopting a target three-dimensional reconstruction model according to the target view angle to obtain attribute information of a plurality of voxels under the target view angle; and replacing attribute information of the voxels to be marked in the attribute information of the plurality of voxels with preset attribute information, and rendering to generate a two-dimensional image of the three-dimensional scene under the target view angle, wherein the voxels to be marked and other voxels in the two-dimensional image have different attribute information. The attribute information of the voxels to be marked in the plurality of voxels is different from that of other voxels, so that the rendering effects of the voxels to be marked in different positions are different from those of other voxels, and the positions corresponding to the voxels to be marked in the generated rendered two-dimensional image are marked.
Optionally, fig. 4 is a second flowchart of an image rendering method according to an embodiment of the present invention, as shown in fig. 4, a process of determining a voxel to be marked according to three-dimensional coordinates of the plurality of target voxels in S102 includes:
s201, respectively acquiring rendering instruction information of a plurality of target voxels according to three-dimensional coordinates of the plurality of target voxels.
S202, determining voxels to be marked according to rendering indication information of a plurality of target voxels.
Wherein the rendering indication information of each target voxel is used for indicating whether each target voxel is a voxel to be marked.
In addition, the plurality of target voxels include voxels to be marked and other voxels, and the rendering indication information of the voxels to be marked is different from that of the other voxels; different voxels to be marked may share the same rendering indication information, and so may different other voxels. The rendering indication information of the voxels to be marked may be configured as first identification information, and that of the remaining voxels as second identification information.
In this embodiment of the present application, the first identification information and the second identification information are different pieces of information; they may be different numbers, for example, the first identification information may be 1 and the second identification information may be 0. Of course, they may also be different letters or other types of distinct information, which is not particularly limited in the embodiment of the present application.
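The flag convention above can be sketched as a simple lookup; the dictionary standing in for the stored indication table, and the function name, are illustrative assumptions:

```python
# Illustrative constants matching the example values in the text:
FIRST_ID = 1   # rendering indication information for a voxel to be marked
SECOND_ID = 0  # rendering indication information for the remaining voxels

def voxels_to_mark(target_voxels, indication):
    """Return the target voxels whose rendering indication information
    equals the first identification information (i.e. the voxels to be
    marked)."""
    return [v for v in target_voxels if indication[tuple(v)] == FIRST_ID]
```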
Optionally, fig. 5 is a third flowchart of an image rendering method provided by an embodiment of the present invention; as shown in fig. 5, respectively obtaining the rendering indication information of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels includes:
S301, generating addresses of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels.
Specifically, the addresses of the plurality of target voxels may be calculated from the three-dimensional coordinates of the plurality of target voxels by using a preset hash function.
S302, respectively acquiring rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to the addresses of the plurality of target voxels.
In this embodiment of the present application, rendering indication information is pre-stored in the storage area corresponding to the address of each voxel in the plurality of voxels. The target voxels are some or all of the plurality of voxels, so rendering indication information is likewise pre-stored in the storage areas corresponding to the addresses of the plurality of target voxels. Since the storage areas corresponding to the addresses of the respective voxels are different, the rendering indication information of the plurality of target voxels can be acquired from the storage areas corresponding to the addresses of the plurality of target voxels.
Optionally, fig. 6 is a fourth flowchart of an image rendering method provided by an embodiment of the present invention. As shown in fig. 6, the generating, in S301, of the addresses of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels may include:
S401, mapping with a preset hash function according to the three-dimensional coordinates of the plurality of target voxels to generate the addresses of the plurality of target voxels.
Wherein the three-dimensional coordinates of the plurality of target voxels may be center coordinates of the plurality of target voxels. And mapping by adopting a preset hash function according to the center coordinates of the plurality of target voxels to generate hash values of the plurality of target voxels, wherein the hash values of the plurality of target voxels are addresses of the plurality of target voxels.
In S302, the process of acquiring the rendering indication information of the plurality of target voxels from the storage areas corresponding to the addresses of the plurality of target voxels according to those addresses may include:
S402, according to the addresses of the plurality of target voxels, respectively determining, from a preset hash table, the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of target voxels as the rendering indication information of the plurality of target voxels.
In some embodiments, according to hash values of the plurality of target voxels, rendering indication information stored in the storage areas corresponding to the hash values of the plurality of target voxels, that is, rendering indication information stored in the storage areas corresponding to the addresses of the plurality of target voxels, is determined from the preset hash table, respectively, so as to obtain rendering indication information of the plurality of target voxels.
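The lookup in S401-S402 can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the coordinate bound `b`, the per-dimension voxel count `n`, and the choice of which voxel is marked are all assumed values for demonstration.

```python
import math

def hash_address(x, y, z, b, s, n):
    # Preset hash of the embodiment: [(x+b)/s]*n^2 + [(y+b)/s]*n + [(z+b)/s], [] = floor
    return (math.floor((x + b) / s) * n * n
            + math.floor((y + b) / s) * n
            + math.floor((z + b) / s))

b, n = 1.0, 4              # assumed coordinate bound and per-dimension voxel count
s = 2 * b / n              # voxel side length
hash_table = [0] * n**3    # second identification information (0) everywhere

# mark one voxel to be marked: set its entry to the first identification information (1)
hash_table[hash_address(0.25, 0.25, 0.25, b, s, n)] = 1

# S401/S402: map target-voxel center coordinates to addresses, then read the flags
centers = [(0.25, 0.25, 0.25), (-0.75, 0.25, 0.25)]
flags = [hash_table[hash_address(x, y, z, b, s, n)] for x, y, z in centers]
# flags == [1, 0]: only the first center lies in the marked voxel
```

Because the hash address is just the voxel's flattened grid cell index, the lookup is a single array access per target voxel.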
Optionally, fig. 7 is a fifth flowchart of an image rendering method provided by an embodiment of the present invention. As shown in fig. 7, before the process in S402 of determining, from a preset hash table according to the addresses of the plurality of target voxels, the rendering indication information stored in the storage areas corresponding to those addresses as the rendering indication information of the plurality of target voxels, the method may further include:
S501, acquiring three-dimensional coordinates of a plurality of voxels.
The three-dimensional coordinates of the plurality of voxels are the three-dimensional coordinates of the center points of the plurality of voxels, that is, the center coordinates of the plurality of voxels.
In some embodiments, a model space of a target three-dimensional reconstruction model is first determined, the model space of the target three-dimensional reconstruction model is subjected to voxel division to obtain a plurality of voxels, and then three-dimensional coordinates of each voxel in the plurality of voxels are calculated.
It should be noted that, the three-dimensional coordinates of a plurality of voxels may be obtained separately, or the three-dimensional coordinates of a plurality of voxels may be obtained sequentially, or the three-dimensional coordinates of a plurality of voxels may be obtained by other methods, which is not particularly limited in the embodiment of the present application.
S502, mapping is carried out by adopting a preset hash function according to three-dimensional coordinates of a plurality of voxels, and addresses of the plurality of voxels are generated;
In this embodiment of the present application, mapping is performed with the preset hash function according to the center coordinates of the plurality of voxels to generate hash values of the plurality of voxels, where the hash values of the plurality of voxels are the addresses of the plurality of voxels.
Wherein the three-dimensional coordinates of each voxel of the plurality of voxels are different, and the address of each voxel is correspondingly different, and the address of the voxel can represent the position of the voxel in the model space of the target three-dimensional reconstruction model.
S503, generating a hash table according to the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of voxels.
The hash table includes the addresses of the plurality of voxels.
In this embodiment of the present application, the expression of the preset hash function H may be:

H(x, y, z) = [(x+b)/s]·n^2 + [(y+b)/s]·n + [(z+b)/s]

where b represents the maximum value in the value range of the three-dimensional coordinates, s represents the voxel side length, n represents the number of voxels in each dimension, and [ ] denotes rounding down (the floor operation), i.e., taking the largest integer not exceeding the bracketed quantity.
It should be noted that the number of the plurality of voxels may be n^3. Using the preset hash function, the center coordinates of the plurality of voxels can be mapped to n^3 consecutive addresses, yielding a hash table T. The n^3 addresses stored in the hash table T are in one-to-one correspondence with the n^3 voxels, and the n^3 consecutive addresses in the hash table are the addresses of the plurality of voxels.
In this embodiment of the present application, the storage areas corresponding to the addresses of the plurality of voxels in the hash table store rendering indication information. Initially, the storage areas corresponding to the addresses of the plurality of voxels store the second identification information, which may be set to 0 by way of example; the second identification information stored in the storage areas corresponding to the addresses of the voxels to be marked may then be modified to the first identification information, for example, modified to 1.
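The claimed one-to-one correspondence between the n^3 voxel centers and n^3 consecutive addresses can be verified with a short sketch. The bound `b` and count `n` below are assumed demonstration values; the formulas follow the embodiment's hash function and center-coordinate expressions.

```python
import math

def preset_hash(x, y, z, b, s, n):
    # H(x, y, z) = [(x+b)/s]*n^2 + [(y+b)/s]*n + [(z+b)/s], [] being floor
    return (math.floor((x + b) / s) * n**2
            + math.floor((y + b) / s) * n
            + math.floor((z + b) / s))

b, n = 1.0, 4
s = 2 * b / n
# center coordinates of all n^3 voxels, per x_i = -b + (X_i + 0.5)*s, etc.
centers = [(-b + (X + 0.5) * s, -b + (Y + 0.5) * s, -b + (Z + 0.5) * s)
           for X in range(n) for Y in range(n) for Z in range(n)]
addresses = [preset_hash(x, y, z, b, s, n) for x, y, z in centers]

# the centers map one-to-one onto the consecutive addresses 0 .. n^3 - 1
assert sorted(addresses) == list(range(n**3))

hash_table = [0] * n**3  # every entry starts as the second identification information
```

Since the mapping is a bijection onto a contiguous range, the hash table needs no collision handling and can be a flat array of length n^3.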
In some embodiments, coordinates of a plurality of sampling points may be obtained, and a preset hash function is used to calculate an address of each sampling point, and an address of a target voxel matching the address of each sampling point is found in the addresses of a plurality of voxels.
Optionally, fig. 8 is a sixth flowchart of an image rendering method according to an embodiment of the present invention. As shown in fig. 8, the process of acquiring the three-dimensional coordinates of the plurality of voxels in S501 may include:
S601, performing voxel division on the model space according to the value range of each coordinate axis in the model space where the target three-dimensional reconstruction model is located and a preset number of voxels, to obtain a plurality of voxels and a voxel side length.
The coordinate axes in the model space may include the x-axis, the y-axis, and the z-axis, and the three-dimensional coordinates in the model space have a value range of x, y, z ∈ (-b, +b). The preset number of voxels refers to the number of voxels in each dimension, which may be denoted as n. Dividing the model space yields a plurality of voxels, the number of which is n^3, and the voxel side length is s = 2b/n.
S602, determining the center coordinates of a plurality of voxels according to the side lengths of the voxels and the value ranges of all coordinate axes.
Wherein the three-dimensional coordinates of the plurality of voxels are center coordinates of the plurality of voxels.
The center coordinates of each voxel may be calculated sequentially, or may be calculated simultaneously, or may be calculated in other manners, which is not particularly limited in the embodiment of the present application.
Optionally, fig. 9 is a seventh flowchart of an image rendering method according to an embodiment of the present invention. As shown in fig. 9, the process of determining the center coordinates of the plurality of voxels according to the voxel side length and the value range of each coordinate axis in S602 may include:
S701, determining serial numbers of the plurality of voxels according to the arrangement sequence of the plurality of voxels in the model space;
S702, determining the center coordinates of the plurality of voxels according to the minimum value in the value range of each coordinate axis, the voxel side length, and the serial numbers of the plurality of voxels.
Note that, assuming the number of voxels is m, let i be the voxel serial number, i = 0, 1, 2, …, m-1. The position serial numbers of the m voxels in the three dimensions x, y, z are X_i, Y_i, Z_i respectively, with X_i, Y_i, Z_i ∈ {0, 1, 2, …, n-1}. The coordinates of the center points of the m voxels in the three dimensions x, y, z are x_i, y_i, z_i respectively.
In some embodiments, based on the geometric positional relationship, the center coordinates of the plurality of voxels may be obtained:
x_i = -b + (X_i + 0.5)·s

y_i = -b + (Y_i + 0.5)·s

z_i = -b + (Z_i + 0.5)·s

where X_i, Y_i, Z_i represent the position serial numbers in the three dimensions x, y, z; x_i, y_i, z_i represent the coordinates of the center points of the m voxels in the three dimensions x, y, z; i = 0, 1, 2, …, m-1; s represents the voxel side length; and -b represents the minimum value of the three-dimensional coordinates in the model space.
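S701-S702 can be sketched as below. The z-fastest ordering used to derive the per-dimension position numbers X_i, Y_i, Z_i from the serial number i is an assumption for illustration, since the embodiment does not fix a particular arrangement order; b and n are likewise assumed demonstration values.

```python
b, n = 1.0, 2      # assumed coordinate bound and per-dimension voxel count
s = 2 * b / n      # voxel side length s = 2b/n
m = n**3           # total number of voxels

def center(i):
    # split serial number i into position numbers X_i, Y_i, Z_i (z varies fastest; assumed order)
    Z = i % n
    Y = (i // n) % n
    X = i // (n * n)
    # x_i = -b + (X_i + 0.5)*s, and likewise for y_i and z_i
    return (-b + (X + 0.5) * s, -b + (Y + 0.5) * s, -b + (Z + 0.5) * s)

centers = [center(i) for i in range(m)]
# center(0) is the corner voxel nearest (-b, -b, -b); center(m-1) the opposite corner
```

The 0.5 offset places each coordinate at the midpoint of its voxel rather than at a voxel boundary.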
In summary, the embodiment of the present application provides an image rendering method in which the attribute information of the voxels to be marked differs from that of the other voxels among the plurality of voxels, so that the voxels to be marked and the other voxels at different positions produce different rendering effects. When the target three-dimensional reconstruction model is rendered, the positions corresponding to the voxels to be marked in the rendered two-dimensional image are thereby marked, and the whole implementation process is simple and efficient. The target three-dimensional reconstruction model may be a NeRF model; the image rendering method provided by the embodiment of the present application can hash-encode the voxels in the NeRF model and then perform labeled rendering, so that the two-dimensional image rendered based on the NeRF model carries labels.
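The marking idea above can be sketched as follows. This is a hedged illustration, not the embodiment's exact rendering pipeline: the preset highlight color, the sample densities, and the simple per-sample opacity model are all assumed for demonstration.

```python
import math

PRESET_COLOR = (1.0, 0.0, 0.0)   # assumed preset attribute: highlight marked voxels in red

def composite_ray(samples, flags):
    """samples: list of ((r, g, b), density) per sample point along one rendering ray;
    flags: 1 if the sample's voxel is a voxel to be marked, else 0."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for (rgb, density), flag in zip(samples, flags):
        if flag == 1:                     # replace the attribute of voxels to be marked
            rgb = PRESET_COLOR
        alpha = 1.0 - math.exp(-density)  # simple per-sample opacity (assumed model)
        weight = transmittance * alpha
        for c in range(3):
            color[c] += weight * rgb[c]
        transmittance *= 1.0 - alpha
    return tuple(color)

# one dense green sample: marked renders near-red, unmarked renders near-green
marked = composite_ray([((0.0, 1.0, 0.0), 10.0)], [1])
unmarked = composite_ray([((0.0, 1.0, 0.0), 10.0)], [0])
```

Because the attribute substitution happens before compositing, the marked regions appear in the preset color in the final two-dimensional image with no extra post-processing pass.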
The following describes an image rendering device, a terminal device, a storage medium, etc. for executing the image rendering method provided in the present application, and specific implementation processes and technical effects thereof refer to relevant contents of the image rendering method, which are not described in detail below.
Fig. 10 is a schematic structural diagram of an image rendering device according to an embodiment of the present invention, as shown in fig. 10, where the device includes:
a determining module 101, configured to determine, according to a rendering ray corresponding to a target view angle, three-dimensional coordinates of a plurality of target voxels of a target three-dimensional reconstruction model of a three-dimensional scene, where sampling points on the rendering ray are located, from a plurality of voxels; determining voxels to be marked according to the three-dimensional coordinates of the target voxels;
the processing module 102 is configured to process by using the target three-dimensional reconstruction model according to the target view angle, so as to obtain attribute information of the plurality of voxels under the target view angle;
and the rendering module 103 is configured to replace attribute information of the voxel to be marked in attribute information of the plurality of voxels with preset attribute information, and perform rendering to generate a two-dimensional image of the three-dimensional scene under the target view angle, where the voxel to be marked and other voxels in the two-dimensional image have different attribute information.
Optionally, the determining module 101 is specifically configured to obtain rendering indication information of the plurality of target voxels according to three-dimensional coordinates of the plurality of target voxels, where the rendering indication information of each target voxel is used to indicate whether each target voxel is a voxel to be marked; and determining the voxels to be marked according to the rendering indication information of the target voxels.
Optionally, the determining module 101 is specifically configured to generate addresses of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels; and respectively acquire rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to the addresses of the plurality of target voxels.
Optionally, the determining module 101 is specifically configured to map with a preset hash function according to three-dimensional coordinates of the plurality of target voxels, so as to generate addresses of the plurality of target voxels;
The determining module 101 is specifically configured to determine, according to the addresses of the plurality of target voxels, the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of target voxels from a preset hash table, respectively, as the rendering indication information of the plurality of target voxels.
Optionally, the apparatus further includes:
the acquisition module is used for acquiring three-dimensional coordinates of the plurality of voxels;
The generation module is configured to map with the preset hash function according to the three-dimensional coordinates of the plurality of voxels to generate addresses of the plurality of voxels; and generate the hash table according to the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of voxels.
Optionally, the acquiring module is specifically configured to divide voxels of the model space according to a value range of each coordinate axis in the model space where the target three-dimensional reconstruction model is located and a preset number of voxels, so as to obtain the plurality of voxels and a voxel side length; determining the center coordinates of the voxels according to the voxel side lengths and the value ranges of the coordinate axes; the three-dimensional coordinates of the plurality of voxels are center coordinates of the plurality of voxels.
Optionally, the acquiring module is specifically configured to determine a sequence number of the plurality of voxels according to an arrangement sequence of the plurality of voxels in the model space; and determining the central coordinates of the voxels according to the minimum value in the value range of each coordinate axis, the side length of the voxels and the serial numbers of the voxels.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), one or more digital signal processors (Digital Signal Processor, abbreviated as DSP), one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when one of the above modules is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 11, the terminal device includes: a processor 201 and a memory 202.
The memory 202 is used for storing a program, and the processor 201 calls the program stored in the memory 202 to execute the above-described method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention further provides a program product, such as a computer readable storage medium, including a program which, when executed by a processor, performs the above method embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, etc.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image rendering method, comprising:
according to a rendering ray corresponding to a target view angle, determining, from a plurality of voxels of a target three-dimensional reconstruction model of a three-dimensional scene, three-dimensional coordinates of a plurality of target voxels in which sampling points on the rendering ray are located;
determining voxels to be marked according to the three-dimensional coordinates of the target voxels;
processing by adopting the target three-dimensional reconstruction model according to the target view angle to obtain attribute information of the voxels under the target view angle;
and replacing attribute information of the voxels to be marked in the attribute information of the plurality of voxels with preset attribute information, and rendering to generate a two-dimensional image of the three-dimensional scene under the target view angle, wherein the voxels to be marked and other voxels in the two-dimensional image have different attribute information.
2. The method of claim 1, wherein determining the voxel to be labeled from the three-dimensional coordinates of the plurality of target voxels comprises:
respectively acquiring rendering indication information of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels, wherein the rendering indication information of each target voxel is used for indicating whether each target voxel is a voxel to be marked or not;
and determining the voxels to be marked according to the rendering indication information of the target voxels.
3. The method according to claim 2, wherein the obtaining rendering indication information of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels, respectively, includes:
generating addresses of the plurality of target voxels according to the three-dimensional coordinates of the plurality of target voxels;
and respectively acquiring rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to the addresses of the plurality of target voxels.
4. A method according to claim 3, wherein said generating addresses of said plurality of target voxels from three-dimensional coordinates of said plurality of target voxels comprises:
mapping by adopting a preset hash function according to the three-dimensional coordinates of the target voxels, and generating addresses of the target voxels;
the step of respectively acquiring rendering indication information of the plurality of target voxels from storage areas corresponding to the addresses of the plurality of target voxels according to the addresses of the plurality of target voxels includes:

and respectively determining, from a preset hash table according to the addresses of the plurality of target voxels, the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of target voxels as the rendering indication information of the plurality of target voxels.
5. The method according to claim 4, wherein before the respectively determining, from a preset hash table according to the addresses of the plurality of target voxels, of the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of target voxels as the rendering indication information of the plurality of target voxels, the method further comprises:
acquiring three-dimensional coordinates of the plurality of voxels;
mapping by adopting the preset hash function according to the three-dimensional coordinates of the voxels, and generating addresses of the voxels;
and generating the hash table according to the rendering indication information stored in the storage areas corresponding to the addresses of the plurality of voxels.
6. The method of claim 5, wherein the acquiring three-dimensional coordinates of the plurality of voxels comprises:
according to the value range of each coordinate axis in the model space where the target three-dimensional reconstruction model is located and the preset number of voxels, carrying out voxel division on the model space to obtain a plurality of voxels and voxel side lengths;
determining the center coordinates of the voxels according to the voxel side lengths and the value ranges of the coordinate axes; the three-dimensional coordinates of the plurality of voxels are center coordinates of the plurality of voxels.
7. The method of claim 6, wherein determining the center coordinates of the plurality of voxels according to the voxel side lengths and the value ranges of the respective coordinate axes comprises:
determining serial numbers of the plurality of voxels according to the arrangement sequence of the plurality of voxels in the model space;
and determining the central coordinates of the voxels according to the minimum value in the value range of each coordinate axis, the side length of the voxels and the serial numbers of the voxels.
8. An image rendering apparatus, comprising:
the determining module is used for determining three-dimensional coordinates of a plurality of target voxels of each sampling point on the rendering ray from a plurality of voxels of a target three-dimensional reconstruction model of the three-dimensional scene according to the rendering ray corresponding to the target view angle; determining voxels to be marked according to the three-dimensional coordinates of the target voxels;
the processing module is used for processing by adopting the target three-dimensional reconstruction model according to the target view angle to obtain attribute information of the voxels under the target view angle;
and the rendering module is used for replacing the attribute information of the voxels to be marked in the attribute information of the voxels with preset attribute information and rendering to generate a two-dimensional image of the three-dimensional scene under the target visual angle, wherein the voxels to be marked and other voxels in the two-dimensional image have different attribute information.
9. A terminal device, comprising: a memory and a processor, the memory storing a computer program executable by the processor, wherein the processor implements the image rendering method according to any one of claims 1-7 when executing the computer program.
10. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when read and executed, implements the image rendering method according to any of the preceding claims 1-7.
CN202311526246.8A 2023-11-16 2023-11-16 Image rendering method, device, terminal equipment and storage medium Pending CN117475070A (en)


