CN115170715A - Image rendering method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN115170715A
CN115170715A
Authority
CN
China
Prior art keywords
rendering
image
ray
tsdf
interactive interface
Prior art date
Legal status
Pending
Application number
CN202210761733.1A
Other languages
Chinese (zh)
Inventor
赵斌涛
林忠威
张健
江腾飞
Current Assignee
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN202210761733.1A priority Critical patent/CN115170715A/en
Publication of CN115170715A publication Critical patent/CN115170715A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling


Abstract

The embodiments of the disclosure relate to an image rendering method, an image rendering apparatus, an electronic device, and a medium. The method comprises the following steps: in response to a display instruction received in an interactive interface, acquiring an image to be rendered and rendering parameters in the interactive interface; generating a ray set based on the image to be rendered and the rendering parameters; acquiring the intersection point of each ray in the ray set with the implicit surface of a pre-generated truncated signed distance function (TSDF) field; and rendering the TSDF field grid points at which the intersection points are located based on the parameter information of the intersection points, generating a rendering result and displaying it on the interactive interface. With this technical solution, the TSDF field data is used directly for rendering and display, realizing real-time rendering of the image: the real-time performance of image rendering is improved, the impact on the frame rate is small, and the display effect in the image rendering scene is further improved.

Description

Image rendering method and device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image rendering method and apparatus, an electronic device, and a medium.
Background
Generally, TSDF (Truncated Signed Distance Function) technology merges all scanned frames into one scalar score field, and is currently mostly used for rendering and display in scanned scenes through the TSDF field.
In the related art, point rendering or triangular mesh rendering is adopted. Point rendering extracts some of the points on the TSDF field's implicit surface and renders them; the effect is poor: when the camera is close to the model to be rendered, most areas contain no points, and when the camera is far away, the rendered picture shows unsmooth color transitions. Triangular mesh rendering triangulates the implicit surface of the TSDF field into a triangular mesh and renders that mesh; this consumes more computing resources and memory and has a large impact on the frame rate. In addition, triangulation introduces a certain loss of precision, and when the camera is close the individual triangles become visible.
Disclosure of Invention
To solve the above technical problems, or at least partially solve them, the present disclosure provides an image rendering method, an apparatus, an electronic device, and a medium.
The embodiment of the disclosure provides an image rendering method, which comprises the following steps:
responding to a display instruction received in an interactive interface, and acquiring an image to be rendered and rendering parameters in the interactive interface;
generating a ray set based on the image to be rendered and the rendering parameters;
acquiring the intersection point of each ray in the ray set with the implicit surface of a pre-generated truncated signed distance function (TSDF) field;
rendering the TSDF field lattice points where the intersection points are located based on the parameter information of the intersection points, generating rendering results and displaying the rendering results on the interactive interface.
An embodiment of the present disclosure further provides an image rendering apparatus, including:
the response acquisition module is used for responding to a display instruction received in an interactive interface and acquiring an image to be rendered and rendering parameters in the interactive interface;
a generating module, configured to generate a ray set based on the image to be rendered and the rendering parameter;
the first acquisition module is used for acquiring the intersection point of each ray in the ray set with the implicit surface of a pre-generated truncated signed distance function (TSDF) field;
and the generating and displaying module is used for rendering the TSDF field lattice point where the intersection point is located based on the parameter information of the intersection point, generating a rendering result and displaying the rendering result on the interactive interface.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing the processor-executable instructions; the processor is used for reading the executable instructions from the memory and executing the instructions to realize the image rendering method provided by the embodiment of the disclosure.
The embodiment of the present disclosure also provides a computer-readable storage medium, which stores a computer program for executing the image rendering method provided by the embodiment of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages. According to the image rendering scheme of the embodiments: in response to a display instruction received in an interactive interface, an image to be rendered and rendering parameters in the interactive interface are obtained; a ray set is generated based on the image to be rendered and the rendering parameters; the intersection point of each ray in the ray set with the implicit surface of a pre-generated truncated signed distance function (TSDF) field is obtained; and the TSDF field grid points at the intersection points are rendered based on the parameter information of the intersection points, generating a rendering result that is displayed on the interactive interface. With this technical solution, the TSDF field data is used directly for rendering and display, realizing real-time image rendering: real-time performance is improved, the impact on the frame rate is small, and the display effect in the image rendering scene is further improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image rendering method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another image rendering method according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In practical applications, for real-time rendering and display of images in a scanning scene (which may be image scanning, normal scanning, infrared scanning, laser scanning, and the like), current schemes use real-time point rendering or real-time triangular-mesh rendering, and suffer from either poor rendering quality or low rendering efficiency.
To solve these problems, the image rendering method includes: in response to a display instruction received in an interactive interface, obtaining an image to be rendered and rendering parameters in the interactive interface; generating a ray set based on the image to be rendered and the rendering parameters; obtaining the intersection point of each ray in the ray set with the implicit surface of a pre-generated truncated signed distance function (TSDF) field; and rendering the TSDF field grid points at the intersection points based on the parameter information of the intersection points, generating a rendering result and displaying it on the interactive interface.
It will be appreciated that three-dimensional space is rasterized at a fixed spacing (the pitch, a parameter of the field), and each cell records information such as its center, score, normal, weight, and color; together these cells constitute the TSDF field. Given any point in space, if the point has a score in the TSDF field and that score is 0, then the point lies on the surface described by the TSDF field (the purpose of the TSDF field is to describe this surface, i.e., the TSDF implicit surface).
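As a minimal illustrative sketch of the rasterized field just described (the class name `TSDFField` and the array layout are assumptions for illustration, not the patent's implementation), each cell stores a score, weight, normal, and color, while its center is derived from the grid indices and the pitch:

```python
import numpy as np

class TSDFField:
    """Illustrative container for a TSDF voxel grid (layout assumed)."""

    def __init__(self, dims, pitch, origin=(0.0, 0.0, 0.0)):
        self.pitch = pitch                    # grid spacing between cell centers
        self.origin = np.asarray(origin)      # world position of the grid's corner
        self.score = np.full(dims, np.nan)    # truncated signed distance per cell
        self.weight = np.zeros(dims)          # fusion weight (cell quality)
        self.normal = np.zeros(dims + (3,))   # surface normal per cell
        self.color = np.zeros(dims + (3,))    # RGB color per cell

    def center(self, ijk):
        """World-space center of cell ijk; never stored, always derived."""
        return self.origin + (np.asarray(ijk) + 0.5) * self.pitch

field = TSDFField(dims=(64, 64, 64), pitch=0.01)
```

A zero crossing of `score` between neighboring cells marks the implicit surface the field describes.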
The image rendering method of the embodiment of the disclosure does not extract data from the TSDF field in advance; instead, the TSDF field data is used directly at render time. Once the intrinsic and extrinsic parameters of the rendering camera are determined, the screen consists of a number of pixels, and the lines from the rendering camera's center through those pixels form a ray set. The intersection of each ray with the implicit surface of the TSDF field is computed, and the information at the intersection (depth, normal direction, color, weight, and so on) is then used to render the color and depth of the pixel, completing real-time rendering.
Therefore, the TSDF field data is directly used for rendering and displaying, real-time rendering of the image is achieved, the real-time performance of the image rendering can be improved, the influence on the frame rate is smaller, and the display effect under the image rendering scene is further improved.
Specifically, fig. 1 is a flowchart of an image rendering method provided by an embodiment of the present disclosure, where the method may be executed by an image rendering apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 1, the method includes:
step 101, responding to a display instruction received in an interactive interface, and acquiring an image to be rendered and rendering parameters in the interactive interface.
The image to be rendered refers to the pixels of the current display screen in the interactive interface, i.e., the region that needs to be rendered; it can also be understood as the surface of the TSDF field under the current camera view angle. The rendering parameters are the extrinsic parameters of the rendering camera (its pose in space, composed of a rotation and a translation) and its intrinsic parameters (for a pinhole rendering camera, the intrinsics convert three-dimensional coordinates into two-dimensional plane coordinates plus a depth). Here the rendering camera is a simulated camera defined in the program, not a real camera.
In the embodiment of the present disclosure, the TSDF field is composed of uniformly distributed grid points, each recording information such as the cell center point, score value, normal direction, weight, and color. Multiple scanned images may be fused into one TSDF field.
In the embodiment of the present disclosure, in response to a display instruction received in an interactive interface, acquiring an image to be rendered and rendering parameters in the interactive interface includes: receiving a display instruction based on a trigger operation performed by a user on a preset display control on a display page, and, in response to that instruction, acquiring the image to be rendered and the rendering parameters. The display page is a page for displaying a rendered image, and the display control is an anchor point provided in the display page for performing the display operation; the presentation form of the display control is not limited and may be, for example, an icon or text information.
It should be noted that, the triggering operation on the display control may also be an operation on an image, and the like, and the manner of receiving the display instruction is not specifically limited by the present disclosure.
Specifically, in the process of displaying the rendering result on the display page, a trigger operation of the user on the display page may be detected; when a click or hover operation on the display control is detected, a display instruction is received, whereupon the current screen display pixels are obtained to determine the image to be rendered, and the intrinsic and extrinsic parameters of the rendering camera at the current view angle are obtained.
In one embodiment, the interactive interface is used for displaying a three-dimensional model generated based on the scanned image, and the display page in the interactive interface is used for displaying a two-dimensional display image of the three-dimensional model.
In the embodiment of the present disclosure, in response to a display instruction received in an interactive interface, acquiring an image to be rendered and rendering parameters in the interactive interface includes: and responding to a display instruction for displaying the three-dimensional model received in the interactive interface, and acquiring current screen display pixels, internal parameters and external parameters of the rendering camera in the interactive interface.
In the embodiment of the disclosure, when an update display instruction for updating the display of the three-dimensional model is received in the interactive interface, the image to be rendered and the rendering parameters in the interactive interface are reacquired based on the update display instruction.
The display instruction may be the user starting the display of the three-dimensional model; the update display instruction may be the user moving, dragging, rotating, or otherwise manipulating the three-dimensional model through the anchor point.
Therefore, when a user moves, drags, or rotates the three-dimensional model through the anchor point, the image rendering method of the embodiment of the disclosure can render the two-dimensional display image shown on the display page in real time through the TSDF field, so that the user sees the rendered image on the display page; after the image is enlarged, the user sees a surface rather than a point cloud.
In addition, even if the whole three-dimensional model is not actually packaged or well rendered, a user can see a model which is equivalent to the well rendered model, and the user experience is improved.
Step 102, generating a ray set based on the image to be rendered and the rendering parameters.
The ray set is a set comprising a plurality of rays; a ray here is the half-line obtained by extending a line segment infinitely from one end, and each ray is determined by a pixel of the image to be rendered together with the intrinsic and extrinsic parameters of the rendering camera.
In the embodiments of the present disclosure, there are many ways to generate a ray set based on the image to be rendered and the rendering parameters. In some embodiments, the rendering camera center is determined from the rendering parameters, a plurality of pixels are determined from the image to be rendered, and a ray is determined from the rendering camera center through each pixel, yielding the ray set.
In other embodiments, a plurality of pixels are determined based on the image to be rendered, a rendering camera front view plane is determined based on the rendering parameters, and a plurality of rays are determined by pointing to each pixel with a point in the rendering camera front view plane as a starting point, so as to obtain a ray set.
The above two ways are only examples of generating the ray set based on the image to be rendered and the rendering parameter, and the embodiment of the present disclosure does not specifically limit the way of generating the ray set based on the image to be rendered and the rendering parameter.
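The first variant above, rays from the camera center through every pixel, can be sketched as follows. The function name `make_rays` and the convention that the extrinsics map world to camera coordinates (x_cam = R·x_world + t) are assumptions for illustration:

```python
import numpy as np

def make_rays(K, R, t, width, height):
    """One ray per pixel: origin at the rendering camera center, direction
    through the pixel under a pinhole model with intrinsics K."""
    cam_center = -R.T @ t                  # camera center in world coordinates
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    # back-project each pixel at depth 1 into camera coordinates
    dirs_cam = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones(u.shape)], axis=-1)
    dirs_world = dirs_cam @ R              # row-vector form of R.T @ dir
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    return cam_center, dirs_world

center, dirs = make_rays(np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]]),
                         np.eye(3), np.zeros(3), width=640, height=480)
```

With an identity pose, the ray through the principal point (320, 240) points straight down the optical axis.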
And 103, acquiring an intersection point of each ray in the ray set and a pre-generated truncated directed distance function TSDF field implicit surface.
Here, given any point in space, if the point has a score in the TSDF field and that score is 0, the point is a point on the implicit surface of the TSDF field.
In some embodiments, all grid points of the TSDF field are obtained and the subset of rays intersecting each grid point is calculated; from these subsets, an initial grid-point set corresponding to each ray is determined. An initial grid point is then chosen from the initial set, and the remaining grid points are compared against it by distance to obtain the target grid-point set for each ray. A score value for each ray is computed from the target grid-point set, and the intersection of each ray with the TSDF field implicit surface is determined from that score value.
In other embodiments, the set of grid points each ray intersects is calculated, the set is filtered to obtain the target grid points, the score value of each ray is computed from the score values and weights of the target grid points, and the intersection of each ray with the TSDF field implicit surface is determined from that score value.
The above two manners are only examples of obtaining the intersection point of each ray in the ray set with the implicit surface of the pre-generated truncated signed distance function (TSDF) field; the present disclosure does not specifically limit how this intersection is obtained.
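The patent does not fix a particular root-finding scheme. As one hedged possibility, a fixed-step march along each ray that looks for the score's sign change and refines it linearly (the step size and range here are assumed values, and `sdf` stands for any interpolated TSDF score lookup) could look like:

```python
import numpy as np

def intersect(sdf, origin, direction, t_max=2.0, step=0.005):
    """March along the ray sampling the interpolated TSDF score `sdf`;
    return the depth of the first positive-to-negative zero crossing."""
    t_prev, d_prev = 0.0, sdf(origin)
    t = step
    while t <= t_max:
        d = sdf(origin + t * direction)
        if d_prev > 0.0 >= d:
            # sign change between the two samples: refine linearly
            alpha = d_prev / (d_prev - d)
            return t_prev + alpha * (t - t_prev)
        t_prev, d_prev = t, d
        t += step
    return None  # ray misses the implicit surface within t_max
```

For a flat surface at z = 1, `intersect(lambda p: 1.0 - p[2], np.zeros(3), np.array([0.0, 0.0, 1.0]))` recovers a depth of 1.0 up to floating-point error, because the linear refinement is exact for a linear score.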
And step 104, rendering the lattice points of the TSDF field where the intersection points are located based on the parameter information of the intersection points, generating rendering results and displaying the rendering results on the interactive interface.
The parameter information of an intersection point refers to information such as its depth, normal direction, color, and weight, where depth can be understood as the distance between the ray's starting point and the intersection point.
In the embodiment of the present disclosure, the TSDF field grid points at the intersection points are rendered based on the parameter information of the intersection points, and a rendering result is generated and displayed. Specifically, a depth map, a color map, and a normal map are obtained through rendering, and the final picture is rendered and displayed from these maps.
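One simple way to combine the maps into a final picture, shown here only as an assumed example (the patent does not specify a shading model), is to modulate the color map by a Lambertian term computed from the normal map:

```python
import numpy as np

def shade(color_map, normal_map, light_dir=(0.0, 0.0, -1.0)):
    """Modulate an H x W x 3 color map by a Lambertian term computed
    from the per-pixel normals; light_dir points toward the light."""
    L = np.asarray(light_dir, dtype=float)
    L = L / np.linalg.norm(L)
    lambert = np.clip(np.einsum('hwc,c->hw', normal_map, L), 0.0, 1.0)
    return color_map * lambert[..., None]
```

Pixels whose normals face the light keep their full color; back-facing pixels go black. The depth map would additionally drive occlusion or compositing with other interface elements.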
In summary, in the image rendering method of the embodiment of the present disclosure, an image to be rendered and rendering parameters are obtained in response to a display instruction; a ray set is generated based on them; the intersection of each ray in the set with the implicit surface of a pre-generated truncated signed distance function (TSDF) field is calculated; and the TSDF field grid points at the intersections are rendered based on the parameter information of the intersections, generating and displaying a rendering result. With this technical solution, the TSDF field data is used directly for rendering and display, realizing real-time image rendering: real-time performance is improved, the impact on the frame rate is small, and the display effect in the image rendering scene is further improved.
Fig. 2 is a schematic flow chart of another image rendering method according to the embodiment of the present disclosure, and the embodiment further optimizes the image rendering method on the basis of the embodiment.
As shown in fig. 2, the method includes:
step 201, acquiring all scanned images, and generating a TSDF field based on all scanned images.
It should be noted that, after step 201, step 202 and/or step 203 may be executed.
All scanned images are the pictures captured by the scanner camera whose tracking succeeded, i.e., the valid scan frames acquired by the scanner camera. In the embodiment of the disclosure, the images collected by the scanner camera are processed by a reconstruction algorithm to obtain a depth map; a tracking module determines the depth map's position in the world coordinate system; if tracking succeeds, the frame is passed to a fusion module, which is responsible for merging all single frames into one TSDF field.
Step 202, obtaining an update scan image, and updating the TSDF field based on the update scan image.
In the embodiment of the present disclosure, image rendering may be performed during scanning, so a scanned image updated in real time may be obtained and the TSDF field updated from it. During scanning, only the TSDF grid points that changed are updated, rather than all grid points every time.
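The incremental update can be illustrated with the running weighted average used in common TSDF fusion practice; the function name `fuse_sample` and the truncation value are assumptions, and only the voxels touched by the new frame would call it:

```python
import numpy as np

def fuse_sample(score, weight, new_d, new_w=1.0, trunc=0.05):
    """Fold one new truncated-distance observation into a voxel's
    stored score via a running weighted average."""
    d = float(np.clip(new_d, -trunc, trunc))  # truncate the new distance
    fused = (score * weight + d * new_w) / (weight + new_w)
    return fused, weight + new_w
```

The accumulated weight also serves as the "quality" of the cell used later when averaging scores along a ray.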
Step 203, detecting whether the video memory and main memory of the current display device are unified, and loading the TSDF field into video memory when they are not.
It should be noted that when the video memory and main memory of the current display device are unified, there is no need to load the TSDF field into video memory.
Therefore, the TSDF field is loaded to the video memory, and the effect of real-time rendering is achieved.
And step 204, responding to a display instruction for displaying the three-dimensional model received in the interactive interface, and acquiring current screen display pixels, internal parameters and external parameters of the rendering camera in the interactive interface.
In the embodiment of the disclosure, when an update display instruction for updating the display of the three-dimensional model is received in the interactive interface, the image to be rendered and the rendering parameters in the interactive interface are reacquired based on the update display instruction.
Specifically, the interactive interface is used to display a three-dimensional model generated from the scanned images, and the display page in the interactive interface displays a two-dimensional display image of that model. The display instruction may be the user starting the display of the three-dimensional model; the update display instruction may be the user moving, dragging, or rotating the model through the anchor point.
Therefore, when a user moves, drags, or rotates the three-dimensional model through the anchor point, the image rendering method of the embodiment of the disclosure can render the two-dimensional display image shown on the display page in real time through the TSDF field, so that the user sees the rendered image on the display page; after the image is enlarged, the user sees a surface rather than a point cloud.
In addition, even if the whole three-dimensional model is not actually packaged or well rendered, a user can see a model which is equivalent to the well rendered model, and the user experience is improved.
Step 205, determining a center of a rendering camera based on the rendering parameters, determining a plurality of pixels based on the image to be rendered, and determining a plurality of rays based on the center of the rendering camera and each pixel to obtain a ray set.
Specifically, once the rendering parameters, i.e., the intrinsic and extrinsic parameters of the rendering camera, are determined, the image to be rendered consists of a set of pixels, and the lines from the rendering camera center to those pixels form rays: each ray starts at the rendering camera center, passes through a pixel of the image to be rendered, and extends to infinity.
In the embodiment of the disclosure, the data in the TSDF field is provided to the real-time rendering algorithm together with the image to be rendered and the rendering parameters, i.e., the current rendering camera pose (which, unlike a physical camera, may change with user operations). The rendering outputs a normal map and a depth map, among other information, which after simple processing are output to the screen together with interface elements such as the toolbar.
And step 206, acquiring all grid points of the TSDF field, calculating the subset of rays intersecting each grid point, and determining the initial grid-point set corresponding to each ray based on these subsets.
And step 207, determining an initial grid point from the initial grid-point set, comparing the remaining grid points in the set against it by distance to obtain the target grid-point set corresponding to each ray, computing a score value for each ray from the target grid-point set, and determining the intersection of each ray with the TSDF field implicit surface from that score value.
In the embodiment of the disclosure, the weight and score value of each target grid point in the target grid-point set are obtained, and a weighted average over them yields the score value corresponding to each ray.
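A minimal sketch of that weighted average (names assumed), where each grid point's fusion weight scales its contribution to the ray's score:

```python
import numpy as np

def ray_score(scores, weights):
    """Weighted average of the TSDF scores of a ray's target grid points,
    weighted by each grid point's fusion weight."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if weights.sum() == 0.0:
        return None  # no valid grid points along this ray
    return float((scores * weights).sum() / weights.sum())
```

Higher-weight (better-observed) cells dominate the result, so noisy, lightly observed cells have little effect on where the zero crossing lands.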
In the disclosed embodiment, the TSDF field is in fact a three-dimensional grid, each cell having an xyz center. All cells are empty initially; when scanned image data is fused in, the grid points covered by the frame and those around it have their information changed. This information generally includes: score, normal, weight, color, and so on (xyz need not be stored, since it is always the center of the cell).
That is, the TSDF field can be understood as a stack of regular, evenly spaced small cubes in three-dimensional space, with some information at each grid point. The TSDF field information describes a particular surface; this approach is generally called an implicit surface. The disclosed embodiments render this implicit surface directly by taking the TSDF field as input. The implicit surface is still a surface in nature, and the output can be rendered in real time from the implicit surface described by the TSDF field.
Specifically, for each ray, the intersection point of the ray with the TSDF field implicit surface (including information such as depth, normal direction, and color) is calculated, which further improves rendering efficiency.
Specifically, for each grid point of the TSDF field, all rays intersecting it are computed; this grid-point-to-ray intersection information is then inverted to obtain, for each ray, the initial set of grid points that the ray intersects.
Further, for each ray, the grid point closest to the ray's starting point is found and denoted C. All grid points intersected by this ray are then compared with C, and any grid point not adjacent to C is discarded (marked as disjoint). Here, "adjacent" means that the distance between the grid centers is smaller than a target multiple of the grid spacing, for example 4.
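The proximity filter just described can be sketched as follows. This is a hedged illustration under the stated assumptions; the function name, argument names, and the default target multiple of 4 are taken from the example in the text, not from any disclosed implementation.

```python
import numpy as np

def filter_near_start(ray_origin, grid_centers, spacing, target_multiple=4):
    """Keep only intersected grid points adjacent to C, the grid point
    nearest the ray's starting point (illustrative sketch)."""
    centers = np.asarray(grid_centers, dtype=np.float64)
    # C: the intersected grid point closest to the ray's starting point.
    c = centers[np.argmin(np.linalg.norm(centers - ray_origin, axis=1))]
    # Discard (treat as disjoint) anything farther than target_multiple
    # grid spacings from C.
    keep = np.linalg.norm(centers - c, axis=1) < target_multiple * spacing
    return centers[keep]

kept = filter_near_start(np.array([0.0, 0.0, -1.0]),
                         [[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [1.0, 0.0, 0.0]],
                         spacing=0.01)
```

In this toy call, C is the grid center at the origin, its immediate neighbor survives the 4-spacing threshold, and the far grid point at x = 1.0 is discarded.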
The way the ray starting point is determined depends on the ray model. For parallel rays, a group of rays perpendicular to the front-view plane of the rendering camera, the starting points lie in that front-view plane; for concurrent rays, all rays start at the rendering camera's center point.
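The two ray models above can be sketched side by side. This is an illustrative construction only; `make_rays` and its parameters are assumptions, and real intrinsics/extrinsics handling is omitted.

```python
import numpy as np

def make_rays(mode, pixels_3d, camera_center, view_dir=None):
    """Build ray origins and unit directions for the two models described
    in the text (illustrative sketch)."""
    pixels_3d = np.asarray(pixels_3d, dtype=np.float64)
    if mode == "parallel":
        # Parallel rays: one origin per pixel on the camera's front-view
        # plane, all sharing the viewing direction perpendicular to it.
        origins = pixels_3d
        dirs = np.tile(np.asarray(view_dir, dtype=np.float64),
                       (len(pixels_3d), 1))
    elif mode == "concurrent":
        # Concurrent rays: every ray starts at the rendering camera center
        # and points through its pixel.
        origins = np.tile(camera_center, (len(pixels_3d), 1))
        dirs = pixels_3d - camera_center
    else:
        raise ValueError(mode)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return origins, dirs

origins, dirs = make_rays("concurrent",
                          [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]],
                          np.array([0.0, 0.0, 0.0]))
```

The choice matters for step 207: for concurrent rays the starting point (and hence C) is the same camera center for every ray, while for parallel rays each ray has its own origin on the near plane.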
Further, for the set of grid points intersected by a ray, each intersected grid point contributes a continuous score along the ray, and these scores are weighted-averaged (the weight being the grid point's weight, i.e., a measure of the quality of that cell) to obtain the score value corresponding to each ray.
Further, the point along the ray where the score is 0 is calculated, taking the zero-score point closest to the ray's starting point; this point lies on the implicit surface. At this stage, the intersection of the ray with the implicit surface has been found, and the depth information of the pixel corresponding to the ray can be obtained: the distance from the ray's starting point to the intersection point is the depth of that pixel.
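Finding the nearest zero-score point reduces to locating the first sign change of the interpolated score along the ray. A minimal sketch, assuming the score has already been sampled at increasing parameters `t` along the ray (the function name and sampling interface are illustrative):

```python
def zero_crossing_depth(ts, scores):
    """Return the ray parameter t of the zero-score point nearest the ray
    origin, via linear interpolation between consecutive samples; the
    returned t is the depth of the corresponding pixel. None = no hit."""
    for (t0, s0), (t1, s1) in zip(zip(ts, scores), zip(ts[1:], scores[1:])):
        if s0 == 0.0:
            return t0                       # sample lies exactly on the surface
        if s0 * s1 < 0.0:
            # Sign change: interpolate the zero crossing between samples.
            return t0 + (t1 - t0) * s0 / (s0 - s1)
    return None                             # ray misses the implicit surface

assert zero_crossing_depth([0.0, 1.0, 2.0], [0.5, -0.5, -1.0]) == 0.5
```

Because the loop walks outward from the ray origin, the first crossing found is automatically the one closest to the starting point, matching the depth definition above.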
Finally, at the intersection point, the remaining information of all grid points intersected by the ray, including normal direction, color, and so on, is weighted-averaged to obtain the parameter information of the intersection point.
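The attribute blending at the intersection can be sketched as a single weighted average. This is an assumption-laden illustration: the renormalization of the averaged normal is my addition (a weighted sum of unit normals is generally not unit length), not something stated in the text.

```python
import numpy as np

def blend_attributes(weights, normals, colors):
    """Weighted-average the per-grid-point normals and colors contributing
    to an intersection point (illustrative sketch)."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                          # normalize fusion weights
    normal = w @ np.asarray(normals, dtype=np.float64)
    normal /= np.linalg.norm(normal)         # renormalize averaged normal (assumption)
    color = w @ np.asarray(colors, dtype=np.float64)
    return normal, color

normal, color = blend_attributes([1.0, 1.0],
                                 [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                                 [[255.0, 0.0, 0.0], [0.0, 0.0, 255.0]])
```

With equal weights, two perpendicular unit normals blend to the diagonal direction, and red and blue blend to their midpoint, which is the kind of per-pixel output written into the normal map and color map in step 208.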
For example, suppose there are 6 TSDF grid points, each carrying a score; the score at any (continuous) point near these grid points is a weighted combination of the surrounding grid scores, and the set of points where the score equals 0 is the surface described by these grid points. In the embodiment of the present disclosure, the intersection of a ray with this surface (the implicit surface) is computed, except that the surface itself never exists in memory: the grid cells intersected by the ray are computed, and then the intersection is rendered. For instance, if a ray intersects eight grid points, the score along the ray within that region is jointly determined by those eight grid points, and the location where it equals 0 is the intersection point.
In this way, all grid points need to be traversed only once; the number of grid points any single ray intersects in the TSDF field has an upper bound, and the number of rays is fixed and does not change noticeably as the TSDF field changes. Real-time performance can therefore be achieved, guaranteeing real-time rendering of images.
And step 208, rendering the TSDF field lattice points where the intersection points are located based on the parameter information of the intersection points, generating rendering results and displaying the rendering results on the interactive interface.
In the embodiment of the present disclosure, the rendering result is output, comprising a depth map, a color map, and a normal map, and the final picture is rendered and displayed from these maps.
Therefore, the image rendering method of the disclosed embodiments does not need to extract intermediate information from the TSDF field but renders directly from it, so it has little impact on the scanning frame rate; and since the TSDF field is not sampled, each ray yields an exact value and the rendering effect is optimal.
Fig. 3 is a schematic structural diagram of an image rendering apparatus provided in an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 3, the apparatus includes:
a response obtaining module 301, configured to obtain, in response to a display instruction received in an interactive interface, an image to be rendered and rendering parameters in the interactive interface;
a generating module 302, configured to generate a ray set based on the image to be rendered and the rendering parameter;
a first obtaining module 303, configured to obtain an intersection point of each ray in the ray set and a pre-generated truncated signed distance function (TSDF) field implicit surface;
and a generating and displaying module 304, configured to render the TSDF field lattice point where the intersection point is located based on the parameter information of the intersection point, generate a rendering result, and display the rendering result on the interactive interface.
Optionally, the response obtaining module 301 is specifically configured to:
and responding to a display instruction for displaying the three-dimensional model received in the interactive interface, and acquiring current screen display pixels and intrinsic and extrinsic parameters of the rendering camera in the interactive interface.
Optionally, the apparatus further includes:
the receiving module is used for receiving an updating display instruction for updating and displaying the three-dimensional model in the interactive interface;
and the reacquiring module is used for reacquiring the image to be rendered and the rendering parameters in the interactive interface based on the updated display instruction.
Optionally, the generating module 302 is specifically configured to:
determining a rendering camera center based on the rendering parameters;
determining a plurality of pixels based on the image to be rendered;
determining a plurality of rays based on the rendering camera center and each pixel, resulting in the ray set.
Optionally, the first obtaining module 303 includes:
the acquisition and calculation unit is used for acquiring all grid points of the TSDF field and calculating a ray subset intersected by each grid point;
a first determining unit, configured to determine, based on the subset of rays that each of the lattice points intersects, an initial lattice point set corresponding to each of the rays;
the determining and processing unit is used for determining initial grid points from the initial grid point set, and performing distance comparison processing on the remaining grid points in the initial grid point set and the initial grid points to obtain a target grid point set corresponding to each ray;
the calculating unit is used for calculating based on the target lattice point set to obtain a score value corresponding to each ray;
and the second determination unit is used for determining the intersection point of each ray and the TSDF field implicit surface based on the score value.
Optionally, the computing unit is specifically configured to:
acquiring the weight and grid point score of each target grid point in the target grid point set;
and performing a weighted average calculation based on the weight and grid point score of each target grid point to obtain the score value corresponding to each ray.
Optionally, the apparatus further comprises:
the detection module is used for detecting whether the video memory and the main memory of the current display device are the same;
and the loading module is used for loading the TSDF field into the video memory when the video memory and the main memory are not the same.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring all scanned images;
and the processing generation module is used for generating the TSDF field based on all the scanned images.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring the updated scanning image;
an update module to update the TSDF field based on the updated scan image.
The image rendering device provided by the embodiment of the disclosure can execute the image rendering method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Embodiments of the present disclosure also provide a computer program product comprising a computer program/instructions that when executed by a processor implement the image rendering method provided by any of the embodiments of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle-mounted terminal (e.g., a car navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the image rendering method of the embodiment of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (Hypertext Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving information display triggering operation of a user in the video playing process; acquiring at least two pieces of target information related to the video; displaying first target information in the at least two pieces of target information in an information display area of a playing page of the video, wherein the size of the information display area is smaller than that of the playing page; and receiving a first switching trigger operation of a user, and switching the first target information displayed in the information display area into second target information of the at least two pieces of target information.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the image rendering method provided by the disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing any of the image rendering methods provided by the present disclosure.
The foregoing description is merely illustrative of the preferred embodiments of the disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features disclosed in this disclosure having similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. An image rendering method, comprising:
responding to a display instruction received in an interactive interface, and acquiring an image to be rendered and rendering parameters in the interactive interface;
generating a ray set based on the image to be rendered and the rendering parameters;
acquiring an intersection point of each ray in the ray set and a pre-generated truncated signed distance function (TSDF) field implicit surface;
rendering the TSDF field lattice points where the intersection points are located based on the parameter information of the intersection points, generating rendering results and displaying the rendering results on the interactive interface.
2. The image rendering method according to claim 1, wherein the obtaining of the image to be rendered and the rendering parameters in the interactive interface in response to the display instruction received in the interactive interface comprises:
and responding to a display instruction for displaying the three-dimensional model received in the interactive interface, and acquiring current screen display pixels and intrinsic and extrinsic parameters of a rendering camera in the interactive interface.
3. The image rendering method of claim 2, further comprising:
receiving an updating display instruction for updating and displaying the three-dimensional model in the interactive interface;
and reacquiring the image to be rendered and the rendering parameters in the interactive interface based on the updating display instruction.
4. The image rendering method of claim 1, wherein the generating a set of rays based on the image to be rendered and the rendering parameters comprises:
determining a rendering camera center based on the rendering parameters;
determining a plurality of pixels based on the image to be rendered;
determining a plurality of rays based on the rendering camera center and each pixel to obtain the ray set.
5. The image rendering method of claim 1, wherein the obtaining an intersection point of each ray in the ray set and an implicit surface of a pre-generated truncated signed distance function (TSDF) field comprises:
acquiring all grid points of a TSDF field, and calculating a ray subset intersected by each grid point;
determining an initial grid point set corresponding to each ray based on the ray subset intersected by each grid point;
determining initial grid points from the initial grid point set, and performing distance comparison processing on the remaining grid points in the initial grid point set and the initial grid points to obtain a target grid point set corresponding to each ray;
calculating based on the target lattice point set to obtain a score value corresponding to each ray;
and determining the intersection point of each ray and the TSDF field implicit surface based on the score value.
6. The image rendering method of claim 5, wherein the calculating based on the target grid point set to obtain the score value corresponding to each ray comprises:
acquiring the weight and grid point score of each target grid point in the target grid point set;
and performing a weighted average calculation based on the weight and grid point score of each target grid point to obtain the score value corresponding to each ray.
7. The image rendering method of claim 1, further comprising:
detecting whether the video memory and the main memory of the current display device are the same;
and loading the TSDF field into the video memory when the video memory and the main memory are not the same.
8. The image rendering method according to any one of claims 1 to 7, further comprising:
acquiring all scanned images;
based on all the scanned images, a TSDF field is generated.
9. The image rendering method of claim 8, further comprising:
acquiring an updated scanning image;
updating the TSDF field based on the updated scan image.
10. An image rendering apparatus, comprising:
the response acquisition module is used for responding to a display instruction received in an interactive interface and acquiring an image to be rendered and rendering parameters in the interactive interface;
a generating module, configured to generate a ray set based on the image to be rendered and the rendering parameter;
the first acquisition module is used for acquiring an intersection point of each ray in the ray set and a pre-generated truncated signed distance function (TSDF) field implicit surface;
and the generating and displaying module is used for rendering the TSDF field lattice point where the intersection point is located based on the parameter information of the intersection point, generating a rendering result and displaying the rendering result on the interactive interface.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image rendering method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the image rendering method of any of claims 1-9.
CN202210761733.1A 2022-06-29 2022-06-29 Image rendering method and device, electronic equipment and medium Pending CN115170715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210761733.1A CN115170715A (en) 2022-06-29 2022-06-29 Image rendering method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210761733.1A CN115170715A (en) 2022-06-29 2022-06-29 Image rendering method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN115170715A 2022-10-11

Family

ID=83489805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210761733.1A Pending CN115170715A (en) 2022-06-29 2022-06-29 Image rendering method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115170715A (en)

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
US20220277481A1 (en) Panoramic video processing method and apparatus, and storage medium
WO2024016930A1 (en) Special effect processing method and apparatus, electronic device, and storage medium
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
CN111870953A (en) Height map generation method, device, equipment and storage medium
CN115330925A (en) Image rendering method and device, electronic equipment and storage medium
WO2024120223A1 (en) Image processing method and apparatus, and device, storage medium and computer program product
CN114742934A (en) Image rendering method and device, readable medium and electronic equipment
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN114445269A (en) Image special effect processing method, device, equipment and medium
CN111862349A (en) Virtual brush implementation method and device and computer readable storage medium
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
CN109816791B (en) Method and apparatus for generating information
CN116030221A (en) Processing method and device of augmented reality picture, electronic equipment and storage medium
CN113506356B (en) Method and device for drawing area map, readable medium and electronic equipment
CN115170715A (en) Image rendering method and device, electronic equipment and medium
CN111652831B (en) Object fusion method and device, computer-readable storage medium and electronic equipment
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN113744379A (en) Image generation method and device and electronic equipment
CN112164066A (en) Remote sensing image layered segmentation method, device, terminal and storage medium
CN112465692A (en) Image processing method, device, equipment and storage medium
CN112395826B (en) Text special effect processing method and device
CN111354070A (en) Three-dimensional graph generation method and device, electronic equipment and storage medium
CN111489428B (en) Image generation method, device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination