CN115512038A - Real-time rendering method for free viewpoint synthesis, electronic device and readable storage medium - Google Patents


Info

Publication number: CN115512038A (granted and published as CN115512038B)
Application number: CN202210868418.9A
Authority: CN (China)
Prior art keywords: target, viewpoint, spatial, image, data
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN115512038B (en)
Inventors: 米杰, 国计武, 董立龙
Current Assignee: Beijing Weishiwei Information Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Beijing Weishiwei Information Technology Co., Ltd.
Application filed by Beijing Weishiwei Information Technology Co., Ltd.; priority to CN202210868418.9A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Biophysics (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a real-time rendering method for free viewpoint synthesis, an electronic device and a readable storage medium. The method includes: receiving a target viewpoint for observing a target scene; acquiring pre-stored first drawing data of the target scene at a reference viewpoint; converting the first drawing data into second drawing data of the target scene at the target viewpoint according to a camera coordinate conversion relationship between the target viewpoint and the reference viewpoint, wherein the drawing data includes N basis function values of the corresponding viewpoint and image data of each spatial layer in a set of spatial layers, the image data of a spatial layer includes the image data of each spatial point in the layer, and the image data of a spatial point includes a transparency, a base color value and N color coefficients in one-to-one correspondence with the N basis functions; drawing a target image corresponding to the target viewpoint according to the second drawing data; and outputting the target image.

Description

Real-time rendering method for free viewpoint synthesis, electronic device and readable storage medium
Technical Field
The present disclosure relates to the field of free viewpoint synthesis technologies, and in particular, to a real-time rendering method for free viewpoint synthesis, an electronic device, and a computer-readable storage medium.
Background
Free-viewpoint image synthesis draws a target image of a target scene as observed from other viewpoints, based on known images of the target scene observed from reference viewpoints, so that a user can observe the target scene from freely chosen viewpoints. For example, when browsing goods or handicrafts on the internet, users prefer a more interactive viewing experience to merely looking at pre-shot images. Free viewpoint synthesis technology can therefore be used to reconstruct the geometry or light field of the goods from the merchant's pre-shot images and provide the user with a 360-degree interactive virtual browsing experience.
For free viewpoint synthesis, the prior art proposes a three-dimensional scene representation method based on Neural Radiance Fields (NeRF), which estimates a continuous neural-network scene representation from images of known viewpoints and, on this basis, uses classical volume rendering techniques to draw images of other viewpoints. In this method, when drawing each pixel of a new-viewpoint image, the device performing free viewpoint synthesis must predict, through the neural radiance field model, the geometric and appearance information of many spatial positions along the imaging ray of that pixel; the amount of computation is very large and rendering efficiency is severely affected.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a new technical solution for free viewpoint synthesis, so as to improve the response speed of free viewpoint rendering on the premise of ensuring the rendering accuracy, and implement high-accuracy real-time rendering.
According to a first aspect of the present disclosure, there is provided a real-time rendering method of free viewpoint synthesis according to an embodiment, including:
receiving a target viewpoint for observing a target scene;
acquiring first pre-stored drawing data of the target scene on a reference viewpoint;
converting the first drawing data into second drawing data of the target scene at the target viewpoint according to a camera coordinate conversion relationship between the target viewpoint and the reference viewpoint; wherein the drawing data includes: N basis function values of the corresponding viewpoint and image data of each spatial layer in a set of spatial layers of the corresponding viewpoint, the basis function values represent color variations in the viewing direction of the corresponding viewpoint, the set of spatial layers are arranged in parallel at different depth values along the viewing direction of the corresponding viewpoint, the image data of a spatial layer includes the image data of each spatial point in the layer, and the image data of a spatial point includes a transparency, a base color value and N color coefficients in one-to-one correspondence with the N basis functions;
drawing a target image corresponding to the target viewpoint according to the second drawing data;
and outputting the target image.
Optionally, the rendering a target image corresponding to the target viewpoint according to the second rendering data includes:
according to the second rendering data, rendering a target image corresponding to the target viewpoint according to a set synthesis operation; wherein the synthesis operation is represented as:
$$c_{(w,h)}=\sum_{d=1}^{D} T_{(d,w,h)}\,a_{(d,w,h)}\,c_{(d,w,h)} \qquad \text{formula (1)}$$

$$T_{(d,w,h)}=\prod_{i=d+1}^{D}\left(1-a_{(i,w,h)}\right) \qquad \text{formula (2)}$$

where $c_{(w,h)}$ represents the color value of the pixel (w, h) of the target image, D represents the number of layers in the second set of spatial layers of the target viewpoint, $a_{(d,w,h)}$ represents the transparency of the spatial point (w, h) of the d-th spatial layer in the second set of spatial layers, $a_{(i,w,h)}$ represents the transparency of the spatial point (w, h) of the i-th spatial layer in the second set of spatial layers, $T_{(d,w,h)}$ denotes the accumulated transmittance of the layers in front of the d-th spatial layer, and $c_{(d,w,h)}$ represents the color value of the spatial point (w, h) of the d-th spatial layer, wherein the depth value corresponding to the d-th spatial layer decreases as d increases; the color value $c_{(d,w,h)}$ is determined by the N basis functions of the target viewpoint and by the base color value and the N color coefficients of the spatial point (w, h) of the d-th spatial layer.
Optionally, the color value $c_{(d,w,h)}$ of the spatial point (w, h) of the d-th spatial layer is expressed as:

$$c_{(d,w,h)} = k^{0}_{(d,w,h)} + \sum_{n=1}^{N} H_{n}\, k^{n}_{(d,w,h)} \qquad \text{formula (3)}$$

where $H_{n}$ represents the n-th basis function of the target viewpoint, $k^{0}_{(d,w,h)}$ represents the base color value of the spatial point (w, h) of the d-th spatial layer, and $k^{n}_{(d,w,h)}$ represents the n-th color coefficient of the spatial point (w, h) of the d-th spatial layer.
Optionally, the image data of the spatial point is data independent of a viewing direction.
Optionally, the image data of each spatial point in the first rendering data is obtained through a preset first model, and the N basis function values of the first rendering data are obtained through a preset second model; when the image data of any space point in the first rendering data is obtained through the first model, the input information of the first model comprises the position coordinates of the space point and a first scene characteristic corresponding to the space point; when the N basis function values of the first rendering data are obtained by the second model, the input information of the second model includes an observation direction of the reference viewpoint and a second scene characteristic corresponding to the observation direction.
Optionally, a first scene feature corresponding to any spatial point in the first drawing data is obtained by fusing first image features, corresponding to the spatial point, of the multiple frames of first images according to a set fusion mode; the multi-frame first image is a known image obtained by observing a target scene along a plurality of viewpoints, the multi-frame first image corresponds to the plurality of viewpoints one by one, and the plurality of viewpoints comprise the reference viewpoint;
and the second scene characteristics are obtained by fusing second image characteristics of the multiple frames of first images corresponding to the observation direction of the reference viewpoint according to the fusion mode.
Optionally, the first model and the second model are obtained by training samples in a training sample set, where the training sample set includes training samples corresponding to different scenes.
Optionally, the first model is a multi-layer perceptron structure with 6 layers, and/or the second model is a two-layer MLP network structure.
According to a second aspect of the present disclosure, there is provided an electronic device according to an embodiment, the electronic device comprising a memory for storing a computer program and a processor for executing the real-time rendering method according to the first aspect of the present disclosure under the control of the computer program.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium according to an embodiment, having stored thereon a computer program which, when executed by a processor, implements the real-time rendering method according to the first aspect of the present disclosure.
A beneficial effect of the embodiments of the present disclosure is that the electronic device for free viewpoint synthesis stores the first drawing data of the target scene at the reference viewpoint locally in advance. After receiving the target viewpoint input by the user, it converts the first drawing data into second drawing data corresponding to the target viewpoint based on the camera coordinate conversion relationship between the target viewpoint and the reference viewpoint, and draws the target image corresponding to the target viewpoint directly from the converted second drawing data. In other words, when drawing the target image, the electronic device renders directly from the converted second drawing data and does not need to call a model, after receiving the target viewpoint input by the user, to predict the image data of each spatial point corresponding to the target viewpoint, which greatly reduces the amount of rendering computation and enables a real-time response to the user's input.
On the other hand, the embodiments of the present disclosure take the anisotropic color characteristics of spatial points into account during real-time rendering, that is, the same spatial point exhibits different colors when observed from different directions. Data content embodying this color characteristic is incorporated into the drawing data: the drawing data includes N basis function values related to the viewing direction, where the basis functions reflect the mapping between the viewing direction and the color variation; the drawing data further includes image data in one-to-one correspondence with the spatial points in the spatial layers, and the image data of any spatial point includes not only the transparency and base color value of the point but also N color coefficients of the point in one-to-one correspondence with the N basis function values.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an application scenario of a method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow diagram of a real-time rendering method according to some embodiments;
FIG. 3 is a schematic flow diagram of obtaining first rendering data, according to some embodiments;
FIG. 4 is a diagrammatic schematic view of obtaining a first image feature according to some embodiments;
FIG. 5 is a diagrammatic schematic view of obtaining a second image feature according to some embodiments;
FIG. 6 is a hardware architecture diagram of an electronic device according to some embodiments.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a schematic view of an application scenario of a real-time rendering method according to an embodiment of the present disclosure.
The electronic device 1000 shown in fig. 1 may be used to perform a real-time rendering method of free viewpoint synthesis according to an embodiment of the present disclosure. The electronic device 1000 may be any electronic device with computing capability, may be a terminal device, may be a server, and the like, and is not limited herein.
As shown in fig. 1, the electronic device 1000 prestores, in a storage space, first rendering data of a target scene at a reference viewpoint, that is, the electronic device 1000 prestores first rendering data of the target scene observed from the reference viewpoint, where the first rendering data is capable of being used by the electronic device 1000 to render an image of the target scene observed at the reference viewpoint. The first rendering data may be generated by the electronic device 1000 based on a plurality of frames of first images corresponding to different known viewpoints, which include the above reference viewpoint, or may be generated by other devices based on a plurality of frames of first images corresponding to different known viewpoints and provided to the electronic device 1000, which is not limited herein.
When the electronic device 1000 receives a target viewpoint, input by the user, for observing the target scene, it may directly call the locally stored first rendering data, convert the first rendering data into second rendering data corresponding to the target viewpoint, and then render the target image corresponding to the target viewpoint based on the second rendering data and output it for the user to observe. In the process of rendering the target image, the electronic device 1000 only needs to perform the rendering-data conversion and the rendering operation based on the second rendering data, without calling a model to predict the image data of each spatial point corresponding to the target viewpoint; this greatly reduces the amount of rendering computation, effectively improves the response speed to user input, realizes real-time rendering, and provides good interactivity.
As shown in fig. 1, the electronic device 1000 may include a processor 1100, a memory 1200, a camera 1300, a communication device 1400, a display device 1500, an input device 1600.
Processor 1100 is used to execute computer programs, which may be written in an instruction set of architectures such as x86, ARM, RISC, MIPS, SSE, and the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The camera 1300 may be a depth camera. The communication device 1400 is capable of wired or wireless communication. For example, the communication device 1400 may include at least one short-range communication module, for example, any module for performing short-range wireless communication based on short-range wireless communication protocols such as the HiLink protocol, WiFi (IEEE 802.11 protocol), Mesh, Bluetooth, ZigBee, Thread, Z-Wave, NFC, UWB, LiFi, and the like, and the communication device 1400 may also include a long-range communication module, for example, any module for performing WLAN, GPRS, or 2G/3G/4G/5G long-range communication. The display device 1500 is any device capable of displaying an image. The input device 1600 may include a mouse, keyboard, microphone, etc. for a user to input information.
The memory 1200 of the electronic device 1000 is used to store a computer program for controlling the processor 1100 to operate at least to perform the real-time rendering method according to any embodiment of the present disclosure. The skilled person can design a computer program according to the method steps and how the computer program controls the processor to operate, which is well known in the art and therefore not described in detail here.
< method example >
Fig. 2 shows a flow diagram of a method of real-time rendering of free viewpoint synthesis according to an embodiment. The method of this embodiment may be implemented by the electronic device 1000 shown in fig. 1, or may be implemented by other types of electronic devices, and the method steps of this embodiment will now be described by taking the electronic device 1000 as an example.
As shown in fig. 2, the real-time rendering method of the present embodiment may include the following steps S210 to S250:
step S210 receives a target viewpoint for observing a target scene.
Any viewpoint mentioned in this embodiment includes an observation position and an observation direction; such a viewpoint may be the target viewpoint, the reference viewpoint, or any of the other viewpoints mentioned above.
The target scene in this embodiment may be any scene, for example, any article, any scene, any person, and the like, which are not limited herein.
The user may input a target viewpoint for observing a target scene to the electronic device 1000 in any manner. For example, the user may input data regarding the target viewpoint to the electronic device 1000 by means of a keyboard, voice, or the like. For another example, the user may select the target viewpoint by moving or rotating a ball marker representing the target viewpoint, and the electronic device may determine the target viewpoint or the like according to an operation of the ball marker by the user.
The target viewpoint in this embodiment may be any viewpoint selected by the user within a set range.
In step S220, first drawing data of a pre-stored target scene at a reference viewpoint is obtained.
The first drawing data of the target scene at the reference viewpoint is data capable of drawing a first image obtained by observing (i.e., shooting) the target scene at the reference viewpoint by the camera, that is, the electronic device 1000 can draw the first image by using the first drawing data, where the first image is a real image obtained by observing the target scene by the camera.
The first rendering data may be generated in advance by the electronic device 1000 based on a plurality of frames of the first image of the observation target scene at different viewpoints, including the above reference viewpoint, and stored locally. The first rendering data may also be generated by other devices and provided to the electronic device 1000 for real-time rendering by the electronic device 1000 for free viewpoint synthesis based on the first rendering data.
In this embodiment, for a target scene, different viewpoints may correspond to different rendering data, for example, a reference viewpoint corresponds to first rendering data. The first rendering data can be converted into rendering data of the target scene on other viewpoints by using a camera coordinate conversion relation among different viewpoints, and the conversion can be performed based on homography transformation.
In this embodiment, the rendering data of the target scene at any viewpoint (including the first rendering data at the reference viewpoint as well as other rendering data converted from the first rendering data) includes the N basis function values of the corresponding viewpoint. The n-th basis function value of the corresponding viewpoint is the value of the n-th basis function in the viewing direction of that viewpoint, where n ranges over the integers from 1 to N; the n-th basis function reflects the mapping between a viewing direction v and the n-th color variation $H_{n}(v)$, so the n-th basis function value of the corresponding viewpoint is the n-th color variation in the viewing direction of that viewpoint. Clearly, the N basis function values are data related to the viewing direction.
In this embodiment, the rendering data of the target scene at any viewpoint further includes the image data of each spatial layer in a set of spatial layers of the corresponding viewpoint, where the set of spatial layers are arranged in parallel, from far to near according to their depth values within the depth range of the corresponding viewpoint, along the viewing direction of the corresponding viewpoint. The size of a spatial layer of the corresponding viewpoint is the same as the size of the camera observation image of the target scene at that viewpoint; when the corresponding viewpoint is the reference viewpoint, the camera observation image is the first image observed by the camera at the reference viewpoint, and when the corresponding viewpoint is the target viewpoint, the camera observation image is the target image to be rendered for the target viewpoint. The height and width of each spatial layer may be denoted H and W, where H is the number of spatial points of the spatial layer in the height direction, which is also the number of pixels of the camera observation image in the height direction, and W is the number of spatial points of the spatial layer in the width direction, which is also the number of pixels of the camera observation image in the width direction. The image data of any spatial layer includes the image data of each spatial point in that layer, and the image data of any spatial point includes a transparency, a base color value, and N color coefficients in one-to-one correspondence with the N basis function values.
The image data of the spatial point may be data that is independent of the viewing direction and that relates to the absolute position of the spatial point in a world coordinate system. Therefore, for each spatial point, the image data does not change when the viewing direction changes.
All spatial points with image coordinates (w, h) in a set of spatial layers of the corresponding viewpoint (hereinafter referred to as spatial points (w, h)) correspond to the pixel (w, h) with the same image coordinates in the camera observation image. The spatial point (w, h) and the pixel (w, h) respectively denote the intersection of the w-th column in the width direction and the h-th row in the height direction, where w ranges over the integers from 1 to W and h over the integers from 1 to H. The image data of the spatial points (w, h) of the spatial layers are used to render the pixel with the same image coordinates, i.e., the pixel (w, h): for the spatial point (w, h) of each spatial layer, the color variation of that point in the viewing direction of the corresponding viewpoint is obtained from the N basis function values and the N color coefficients of the point; this color variation is then superimposed on the base color value of the point to obtain the color value the point presents in the viewing direction of the corresponding viewpoint; finally, the color values of the spatial points (w, h) of all the layers are composited together according to their transparencies to obtain the color value of the pixel (w, h), thereby rendering that pixel. After every pixel has been rendered in this way from the rendering data of the corresponding viewpoint, the camera observation image formed by all pixels is obtained.
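Purely for illustration, the rendering data of one viewpoint described above might be laid out in memory as sketched below; the array names, shapes, and the assumption that the base color and each color coefficient are 3-channel RGB values are not specified in the patent.

```python
import numpy as np

# Illustrative layout of the rendering data of one viewpoint (all names/shapes assumed):
D, H, W, N = 32, 480, 640, 8            # layers, height, width, number of basis functions

rendering_data = {
    # N basis function values H_n(v), shared by every spatial point of this viewpoint.
    "basis_values": np.zeros(N, dtype=np.float32),
    # Per-layer, per-point image data (independent of viewing direction):
    "alpha":       np.zeros((D, H, W), dtype=np.float32),        # transparency a_(d,w,h)
    "base_color":  np.zeros((D, H, W, 3), dtype=np.float32),     # base color value (RGB assumed)
    "coeffs":      np.zeros((D, H, W, N, 3), dtype=np.float32),  # N color coefficients per point
}
```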
Taking the first rendering data as an example, the first rendering data includes the N basis function values $H_{n}(v_{1})$ of the reference viewpoint $v_{1}$, and further includes the image data of each spatial layer in a set of spatial layers of the reference viewpoint (hereinafter referred to as the first set of spatial layers), where the first set of spatial layers are arranged in parallel at different depth values between the reference viewpoint and the target scene along the viewing direction of the reference viewpoint.
In this embodiment, in consideration of the anisotropic color characteristics of spatial points, the color value of a spatial point is not set to a fixed value that depends only on its position; instead, by introducing the basis functions, the color value of each spatial point can change with the viewing direction. Compared with assigning a fixed color value to each spatial point, this more accurately expresses the visual appearance of the target scene observed from different directions, and thus improves the accuracy of image rendering.
In addition, when expressing the anisotropic color of spatial points, one could instead adopt a rendering data structure that stores, for each spatial point of the corresponding viewpoint, a plurality of color values in one-to-one correspondence with a plurality of viewing directions, where the viewing directions are obtained by discretely partitioning a sphere centered at the spatial point (which can represent all possible viewing directions). With that data structure, a large number of color values must be stored for each spatial point, so the amount of data to be stored is very large and occupies a very large storage space, which raises the configuration requirements on the electronic device running the method of this embodiment. To effectively control the amount of rendering data, as described above, this embodiment adopts a rendering data structure in which all spatial points share one set of basis function values and one set of color coefficients is stored per spatial point; compared with storing a plurality of color values per spatial point, this reduces the amount of data and lowers the configuration requirements on the electronic device.
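As a rough, purely illustrative comparison (the direction count M and basis count N below are assumed, not taken from the patent), storing per-direction colors versus the shared-basis structure requires on the order of

$$\underbrace{D\,H\,W\,\bigl(1 + 3M\bigr)}_{\text{per-direction colors}} \qquad \text{vs.} \qquad \underbrace{D\,H\,W\,\bigl(1 + 3(N{+}1)\bigr) + N}_{\text{shared basis values + per-point coefficients}}$$

values; for example, with M = 128 discretized directions and N = 8 basis functions this is roughly 385 versus 28 stored values per spatial point.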
Step S230, converting the first rendering data into second rendering data of the target scene on the target viewpoint according to the camera coordinate conversion relationship between the target viewpoint and the reference viewpoint.
In this embodiment, after receiving the target viewpoint input by the user, the electronic device 1000 may obtain locally stored first rendering data, and in step S230, convert the first rendering data into second rendering data by using a camera coordinate conversion relationship between the target viewpoint and the reference viewpoint, where the conversion may be performed based on homography transformation.
The camera coordinate conversion relationship between the target viewpoint and the reference viewpoint may be determined from the camera extrinsic parameters of the two viewpoints. The camera extrinsic parameters determine the relative pose between the camera coordinate system and the world coordinate system and consist of 6 parameters (α, β, γ, t_x, t_y, t_z), where T = (t_x, t_y, t_z) is a translation vector and R = R(α, β, γ) is a rotation matrix. Different viewpoints may correspond to different camera extrinsic parameters, and from the extrinsic parameters of the two viewpoints, the conversion relationship between their camera coordinate systems in the world coordinate system can be determined. Through this camera coordinate conversion relationship, the position coordinates of all spatial points in the camera coordinate system of the reference viewpoint can be converted into position coordinates in the camera coordinate system of the target viewpoint; based on these converted coordinates, a set of spatial layers of the target viewpoint (hereinafter referred to as the second set of spatial layers) is obtained by layering such that all spatial points with the same depth value form one spatial layer. Note that between the first set of spatial layers of the reference viewpoint and the second set of spatial layers of the target viewpoint, spatial layers occupying the same position in the arrangement order are composed of different combinations of spatial points.
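A minimal sketch of this coordinate conversion follows, assuming the common extrinsics convention X_cam = R·X_world + T; the convention, function name and variable names are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def ref_to_target_coords(points_ref, R_ref, T_ref, R_tgt, T_tgt):
    """Convert spatial-point coordinates (M, 3) from the reference camera frame to the
    target camera frame, assuming extrinsics of the form X_cam = R @ X_world + T."""
    # Back-project reference-camera coordinates to world coordinates: R_ref^T @ (x - T_ref).
    points_world = (points_ref - T_ref) @ R_ref
    # Re-project world coordinates into the target camera frame: R_tgt @ x_world + T_tgt.
    return points_world @ R_tgt.T + T_tgt
```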
According to the above description of the rendering data, the converted second rendering data includes the N basis function values $H_{n}(v_{2})$ of the target viewpoint $v_{2}$, and also includes the image data of each spatial layer in the second set of spatial layers of the target viewpoint, where the second set of spatial layers are arranged in parallel at different depth values between the target viewpoint and the target scene along the viewing direction of the target viewpoint.
In step S240, a target image corresponding to the target viewpoint is drawn according to the second rendering data.
Suppose the second set of spatial layers of the target viewpoint includes D spatial layers, where the image data of the spatial points (w, h) of the layers are used to render the pixel (w, h). For the spatial point (w, h) of each spatial layer, the color variation of that point in the viewing direction of the target viewpoint is obtained from the N basis function values $H_{n}(v_{2})$ and the N color coefficients of the point; this color variation is superimposed on the base color value of the point to obtain the color value the point presents in the viewing direction of the target viewpoint; then, the color values of the spatial points (w, h) of the second set of spatial layers are composited according to their transparencies to obtain the color value of the pixel (w, h), thereby rendering that pixel. After every pixel has been rendered in this way from the rendering data of the target viewpoint, the camera observation image formed by all pixels, i.e., the target image corresponding to the target viewpoint, is obtained.
In some embodiments, drawing the target image corresponding to the target viewpoint according to the second rendering data in step S240 may include: drawing the target image corresponding to the target viewpoint according to a set synthesis operation based on the second rendering data. For the pixel (w, h) of the target image, the set synthesis operation can be expressed as:
$$c_{(w,h)}=\sum_{d=1}^{D} T_{(d,w,h)}\,a_{(d,w,h)}\,c_{(d,w,h)} \qquad \text{formula (1)}$$

$$T_{(d,w,h)}=\prod_{i=d+1}^{D}\left(1-a_{(i,w,h)}\right) \qquad \text{formula (2)}$$

where $c_{(w,h)}$ represents the color value of the pixel (w, h) of the target image, D represents the number of layers in the second set of spatial layers of the target viewpoint, $a_{(d,w,h)}$ represents the transparency of the spatial point (w, h) of the d-th spatial layer in the second set of spatial layers, $a_{(i,w,h)}$ represents the transparency of the spatial point (w, h) of the i-th spatial layer in the second set of spatial layers, $T_{(d,w,h)}$ denotes the accumulated transmittance of the layers in front of the d-th spatial layer, and $c_{(d,w,h)}$ represents the color value of the spatial point (w, h) of the d-th spatial layer. The depth value corresponding to the d-th spatial layer decreases as d increases, that is, the D-th spatial layer is the spatial layer closest to the target viewpoint. The color value $c_{(d,w,h)}$ is determined by the N basis functions of the target viewpoint and by the base color value and the N color coefficients of the spatial point (w, h) of the d-th spatial layer.
In some embodiments, the color value $c_{(d,w,h)}$ of the spatial point (w, h) of the d-th spatial layer can be expressed as:

$$c_{(d,w,h)} = k^{0}_{(d,w,h)} + \sum_{n=1}^{N} H_{n}(v_{2})\, k^{n}_{(d,w,h)} \qquad \text{formula (3)}$$

where $H_{n}$ represents the n-th basis function of the target viewpoint, $v_{2}$ its viewing direction, $k^{0}_{(d,w,h)}$ represents the base color value of the spatial point (w, h) of the d-th spatial layer, and $k^{n}_{(d,w,h)}$ represents the n-th color coefficient of the spatial point (w, h) of the d-th spatial layer.

The summation term $\sum_{n=1}^{N} H_{n}(v_{2})\, k^{n}_{(d,w,h)}$ in formula (3) represents the color variation of the spatial point (w, h) of the d-th spatial layer in the viewing direction of the target viewpoint, obtained here as a weighted sum of the N basis function values and the N color coefficients. In other embodiments, the color variation may also be obtained in any other manner capable of fusing the N single color variations, where the n-th single color variation is the product of the n-th color coefficient and the n-th basis function value; for example, the color variation may be a weighted average of the N basis function values and the N color coefficients, which is not limited here.
According to the synthesis operation expressed by the formula (1) and the formula (2), the color value of each pixel point of the target image can be obtained, and then the target image formed by each pixel point is obtained, and the drawing of the target image is completed.
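The synthesis operation of formulas (1) to (3) can be sketched as follows. This is only an illustrative NumPy implementation that reuses the assumed array layout from the earlier sketch (RGB base colors and RGB color coefficients); it is not code from the patent.

```python
import numpy as np

def render_target_image(basis_values, alpha, base_color, coeffs):
    """Composite the target image per formulas (1)-(3).
    basis_values: (N,)            -- H_n(v2), shared by all spatial points
    alpha:        (D, H, W)       -- transparency a_(d,w,h); the last layer is the nearest
    base_color:   (D, H, W, 3)    -- base color k^0 (RGB assumed)
    coeffs:       (D, H, W, N, 3) -- color coefficients k^n (RGB assumed)
    """
    # Formula (3): per-point color = base color + sum_n H_n(v2) * k^n
    variation = np.tensordot(coeffs, basis_values, axes=([3], [0]))   # (D, H, W, 3)
    color = base_color + variation                                    # c_(d,w,h)

    # Formula (2): accumulated transmittance of the layers in front of layer d
    # (layers run far-to-near along axis 0, so "in front" means larger indices).
    cum = np.cumprod((1.0 - alpha)[::-1], axis=0)[::-1]               # prod_{i=d}^{D}(1 - a_i)
    trans = np.concatenate([cum[1:], np.ones_like(cum[:1])], axis=0)  # prod_{i=d+1}^{D}(1 - a_i)

    # Formula (1): transparency- and transmittance-weighted sum over the D layers.
    weights = (alpha * trans)[..., None]                              # (D, H, W, 1)
    return (weights * color).sum(axis=0)                              # (H, W, 3) target image
```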
In step S250, the target image rendered in step S240 is output.
In step S250, the electronic device 1000 may output the target image through the display device for the user to observe, complete the real-time response output to the user input, and implement the real-time rendering of the free viewpoint synthesis.
According to the above steps S210 to S250, in the method of this embodiment, on the one hand, when rendering the target image corresponding to the target viewpoint, the electronic device 1000 renders directly from the second rendering data converted from the pre-stored first rendering data, without calling a model to predict the image data of each spatial point corresponding to the target viewpoint after receiving the target viewpoint input by the user. The rendering computation therefore consists essentially only of the conversion that turns the first rendering data into the second rendering data and of synthesis-operation computations such as formula (1) and formula (2) needed to render the target image from the second rendering data. This greatly reduces the amount of rendering computation, effectively improves the rendering response speed, and realizes real-time rendering of free viewpoint synthesis.
On the other hand, in the embodiment of the present disclosure, the electronic device 1000 takes the anisotropic color characteristics of spatial points into account during real-time rendering by incorporating into the rendering data the N basis function values related to the viewing direction, which embody these color characteristics. In this way, for any spatial point, the color value the point presents in any viewing direction can be obtained from the N basis function values and the image data of the point, and from it the color value of the pixel to be rendered; this color value is closer to the true visual appearance of the target scene observed from the target viewpoint. The method of the embodiment of the present disclosure therefore not only achieves real-time rendering of free viewpoint synthesis but also achieves high-accuracy rendering.
In addition, when taking the anisotropic color characteristics of spatial points into account, the method of the embodiment of the present disclosure adopts a rendering data structure that helps reduce storage-space occupation, which lowers the hardware configuration requirements on the electronic device running the method and improves the universality of the method across electronic devices.
In some embodiments, the first pre-stored rendering data of the electronic device 1000 may be obtained by a preset first model and a preset second model, wherein the image data of each spatial point in the first rendering data is obtained by the preset first model, and the N basis function values in the first rendering data are obtained by the second model.
The first model and the second model can be obtained by training through a training sample set. The first model reflects the mapping relation between the position coordinates of the space points and the image data of the space points, and the second model reflects the mapping relation between the observation direction and the N basis function values.
In some embodiments, the first model and the second model may be dedicated models dedicated to the target scene, in which case the training samples in the set of training samples may only be training samples based on the camera taking an image of the target scene. For the special model, when the target scene changes every time, the first model and the second model need to be retrained, so that the model training cost is increased.
In order to improve the compatibility of the first model and the second model with respect to the scene, so that the first model and the second model can be adapted to different scenes, in other embodiments, the inventor encodes scene characteristics in the model input information, which means that the model input information will contain content embodying the scene characteristics, and then the model output is the output corresponding to a specific scene. Through the mode, the first model and the second model can be adapted to different scenes, when a target scene changes, model training does not need to be carried out again, and model training cost is reduced. In these embodiments, the first model and the second model may be obtained by training samples corresponding to different scenes, and in this case, the training sample set may include training samples corresponding to different scenes.
In an embodiment where the scene features are encoded in the model input information, the input information of the first model may include position coordinates of a spatial point and the first scene feature corresponding to the position coordinates, and the output information of the first model includes image data of the spatial point. In this way, when the image data of any spatial point in the first rendering data is obtained by the first model, the input information of the first model includes the position coordinates of the any spatial point and the first scene characteristics corresponding to the spatial point (or referred to as corresponding to the position coordinates), and the output information of the first model includes the image data of the any spatial point. The first scene feature is a location-related feature of the target scene.
In an embodiment in which the scene features are encoded in the model input information, the input information of the second model comprises a viewing direction and second scene features corresponding to the viewing direction. In this way, when the N basis function values in the first rendering data are obtained by the second model, the input information of the second model includes the observation direction of the reference viewpoint and the second scene characteristic corresponding to the observation direction, and the output information of the second model includes the N basis functions in the first rendering data. The second scene characteristic is a characteristic of the target scene that is related to the viewing direction.
In the embodiment where the first drawing data is generated by the electronic device 1000, the real-time drawing method may further include, before the step S220, a step of obtaining the first drawing data, and as shown in fig. 3, obtaining the first drawing data may include the following steps S310 to S330:
step S310, the observation direction of the reference viewpoint and the position coordinates of each spatial point in the first group of spatial layers of the reference viewpoint are obtained.
Step S320, for each spatial point in the first group of spatial layers, obtaining a first scene feature of the target scene at the spatial point, and inputting the position coordinates of the spatial point and the first scene feature corresponding to the spatial point to the first model to obtain image data of the spatial point.
In step S320, a feature vector obtained by concatenating the position coordinates of the spatial point and the first scene feature into a single one-dimensional vector may be input into the first model; the first scene feature itself is also a one-dimensional feature vector.
In this embodiment, the first model may be expressed as:

$(a, k^{0}, k^{1}, \ldots, k^{N}) = F(x, S(x))$   formula (4);

where, when the image data of a spatial point in the first set of spatial layers is obtained through the first model, x represents the position coordinates of that spatial point, S(x) represents the first scene feature of the target scene at that spatial point, and $(a, k^{0}, k^{1}, \ldots, k^{N})$ represents the image data of the spatial point, in which a represents the transparency of the spatial point, $k^{0}$ represents its base color value, and $k^{1}, \ldots, k^{N}$ represent its N color coefficients.
In some embodiments, the structure of the first model may be a 6-layer multi-layered perceptron structure.
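For illustration only, such a 6-layer multi-layer perceptron could be sketched as below; the hidden width, activations, sigmoid on the transparency, and the treatment of each k as an RGB triple are assumptions not specified in the patent.

```python
import torch
import torch.nn as nn

class PointModel(nn.Module):
    """Illustrative 6-layer MLP for the first model F of formula (4):
    input  = concat(x, S(x))  (position coordinates + first scene feature),
    output = (a, k^0, k^1, ..., k^N)  (transparency, base color, N color coefficients)."""

    def __init__(self, pos_dim=3, feat_dim=32, n_basis=8, hidden=128):
        super().__init__()
        self.n_basis = n_basis
        layers, in_dim = [], pos_dim + feat_dim
        for _ in range(5):                                   # 5 hidden layers + 1 output layer = 6
            layers += [nn.Linear(in_dim, hidden), nn.ReLU(inplace=True)]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 1 + 3 * (n_basis + 1)))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x, scene_feat):
        out = self.mlp(torch.cat([x, scene_feat], dim=-1))
        a = torch.sigmoid(out[..., :1])                      # transparency a in [0, 1]
        k = out[..., 1:].reshape(*out.shape[:-1], self.n_basis + 1, 3)
        return a, k                                          # k[..., 0, :] is the base color k^0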
The first scene characteristics S (x) of the target scene at a spatial point in the first group of spatial layers, that is, the first scene characteristics of the target scene corresponding to the spatial point, may be obtained based on multiple frames of first images, where the multiple frames of first images are known images obtained by observing the target scene along multiple viewpoints, the multiple frames of first images are in one-to-one correspondence with the multiple viewpoints, and the multiple viewpoints include reference viewpoints. When the first scene feature of a target scene corresponding to a space point is obtained based on multiple frames of first images, as shown in fig. 4, for each frame of first image, the first image feature corresponding to the space point may be obtained, so as to obtain multiple first image features corresponding to the space point, and then the multiple first image features are fused according to a set fusion manner to obtain the first scene feature S (x) corresponding to the space point.
When the first image feature corresponding to any space point is obtained for any first image, the image feature of the first image may be extracted through a feature extraction network to obtain a feature map of the first image, and then the pixel point feature of the feature map corresponding to the space point is used as the first image feature corresponding to the space point.
The feature extraction network may be, for example, a two-dimensional convolutional network.
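As an illustration of this per-point feature sampling and fusion (the projection function, nearest-pixel sampling, and mean fusion below are assumptions, not details given in the patent):

```python
import torch

def first_scene_feature(point_xyz, feature_maps, projections):
    """Illustrative computation of the first scene feature S(x) for one spatial point:
    sample, from each first image's feature map, the pixel feature the point projects to,
    then fuse the per-image features (mean fusion assumed here)."""
    feats = []
    for fmap, project in zip(feature_maps, projections):  # fmap: (C, H_img, W_img)
        u, v = project(point_xyz)                         # pixel coordinates in this first image
        feats.append(fmap[:, int(v), int(u)])             # nearest-pixel feature, shape (C,)
    return torch.stack(feats, dim=0).mean(dim=0)          # fused first scene feature S(x)
```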
Step S330, acquiring a second scene characteristic of the target scene in the observation direction of the reference viewpoint, and inputting the observation direction of the reference viewpoint and the second scene characteristic into a second model to obtain N basis function values of the reference viewpoint.
In step S330, a feature vector obtained by concatenating the viewing direction of the reference viewpoint and the second scene feature into a single one-dimensional vector may be input into the second model; the second scene feature itself is also a one-dimensional feature vector.
In this embodiment, the second model may be expressed as:

$(H_{1}(v_{1}), \ldots, H_{N}(v_{1})) = G(v_{1}, S(v_{1}))$   formula (5);

where, when the N basis function values of the reference viewpoint are obtained through the second model, $v_{1}$ represents the viewing direction of the reference viewpoint, $S(v_{1})$ represents the second scene feature of the target scene in the viewing direction $v_{1}$, and $(H_{1}(v_{1}), \ldots, H_{N}(v_{1}))$ represents the N basis function values of the reference viewpoint.
In some embodiments, the structure of the second model may be a two-layer MLP network structure.
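A purely illustrative two-layer MLP for the second model G of formula (5) is sketched below; the hidden width and activation are assumptions not given in the patent.

```python
import torch
import torch.nn as nn

class BasisModel(nn.Module):
    """Illustrative two-layer MLP for the second model G of formula (5):
    input  = concat(v, S(v))  (viewing direction + second scene feature),
    output = (H_1(v), ..., H_N(v))  (the N basis function values)."""

    def __init__(self, dir_dim=3, feat_dim=32, n_basis=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dir_dim + feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, n_basis),
        )

    def forward(self, view_dir, scene_feat):
        return self.mlp(torch.cat([view_dir, scene_feat], dim=-1))
```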
The second scene feature $S(v_{1})$ of the target scene in the viewing direction $v_{1}$ may also be obtained from the multiple frames of first images. When obtaining the second scene feature $S(v_{1})$ from the multiple frames of first images, as shown in FIG. 5, for each frame of first image, the second image feature imaged at the observation position corresponding to that first image (that is, at the optical center position corresponding to that first image) and along the viewing direction of the reference viewpoint is obtained, yielding a plurality of second image features; these second image features are then fused, in the same fusion manner as used for the first image features, to obtain the second scene feature $S(v_{1})$ of the target scene in the viewing direction $v_{1}$.
When the second image feature is obtained from any one of the first images, it can likewise be obtained from the feature map of that first image: the pixel feature of the feature map that corresponds to the observation position of the first image and the viewing direction of the reference viewpoint may be used as the second image feature obtained from that first image.
The above fusion method may be any fusion method for performing feature fusion, and is not limited herein.
Through the above steps S310 to S330, the electronic device 1000 can obtain the first rendering data of the target scene at the reference viewpoint.
< apparatus embodiment >
Fig. 6 is a hardware configuration diagram of an electronic device 600 according to an embodiment. As shown in fig. 6, the electronic device 600 may comprise a processor 610 and a memory 620, the memory 620 being configured to store a computer program, the processor 610 being configured to execute the real-time rendering method according to any embodiment of the present disclosure under the control of the computer program.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A real-time rendering method of free viewpoint synthesis, comprising:
receiving a target viewpoint for observing a target scene;
acquiring pre-stored first rendering data of the target scene at a reference viewpoint;
converting the first rendering data into second rendering data of the target scene at the target viewpoint according to a camera coordinate conversion relation between the target viewpoint and the reference viewpoint; wherein the rendering data comprises: N basis function values of a corresponding viewpoint and image data of each spatial layer in a group of spatial layers of the corresponding viewpoint, wherein the basis function values represent color variation with respect to the viewing direction of the corresponding viewpoint, the group of spatial layers are arranged in parallel along the viewing direction of the corresponding viewpoint at different depth values, the image data of a spatial layer comprises image data of each spatial point in the spatial layer, and the image data of a spatial point comprises a transparency, a base color value, and N color coefficients in one-to-one correspondence with the N basis functions;
rendering a target image corresponding to the target viewpoint according to the second rendering data;
and outputting the target image.
2. The method of claim 1, wherein said rendering a target image corresponding to the target viewpoint according to the second rendering data comprises:
rendering, according to the second rendering data and a set synthesis operation, the target image corresponding to the target viewpoint; wherein the synthesis operation is expressed as:
$$c_{(w,h)} = \sum_{d=1}^{D} a_{(d,w,h)}\, c_{(d,w,h)} \prod_{i=d+1}^{D} \left(1 - a_{(i,w,h)}\right)$$
wherein $c_{(w,h)}$ represents the color value of the pixel point $(w,h)$ of the target image, $D$ represents the number of spatial layers in the second group of spatial layers of the target viewpoint, $a_{(d,w,h)}$ represents the transparency of the spatial point $(w,h)$ of the $d$-th spatial layer in the second group of spatial layers, $a_{(i,w,h)}$ represents the transparency of the spatial point $(w,h)$ of the $i$-th spatial layer in the second group of spatial layers, $c_{(d,w,h)}$ represents the color value of the spatial point $(w,h)$ of the $d$-th spatial layer, and the depth value corresponding to the $d$-th spatial layer decreases as $d$ increases; the color value $c_{(d,w,h)}$ is determined by the N basis functions of the target viewpoint, the base color value of the spatial point $(w,h)$ of the $d$-th spatial layer, and its N color coefficients.
3. The method according to claim 2, wherein the color value $c_{(d,w,h)}$ of the spatial point $(w,h)$ of said $d$-th spatial layer is expressed as:
$$c_{(d,w,h)} = k^{0}_{(d,w,h)} + \sum_{n=1}^{N} H_{n}\, k^{n}_{(d,w,h)}$$
wherein $H_{n}$ represents the $n$-th basis function value of the target viewpoint, $k^{0}_{(d,w,h)}$ represents the base color value of the spatial point $(w,h)$ of the $d$-th spatial layer, and $k^{n}_{(d,w,h)}$ represents the $n$-th color coefficient of the spatial point $(w,h)$ of the $d$-th spatial layer.
4. The method of claim 1, wherein the image data of a spatial point is independent of the viewing direction.
5. The method according to any one of claims 1 to 4, wherein the image data of each spatial point in the first rendering data is obtained by a preset first model, and the N basis function values of the first rendering data are obtained by a preset second model; when the image data of any spatial point in the first rendering data is obtained by the first model, the input information of the first model comprises the position coordinates of the spatial point and a first scene feature corresponding to the spatial point; and when the N basis function values of the first rendering data are obtained by the second model, the input information of the second model comprises the viewing direction of the reference viewpoint and a second scene feature corresponding to the viewing direction.
6. The method according to claim 5, wherein the first scene feature corresponding to any spatial point in the first rendering data is obtained by fusing, according to a set fusion mode, first image features corresponding to the spatial point in multiple frames of first images; the multiple frames of first images are known images obtained by observing the target scene from a plurality of viewpoints, the multiple frames of first images correspond to the plurality of viewpoints one to one, and the plurality of viewpoints include the reference viewpoint;
and the second scene feature is obtained by fusing, according to the same fusion mode, second image features of the multiple frames of first images that correspond to the viewing direction of the reference viewpoint.
7. The method of claim 5, wherein the first model and the second model are trained with training samples from a training sample set, the training sample set comprising training samples corresponding to different scenes.
8. The method of claim 5, wherein the first model is a six-layer multilayer perceptron (MLP) structure and/or the second model is a two-layer MLP network structure.
9. An electronic device comprising a memory and a processor, wherein the memory is configured to store an executable computer program, and the processor is configured to run the computer program to control the electronic device to perform the real-time rendering method according to any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements a real-time rendering method as claimed in any one of claims 1 to 8.
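To illustrate the synthesis operation of claims 2 and 3, the following is a minimal NumPy sketch (not part of the claimed method; the array shapes and the helper name render_target_image are illustrative assumptions): each layer's view-dependent color is first expanded from its base color value, its N color coefficients and the N basis function values of the target viewpoint, and the D spatial layers are then alpha-composited back to front.

import numpy as np

def render_target_image(alpha, base_color, coeffs, basis):
    # alpha:      (D, H, W)       transparency a_(d,w,h) of each spatial point
    # base_color: (D, H, W, 3)    base color value of each spatial point
    # coeffs:     (D, H, W, N, 3) N color coefficients of each spatial point
    # basis:      (N,)            N basis function values of the target viewpoint
    # Layer index d increases toward the viewpoint (depth decreases as d grows).

    # Claim 3: view-dependent color = base color + sum over n of H_n * k^n.
    color = base_color + np.einsum('n,dhwnc->dhwc', basis, coeffs)

    # Claim 2: back-to-front "over" compositing across the D spatial layers.
    D = alpha.shape[0]
    image = np.zeros(color.shape[1:])
    for d in range(D):
        # Product of (1 - alpha) over all layers in front of layer d;
        # for the nearest layer this empty product evaluates to 1.
        transmittance = np.prod(1.0 - alpha[d + 1:], axis=0)[..., None]
        image += alpha[d][..., None] * color[d] * transmittance
    return image

Once the second rendering data has been converted into the target viewpoint's camera coordinates, a call such as render_target_image(alpha, base_color, coeffs, basis) would produce the target image of claim 1; because only per-layer blending remains at this stage, the compositing also maps naturally onto GPU alpha blending, which is one way such a pipeline can run in real time.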
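Claims 5 and 8 fix the two preset models only by their inputs, outputs and depth (a six-layer MLP and a two-layer MLP). The sketch below is one possible PyTorch realization under assumed hidden widths, feature dimensions and activations; none of these values, nor the variable names, are specified by the patent.

import torch.nn as nn

N_BASIS = 8      # assumed number of basis functions N
FEAT_DIM = 64    # assumed scene-feature dimension
HIDDEN = 128     # assumed hidden width

# First model (claims 5 and 8): a six-layer MLP mapping a spatial point's position
# coordinates plus its first scene feature to a transparency, a base color value
# and N color coefficients (one RGB triple per basis function).
first_model = nn.Sequential(
    nn.Linear(3 + FEAT_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, 1 + 3 + 3 * N_BASIS),
)

# Second model (claims 5 and 8): a two-layer MLP mapping the viewing direction of a
# viewpoint plus its second scene feature to the N basis function values.
second_model = nn.Sequential(
    nn.Linear(3 + FEAT_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, N_BASIS),
)

Per claim 7, both models are trained on a training sample set spanning different scenes; the claims do not detail the training procedure, so a loss and optimization scheme would have to be chosen separately.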
CN202210868418.9A 2022-07-22 2022-07-22 Real-time drawing method for free viewpoint synthesis, electronic device and readable storage medium Active CN115512038B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210868418.9A CN115512038B (en) 2022-07-22 2022-07-22 Real-time drawing method for free viewpoint synthesis, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210868418.9A CN115512038B (en) 2022-07-22 2022-07-22 Real-time drawing method for free viewpoint synthesis, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN115512038A (en) 2022-12-23
CN115512038B CN115512038B (en) 2023-07-18

Family

ID=84502971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210868418.9A Active CN115512038B (en) 2022-07-22 2022-07-22 Real-time drawing method for free viewpoint synthesis, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115512038B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009116532A (en) * 2007-11-05 2009-05-28 Nippon Telegr & Teleph Corp <Ntt> Method and apparatus for generating virtual viewpoint image
US20100315324A1 (en) * 2009-06-16 2010-12-16 Samsung Electronics Co., Ltd. Display device and method
US8800183B1 (en) * 2013-04-11 2014-08-12 Brice Belisle Belisle picture painting technique displaying different colors at different viewing angles
JP2015022510A (en) * 2013-07-18 2015-02-02 凸版印刷株式会社 Free viewpoint image imaging device and method for the same
JP2018042237A (en) * 2016-08-31 2018-03-15 キヤノン株式会社 Image processor, image processing method, and program
US10262451B1 (en) * 2018-04-09 2019-04-16 8i Limited View-dependent color compression
CN110246146A (en) * 2019-04-29 2019-09-17 北京邮电大学 Full parallax light field content generating method and device based on multiple deep image rendering
CN110798673A (en) * 2019-11-13 2020-02-14 南京大学 Free viewpoint video generation and interaction method based on deep convolutional neural network
CN111385554A (en) * 2020-03-28 2020-07-07 浙江工业大学 High-image-quality virtual viewpoint drawing method of free viewpoint video
CN112738496A (en) * 2020-12-25 2021-04-30 浙江合众新能源汽车有限公司 Image processing method, apparatus, system, and computer-readable medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KEISUKE NONAKA et al.: "Fast Plane-Based Free-viewpoint Synthesis for Real-time Live Streaming", 2018 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING, pages 1-4 *
WANG Hui; PENG Zongju; JIAO Renzhi; CHEN Fen; YU Mei; JIANG Gangyi: "Rendering algorithm based on fast 3D coordinate transformation", Journal of Image and Graphics, no. 06, pages 805-814 *
YU Li: "A new method of viewpoint rendering based on depth images", Journal of the Graduate School of the Chinese Academy of Sciences, vol. 27, no. 5, pages 638-644 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115879207A (en) * 2023-02-22 2023-03-31 清华大学 Outdoor space surrounding degree determining method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN115512038B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US11238644B2 (en) Image processing method and apparatus, storage medium, and computer device
US20200349680A1 (en) Image processing method and device, storage medium and electronic device
US11551405B2 (en) Computing images of dynamic scenes
KR102612808B1 (en) lighting estimation
CN112166604B (en) Volume capture of objects with a single RGBD camera
US11823322B2 (en) Utilizing voxel feature transformations for view synthesis
EP3533218B1 (en) Simulating depth of field
CN109410141B (en) Image processing method and device, electronic equipment and storage medium
EP3987443A1 (en) Recurrent multi-task convolutional neural network architecture
WO2020048484A1 (en) Super-resolution image reconstruction method and apparatus, and terminal and storage medium
CN110648274B (en) Method and device for generating fisheye image
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
CN115512038B (en) Real-time drawing method for free viewpoint synthesis, electronic device and readable storage medium
CN115482322A (en) Computer-implemented method and system for generating a synthetic training data set
CN117036581A (en) Volume rendering method, system, equipment and medium based on two-dimensional nerve rendering
CN115272575B (en) Image generation method and device, storage medium and electronic equipment
CN115375884B (en) Free viewpoint synthesis model generation method, image drawing method and electronic device
WO2021248432A1 (en) Systems and methods for performing motion transfer using a learning model
CN115527011A (en) Navigation method and device based on three-dimensional model
CN111866493A (en) Image correction method, device and equipment based on head-mounted display equipment
CN116681818B (en) New view angle reconstruction method, training method and device of new view angle reconstruction network
US20240112394A1 (en) AI Methods for Transforming a Text Prompt into an Immersive Volumetric Photo or Video
CN115714888B (en) Video generation method, device, equipment and computer readable storage medium
CN114779981B (en) Draggable hot spot interaction method, system and storage medium in panoramic video
CN115272576A (en) Image generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant