CN112819945A - Fluid reconstruction method based on sparse viewpoint video - Google Patents

Fluid reconstruction method based on sparse viewpoint video Download PDF

Info

Publication number
CN112819945A
Authority
CN
China
Prior art keywords
frame
field
density field
density
fluid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110105012.0A
Other languages
Chinese (zh)
Other versions
CN112819945B (en)
Inventor
梁晓辉
陈泽年
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202110105012.0A
Publication of CN112819945A
Application granted
Publication of CN112819945B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00: 3D [Three Dimensional] image rendering

Abstract

The embodiment of the disclosure discloses a fluid reconstruction method based on sparse viewpoint video. One embodiment of the method comprises: step (1), reconstructing a fluid density field from a single-frame fluid image at sparse viewpoints to obtain a later-frame reconstructed density field; step (2), performing velocity field estimation from the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain a previous-frame velocity field; step (3), weighting the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain a later-frame target reconstructed density field; and step (4), performing steps (1) to (3) on each frame of the video in turn to obtain a multi-frame reconstructed density field. The embodiment effectively relaxes the dense-viewpoint constraint in fluid reconstruction and thereby reduces the cost of reconstructing fluids from real-world data.

Description

Fluid reconstruction method based on sparse viewpoint video
Technical Field
The embodiment of the disclosure relates to the field of computer graphics, in particular to a fluid reconstruction method based on sparse view video.
Background
Fluids such as smoke and water are common elements in film, television, and animation. Traditionally, fluid animations are edited manually by artists, which costs considerable time, labor, and money, and the resulting fluid models often differ noticeably from the appearance of real-world fluids and fail to meet practical requirements. To obtain realistic fluid animation quickly, a data-driven method can perform multi-frame three-dimensional fluid reconstruction from video shot in the real world, quickly generating the fluid animation an art creator needs while reducing manual intervention in modeling and saving modeling time.
However, existing approaches to reconstructing fluids in this way often suffer from the following technical problems:

Current single-frame fluid reconstruction mainly estimates the density field by tomography. Accurate density field reconstruction requires dense viewpoints (viewing angles) as constraints, but precisely calibrated images taken from dense viewpoints are difficult to obtain, so conventional tomographic reconstruction is hard to apply in practice. For multi-frame fluid reconstruction, the velocity field is mainly estimated by particle image velocimetry or optical flow in order to track the motion of the fluid. Particle image velocimetry tracks the fluid by seeding it with tiny particles and requires special equipment, so acquisition is expensive; optical flow requires accurately modeling the initial state of the fluid and is difficult to apply in reality.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose sparse view video-based fluid reconstruction methods to address one or more of the technical problems mentioned in the background section above.
Some embodiments of the present disclosure provide a fluid reconstruction method based on sparse viewpoint video, the method including: step (1), reconstructing a fluid density field from a single-frame fluid image at sparse viewpoints to obtain a later-frame reconstructed density field; step (2), performing velocity field estimation from the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain a previous-frame velocity field; step (3), weighting the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain a later-frame target reconstructed density field; and step (4), performing steps (1) to (3) on each frame of the video in turn to obtain a multi-frame reconstructed density field.
The above embodiments of the present disclosure have the following beneficial effects: the invention reconstructs the fluid using only sparse viewpoint information, which effectively relaxes the dense-viewpoint constraint in fluid reconstruction and thereby reduces the cost of reconstructing fluids from real-world data. A velocity field estimation algorithm estimates the velocity field between two frames, so multi-frame video information is used effectively during modeling; this compensates for the difficulty single-frame reconstruction has in exploiting temporal information and makes the multi-frame reconstruction result smoother. During reconstruction, the invention combines the density field guided by the velocity field with the density field reconstructed from single-frame information, ensuring smooth transitions of information between adjacent frames while keeping the reconstruction result close to the reconstruction target.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic view of an application scenario of a sparse view video-based fluid reconstruction method according to some embodiments of the present disclosure;
Fig. 2 is a schematic view of an application scenario of the single-frame reconstruction method under sparse views according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic view of an application scenario of a fluid reconstruction method based on sparse view video according to some embodiments of the present disclosure.
Referring to fig. 1, a schematic view of an application scenario of a sparse view video-based fluid reconstruction method according to some embodiments of the present disclosure is shown. The fluid reconstruction method based on the sparse view video comprises the following steps:
Step (1): reconstruct a fluid density field from the single-frame fluid image at sparse viewpoints to obtain a later-frame reconstructed density field.
In some embodiments, the execution body of the sparse-viewpoint-video-based fluid reconstruction method may perform fluid density field reconstruction from the later-frame fluid image at sparse viewpoints to obtain the later-frame reconstructed density field. Here, a sparse view may be characterized by an angular difference of 90° between two views. The later-frame fluid image is the currently observed frame of the fluid, e.g., smoke or water. The density field is a three-dimensional matrix of the density distribution of the fluid image; the later-frame reconstructed density field is the three-dimensional matrix of the reconstructed density distribution for the current fluid image. Density field reconstruction recovers the three-dimensional density field structure of a fluid from given fluid images.
In some optional implementations of some embodiments, the execution body performing fluid density field reconstruction from the later-frame fluid image at sparse viewpoints to obtain the later-frame reconstructed density field may include the following steps:
step (a 1): and carrying out binarization on the post-frame fluid image under the sparse view angle, and carrying out visible shell reconstruction according to the binarized mask to obtain a marking matrix. The binarized mask can be subjected to threshold segmentation for the gray level map by the execution subject, wherein the threshold value is changed to 0 below the threshold value and is marked as 1 above the threshold value. The visual shell may generate a matrix of markers that may have fluid present, with a1 in the matrix indicating that there may be fluid present at the location and a 0 indicating that there may be no fluid present.
Step (A2): perform tomographic reconstruction from the later-frame fluid images at sparse viewpoints to obtain an initial later-frame density field. The execution body may reconstruct the input sparse-viewpoint images with a tomography method, subject to the visual hull constraint generated in step (A1), i.e., the density of the region outside the fluid's visual hull is set to 0.
Step (A3): render the initial later-frame density field at an unknown viewing angle to obtain a rendered image of the density field at that angle. For rendering, the execution body equates the intensity of each pixel to the integral of the density along the ray cast from that pixel position on the rendering plane through the density field. For samples that do not fall on integer grid coordinates, the density is obtained by first-order (bilinear) interpolation:

ρ(x, y) = (x2 - x)(y2 - y)·ρ(x1, y1) + (x - x1)(y2 - y)·ρ(x2, y1) + (x2 - x)(y - y1)·ρ(x1, y2) + (x - x1)(y - y1)·ρ(x2, y2)

where ρ denotes the density field; (x, y) are the coordinates of a sample point in a slice of the initial later-frame density field parallel to the horizontal plane; x1 and y1 are the sample coordinates rounded down along the x and y axes; x2 and y2 are the sample coordinates rounded up along the x and y axes (so for a unit-spaced grid the four weights sum to 1); and ρ(x1, y1), ρ(x2, y1), ρ(x1, y2), ρ(x2, y2) are the densities at the four surrounding grid points.
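A small sketch of this first-order lookup for a 2D slice indexed rho[x, y]; clamping at the grid border is an assumption of this sketch, not something the text prescribes:

```python
import numpy as np

def sample_bilinear(rho: np.ndarray, x: float, y: float) -> float:
    """First-order (bilinear) density lookup at a non-integer coordinate of a
    2D slice rho, indexed rho[x, y]; out-of-range coordinates are clamped."""
    x = float(np.clip(x, 0, rho.shape[0] - 1))
    y = float(np.clip(y, 0, rho.shape[1] - 1))
    x1, y1 = int(np.floor(x)), int(np.floor(y))
    x2, y2 = min(x1 + 1, rho.shape[0] - 1), min(y1 + 1, rho.shape[1] - 1)
    tx, ty = x - x1, y - y1                       # fractional offsets in [0, 1)
    return ((1 - tx) * (1 - ty) * rho[x1, y1] + tx * (1 - ty) * rho[x2, y1]
            + (1 - tx) * ty * rho[x1, y2] + tx * ty * rho[x2, y2])

rho = np.arange(16, dtype=float).reshape(4, 4)
print(sample_bilinear(rho, 1.5, 2.25))            # 8.25, a blend of four neighbours
```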
Step (A4): process the rendered image at the unknown viewing angle with a style transfer method combined with a steerable pyramid decomposition algorithm to obtain a relative fluid image at the unknown angle. The style transfer method first decomposes the observed front-view and side-view images with a steerable pyramid, using 4 sub-bands and 4 decomposition levels, finally producing a high-frequency residual image, a low-frequency residual image, and two levels of 4-direction decomposition sub-bands at different sizes. A distribution histogram with 256 bins is then computed for each sub-band image. Finally, histogram interpolation estimates the image distribution histogram at the unknown viewing angle:

c_{q,k} = ((p - q)/(p - r))·φ_k(i_r) + ((q - r)/(p - r))·φ_k(i_p)

where q is the unknown viewing angle; r and p are the first and second observed viewing angles; k indexes the k-th bin of the histogram; c_{q,k} is the value of the k-th bin of the histogram estimated at the unknown angle q; i_r and i_p are the fluid images at the first and second viewing angles; and φ_k(i) is the value of the k-th histogram bin of image i. The steerable pyramid is an important image processing tool: a linear decomposition that splits an image into a series of sub-bands at different scales and orientations. Its basis functions are higher-order derivatives of a chosen order; the transform is computed by recursive convolution and sampling operations, and the inverse transform matrix is the transpose of the forward one. Its advantages are translation and rotation invariance. "4 sub-bands" means each pyramid level decomposes the image with 4 directional basis functions; "4 levels" means the pyramid has four layers in total: the high-frequency residual, two decomposition levels at different scales, and the remaining low-frequency residual. The high-frequency residual carries the local detail of the image, the low-frequency residual its global structure; the two scales are the original image size and 1/2 size; the 4 directions are the image orientations corresponding to the 4 basis functions. The histogram interpolation is linear.
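A minimal sketch of this histogram interpolation, assuming linear blending in viewing angle between the two observed views (names and angles are illustrative):

```python
import numpy as np

def interp_histogram(hist_r: np.ndarray, hist_p: np.ndarray,
                     r: float, p: float, q: float) -> np.ndarray:
    """Estimate the 256-bin sub-band histogram at an unseen angle q by linear
    interpolation between the histograms observed at angles r and p."""
    w = (q - r) / (p - r)                 # 0 at view r, 1 at view p
    return (1.0 - w) * hist_r + w * hist_p

# Front view at 0 rad, side view at pi/2 rad; estimate a view at pi/4.
hist_front = np.histogram(np.random.rand(1000), bins=256, range=(0, 1))[0].astype(float)
hist_side = np.histogram(np.random.rand(1000), bins=256, range=(0, 1))[0].astype(float)
hist_q = interp_histogram(hist_front, hist_side, 0.0, np.pi / 2, np.pi / 4)
```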
Step (A5): perform tomographic reconstruction using the relative fluid images at unknown viewing angles generated in step (A4) to adjust the later-frame density field, obtaining an adjusted later-frame reconstructed density field that updates the initial later-frame density field. The execution body randomly generates 100 unknown-viewpoint constraints at angles in [0, 2π] by the method of step (A4) and optimizes the density field using them as constraints of the tomographic reconstruction.
Step (A6): iterate steps (A3) through (A5) to generate the later-frame reconstructed density field.
Step (2): perform velocity field estimation from the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain the previous-frame velocity field.
In some embodiments, the execution body may perform velocity field estimation on the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain the previous-frame velocity field. The previous-frame reconstructed density field is the three-dimensional matrix of the density distribution reconstructed from the previous frame's fluid image; the later-frame reconstructed density field is that of the current frame. Single-frame information means the information of a single frame's fluid image. The previous-frame velocity field is the velocity distribution field that drives the motion of the previous frame's fluid. Velocity field processing obtains a velocity field satisfying the constraints by iteratively minimizing an energy function.
In some optional implementations of some embodiments, the execution body performing velocity field estimation on the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain the previous-frame velocity field, may include the following steps:
step (B1): an initial previous frame rate field is acquired.
As an example, if the current reconstructed frame is the first frame, the velocity field is initialized to 0; otherwise the initial velocity field is obtained by advecting the velocity field of the frame before the previous one with itself:

u^{t-1} = advect(u^{t-2}, u^{t-2})

where t is the time in the video, u is the velocity field, u^{t-1} is the velocity field at time t-1 (the previous frame), u^{t-2} is the velocity field at time t-2, and advect() is the advection function.
Step (B2): perform continuous modeling of the later-frame density field formed from the later-frame reconstructed density field, generating a continuously modeled later-frame density field. This converts the discrete density field into superposed Gaussian functions, i.e., into a single differentiable function. The continuously modeled density at any point of the density field is:

ρ̃_{i,j,k} = C_{i,j,k} · Σ_{(i',j',k')} O_{i',j',k'} · ρ_{i',j',k'} · (1/(√(2π)·σ))³ · e^(-((i-i')² + (j-j')² + (k-k')²)/(2σ²))

where (i, j, k) are the coordinates of any point of the density field, with i, j, k its three axis coordinates; the sum runs over all points (i', j', k') in the cube of side length 2l centered on (i, j, k); l is a constant and may be 1; ρ is the density field, ρ_{i,j,k} the density at point (i, j, k), and ρ̃_{i,j,k} the density at (i, j, k) after continuous modeling; C_{i,j,k} is the scaling factor for point (i, j, k); σ is the variance; O_{i',j',k'} indicates whether an obstacle occupies point (i', j', k'), taking 0 for an obstacle voxel and 1 otherwise; π is the circle ratio; and e is the natural constant, approximately 2.718281828459045.
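A sketch of this continuous modeling under stated assumptions: O is 1 for free voxels and 0 for obstacle voxels, the per-voxel scaling C is folded into the Gaussian normalization, l = 1, and np.roll gives periodic borders for brevity; none of these choices are fixed by the text.

```python
import numpy as np

def continuous_density(rho: np.ndarray, obstacle: np.ndarray,
                       sigma: float = 0.8, l: int = 1) -> np.ndarray:
    """Superpose one Gaussian per voxel over the (2l+1)^3 neighbourhood so the
    discrete density becomes a single differentiable field; obstacle voxels
    (obstacle == 1) contribute nothing."""
    masked = rho * (1 - obstacle)
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma) ** 3
    out = np.zeros_like(rho)
    for di in range(-l, l + 1):
        for dj in range(-l, l + 1):
            for dk in range(-l, l + 1):
                w = norm * np.exp(-(di * di + dj * dj + dk * dk) / (2.0 * sigma ** 2))
                out += w * np.roll(masked, (di, dj, dk), axis=(0, 1, 2))
    return out

smooth = continuous_density(np.random.rand(8, 8, 8), np.zeros((8, 8, 8)))
```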
Step (B3): advect the previous-frame density field using the initial previous-frame velocity field to generate a previous-frame advected density field. The advection is:

ρ̃^t = advect(ρ^{t-1}, u^{t-1})

where t is the time in the video, ρ̃^t is the advected density field at time t (the later frame), ρ^{t-1} is the density field at time t-1 (the previous frame), u^{t-1} is the velocity field at time t-1, and advect() is the advection function.
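A compact sketch of the advect() operator as semi-Lagrangian backtracing with trilinear sampling via scipy.ndimage.map_coordinates; velocities measured in voxels per step and the single-step backtrace are simplifying assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(field: np.ndarray, u: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """Semi-Lagrangian advection: trace every grid point back by u*dt and
    sample the field there with trilinear interpolation.

    field: (X, Y, Z) scalar grid; u: (3, X, Y, Z) velocity in voxels/step.
    """
    X, Y, Z = field.shape
    grid = np.mgrid[0:X, 0:Y, 0:Z].astype(float)
    src = grid - dt * u                   # backTrace(i, j, k, u, dt)
    return map_coordinates(field, src, order=1, mode="nearest")

rho_prev = np.random.rand(16, 16, 16)
u_prev = np.zeros((3, 16, 16, 16)); u_prev[1] = 0.5   # constant drift along y
rho_adv = advect(rho_prev, u_prev)                    # advected density at time t
# The step (B1) initialisation reuses the operator per velocity component:
u_init = np.stack([advect(u_prev[c], u_prev) for c in range(3)])
```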
Step (B4): based on the previous-frame advected density field and the later-frame reconstructed density field, generate the error information and the prior energy function of the initial previous-frame velocity field. The energy function may be expressed as:

L = L_d + L_s

L_s = Σ_{i,j,k} Σ_{(i',j',k')} ‖u_{i',j',k'} - u_{i,j,k}‖²

L_d = Σ_{i,j,k} ( F(advect(ρ^{t-1}, u^{t-1}·Δt)) - F(ρ̂^t) )²_{i,j,k}

where L is the energy function, L_s the velocity field prior (a smoothness term), and L_d the error information; (i, j, k) is any point of the density field, with i, j, k its three axis coordinates; the inner sum of L_s runs over the points (i', j', k') in the cube of side length 2a centered on (i, j, k), where a is the size of the filter; u is the velocity field, and u_{i,j,k}, u_{i',j',k'} its values at the respective points; advect() is the advection function; t is the time in the video; ρ^{t-1} is the density field at time t-1 (the previous frame); u^{t-1} is the velocity field at time t-1; Δt is the time step; F() is the superposition of Gaussian functions from step (B2); ρ̂ is the reconstructed density field and ρ̂^t its value at time t; and ()_{i,j,k} denotes the value of a field quantity at the point (i, j, k).
Step (B5): adjust the initial previous-frame velocity field by minimizing the energy function, obtaining an adjusted previous-frame velocity field. The derivative of the density error with respect to the velocity field may be determined by:

∂L_d/∂u_{i,j,k} ≈ -2·Δt · ( F(advect(ρ^{t-1}, u·Δt)) - F(ρ̂^t) )_{i,j,k} · ( ρ^{t-1}(⌈backTrace(i,j,k,u,Δt)⌉) - ρ^{t-1}(⌊backTrace(i,j,k,u,Δt)⌋) )

where L_d is the error information; (i, j, k) is any point of the density field, with i, j, k its three axis coordinates; u is the velocity field and u_{i,j,k} its value at point (i, j, k); advect() is the advection function; t is the time in the video; Δt is the time interval in the video; ρ^{t-1} is the density field at time t-1 (the previous frame); ρ̂^t is the reconstructed density field at time t; F() is the superposition of Gaussian functions; ()_{i,j,k} denotes the value of a field quantity at the point (i, j, k); backTrace(i, j, k, u, Δt) denotes the coordinates reached by tracing the point (i, j, k) back through u over the time Δt; and ρ^{t-1}(⌈·⌉), ρ^{t-1}(⌊·⌋) are the density values at those coordinates rounded up and rounded down, whose difference is a first-order approximation of the spatial density gradient at the backtraced point.
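The following sketch turns the derivative above into an explicit gradient-descent update. For brevity it drops the smoothness prior L_s, the Gaussian modeling F(), and the divergence-free projection of step (B6), so it is a simplification rather than the full optimizer described here:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def velocity_gradient(u, rho_prev, rho_rec, dt=1.0):
    """Per-voxel dL_d/du: advection error times the ceil/floor density
    difference at the backtraced point (first-order spatial gradient)."""
    X, Y, Z = rho_prev.shape
    grid = np.mgrid[0:X, 0:Y, 0:Z].astype(float)
    src = grid - dt * u                                   # backTrace coordinates
    err = map_coordinates(rho_prev, src, order=1, mode="nearest") - rho_rec
    grad = np.zeros_like(u)
    for c in range(3):                                    # one velocity axis at a time
        hi, lo = src.copy(), src.copy()
        hi[c], lo[c] = np.ceil(src[c]), np.floor(src[c])
        d_rho = (map_coordinates(rho_prev, hi, order=1, mode="nearest")
                 - map_coordinates(rho_prev, lo, order=1, mode="nearest"))
        grad[c] = -2.0 * dt * err * d_rho
    return grad

rho_prev = np.random.rand(16, 16, 16)
rho_rec = np.random.rand(16, 16, 16)                      # later-frame reconstruction
u = np.zeros((3, 16, 16, 16))
for _ in range(20):                                       # iterate toward convergence
    u -= 0.05 * velocity_gradient(u, rho_prev, rho_rec)
```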
Step (B6): use a fluid solver to project the adjusted previous-frame velocity field to be divergence-free, obtaining the final adjusted previous-frame velocity field.
Step (B7): iterate steps (B3) to (B6) until the velocity field converges, obtaining the previous-frame velocity field.
Step (3): weight the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain the later-frame target reconstructed density field.
In some embodiments, the execution body may weight the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain the later-frame target reconstructed density field. The advected density field is obtained by advecting the density field with the velocity field; the later-frame target reconstructed density field is the later-frame reconstructed density field after weighting with the advected density field.
In some optional implementations of some embodiments, weighting the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain the later-frame target reconstructed density field may include the following steps:
step (C1): and smoothing the previous frame density field by using the previous frame speed field to obtain the previous frame smoothing density field.
Step (C2): weight the previous-frame advected density field against the later-frame reconstructed density field to obtain the later-frame target reconstructed density field.
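Step (C2) reduces to a per-voxel convex combination; the weight below is a free parameter of this sketch and is not specified by the text:

```python
import numpy as np

def combine(rho_advected: np.ndarray, rho_reconstructed: np.ndarray,
            w: float = 0.7) -> np.ndarray:
    """Later-frame target density: blend the advected field (temporal
    smoothness) with the single-frame reconstruction (fit to observations)."""
    return w * rho_advected + (1.0 - w) * rho_reconstructed

rho_target = combine(np.random.rand(16, 16, 16), np.random.rand(16, 16, 16))
```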
With continued reference to fig. 2, a schematic view of an application scenario of the single frame reconstruction method under sparse view according to the present disclosure is shown.
This method first performs a coarse density field reconstruction from the sparse-viewpoint images, where coarse means the quality of the reconstruction result is low. An estimated image at an unknown viewpoint is then generated by a style transfer method, and tomographic reconstruction is performed using the generated estimated image as a constraint. Unknown-viewpoint generation and tomographic reconstruction are iterated to obtain the fluid density field reconstruction result based on a single-frame image. The method comprises the following steps:
step (a 1): and carrying out binarization on the fluid image under the sparse view angle, and carrying out visible shell reconstruction according to the binarized mask. The execution main body can carry out binarization on the observation image by using a fixed threshold value to obtain a two-dimensional image mask at multiple angles. And (3) raising the dimension of the two-dimensional image mask to a possible area prior of the three-dimensional fluid density field, and solving the intersection of the three-dimensional masks at multiple angles to obtain the prior of the area of the fluid density field. The two-dimensional image mask can be obtained by performing threshold segmentation on the image after the binarization operation is performed on the fluid image, wherein the threshold value is 0 when the threshold value is lower than the threshold value, and the threshold value is marked as 1 when the threshold value is larger than the threshold value. The area prior may be that after mapping the two-dimensional mask to three-dimensional space, there is no possibility of fluid in the area inside the mask (labeled as 1 after binarization) and no possibility of fluid in the area outside the mask (labeled as 0 after binarization). The marking of a possible area for the fluid is called a regional prior.
Step (A2): perform tomographic reconstruction from the fluid images at sparse viewpoints to obtain an initial density field estimate. The execution body may reconstruct the input sparse-viewpoint images with a tomography method, adding the visual hull constraint generated in step (A1), i.e., setting the density of the region outside the fluid's visual hull to 0.
Step (A3): render the density field estimate at an unknown viewing angle, using an orthographic rendering method. The execution body renders the density field estimate at the unknown angle, equating the brightness of each pixel to the integral of the density along the ray cast from that pixel position on the rendering plane through the density field. For samples that do not fall exactly on integer grid coordinates, the density is obtained by first-order bilinear interpolation.
Step (A4): fine-tune the rendered image at the unknown viewpoint with the style transfer method combined with the steerable pyramid decomposition algorithm to obtain the unknown-viewpoint estimate. To obtain a reasonable unknown-viewpoint constraint, the execution body generates the unknown-viewpoint estimated image by fine-tuning the rendered image of the estimated density field. Concretely, the observed front-view and side-view images are first decomposed with a steerable pyramid, using 4 sub-bands and 4 levels, finally producing a high-frequency residual image, a low-frequency residual image, and two levels of 4-direction decomposition sub-bands at different sizes. A 256-bin distribution histogram is then computed for each sub-band image, and histogram interpolation estimates the image distribution histogram at the unknown viewing angle.
Step (A5): perform tomographic reconstruction using the unknown-viewpoint estimates generated in step (A4) to optimize the density field. Unknown-viewpoint constraints are generated at 100 random angles in [0, 2π] by the method of step (A4) and used as constraints of the tomographic reconstruction to optimize the density field.
Step (A6): iterate steps (A3) through (A5) to generate the final reconstruction result. Steps (A3) through (A5) are iterated 50 times in total, until the reconstruction result converges.
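To make the control flow of Fig. 2 explicit, here is a skeleton of the single-frame loop. The callables tomography, render, and style_transfer are placeholders to be supplied by the caller; only the iteration structure (50 outer iterations, 100 random angles in [0, 2π]) comes from the text.

```python
import numpy as np

def reconstruct_single_frame(images, hull, tomography, render, style_transfer,
                             n_iters=50, n_views=100):
    """Single-frame loop: coarse tomography, then alternate unknown-viewpoint
    generation (steps A3-A4) and hull-constrained tomography (step A5)."""
    rho = tomography(list(images), hull)                       # step (A2)
    for _ in range(n_iters):                                   # step (A6)
        angles = np.random.uniform(0.0, 2.0 * np.pi, n_views)  # random unknown views
        estimates = [style_transfer(render(rho, a), images)    # steps (A3)-(A4)
                     for a in angles]
        rho = tomography(list(images) + estimates, hull)       # step (A5)
    return rho

# Wiring with trivial stubs, just to exercise the control flow.
rho = reconstruct_single_frame(
    [np.zeros((8, 8))], hull=None,
    tomography=lambda imgs, hull: np.zeros((8, 8, 8)),
    render=lambda rho, a: np.zeros((8, 8)),
    style_transfer=lambda img, imgs: img,
    n_iters=2, n_views=3)
```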
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept defined above, for example, solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (4)

1. A fluid reconstruction method based on sparse viewpoint video, comprising the following steps:
step (1): reconstructing a fluid density field from a single-frame fluid image at sparse viewpoints to obtain a later-frame reconstructed density field;
step (2): performing velocity field estimation from the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain a previous-frame velocity field;
step (3): weighting the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain a later-frame target reconstructed density field;
and step (4): performing steps (1) to (3) on each frame of the video in turn to obtain a multi-frame reconstructed density field.
2. The method according to claim 1, wherein reconstructing a fluid density field from a single-frame fluid image at sparse viewpoints to obtain a later-frame reconstructed density field comprises:
step (A1): binarizing the later-frame fluid image at each sparse viewpoint, and performing visual hull reconstruction from the binarized masks to obtain a marking matrix;
step (A2): performing tomographic reconstruction from the later-frame fluid images at sparse viewpoints to obtain an initial later-frame density field;
step (A3): rendering the initial later-frame density field at an unknown viewing angle to obtain a rendered image of the density field at the unknown angle;
step (A4): processing the rendered image at the unknown angle with a style transfer method combined with a steerable pyramid decomposition algorithm to obtain a relative fluid image at the unknown angle;
step (A5): performing tomographic reconstruction using the relative fluid image at the unknown angle generated in step (A4) to adjust the later-frame density field, obtaining an adjusted later-frame reconstructed density field to update the initial later-frame density field;
step (A6): iterating steps (A3) through (A5) to generate the later-frame reconstructed density field.
3. The method of claim 1, wherein performing velocity field estimation from the previous-frame reconstructed density field and the later-frame reconstructed density field obtained from single-frame information, to obtain the previous-frame velocity field, comprises:
step (B1): acquiring an initial previous-frame velocity field;
step (B2): continuously modeling the later-frame density field formed from the later-frame reconstructed density field to generate a continuously modeled later-frame density field, wherein the continuously modeled density at any point of the later-frame reconstructed density field is:

ρ̃_{i,j,k} = C_{i,j,k} · Σ_{(i',j',k')} O_{i',j',k'} · ρ_{i',j',k'} · (1/(√(2π)·σ))³ · e^(-((i-i')² + (j-j')² + (k-k')²)/(2σ²))

wherein (i, j, k) denotes the coordinates of any point of the density field, with i, j, k its three axis coordinates; the sum runs over the points (i', j', k') of the cube with side length 2l centered on (i, j, k), l being a constant; ρ denotes the density field, ρ_{i,j,k} the density at point (i, j, k), and ρ̃_{i,j,k} the density at (i, j, k) after continuous modeling; σ denotes the variance; C_{i,j,k} denotes the scaling factor corresponding to point (i, j, k); O_{i',j',k'} indicates whether an obstacle occupies point (i', j', k'), taking 0 for an obstacle voxel and 1 otherwise; π denotes the circle ratio; and e denotes the natural constant, approximately 2.718281828459045;
step (B3): advecting the previous-frame density field using the initial previous-frame velocity field to generate a previous-frame advected density field;
step (B4): generating, based on the previous-frame advected density field and the later-frame reconstructed density field, the error information and prior energy function of the initial previous-frame velocity field;
step (B5): adjusting the initial previous-frame velocity field by minimizing the energy function to obtain an adjusted previous-frame velocity field;
step (B6): using a fluid solver to project the adjusted previous-frame velocity field to be divergence-free, obtaining the final adjusted previous-frame velocity field;
step (B7): iterating steps (B3) to (B6) until the velocity field converges, to obtain the previous-frame velocity field.
4. The method of claim 3, wherein weighting the advected density field guided by the previous-frame velocity field against the later-frame reconstructed density field to obtain a later-frame target reconstructed density field comprises:
step (C1): advecting the previous-frame density field using the previous-frame velocity field to obtain a previous-frame advected density field;
step (C2): weighting the previous-frame advected density field against the later-frame reconstructed density field to obtain the later-frame target reconstructed density field.
CN202110105012.0A 2021-01-26 2021-01-26 Fluid reconstruction method based on sparse viewpoint video Active CN112819945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110105012.0A CN112819945B (en) 2021-01-26 2021-01-26 Fluid reconstruction method based on sparse viewpoint video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110105012.0A CN112819945B (en) 2021-01-26 2021-01-26 Fluid reconstruction method based on sparse viewpoint video

Publications (2)

Publication Number Publication Date
CN112819945A true CN112819945A (en) 2021-05-18
CN112819945B CN112819945B (en) 2022-10-04

Family

ID=75859310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110105012.0A Active CN112819945B (en) 2021-01-26 2021-01-26 Fluid reconstruction method based on sparse viewpoint video

Country Status (1)

Country Link
CN (1) CN112819945B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023231918A1 (en) * 2022-06-01 2023-12-07 北京字跳网络技术有限公司 Image processing method and apparatus, and electronic device and storage medium


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446366A (en) * 2011-09-14 2012-05-09 天津大学 Time-space jointed multi-view video interpolation and three-dimensional modeling method
US20180204355A1 (en) * 2015-09-02 2018-07-19 Siemens Healthcare Gmbh Fast sparse computed tomography image reconstruction from few views
CN106600675A (en) * 2016-12-07 2017-04-26 西安蒜泥电子科技有限责任公司 Point cloud synthesis method based on constraint of depth map
US20180225807A1 (en) * 2016-12-28 2018-08-09 Shenzhen China Star Optoelectronics Technology Co., Ltd. Single-frame super-resolution reconstruction method and device based on sparse domain reconstruction
CN107085629A (en) * 2017-03-28 2017-08-22 华东师范大学 A kind of fluid simulation method based on video reconstruction Yu Euler's Model coupling
CN108280804A (en) * 2018-01-25 2018-07-13 湖北大学 A kind of multi-frame image super-resolution reconstruction method
CN111915735A (en) * 2020-06-29 2020-11-10 浙江传媒学院 Depth optimization method for three-dimensional structure contour in video
CN111814654A (en) * 2020-07-03 2020-10-23 南京莱斯信息技术股份有限公司 Markov random field-based remote tower video target tagging method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
付庭煌 (Fu Tinghuang): "Advanced flow-field measurement: the S.A.F.E. technique for simultaneously measuring velocity, temperature, pressure and density fields", Journal of Aerospace Power *
肖祥云 (Xiao Xiangyun): "Research on fluid animation based on deep neural networks", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN112819945B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
Min et al. Depth video enhancement based on weighted mode filtering
Sun et al. Image vectorization using optimized gradient meshes
CN109544677A (en) Indoor scene main structure method for reconstructing and system based on depth image key frame
CN111553858B (en) Image restoration method and system based on generation countermeasure network and application thereof
Yoon et al. Surface and normal ensembles for surface reconstruction
Jalobeanu et al. An adaptive Gaussian model for satellite image deblurring
US11887256B2 (en) Deferred neural rendering for view extrapolation
CN104820969A (en) Real-time blind image restoration method
Okabe et al. Fluid volume modeling from sparse multi-view images by appearance transfer
Mücke et al. Surface Reconstruction from Multi-resolution Sample Points.
CN112819945B (en) Fluid reconstruction method based on sparse viewpoint video
CN111899314A (en) Robust CBCT reconstruction method based on low-rank tensor decomposition and total variation regularization
CN117274515A (en) Visual SLAM method and system based on ORB and NeRF mapping
CN116958453A (en) Three-dimensional model reconstruction method, device and medium based on nerve radiation field
Wang et al. 3D model inpainting based on 3D deep convolutional generative adversarial network
CN115761178A (en) Multi-view three-dimensional reconstruction method based on implicit neural representation
Spick et al. Realistic and textured terrain generation using GANs
CN111583408A (en) Human body three-dimensional modeling system based on hand-drawn sketch
Thakur et al. Gradient and multi scale feature inspired deep blind gaussian denoiser
Feng et al. Gaussian Splashing: Dynamic Fluid Synthesis with Gaussian Splatting
CN112991504A (en) Improved method for filling holes based on TOF camera three-dimensional reconstruction
Koo et al. Shape from projections via differentiable forward projector for computed tomography
Heinzl Analysis and visualization of industrial CT data
US10922872B2 (en) Noise reduction on G-buffers for Monte Carlo filtering
CN109801367B (en) Grid model characteristic editing method based on compressed manifold mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant