CN113160296A - Differentiable-rendering-based three-dimensional reconstruction method and device for vibrating liquid droplets - Google Patents

Differentiable-rendering-based three-dimensional reconstruction method and device for vibrating liquid droplets

Info

Publication number
CN113160296A
Authority
CN
China
Prior art keywords
model
rendering
micro
image
dimensional
Prior art date
Legal status
Granted
Application number
CN202110348718.XA
Other languages
Chinese (zh)
Other versions
CN113160296B (en)
Inventor
张松海
何煜
陈晓松
刘应天
胡事民
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202110348718.XA
Publication of CN113160296A
Application granted
Publication of CN113160296B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a differentiable-rendering-based three-dimensional reconstruction method and device for vibrating liquid droplets. The method comprises: establishing a differentiable rendering model according to input camera parameters; converting an input multi-view video of a vibrating droplet into multi-view images at the same moment, and performing edge extraction on the multi-view images; initializing a three-dimensional model of the droplet, and obtaining corresponding rendered images through the differentiable rendering model; and calculating a loss function value between the rendered images and the multi-view images, back-propagating the loss to the three-dimensional model parameters of the droplet, and updating the model parameters. The method accounts for the weak feature points and rapid vibration of droplets and combines differentiable rendering with a physical model description, so that the three-dimensional droplet model is consistent with the captured images at multiple viewing angles, achieving both fast reconstruction and accurate results.

Description

Differentiable-rendering-based three-dimensional reconstruction method and device for vibrating liquid droplets
Technical Field
The invention relates to the technical field of computer graphics, and in particular to a method and a device for three-dimensional reconstruction of vibrating liquid droplets based on differentiable rendering.
Background
Conventional three-dimensional object reconstruction relies primarily on multi-view geometry and parallax-based computation. The Structure from Motion method recovers three-dimensional structure from images captured by multiple cameras: it first extracts and matches feature points across the multi-view images, recovers the spatial correspondence between views according to geometric principles, reconstructs the camera parameters, and then combines the matched feature points of the different images to compute the point cloud of the target object.
This approach is not robust for dynamic objects, performs poorly on objects with little texture and ill-defined feature points, such as droplets, is time-consuming and therefore unsuitable for real-time use, and can only recover a point cloud of the target; a post-processing step is required if a triangular mesh representation is needed.
Disclosure of Invention
To address the inability of the prior art to effectively reconstruct, in real time, rapidly vibrating droplets with few feature points, the invention provides a differentiable-rendering-based method and device for three-dimensional reconstruction of vibrating droplets.
The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets provided by the invention comprises the following steps: establishing a differentiable rendering model according to input camera parameters; converting an input multi-view video of a vibrating droplet into multi-view images at the same moment, and performing edge extraction on the multi-view images; initializing a three-dimensional model of the droplet, and obtaining corresponding rendered images through the differentiable rendering model; and calculating a loss function value between the rendered images and the multi-view images, back-propagating the loss to the three-dimensional model parameters of the droplet, and updating the model parameters.
According to one embodiment of the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets, after the model parameters are updated, the method further comprises: taking the three-dimensional model updated for the previous frame as the initial model, and repeating the process of obtaining a corresponding rendered image through the differentiable rendering model, calculating the loss function value between the rendered image and the multi-view images of the current frame, and updating the model parameters, until a preset condition is met.
According to one embodiment of the method, establishing the differentiable rendering model according to the input camera parameters comprises: building the rendering model with a soft rasterization renderer, replacing solid triangular patches by probability distributions over pixel positions, and replacing the front-to-back occlusion relation of the triangular patches by a distance-dependent aggregation function.
According to one embodiment of the invention, initializing the three-dimensional model of the droplet comprises: constructing a triangular mesh model of a unit sphere, calculating the summed displacement of the vibration modes at each vertex, and moving the mesh vertices radially to generate the triangular mesh model of the droplet.
According to one embodiment of the invention, the loss function comprises:

\mathcal{L}_{IoU} = 1 - \frac{\| I_s \otimes \hat{I}_s \|_1}{\| I_s \oplus \hat{I}_s - I_s \otimes \hat{I}_s \|_1}

where I_s and \hat{I}_s are the masks of the captured image and the rendered image, respectively.
According to one embodiment of the method, back-propagating the loss to the three-dimensional model parameters of the droplet and updating the model parameters comprises: determining an objective function from the loss function, a Laplacian regularization term and a smoothness regularization term, and updating the parameters according to the objective function.
The invention also provides a differentiable-rendering-based three-dimensional reconstruction device for vibrating liquid droplets, comprising: a model building module, configured to establish a differentiable rendering model according to input camera parameters; an image processing module, configured to convert an input multi-view video of a vibrating droplet into multi-view images at the same moment and to perform edge extraction on the multi-view images; a rendering map generation module, configured to initialize a three-dimensional model of the droplet and to obtain corresponding rendered images through the differentiable rendering model; and a model reconstruction module, configured to calculate a loss function value between the rendered images and the multi-view images, back-propagate the loss to the three-dimensional model parameters of the droplet, and update the model parameters.
According to an embodiment of the device, the model reconstruction module is further configured to: take the three-dimensional model updated for the previous frame as the initial model, and repeat the process of obtaining a corresponding rendered image through the differentiable rendering model, calculating the loss function value between the rendered image and the multi-view images of the current frame, and updating the model parameters, until a preset condition is met.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets described in any of the above.
The invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets described in any of the above.
The differentiable-rendering-based three-dimensional reconstruction method and device for vibrating droplets provided by the invention account for the weak feature points and rapid vibration of droplets and combine differentiable rendering with a physical model description, so that the three-dimensional droplet model is consistent with the captured images at multiple viewing angles, achieving both fast reconstruction and accurate results.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets provided by the present invention;
FIG. 2 is a schematic illustration of the output results provided by the present invention;
FIG. 3 is a schematic structural diagram of the differentiable-rendering-based three-dimensional reconstruction device for vibrating droplets provided by the present invention;
fig. 4 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Rendering is the inverse process of three-dimensional reconstruction: reconstruction takes two-dimensional images as input and infers three-dimensional structure, while rendering takes the three-dimensional description of an object or scene and simulates a camera to produce two-dimensional images. Traditional rendering methods include rasterization and ray tracing; rasterization offers good real-time performance, while ray tracing can generate highly realistic images. The rapid development of deep learning has drawn wide attention to gradient-based optimization techniques. Traditional rasterization contains non-differentiable computation steps. Soft rasterization replaces these non-differentiable parts with differentiable operations through probability-distribution approximation and aggregation functions, so that the whole rendering pipeline becomes differentiable; the gradient of the output image with respect to the input parameters can then be computed and the parameters optimized by gradient descent, which effectively inverts the rendering pipeline and realizes the reconstruction of three-dimensional information from two-dimensional images.
To address the inability of traditional reconstruction methods to handle droplets, which have few feature points and vibrate rapidly, the invention provides a differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets. The method considers the physical model of droplet vibration in the absence of gravity, constructs a parameterized three-dimensional droplet model on the basis of spherical harmonics, and makes the whole rendering process computable within a differentiable rendering framework.
The differentiable-rendering-based three-dimensional reconstruction method and device for vibrating droplets are described below with reference to Figs. 1-4. Fig. 1 is a schematic flow diagram of the method; as shown in Fig. 1, the method provided by the invention comprises:
101. Establish a differentiable rendering model according to the input camera parameters.
The camera parameters must be known: each camera is calibrated to obtain its intrinsic parameters, its pose relative to the captured object is fixed, and the pose is represented by a triple of distance, azimuth angle and elevation (pitch) angle. The differentiable rendering model is built with a soft rasterizer, a rendering framework that describes rendering as a differentiable aggregation process.
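As a minimal sketch of how such a pose triple can be turned into camera extrinsics (not part of the patent text; the function name, axis conventions and look-at construction are assumptions):

```python
import numpy as np

def pose_to_extrinsic(distance, azimuth_deg, elevation_deg):
    """Build a world-to-camera rotation R and translation t from a
    (distance, azimuth, elevation) pose triple. The camera is assumed to look
    at the origin (the droplet centre); names and conventions are illustrative."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    # Camera centre in world coordinates (y-up convention assumed).
    cam = distance * np.array([np.cos(el) * np.sin(az),
                               np.sin(el),
                               np.cos(el) * np.cos(az)])
    forward = -cam / np.linalg.norm(cam)                 # points at the origin
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    R = np.stack([right, up, -forward])                  # rows: camera axes
    t = -R @ cam                                         # X_cam = R @ X_world + t
    return R, t
```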
102. Convert the input multi-view video of the vibrating droplet into multi-view images at the same moment, and perform edge extraction on the multi-view images.
First, the video streams of the different cameras are aligned in time, and the frames from the different viewing angles at the same moment are combined into multi-view images in temporal order, so that reconstruction can proceed frame by frame in the subsequent steps. An edge extraction operator is applied to each image to extract the droplet contour, and the droplet is segmented from the background using brightness information. Edge extraction can be performed with a Laplacian operator; combining this with brightness information, the image is segmented into droplet and background, and the droplet position is represented by a mask in which droplet pixels are 1 and background pixels are 0.
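A minimal OpenCV sketch of this masking step, assuming a grayscale frame in which the droplet is brighter than the background; the function name and thresholds are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

def droplet_mask(gray, brightness_thresh=60):
    """Return a binary mask (droplet=1, background=0) from a grayscale frame.

    Sketch only: a Laplacian highlights the droplet contour, a brightness
    threshold separates droplet from background, and the largest connected
    component is kept as the droplet. Threshold values are illustrative."""
    edges = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F))   # contour response
    _, bright = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    combined = cv2.bitwise_or(bright, edges)
    combined = cv2.morphologyEx(combined, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # Keep the largest connected component as the droplet.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(combined)
    if n <= 1:
        return np.zeros_like(gray, dtype=np.uint8)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8)
```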
103. Initialize a three-dimensional model of the droplet, and obtain corresponding rendered images through the differentiable rendering model.
The three-dimensional model is initialized by describing the droplet vibration with spherical harmonics; the initialized model can then be rendered to a corresponding image from any viewing direction.
104. Calculate the loss function value between the rendered images and the multi-view images, back-propagate the loss to the three-dimensional model parameters of the droplet, and update the model parameters.
The multi-view images capture the true state of the droplet; by optimizing the three-dimensional model parameters, the information contained in the multi-view images is transferred into the three-dimensional model, yielding an accurate three-dimensional droplet model.
The droplet silhouettes in the rendered image and in the captured image can both be described by masks; their intersection over union is computed as the loss function, and a Laplacian smoothing term is used as regularization. The gradient of the loss function with respect to the input is computed through the differentiable rendering pipeline, and gradient-descent optimization is performed.
The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets accounts for the weak feature points and rapid vibration of droplets and combines differentiable rendering with a physical model description, so that the three-dimensional droplet model is consistent with the captured images at multiple viewing angles, achieving both fast reconstruction and accurate results.
In one embodiment, after the model parameters are updated, the method further comprises: taking the three-dimensional model updated for the previous frame as the initial model, and repeating the process of obtaining a corresponding rendered image through the differentiable rendering model, calculating the loss function value between the rendered image and the multi-view images of the current frame, and updating the model parameters, until a preset condition is met.
Steps 103 and 104 can be executed repeatedly, iteratively updating the droplet parameters to obtain a more accurate three-dimensional droplet model. The vibration parameters of the initial frame may differ considerably from the true vibration parameters, so several iterations are required. The preset condition may be a fixed number of iterations or another termination criterion. For the (i+1)-th frame, the parameters are initialized with the optimization result of the i-th frame; because the vibration parameters are continuous in time, this reduces the number of iterations and the time required for each optimization, improving both reconstruction accuracy and speed.
Specifically: the learning rate is fixed, and multiple gradient-descent steps are performed to obtain an accurate reconstruction of a single frame. For dynamic reconstruction, temporal continuity is exploited: the physical parameters of each frame are initialized with the result of the previous frame, which shortens the reconstruction time and improves robustness.
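A minimal PyTorch-style sketch of this per-frame optimization loop with warm starts; render_masks and silhouette_iou_loss stand in for the differentiable renderer and the silhouette IoU loss (a sketch of the latter appears further below), and all names are assumptions rather than the patent's implementation:

```python
import torch

def reconstruct_sequence(frames_masks, render_masks, init_params,
                         n_iters=200, lr=1e-2):
    """Frame-by-frame droplet reconstruction with warm starts (sketch).

    frames_masks: list over frames; each entry holds the captured multi-view
        masks of that frame as a tensor.
    render_masks(params): assumed differentiable function returning rendered
        masks for the current droplet vibration parameters (hypothetical API).
    Each frame is initialized from the previous frame's result, exploiting the
    temporal continuity of the vibration coefficients."""
    params = init_params.detach().clone().requires_grad_(True)
    results = []
    for masks in frames_masks:
        optimizer = torch.optim.SGD([params], lr=lr)      # fixed learning rate
        for _ in range(n_iters):
            optimizer.zero_grad()
            loss = silhouette_iou_loss(masks, render_masks(params))
            loss.backward()
            optimizer.step()
        results.append(params.detach().clone())           # warm start for next frame
    return results
```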
In one embodiment, establishing the differentiable rendering model according to the input camera parameters comprises: building the rendering model with a soft rasterization renderer, replacing solid triangular patches by probability distributions over pixel positions, and replacing the front-to-back occlusion relation of the triangular patches by a distance-dependent aggregation function.
The colour contribution of each triangular patch to every pixel position is described by a probability distribution, and the contributions are fused together with distance-dependent weights. The position of a triangular patch can thus be described differentiably by a probability map in screen space, so that gradients can flow through all mesh triangles and the supervision signal can control the different parameters of the model.
First, the intrinsic parameters of the cameras must be known. The pose of each fixed camera relative to the captured droplet is represented by a triple of distance, azimuth angle and elevation angle; the pose of the k-th camera is denoted

P_k = (d_k, \theta_k, \varphi_k)

The differentiable rendering model takes a camera pose and triangular facets as input. Let the pose input of a model be P_k and the triangular facet input be f_j. The facet is transformed into the screen space of the camera according to the pose, where T denotes the corresponding sequence of rotation and translation transforms; the transformed facet is denoted

f_j' = T(f_j; P_k)

The contribution of this facet to each pixel is described by a probability distribution; the contribution of facet j to pixel i is

D_j^i = \mathrm{sigmoid}\left( \delta_j^i \, \frac{d^2(i, j)}{\sigma} \right)

where d(i, j) is the minimum distance from pixel i to the edge of facet j; \delta_j^i takes the value 1 if pixel i lies inside the facet and -1 otherwise; and \sigma is a parameter that controls the rendering sharpness. The method only needs to render a mask of the object; for pixel i, the mask value is

\hat{I}_s^i = 1 - \prod_j \left( 1 - D_j^i \right)
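A minimal PyTorch sketch of this soft silhouette formation, assuming the screen-space signed distances from each pixel to every facet have already been computed elsewhere (the name soft_silhouette and the input layout are illustrative):

```python
import torch

def soft_silhouette(signed_dist, sigma=1e-4):
    """Soft-rasterized silhouette from per-facet signed distance maps (sketch).

    signed_dist: tensor of shape (F, H, W); signed_dist[j] is the screen-space
        distance from each pixel to the edge of facet j, positive inside the
        facet and negative outside (assumed precomputed elsewhere).
    Each facet contributes D_j^i = sigmoid(delta * d^2 / sigma), and the mask
    at each pixel aggregates the facets as 1 - prod_j (1 - D_j^i)."""
    delta = torch.sign(signed_dist)                       # +1 inside, -1 outside
    D = torch.sigmoid(delta * signed_dist.pow(2) / sigma)
    return 1.0 - torch.prod(1.0 - D, dim=0)               # (H, W) soft mask
```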
In one embodiment, initializing the three-dimensional model of the droplet comprises: constructing a triangular mesh model of a unit sphere, calculating the summed displacement of the vibration modes at each vertex, and moving the mesh vertices radially to generate the triangular mesh model of the droplet.
The infinitesimal vibration of an incompressible, inviscid droplet in the absence of gravity has a closed-form analytic solution. The radial displacement of the droplet is a superposition of the radial displacements of the eigenmodes (l, m):

\Delta r(\theta, \varphi, t) = \sum_{l,m} \left[ X_{lm} \cos(\omega_{lm} t) + \frac{V_{lm}}{\omega_{lm}} \sin(\omega_{lm} t) \right] Y_{lm}(\theta, \varphi)

where Y_{lm} is a real spherical harmonic, X_{lm} and V_{lm} are the initial displacement and velocity of the corresponding vibration mode, \omega_{lm} is its vibration frequency, and \theta, \varphi are the angular parameters of the sphere in the real spherical harmonics. The vibration frequency satisfies

\omega_{lm}^2 = l(l-1)(l+2) \, \frac{\alpha}{\rho R^3}

where \alpha is the surface tension coefficient, \rho is the liquid density, R is the radius of the droplet at rest, and l is the order of the spherical harmonic.
It can be seen that the coefficient of each vibration mode depends only on the coefficients of the same mode; the vibration coefficients of different modes are completely decoupled, so they can be used as the physical parameters of the three-dimensional droplet model. In actual rendering, the shape must be represented by a triangular mesh. Since the droplet shape is close to a sphere, a triangular mesh model of a unit sphere is constructed first, and the mesh vertices are moved radially according to the displacement calculated at each vertex, thereby generating the triangular mesh model of the droplet.
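A minimal numpy/scipy sketch of this parameterization (not from the patent text): the unit-sphere vertices are assumed to be supplied by some icosphere generator, and the helper names and the real-spherical-harmonic convention are assumptions.

```python
import numpy as np
from scipy.special import sph_harm

def mode_frequency(l, alpha, rho, R):
    """Rayleigh frequency of mode l: omega^2 = l(l-1)(l+2) * alpha / (rho * R^3)."""
    return np.sqrt(l * (l - 1) * (l + 2) * alpha / (rho * R ** 3))

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_lm; theta is the polar angle, phi the azimuth.
    The normalization convention here is one common choice, assumed for the sketch."""
    if m > 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(m, l, phi, theta).real
    if m < 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(-m, l, phi, theta).imag
    return sph_harm(0, l, phi, theta).real

def droplet_mesh(unit_verts, faces, X, V, t, alpha, rho, R):
    """Move unit-sphere vertices radially by the summed mode displacements.

    unit_verts: (N, 3) vertices of a unit-sphere triangle mesh (e.g. an icosphere).
    X, V: dicts {(l, m): value} of initial displacement / velocity per mode (l >= 2).
    Returns the displaced vertices together with the unchanged face indices."""
    x, y, z = unit_verts.T
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle of each vertex
    phi = np.arctan2(y, x)                     # azimuth of each vertex
    dr = np.zeros(len(unit_verts))
    for (l, m), x_lm in X.items():
        w = mode_frequency(l, alpha, rho, R)
        v_lm = V.get((l, m), 0.0)
        dr += (x_lm * np.cos(w * t) + v_lm / w * np.sin(w * t)) \
              * real_sph_harm(l, m, theta, phi)
    return (R + dr)[:, None] * unit_verts, faces
```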
In one embodiment, the loss function comprises:

\mathcal{L}_{IoU} = 1 - \frac{\| I_s \otimes \hat{I}_s \|_1}{\| I_s \oplus \hat{I}_s - I_s \otimes \hat{I}_s \|_1}

where I_s and \hat{I}_s are the masks of the captured image and the rendered image, respectively, and \oplus and \otimes denote element-wise addition and multiplication.
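A minimal PyTorch sketch of this silhouette IoU loss (the function name is illustrative):

```python
import torch

def silhouette_iou_loss(mask_captured, mask_rendered, eps=1e-6):
    """Silhouette IoU loss between captured and rendered masks (sketch).

    Both inputs are tensors in [0, 1] of identical shape. Intersection and
    union are built from element-wise multiplication and addition, matching
    L = 1 - ||I (x) I_hat||_1 / ||I (+) I_hat - I (x) I_hat||_1."""
    inter = (mask_captured * mask_rendered).sum()
    union = (mask_captured + mask_rendered - mask_captured * mask_rendered).sum()
    return 1.0 - inter / (union + eps)
```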
In one embodiment, back-propagating the loss to the three-dimensional model parameters of the droplet and updating the model parameters comprises: determining an objective function from the loss function, a Laplacian regularization term and a smoothness regularization term, and updating the parameters according to the objective function.
The Laplacian regularization term \mathcal{L}_{lap} is used to prevent self-intersection of the spherical mesh, and the smoothness regularization term \mathcal{L}_{flat} is used to increase the smoothness of the droplet surface. The optimization objective may be

\mathcal{L} = \mathcal{L}_{IoU} + \lambda_{lap} \, \mathcal{L}_{lap} + \lambda_{f} \, \mathcal{L}_{flat}

where \lambda_{lap} and \lambda_{f} are weights that balance the corresponding regularization terms.
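The patent does not spell out the exact form of the two regularizers; a common choice, sketched below under that assumption, is a uniform Laplacian term and a normal-consistency (flatness) term over adjacent faces (all names and weights are illustrative):

```python
import torch

def laplacian_reg(verts, neighbors):
    """Uniform Laplacian regularizer (sketch): each vertex is pulled toward the
    mean of its neighbors. `neighbors` is a list of index tensors, one per vertex."""
    lap = torch.stack([verts[idx].mean(dim=0) - v
                       for v, idx in zip(verts, neighbors)])
    return lap.pow(2).sum(dim=1).mean()

def flatten_reg(verts, faces, edge_face_pairs):
    """Smoothness regularizer (sketch): penalize disagreement between the normals
    of faces sharing an edge. `edge_face_pairs` is an (E, 2) tensor of face indices."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = torch.cross(v1 - v0, v2 - v0, dim=1)
    n = torch.nn.functional.normalize(n, dim=1)
    cos = (n[edge_face_pairs[:, 0]] * n[edge_face_pairs[:, 1]]).sum(dim=1)
    return (1.0 - cos).mean()

def total_objective(loss_iou, verts, neighbors, faces, edge_face_pairs,
                    lam_lap=0.1, lam_f=0.01):
    """L = L_IoU + lambda_lap * L_lap + lambda_f * L_flat (weights illustrative)."""
    return (loss_iou
            + lam_lap * laplacian_reg(verts, neighbors)
            + lam_f * flatten_reg(verts, faces, edge_face_pairs))
```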
Fig. 2 is a schematic diagram of an output result provided by the invention. As shown in Fig. 2, the first and second rows are the mask of the input image and the mask of the rendered image during the current optimization, respectively, and the third row is the droplet model reconstructed for a certain frame.
The differentiable-rendering-based three-dimensional reconstruction device for vibrating droplets provided by the invention is described below; the device described below and the method described above correspond to each other and may be cross-referenced.
Fig. 3 is a schematic structural diagram of the differentiable-rendering-based three-dimensional reconstruction device for vibrating droplets. As shown in Fig. 3, the device comprises: a model building module 301, an image processing module 302, a rendering map generation module 303 and a model reconstruction module 304. The model building module 301 is configured to establish a differentiable rendering model according to the input camera parameters; the image processing module 302 is configured to convert the input multi-view video of the vibrating droplet into multi-view images at the same moment and to perform edge extraction on the multi-view images; the rendering map generation module 303 is configured to initialize a three-dimensional model of the droplet and to obtain corresponding rendered images through the differentiable rendering model; the model reconstruction module 304 is configured to calculate a loss function value between the rendered images and the multi-view images, back-propagate the loss to the three-dimensional model parameters of the droplet, and update the model parameters.
In one embodiment, the model reconstruction module is further configured to: take the three-dimensional model updated for the previous frame as the initial model, and repeat the process of obtaining a corresponding rendered image through the differentiable rendering model, calculating the loss function value between the rendered image and the multi-view images of the current frame, and updating the model parameters, until a preset condition is met.
The device embodiment provided in the embodiments of the present invention is for implementing the above method embodiments, and for details of the process and the details, reference is made to the above method embodiments, which are not described herein again.
The differentiable-rendering-based three-dimensional reconstruction device for vibrating droplets provided by the embodiments of the invention accounts for the weak feature points and rapid vibration of droplets and combines differentiable rendering with a physical model description, so that the three-dimensional droplet model is consistent with the captured images at multiple viewing angles, achieving both fast reconstruction and accurate results.
Fig. 4 is a schematic structural diagram of an electronic device provided by the invention. As shown in Fig. 4, the electronic device may include: a processor 401, a communication interface 402, a memory 403 and a communication bus 404, wherein the processor 401, the communication interface 402 and the memory 403 communicate with one another through the communication bus 404. The processor 401 may invoke logic instructions in the memory 403 to perform the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets, the method comprising: establishing a differentiable rendering model according to input camera parameters; converting an input multi-view video of a vibrating droplet into multi-view images at the same moment, and performing edge extraction on the multi-view images; initializing a three-dimensional model of the droplet, and obtaining corresponding rendered images through the differentiable rendering model; and calculating a loss function value between the rendered images and the multi-view images, back-propagating the loss to the three-dimensional model parameters of the droplet, and updating the model parameters.
In addition, the logic instructions in the memory 403 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets provided by the methods above, the method comprising: establishing a differentiable rendering model according to input camera parameters; converting an input multi-view video of a vibrating droplet into multi-view images at the same moment, and performing edge extraction on the multi-view images; initializing a three-dimensional model of the droplet, and obtaining corresponding rendered images through the differentiable rendering model; and calculating a loss function value between the rendered images and the multi-view images, back-propagating the loss to the three-dimensional model parameters of the droplet, and updating the model parameters.
In yet another aspect, the invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets provided by the above embodiments, the method comprising: establishing a differentiable rendering model according to input camera parameters; converting an input multi-view video of a vibrating droplet into multi-view images at the same moment, and performing edge extraction on the multi-view images; initializing a three-dimensional model of the droplet, and obtaining corresponding rendered images through the differentiable rendering model; and calculating a loss function value between the rendered images and the multi-view images, back-propagating the loss to the three-dimensional model parameters of the droplet, and updating the model parameters.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A differentiable-rendering-based three-dimensional reconstruction method for vibrating liquid droplets, characterized by comprising the following steps:
establishing a differentiable rendering model according to input camera parameters;
converting an input multi-view video of a vibrating droplet into multi-view images at the same moment, and performing edge extraction on the multi-view images;
initializing a three-dimensional model of the droplet, and obtaining a corresponding rendered image through the differentiable rendering model;
and calculating a loss function value between the rendered image and the multi-view images, back-propagating the loss function value to the three-dimensional model parameters of the droplet, and updating the model parameters.
2. The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets of claim 1, characterized in that, after the model parameters are updated, the method further comprises:
taking the three-dimensional model updated for the previous frame as the initial model, and repeating the process of obtaining a corresponding rendered image through the differentiable rendering model, calculating the loss function value between the rendered image and the multi-view images of the current frame, and updating the model parameters, until a preset condition is met.
3. The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets of claim 1, characterized in that the establishing of a differentiable rendering model according to input camera parameters comprises:
establishing the rendering model with a soft rasterization renderer, replacing solid triangular patches by probability distributions over pixel positions, and replacing the front-to-back occlusion relation of the triangular patches by a distance-dependent aggregation function.
4. The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets of claim 1, characterized in that the initializing of a three-dimensional model of the droplet comprises:
constructing a triangular mesh model of a unit sphere, calculating the summed displacement of the vibration modes at each vertex, and moving the mesh vertices radially to generate the triangular mesh model of the droplet.
5. The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets of claim 1, characterized in that the loss function comprises:

\mathcal{L}_{IoU} = 1 - \frac{\| I_s \otimes \hat{I}_s \|_1}{\| I_s \oplus \hat{I}_s - I_s \otimes \hat{I}_s \|_1}

where I_s and \hat{I}_s are the masks of the captured image and the rendered image, respectively, and \oplus and \otimes denote element-wise addition and multiplication.
6. The differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets of claim 1, characterized in that the back-propagating of the loss function value to the three-dimensional model parameters of the droplet and updating the model parameters comprises:
determining an objective function from the loss function, a Laplacian regularization term and a smoothness regularization term, and updating the parameters according to the objective function.
7. A differentiable-rendering-based three-dimensional reconstruction device for vibrating liquid droplets, characterized by comprising:
a model building module, configured to establish a differentiable rendering model according to input camera parameters;
an image processing module, configured to convert an input multi-view video of a vibrating droplet into multi-view images at the same moment, and to perform edge extraction on the multi-view images;
a rendering map generation module, configured to initialize a three-dimensional model of the droplet and to obtain a corresponding rendered image through the differentiable rendering model;
and a model reconstruction module, configured to calculate a loss function value between the rendered image and the multi-view images, back-propagate the loss function value to the three-dimensional model parameters of the droplet, and update the model parameters.
8. The differentiable-rendering-based three-dimensional reconstruction device for vibrating droplets of claim 7, wherein the model reconstruction module is further configured to:
take the three-dimensional model updated for the previous frame as the initial model, and repeat the process of obtaining a corresponding rendered image through the differentiable rendering model, calculating the loss function value between the rendered image and the multi-view images of the current frame, and updating the model parameters, until a preset condition is met.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets according to any one of claims 1 to 6.
10. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the differentiable-rendering-based three-dimensional reconstruction method for vibrating droplets according to any one of claims 1 to 6.
CN202110348718.XA 2021-03-31 2021-03-31 Three-dimensional reconstruction method and device for vibrating liquid droplets based on differentiable rendering Active CN113160296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110348718.XA CN113160296B (en) 2021-03-31 2021-03-31 Three-dimensional reconstruction method and device for vibrating liquid droplets based on differentiable rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110348718.XA CN113160296B (en) 2021-03-31 2021-03-31 Three-dimensional reconstruction method and device for vibrating liquid droplets based on differentiable rendering

Publications (2)

Publication Number Publication Date
CN113160296A true CN113160296A (en) 2021-07-23
CN113160296B CN113160296B (en) 2023-06-06

Family

ID=76885775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110348718.XA Active CN113160296B (en) Three-dimensional reconstruction method and device for vibrating liquid droplets based on differentiable rendering

Country Status (1)

Country Link
CN (1) CN113160296B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7948485B1 (en) * 2005-12-12 2011-05-24 Sony Computer Entertainment Inc. Real-time computer simulation of water surfaces
CN102930588A (en) * 2012-09-20 2013-02-13 四川川大智胜软件股份有限公司 Real-time rendering method for water drops at screen space lens
CN102930583A (en) * 2012-10-17 2013-02-13 中国科学院自动化研究所 Method for interactively generating droplet effect
CN111243071A (en) * 2020-01-08 2020-06-05 叠境数字科技(上海)有限公司 Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
费涨等: "实时快速3D绘制空气中水滴反射效果", 《计算机工程与应用》 *
费涨等: "实时快速3D绘制空气中水滴反射效果", 《计算机工程与应用》, no. 25, 1 September 2007 (2007-09-01) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120062A (en) * 2021-11-26 2022-03-01 北京百度网讯科技有限公司 Sample generation method and device
CN115115780A (en) * 2022-06-29 2022-09-27 聚好看科技股份有限公司 Three-dimensional reconstruction method and system based on multi-view RGBD camera
CN116206035A (en) * 2023-01-12 2023-06-02 北京百度网讯科技有限公司 Face reconstruction method, device, electronic equipment and storage medium
CN116206035B (en) * 2023-01-12 2023-12-01 北京百度网讯科技有限公司 Face reconstruction method, device, electronic equipment and storage medium
CN116824026A (en) * 2023-08-28 2023-09-29 华东交通大学 Three-dimensional reconstruction method, device, system and storage medium
CN116824026B (en) * 2023-08-28 2024-01-09 华东交通大学 Three-dimensional reconstruction method, device, system and storage medium

Also Published As

Publication number Publication date
CN113160296B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
CN109147048B (en) Three-dimensional mesh reconstruction method by utilizing single-sheet colorful image
CN113160296A (en) Differentiable-rendering-based three-dimensional reconstruction method and device for vibrating liquid droplets
US20180012407A1 (en) Motion Capture and Character Synthesis
CN114863038B (en) Real-time dynamic free visual angle synthesis method and device based on explicit geometric deformation
CN112991537B (en) City scene reconstruction method and device, computer equipment and storage medium
US11887241B2 (en) Learning 2D texture mapping in volumetric neural rendering
US20230177822A1 (en) Large scene neural view synthesis
Yan et al. Interactive liquid splash modeling by user sketches
US20220375152A1 (en) Method for Efficiently Computing and Specifying Level Sets for Use in Computer Simulations, Computer Graphics and Other Purposes
Liu et al. Real-time neural rasterization for large scenes
US20110012910A1 (en) Motion field texture synthesis
US9811941B1 (en) High resolution simulation of liquids
Caruso et al. 3d reconstruction of non-cooperative resident space objects using instant ngp-accelerated nerf and d-nerf
Kohlbrenner et al. Gauss stylization: Interactive artistic mesh modeling based on preferred surface normals
Liu et al. Neural impostor: Editing neural radiance fields with explicit shape manipulation
CN116543086A (en) Nerve radiation field processing method and device and electronic equipment
Nie et al. Physics-preserving fluid reconstruction from monocular video coupling with SFS and SPH
Bhardwaj et al. SingleSketch2Mesh: generating 3D mesh model from sketch
Rivalcoba et al. Towards urban crowd visualization
US11354878B2 (en) Method of computing simulated surfaces for animation generation and other purposes
US10586401B2 (en) Sculpting brushes based on solutions of elasticity
CN117788703A (en) Port three-dimensional model construction method based on machine vision and electronic equipment
Yang et al. Integration of Depth Normal Consistency and Depth Map Refinement for MVS Reconstruction
Rückert Real-Time Exploration of Photorealistic Virtual Environments
Lisitsa 3D view generation and view synthesis based on 2D data.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant