CN112348956A - Method and device for reconstructing the mesh of a transparent object, computer equipment and storage medium


Info

Publication number: CN112348956A (application CN202011083277.7A)
Authority: CN (China)
Prior art keywords: model, loss, transparent object, initial, grid
Legal status: Granted
Application number: CN202011083277.7A
Other languages: Chinese (zh)
Other versions: CN112348956B (en)
Inventors: 黄惠 (Huang Hui), 吕佳辉 (Lyu Jiahui)
Current Assignee: Shenzhen University
Original Assignee: Shenzhen University
Application filed by Shenzhen University
Priority: CN202011083277.7A
Publication of CN112348956A
Application granted; publication of CN112348956B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a mesh reconstruction method and apparatus for transparent objects, a computer device and a storage medium. The method comprises: acquiring object images of a transparent object at a plurality of acquisition viewpoints, together with calibration information corresponding to the image acquisition device used to capture those images; generating an initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition viewpoints; determining, according to the calibration information, the light refraction loss corresponding to the outgoing rays of the image acquisition device, and determining the model loss corresponding to the initial mesh model according to the light refraction loss; and reconstructing the initial mesh model according to the model loss to obtain a target mesh model corresponding to the transparent object. The method effectively improves the accuracy of mesh reconstruction for transparent objects.

Description

Method and device for reconstructing the mesh of a transparent object, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for reconstructing a mesh of a transparent object, a computer device, and a storage medium.
Background
Mesh reconstruction is the construction of a mesh model of a three-dimensional object suitable for computer representation and processing. It is the basis for processing, manipulating and analysing three-dimensional objects in a computing environment, and a key technology for building virtual-reality representations of the objective world.
Conventional mesh reconstruction methods usually target non-transparent objects and build the corresponding mesh model by scanning the object. Such methods cannot be applied to transparent objects directly, and the traditional methods that do target transparent objects tend to lose the details of the object, so the accuracy of the resulting mesh reconstruction is low.
Disclosure of Invention
In view of the above, there is a need to provide a mesh reconstruction method, apparatus, computer device and storage medium for transparent objects that can effectively improve the accuracy of mesh reconstruction of transparent objects.
A method of mesh reconstruction of a transparent object, the method comprising:
acquiring object images of a transparent object at a plurality of acquisition viewpoints, and calibration information corresponding to the image acquisition device used to capture the object images;
generating an initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition viewpoints;
determining, according to the calibration information, the light refraction loss corresponding to the outgoing rays of the image acquisition device, and determining the model loss corresponding to the initial mesh model according to the light refraction loss; and
reconstructing the initial mesh model according to the model loss to obtain a target mesh model corresponding to the transparent object.
In one embodiment, determining the light refraction loss corresponding to the outgoing rays of the image acquisition device according to the calibration information includes:
determining, according to the calibration information and the object images, first position coordinates corresponding to a plurality of outgoing rays of the image acquisition device, the first position coordinates corresponding one-to-one with the outgoing rays;
calculating second position coordinates corresponding to the outgoing rays according to the initial mesh model; and
determining the light refraction loss corresponding to the initial mesh model according to the coordinate distances between the first position coordinates and the second position coordinates of the respective outgoing rays.
In one embodiment, generating the initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition viewpoints includes:
extracting a plurality of contour images corresponding to the transparent object from the object images at the plurality of acquisition viewpoints;
performing space carving according to the plurality of contour images to obtain a three-dimensional convex hull corresponding to the transparent object; and
acquiring target mesh parameters and meshing the three-dimensional convex hull according to the target mesh parameters to obtain the initial mesh model corresponding to the transparent object.
In one embodiment, after reconstructing the initial mesh model according to the model loss, the method further comprises:
returning to the step of acquiring the target mesh parameters, meshing the reconstructed mesh model according to the acquired target mesh parameters, and recording a first return count; and
stopping the return to the step of acquiring the target mesh parameters when the first return count reaches a first threshold.
In one embodiment, after reconstructing the initial mesh model according to the model loss, the method further comprises:
returning to the step of determining the light refraction loss corresponding to the outgoing rays of the image acquisition device according to the calibration information, and recording a second return count; and
stopping the return to that step when the second return count reaches a second threshold.
In one embodiment, the method further comprises:
acquiring projection contours of the initial mesh model at a plurality of acquisition viewpoints; and
determining the object contour loss corresponding to the initial mesh model according to the degree of coincidence between each projection contour and the corresponding contour image.
In one embodiment, determining the model loss corresponding to the initial mesh model according to the light refraction loss includes:
acquiring a light refraction weight corresponding to the light refraction loss and a contour weight corresponding to the object contour loss; and
weighting the light refraction loss and the object contour loss according to the light refraction weight and the contour weight to obtain the model loss corresponding to the initial mesh model.
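As a concrete illustration of this weighted combination, the following minimal sketch computes the model loss from the two losses; the default weight values and the function name are illustrative assumptions, not values fixed by the application:

```python
# Minimal sketch of the weighted model loss; the default weights are
# illustrative assumptions, not values taken from the application.

def model_loss(refraction_loss: float, contour_loss: float,
               w_refraction: float = 1.0, w_contour: float = 0.5) -> float:
    """Weighted sum of the light refraction loss and the object
    contour loss, as described in the embodiment above."""
    return w_refraction * refraction_loss + w_contour * contour_loss
```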
An apparatus for mesh reconstruction of transparent objects, the apparatus comprising:
an image acquisition module, configured to acquire object images of a transparent object at a plurality of acquisition viewpoints and calibration information corresponding to the image acquisition device used to capture the object images;
a model generation module, configured to generate an initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition viewpoints;
a loss determination module, configured to determine the light refraction loss corresponding to the outgoing rays of the image acquisition device according to the calibration information, and to determine the model loss corresponding to the initial mesh model according to the light refraction loss; and
a mesh reconstruction module, configured to reconstruct the initial mesh model according to the model loss to obtain a target mesh model corresponding to the transparent object.
A computer device, comprising a memory storing a computer program and a processor that implements the steps of the above method for mesh reconstruction of transparent objects when executing the computer program.
A computer-readable storage medium, storing a computer program that implements the steps of the above method for mesh reconstruction of transparent objects when executed by a processor.
With the above mesh reconstruction method, apparatus, computer device and storage medium, object images of the transparent object are acquired at a plurality of acquisition viewpoints together with calibration information corresponding to the image acquisition device used to capture them; an initial mesh model corresponding to the transparent object is generated from the object images at the plurality of viewpoints; the light refraction loss corresponding to the outgoing rays of the image acquisition device is determined according to the calibration information, and the model loss corresponding to the initial mesh model is determined from the light refraction loss. The refraction behaviour of the outgoing rays is thus fully exploited, and reconstructing the initial mesh model through the model loss accurately preserves the details of the transparent object in the resulting target mesh model, which effectively improves the accuracy of mesh reconstruction for transparent objects.
Drawings
FIG. 1 is a diagram of an application environment of a mesh reconstruction method for a transparent object according to an embodiment;
FIG. 2 is a flowchart illustrating a method for reconstructing a mesh of a transparent object according to an embodiment;
FIG. 3 is a diagram of an application environment of a mesh reconstruction method for a transparent object according to another embodiment;
FIG. 4 is a schematic diagram of a three-dimensional convex hull corresponding to a transparent object in one embodiment;
FIG. 5 is a logic diagram of model optimization in one embodiment;
FIG. 6 is a diagram illustrating the results of a mesh reconstruction in one embodiment;
FIG. 7 is a schematic illustration of determining the refractive loss of light in one embodiment;
FIG. 8 is a schematic illustration of determining object contour loss in one embodiment;
FIG. 9 is a graph comparing results of a target mesh model in one embodiment;
FIG. 10 is a block diagram showing the structure of a mesh reconstructing apparatus for a transparent object according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The mesh reconstruction method for transparent objects provided by the present application can be applied in the environment shown in FIG. 1. The image capturing device 102 connects to the terminal 104 and communicates with it over a wired or wireless link. The image capturing device 102 captures object images of a transparent object at a plurality of acquisition viewpoints. The terminal 104 obtains these object images together with the calibration information corresponding to the image capturing device 102, generates an initial mesh model corresponding to the transparent object from the object images at the plurality of viewpoints, determines the light refraction loss corresponding to the outgoing rays of the image capturing device according to the calibration information, and determines the model loss corresponding to the initial mesh model according to the light refraction loss. The terminal 104 then reconstructs the initial mesh model according to the model loss to obtain a target mesh model corresponding to the transparent object. The image capturing device 102 may include, but is not limited to, cameras, video cameras, still cameras and other devices with an image capturing function; the terminal 104 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device.
In one embodiment, as shown in FIG. 2, a method for reconstructing the mesh of a transparent object is provided. Taking its application to the terminal 104 in FIG. 1 as an example, the method includes the following steps:
Step 202: acquire object images of the transparent object at a plurality of acquisition viewpoints and calibration information corresponding to the image acquisition device used to capture the object images.
A transparent object is a three-dimensional object that light can pass through; it is the target of the mesh reconstruction, which produces a corresponding target mesh model. The terminal can acquire object images of the transparent object at a plurality of acquisition viewpoints together with the calibration information corresponding to the image acquisition device. The image acquisition device is a device with an image capturing function used to capture the object images of the transparent object. The terminal can connect to the image acquisition device over a wired or wireless connection, communicate with it, and obtain the captured object images through that connection.
The captured data comprise object images of the transparent object at a plurality of acquisition viewpoints. An acquisition viewpoint is the angle formed between the image acquisition device and the front of the transparent object when an object image is captured, and the images at different viewpoints show the transparent object from the corresponding angles. Specifically, the object images at the multiple viewpoints may be acquired by moving the image acquisition device. Because only the relative pose between object and device matters, the images may equivalently be acquired by keeping the device fixed and rotating the transparent object; for example, with a fixed device, the object can be rotated horizontally by 10 degrees at a time, and object images at multiple acquisition viewpoints are obtained over several rotations.
The terminal can also obtain the calibration information corresponding to the image acquisition device. The calibration information may include intrinsic calibration information, extrinsic calibration information, or both, and accurately reflects the device parameters of the image acquisition device; it may include, but is not limited to, the position of the device, the acquisition viewpoint, the image pixels, the focal length, and the distance to the transparent object. The terminal may read the calibration information from the image acquisition device, or determine it in response to a configuration operation, for example a calibration-information input operation by the user.
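To illustrate how such calibration information relates camera pixels to outgoing rays, the sketch below back-projects a pixel through a standard pinhole camera model; the pinhole parameterisation (intrinsic matrix K, extrinsics R and t) is an assumption for illustration, since the patent does not prescribe a specific camera model:

```python
import numpy as np

def pixel_to_ray(u: float, v: float, K: np.ndarray,
                 R: np.ndarray, t: np.ndarray):
    """Back-project pixel (u, v) into a world-space ray using the
    intrinsic matrix K and the extrinsics (R, t) from calibration.
    Returns the camera centre and a unit ray direction."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction, camera frame
    d_world = R.T @ d_cam                             # rotate into world frame
    origin = -R.T @ t                                 # camera centre, world frame
    return origin, d_world / np.linalg.norm(d_world)
```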
As shown in FIG. 3, which is an application environment diagram of the mesh reconstruction method in one embodiment, the scene includes a transparent object 302, a turntable 304, an image capturing device 306 and a background screen 308. The transparent object 302 is rabbit-shaped here but may be any other transparent object in other embodiments. It is placed on the turntable 304 and rotated by it. The image capturing device 306, for example a camera, captures object images of the transparent object 302 with the background screen 308 behind it. The background screen 308 may display black-and-white stripe images, which lets the terminal determine the correspondence between the outgoing rays of the image capturing device 306 and the pixels of the background image.
The terminal may obtain the calibration information corresponding to the image capturing device 306, which may include the device parameters and the distances to the transparent object 302 and the background screen 308. The turntable 304 is rotated so that the image capturing device 306 captures object images of the transparent object 302 at multiple acquisition viewpoints; for example, it may rotate 5 degrees at a time for 72 steps, yielding 72 object images at different viewpoints. Alternatively, the turntable may rotate at a constant preset speed, for example 1 degree per second, while the device captures an object image at a matching preset interval, for example every 5 seconds.
In some embodiments, the image capturing device may capture an object image corresponding to the transparent object in a dark environment, thereby avoiding an influence of ambient light on the mesh reconstruction, and improving the accuracy of the mesh reconstruction of the transparent object.
Step 204: generate an initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition viewpoints.
The terminal can generate an initial mesh model corresponding to the transparent object from the acquired object images. The initial mesh model is the not-yet-reconstructed mesh model corresponding to the transparent object, i.e. the mesh model to be reconstructed. A mesh model is a three-dimensional model that represents an object with polygons in the form of meshes; the meshes may be triangles, quadrilaterals or other convex polygons.
Specifically, the terminal may determine, from the acquired object images, the object contours of the transparent object at the respective acquisition viewpoints. An object contour is the line of the outer edge of the transparent object at the corresponding viewpoint; it is two-dimensional, and each object image yields one object contour. From the plurality of contours the terminal can generate a three-dimensional convex hull corresponding to the transparent object, i.e. a visual convex polyhedron that encloses it. The hull may be generated, for example, by combining the contours according to their positional relationship, that is, the angular relationship between the viewpoints from which they were obtained; for instance, the contour obtained from a front view of the object is at 90 degrees to the contour obtained from a left view. The terminal then meshes the generated convex hull to obtain an initial mesh model represented by this visual hull.
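One common way to realise this contour-based hull construction is voxel-based space carving followed by surface extraction. The sketch below keeps only the sample points whose projections fall inside every silhouette; the function names and input conventions are assumptions for illustration, and the surviving points would typically be meshed afterwards, e.g. with marching cubes:

```python
import numpy as np

def carve_visual_hull(silhouettes, projectors, points):
    """Space carving sketch: a point survives only if it projects
    inside the silhouette (contour image) of every acquisition view.
    The surviving points approximate the three-dimensional convex
    hull of the transparent object.

    silhouettes: list of HxW boolean masks, one per view
    projectors:  list of callables mapping (N, 3) points to (N, 2) pixels
    points:      (N, 3) array of candidate points (e.g. a voxel grid)
    """
    inside = np.ones(len(points), dtype=bool)
    for mask, project in zip(silhouettes, projectors):
        px = np.round(project(points)).astype(int)
        h, w = mask.shape
        valid = (px[:, 0] >= 0) & (px[:, 0] < w) & \
                (px[:, 1] >= 0) & (px[:, 1] < h)
        hit = np.zeros(len(points), dtype=bool)
        hit[valid] = mask[px[valid, 1], px[valid, 0]]
        inside &= hit                     # carve away points outside any view
    return points[inside]
```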
Step 206: determine the light refraction loss corresponding to the outgoing rays of the image acquisition device according to the calibration information, and determine the model loss corresponding to the initial mesh model according to the light refraction loss.
An outgoing ray is a ray emitted from the image acquisition device. The light refraction loss characterises how much the simulated refraction of the outgoing rays differs from their real refraction, and the model loss characterises how much the initial mesh model differs from an accurate mesh model of the transparent object. A loss is positively correlated with the corresponding degree of difference: the larger the loss, the larger the difference.
It will be appreciated that, according to the imaging principle of the image capture device, the device can be regarded as emitting a plurality of rays, each of which is refracted through the transparent object onto the background of the object image, such as the background screen 308 in FIG. 3. The content of each pixel in the object image, e.g. its colour, therefore originates from the corresponding area of the background.
In some embodiments, the background screen displays black-and-white stripe images in both the horizontal and the vertical direction while the object images are captured. This makes it quicker and more accurate to determine the correspondence between pixels of the object image and areas of the background, and hence, through the known correspondence between the outgoing rays of the image capture device and the pixels, the correspondence between each outgoing ray and a background area. Intuitively, each outgoing ray falls on the background screen after being refracted by the transparent object, so every outgoing ray has a corresponding position on the screen.
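A concrete way to obtain this pixel-to-background correspondence is to display a sequence of coded stripe images and decode them per pixel. The binary coding below is an assumption for illustration; the patent only specifies black-and-white stripes in the horizontal and the vertical direction:

```python
import numpy as np

def decode_stripes(captures: np.ndarray) -> np.ndarray:
    """Decode a stack of thresholded stripe captures (coarse to fine,
    values 0 or 1, shape (B, H, W)) into an integer stripe code per
    camera pixel. Running this once for horizontal and once for
    vertical stripes yields the background-screen coordinate seen
    through the transparent object at every pixel."""
    codes = np.zeros(captures.shape[1:], dtype=np.int64)
    for bit_plane in captures:            # most significant stripe first
        codes = (codes << 1) | bit_plane.astype(np.int64)
    return codes
```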
A transparent object is an object that the outgoing rays can pass through; a ray is refracted at the surface of the object both when entering and when leaving it, and the refraction angle depends on the angle between the ray and the surface. Since the initial mesh model and the real transparent object do not have exactly the same shape, the same outgoing ray follows different light paths when refracted by the model and by the object, which shows up as different corresponding positions on the background screen.
The terminal can trace the outgoing rays emitted by the image acquisition device: the real refraction result of each ray through the transparent object follows from the correspondence between the rays and the pixels of the object image, while the model refraction result of each ray through the initial mesh model is computed from the generated model. From the differences between the real and the model refraction results of the rays, the terminal determines the light refraction loss corresponding to the initial mesh model, and from the light refraction loss the model loss.
Specifically, the terminal may determine the number of outgoing rays and their exit angles from the calibration information of the image acquisition device, and obtain the real background position of each ray from the correspondence between the rays and the pixels of the object image. Based on the calibration information and the principle of light refraction, the terminal can compute the simulated background position produced by refracting each ray through the initial mesh model, and then combine the differences between the real and the simulated background positions of all rays into the light refraction loss of the initial mesh model.
In some embodiments, the difference between the real and the simulated background position is their positional distance: the terminal computes this distance for each outgoing ray and aggregates the distances of all rays, for example by summation, to obtain the light refraction loss.
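This summation of per-ray distances can be written directly; a minimal sketch, assuming the real and the simulated background positions are given as 2D screen coordinates:

```python
import numpy as np

def light_refraction_loss(real_pos: np.ndarray,
                          simulated_pos: np.ndarray) -> float:
    """Sum over all outgoing rays of the distance between the real
    background position (decoded from the object images) and the
    position simulated by refracting the ray through the current mesh.

    real_pos, simulated_pos: (N, 2) background-screen coordinates
    """
    return float(np.linalg.norm(real_pos - simulated_pos, axis=1).sum())
```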
The terminal may take the resulting light refraction loss directly as the model loss corresponding to the initial mesh model, or process it first and take the processed loss as the model loss. For example, the terminal may obtain a light refraction weight corresponding to the light refraction loss and multiply the loss by it; the weight may be set according to the application or learned during the model reconstruction. The terminal may also obtain other losses corresponding to the initial mesh model and combine them with the light refraction loss into the overall model loss.
Step 208: reconstruct the initial mesh model according to the model loss to obtain a target mesh model corresponding to the transparent object.
The target mesh model is the mesh model obtained after reconstructing the initial mesh model; it represents the corresponding transparent object more accurately. Specifically, the terminal may move each mesh vertex of the initial mesh model in the direction that decreases the model loss, reconstructing the model by gradient descent until it converges to the target mesh model corresponding to the transparent object.
For example, suppose the model loss is the light refraction loss of the outgoing rays and the meshes of the initial model are triangles. An outgoing ray that passes through the transparent object crosses two triangular meshes of the initial mesh model, i.e. those two triangles refract the ray. The terminal can therefore move the six vertices of the two crossed triangles in the direction that decreases the refraction loss of that ray, so that the adjusted mesh model represents the transparent object more accurately; adjusting the vertices in this way for all outgoing rays according to their respective refraction losses yields the reconstructed target mesh model.
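A single vertex-update step can be sketched as plain gradient descent on the vertex coordinates. The patent obtains the gradient analytically through the refraction path (see the chain-rule remark further below); the central finite differences used here are only an assumption that keeps the sketch self-contained:

```python
import numpy as np

def gradient_step(vertices: np.ndarray, loss_fn,
                  lr: float = 1e-3, eps: float = 1e-5) -> np.ndarray:
    """One gradient-descent step on the (V, 3) mesh vertices, moving
    them in the direction that decreases the model loss. loss_fn maps
    a (V, 3) vertex array to a scalar loss."""
    grad = np.zeros_like(vertices)
    for idx in np.ndindex(*vertices.shape):   # numerical gradient, per coordinate
        bumped = vertices.copy(); bumped[idx] += eps
        dipped = vertices.copy(); dipped[idx] -= eps
        grad[idx] = (loss_fn(bumped) - loss_fn(dipped)) / (2.0 * eps)
    return vertices - lr * grad
```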
In some embodiments, the above mesh reconstruction method can also run on a server, which may be a standalone server or a cluster of two or more servers. The image acquisition device can upload the captured object images to the server directly, or the terminal can forward them after acquiring the object images at the plurality of acquisition viewpoints. The server then performs the steps of the mesh reconstruction method described above to obtain the target mesh model corresponding to the transparent object, which effectively saves local computing resources on the terminal.
In this embodiment, object images of the transparent object at a plurality of acquisition viewpoints and the calibration information corresponding to the image acquisition device are acquired, and an initial mesh model corresponding to the transparent object is generated from the images. The light refraction loss corresponding to the outgoing rays of the image acquisition device is determined from the calibration information, the model loss corresponding to the initial mesh model is determined from the refraction loss, and the model is reconstructed according to the model loss to obtain the target mesh model. Because the initial mesh model is generated and reconstructed directly, no large training set is needed, unlike conventional approaches, which reduces both the amount of data to collect and the reconstruction time. Determining the model loss from the light refraction loss makes full use of the refraction behaviour of the outgoing rays, and optimising the model mesh directly against this loss effectively improves the accuracy of mesh reconstruction for transparent objects.
In one embodiment, the step of generating the initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition viewpoints includes: extracting a plurality of contour images corresponding to the transparent object from the object images; performing space carving according to the contour images to obtain a three-dimensional convex hull corresponding to the transparent object; and acquiring target mesh parameters and meshing the convex hull according to them to obtain the initial mesh model.
A contour image is an image of the object contour of the transparent object and accurately represents that contour; it may, for example, be a silhouette mask corresponding to the transparent object. The terminal extracts from the object images the contour images of the transparent object at the respective acquisition viewpoints. Specifically, because the transparent object refracts the outgoing rays, the background behind the object appears distorted in the object image; the terminal can extract the image of the distorted part and take it as the contour image at the corresponding viewpoint, thereby obtaining the contour images of the transparent object at all acquisition viewpoints.
Using a space carving technique, the terminal carves space according to the extracted contour images to obtain the three-dimensional convex hull corresponding to the transparent object. Specifically, from the calibration information of the image capturing device, such as its position and its distances to the transparent object and the background screen, the terminal determines the view frustum of each acquisition viewpoint based on the corresponding contour image. A frustum represents the visible cone of the contour image for the image capturing device, and each acquisition viewpoint has one. The terminal intersects the frustums of all acquisition viewpoints and determines the three-dimensional convex hull of the transparent object from their intersection region.
For example, FIG. 4 is a schematic diagram of the three-dimensional convex hull corresponding to a transparent object in one embodiment. For ease of illustration, FIG. 4 is drawn in two dimensions; in practice the data corresponding to the transparent object are three-dimensional. The multiple image capture devices 402 in FIG. 4 represent captures at multiple acquisition viewpoints and do not necessarily require multiple physical devices. As shown in FIG. 4, the image capturing apparatus 402 captures object images of the transparent object 404 from several viewpoints, with the background screen 406 as the background. The part of the background screen 406 behind the transparent object 404 is distorted by it, and the image of the distorted part is the contour image 412 in the object image. The frustums 408 of the individual viewpoints are drawn with dotted lines; the terminal determines the three-dimensional convex hull 410 of the transparent object 404 from the intersection region of the frustums 408, and the hull 410 encloses the transparent object 404.
The terminal acquires the target mesh parameters and performs the meshing according to them. A target mesh parameter is a mesh parameter used for the meshing, typically the target edge length of the meshes; when the meshes are triangles, it may be a value or a value range for the triangle side length. The mesh parameters are chosen according to the application and may change as required during the model reconstruction. The terminal meshes the three-dimensional convex hull according to the target mesh parameters, dividing it into meshes of the corresponding size, to obtain the initial mesh model corresponding to the transparent object.
In some embodiments, the terminal acquires the mesh parameters repeatedly for the reconstruction, where repeatedly means two or more times. The target mesh parameter is negatively correlated with the repetition count: the more repetitions, the smaller the parameter, so the meshes of the model shrink gradually during reconstruction, which effectively improves the accuracy of the reconstructed target mesh model.
In this embodiment, extracting the contour images of the transparent object, space-carving them into a three-dimensional convex hull, and meshing the hull according to the acquired target mesh parameters to obtain the initial mesh model reduces the amount of data needed to generate and reconstruct the mesh model. This shortens the time spent on data processing and model optimisation, effectively improves the efficiency of mesh reconstruction, and saves the resources the reconstruction requires.
In one embodiment, after the step of reconstructing the initial mesh model from the model loss, the method further comprises: returning to the step of acquiring the target mesh parameters, meshing the reconstructed mesh model according to the acquired parameters, and recording the first return count; and stopping the return to that step when the first return count reaches a first threshold.
After reconstructing the initial mesh model according to the model loss, the terminal may return to the step of acquiring the target mesh parameters, repeatedly mesh the reconstructed model according to the acquired parameters, and record the number of returns as the first return count. From the recorded count the terminal decides whether to continue the repeated meshing: it compares the first return count with a first threshold, a count limit set according to the application, for example 10. When the first return count reaches the threshold, i.e. is greater than or equal to it, the terminal stops returning to the parameter acquisition step and obtains the reconstructed target mesh model. By repeatedly determining the model loss of the remeshed model and reconstructing it accordingly, the generated initial mesh model is optimised into a more accurate target mesh model.
In some embodiments, the target mesh parameter is determined from the first return count, with which it is negatively correlated: the larger the count, the smaller the parameter. Repeating the meshing thus gradually shrinks the meshes, giving a sparse-to-fine optimisation and reconstruction of the mesh model. Specifically, the terminal computes the target mesh parameter for the current first return count from a correspondence between the two, which is chosen according to the application.
In some embodiments, the calculation of the target mesh parameter from the first return count can be expressed as:
[formula image: the target mesh parameter t_l as a function of the first return count l]
where l denotes the first return count, an integer between 1 and the first threshold L; when l equals L, the first return count has reached the first threshold and the acquisition of target mesh parameters stops. t_l denotes the target mesh parameter corresponding to the first return count, and t_min denotes the mesh parameter change distance, which constrains how much the surface of the mesh model may change in each meshing pass. The change distance is determined by the application, for example from the diagonal length of the transparent object, e.g. 0.005 times that length; for an irregular object, the diagonal length of its bounding box may be used. The terminal remeshes the mesh model according to the acquired target mesh parameter, merging meshes whose edge lengths are below the target parameter and splitting meshes whose edge lengths exceed it, so that the edge lengths of the model approach the target mesh parameter.
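The patent fixes only that the target mesh parameter decreases with the first return count and is tied to the change distance t_min; the linear schedule below is one illustrative choice of such a function, not the formula from the published document:

```python
def target_edge_length(l: int, L: int, t_max: float, t_min: float) -> float:
    """Illustrative schedule for the target mesh parameter t_l of the
    l-th mesh cycle (l = 1 .. L): the target edge length shrinks
    linearly from t_max down to the change distance t_min, e.g.
    t_min = 0.005 * the diagonal of the object's bounding box."""
    fraction = (L - l) / max(L - 1, 1)    # 1.0 at l = 1, 0.0 at l = L
    return t_min + fraction * (t_max - t_min)
```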
In some embodiments, after reconstructing the initial mesh model according to the model loss, the terminal may also return to the step of determining the light refraction loss of the outgoing rays from the calibration information, and record the second return count. The terminal repeatedly determines the light refraction loss, derives the model loss from it, performs gradient descent on the model loss and adjusts the mesh vertices of the model. When the second return count reaches a second threshold, chosen according to the application, for example 500, the terminal stops returning to the refraction-loss step. By repeatedly determining the model loss after each vertex adjustment, this loop of gradient descent optimises and reconstructs the mesh model into the target mesh model corresponding to the transparent object.
In some embodiments, the terminal combines the first and the second return count: each time the target mesh parameters are re-acquired for meshing, the model loss determination and the vertex adjustment are repeated up to the second return count, so the vertex positions are optimised continuously while the mesh is refined, yielding a more accurate target mesh model. For example, FIG. 5 shows the model optimisation logic in one embodiment: the first threshold is 10 and the second threshold is 500, i.e. each mesh cycle runs 500 loss cycles, for 10 mesh cycles and 5000 loss cycles in total. Every run of 500 loss cycles makes the vertex positions of the meshes more accurate, and the 10 mesh cycles make the meshes of the model finer, so the target mesh model corresponding to the transparent object is reconstructed with effectively improved accuracy.
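The combined loops of FIG. 5 then take the following shape; `remesh` and `loss_step` stand for the remeshing and the loss-plus-gradient-descent operations described above and are passed in as callables, since the patent does not name concrete routines. The list of target lengths can be produced, for instance, by the `target_edge_length` schedule sketched earlier:

```python
def reconstruct(mesh, remesh, loss_step, target_lengths, loss_cycles=500):
    """Nested optimisation as in FIG. 5: one outer mesh cycle per entry
    of target_lengths (first return count, e.g. 10 entries) and
    loss_cycles inner loss cycles per mesh cycle (second return count,
    e.g. 500)."""
    for t_l in target_lengths:            # mesh cycles: refine the mesh
        mesh = remesh(mesh, t_l)
        for _ in range(loss_cycles):      # loss cycles: move the vertices
            mesh = loss_step(mesh)
    return mesh
```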
In some embodiments, as shown in FIG. 6, which illustrates the result of the mesh reconstruction in one embodiment, the transparent object 602 is rabbit-shaped. In the conventional manner, the transparent object 602 is coated white to obtain the white object 604, and the corresponding real model 606 is obtained by scanning the white object 604. FIG. 6 shows how the object model, the mesh and the error evolve during reconstruction with the present method; the mesh error, specifically the triangle mesh error, is measured by the closest-point-pair distances between the surface of the mesh model and the surface of the real model. The target mesh model 610 corresponding to the transparent object 602 is obtained by reconstructing the initial mesh model 608.
In this embodiment, returning to the step of acquiring the target mesh parameters, meshing the reconstructed model according to the acquired parameters and recording the first return count, and stopping the return when the count reaches the first threshold, lets the mesh model be remeshed repeatedly. This realises a sparse-to-fine optimisation and reconstruction of the mesh model and effectively improves the accuracy of the reconstructed target mesh model.
In one embodiment, the step of determining the light refraction loss corresponding to the outgoing rays of the image capturing device according to the calibration information includes: determining, from the calibration information and the object images, the first position coordinates corresponding to a plurality of outgoing rays of the image acquisition device, the first position coordinates corresponding one-to-one with the outgoing rays; calculating the second position coordinates corresponding to the outgoing rays according to the initial mesh model; and determining the light refraction loss corresponding to the initial mesh model from the coordinate distances between the first and the second position coordinates of the respective rays.
From the calibration information corresponding to the image acquisition device and the acquired object images, the terminal traces the outgoing rays emitted by the device, determines the real background position corresponding to each ray, and takes its position coordinate as the first position coordinate; each first position coordinate corresponds to one outgoing ray. The way the first position coordinates are determined from the calibration information and the object images is the same as the way the real background positions are determined in the embodiments above and is not repeated here.
The terminal computes the second position coordinates corresponding to the outgoing rays from the initial mesh model. A second position coordinate is the position coordinate of the model background position of a ray, i.e. the coordinate the ray reaches after being refracted by the initial mesh model. Specifically, from the calibration information the terminal determines the exit angle of each outgoing ray and the entry and exit meshes the ray intersects on the initial mesh model; it computes the light path of the ray refracted by the model from the angles between the ray and those meshes, and obtains the second position coordinate from the distances between the image acquisition device, the initial mesh model and the background screen. The terminal then computes the coordinate distance from the first and the second position coordinate, for example as a difference operation, and determines the light refraction loss corresponding to the initial mesh model from the coordinate distances of all outgoing rays.
For example, FIG. 7 illustrates the determination of the light refraction loss in one embodiment. As shown in FIG. 7, the image capture device 702 emits a plurality of outgoing rays, one of which, cq, is illustrated. An initial mesh model 706, here a triangular mesh, is generated from the object images corresponding to the transparent object 704. As the enlarged detail of FIG. 7 shows, the outgoing ray cq enters the initial mesh model at point p1 and leaves it at point p2; it is refracted by the entry triangle v0^1 v1^1 v2^1 and the exit triangle v0^2 v1^2 v2^2 containing those points, giving light path 2 and the second position coordinate Q' on the background screen 708. In the real process the ray enters the transparent object at p1 and leaves it at p2, giving light path 1 and the first position coordinate Q on the background screen. Because the initial mesh model differs from the transparent object, the refracted light paths differ as well. The terminal determines the coordinate distance δ from the first position coordinate Q and the second position coordinate Q', and the light refraction loss from the coordinate distances of all outgoing rays, so that adjusting the mesh vertices of the initial model during reconstruction makes the reconstructed target mesh model represent the corresponding transparent object more accurately.
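Tracing light path 2 requires refracting the ray at the entry and at the exit triangle. A standard Snell's-law refraction of a direction vector looks as follows; the refractive index 1.5 is an assumed value for glass, not one given in the patent:

```python
import numpy as np

def refract(direction: np.ndarray, normal: np.ndarray,
            n1: float = 1.0, n2: float = 1.5):
    """Refract a unit direction at a surface with unit normal using
    Snell's law; returns None on total internal reflection."""
    cos_i = -float(np.dot(normal, direction))
    if cos_i < 0.0:                       # hitting the back face: flip
        normal, cos_i = -normal, -cos_i
        n1, n2 = n2, n1
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                       # total internal reflection
    return eta * direction + (eta * cos_i - np.sqrt(k)) * normal
```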
In some embodiments, the light refraction loss determined from the coordinate distances corresponding to the plurality of outgoing rays can be expressed as:
L_refract = Σ_{u=1}^{U} Σ_{i∈I} ‖Q_{u,i} − Q′_{u,i}‖
where L_refract denotes the light refraction loss, u a particular acquisition viewpoint, U the number of acquisition viewpoints, i a light path, and I the set of light paths that are refracted exactly twice; Q_{u,i} and Q′_{u,i} are the first and the second position coordinate of the corresponding outgoing ray. The terminal determines the refraction direction of an outgoing ray from the mesh normal and Snell's law of refraction, so the light path and the position coordinates Q and Q′ can all be expressed in terms of the mesh vertex coordinates, and the gradient of the light refraction loss with respect to the mesh vertices is obtained by differentiation along the chain rule.
In this embodiment, the first position coordinates corresponding to the outgoing rays are determined, the second position coordinates are computed from the initial mesh model, and the light refraction loss of the model is determined from the coordinate distances between them. This makes full use of the refraction behaviour of the outgoing rays: the light refraction loss accurately reflects the difference between the initial mesh model and the transparent object, so reconstructing against the model loss derived from it effectively improves the accuracy of the mesh reconstruction.
In one embodiment, the method further comprises: acquiring projection outlines of the initial grid model under a plurality of acquisition visual angles; and determining the object contour loss corresponding to the initial grid model according to the coincidence degree of the projection contour and the corresponding contour image.
Although the initial mesh model matches the contour images of the transparent object, reconstructing the mesh model from the light refraction loss alone may cause the contour of the reconstructed model to deviate from the true contour. Constraining the reconstruction with the contour images of the transparent object therefore effectively improves the accuracy of mesh reconstruction.
The terminal may acquire projection contours of the initial mesh model at a plurality of acquisition view angles, where the acquisition view angle of each projection contour corresponds to that of a contour image. The terminal can determine the coincidence degree of each projection contour with the contour image at the corresponding acquisition view angle; the coincidence degree characterizes the positional relationship between the projection contour and the contour image. The terminal can then determine the object contour loss corresponding to the initial mesh model according to the coincidence degree, and adjust the contour position of the initial mesh model along the direction of gradient descent of the object contour loss.
Specifically, the terminal may obtain the positional relationship between each contour edge of the projection contour and the corresponding contour image, so as to determine the coincidence degree between the projection contour and the contour image at the corresponding acquisition view angle. For example, when all contour edges of the projection contour coincide with the corresponding contour image, the coincidence degree of the projection contour and the contour image is highest and the corresponding object contour loss is 0.
FIG. 8 is a schematic diagram of determining the object contour loss in one embodiment. FIG. 8 illustrates a projection contour and a contour image at one acquisition view angle. The contour image 802 is image data comprising a plurality of pixels, where white pixels represent the interior of the transparent object, gray pixels represent the object contour corresponding to the transparent object, and black pixels represent the exterior of the transparent object. The initial mesh model has a projection contour 804 at the corresponding acquisition view angle. In overlap state 1, the midpoints of three contour edges of the projection contour 804 lie on a white pixel, a gray pixel, and a black pixel, respectively. In overlap state 2, the projection contour 804 coincides with the contour image 802 and the midpoints of its contour edges all lie on gray pixels, so the object contour loss can be determined to be 0. When a midpoint does not lie on a gray pixel, the mesh model needs to be adjusted in the direction of gradient descent.
In some embodiments, the terminal may acquire the contour images and projection contours corresponding to a preset number of acquisition view angles selected from the plurality of acquisition view angles, and determine the object contour loss of the mesh model from those. The preset number can be set according to actual application requirements. For example, the image acquisition device may capture object images at 72 acquisition view angles; the terminal may then take the object images of 5 of those view angles, extract contour images from them, and acquire the projection contours of the mesh model at the same 5 view angles to determine the object contour loss. The selection may specifically be random. It will be appreciated that the acquisition view angles may likewise be randomly screened when determining the light refraction loss. For example, one acquisition view angle may be randomly selected to determine the light refraction loss, and 9 acquisition view angles spaced 40 degrees apart may be randomly selected to determine the contour loss, as in the sketch below. This reduces the amount of data needed for each evaluation of the model loss and improves the efficiency of mesh reconstruction while maintaining its accuracy.
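A possible sketch of that random screening, assuming the 72 view angles are uniformly spaced (5 degrees apart, so a 40-degree spacing is 8 indices):

```python
import random

def sample_views(num_views=72, silhouette_k=9):
    """Pick one random view for the light refraction loss and 9 views spaced
    40 degrees apart for the contour loss, per the example in the text."""
    refract_view = random.randrange(num_views)
    step = num_views // silhouette_k          # 8 indices = 40 degrees
    offset = random.randrange(step)           # random rotation of the ring of views
    silhouette_views = [(offset + j * step) % num_views for j in range(silhouette_k)]
    return refract_view, silhouette_views
```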
In some embodiments, the determined object contour loss may be specifically expressed as:
$$L_{silhouette}=\sum_{u=1}^{U}\sum_{b\in B}\left|\chi(s_{b})\right|$$
where L_silhouette denotes the object contour loss, b denotes a contour edge of the projection contour, B denotes the set of contour edges of the projection contour, and s_b denotes the midpoint of contour edge b. u denotes a specific acquisition view angle and U denotes the number of acquisition view angles. χ(s_b) is the indicator function of the midpoint, indicating the position of the midpoint relative to the contour of the contour image: χ(s_b) is 0 when the midpoint lies on the contour, 1 when it lies inside the contour image, and -1 when it lies outside the contour image.
The negative gradient of the object contour loss L_silhouette with respect to the midpoint s_b can be expressed as χ(s_b)‖b‖N_b, where ‖b‖ denotes the length of contour edge b and N_b denotes the normal vector of contour edge b. Assuming contour edge b is the projection of the mesh edge formed by mesh vertices v_1^b and v_2^b, the midpoint of contour edge b can be expressed as:
$$s_{b}=P_{u}\left(\frac{v_{1}^{b}+v_{2}^{b}}{2}\right)$$
where P_u denotes the projection matrix corresponding to the acquisition view angle.
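A minimal sketch of evaluating the object contour loss and the hand-defined descent direction from projected contour edges. The label-image encoding of χ (1 inside, 0 on the contour, -1 outside) and the orientation of the 2D edge normal are assumptions:

```python
import numpy as np

def silhouette_loss(label_img, contour_edges, project):
    """label_img: H x W array of chi values per pixel. contour_edges: list of
    (v1, v2) 3D mesh-vertex pairs whose projections form the contour edges.
    project: maps a 3D point to 2D pixel coordinates (the role of P_u)."""
    loss = 0.0
    descent = []  # descent direction chi(s_b) * ||b|| * N_b for each midpoint
    for v1, v2 in contour_edges:
        a, c = project(v1), project(v2)
        s_b = 0.5 * (a + c)                           # midpoint of projected edge b
        edge = c - a
        length = np.linalg.norm(edge)
        n_b = np.array([edge[1], -edge[0]]) / max(length, 1e-12)  # edge normal
        chi = label_img[int(round(s_b[1])), int(round(s_b[0]))]
        loss += abs(chi)                              # 0 only when s_b lies on the contour
        descent.append(chi * length * n_b)
    return loss, descent
```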
In some embodiments, the terminal may further determine a smoothness loss corresponding to the mesh model; the smoothness loss measures the difference between the normal vectors of adjacent mesh faces and is used to suppress noise generated while reconstructing the mesh model. Specifically, the terminal may obtain the normal vectors of adjacent faces sharing a common edge, perform a dot product operation on the normal vectors, and determine the smoothness loss of the mesh model from the result. The smoothness loss can be specifically expressed as:
$$L_{smooth}=\sum_{e\in E}\left(1-\left\langle N_{1}^{e},N_{2}^{e}\right\rangle\right)$$
where e denotes a common edge in the mesh model and E denotes the set of common edges contained in the mesh model. N_1^e and N_2^e denote the normal vectors of the two faces adjacent to edge e, and ⟨N_1^e, N_2^e⟩ denotes the dot product of the adjacent normal vectors, which equals 1 when the neighboring faces are coplanar.
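A minimal sketch of the smoothness loss in the 1 − ⟨N₁, N₂⟩ form given above; the mesh representation (vertex array, triangle index triples, list of face pairs sharing an edge) is assumed:

```python
import numpy as np

def face_normal(verts, tri):
    """Unit normal of the triangle verts[tri[0]], verts[tri[1]], verts[tri[2]]."""
    a, b, c = verts[tri[0]], verts[tri[1]], verts[tri[2]]
    n = np.cross(b - a, c - a)
    return n / max(np.linalg.norm(n), 1e-12)

def smoothness_loss(verts, faces, shared_edges):
    """Sum of 1 - <N1, N2> over all pairs of faces sharing a common edge;
    the term vanishes when neighboring faces are coplanar."""
    loss = 0.0
    for fi, fj in shared_edges:
        loss += 1.0 - np.dot(face_normal(verts, faces[fi]),
                             face_normal(verts, faces[fj]))
    return loss
```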
In this embodiment, projection contours of the initial mesh model at a plurality of acquisition view angles are acquired, and the object contour loss corresponding to the initial mesh model is determined according to the coincidence degree of each projection contour with the corresponding contour image. The model loss of the mesh model is then determined according to the object contour loss, which prevents the mesh model from drifting during reconstruction and effectively improves the accuracy of mesh reconstruction.
In an embodiment, the step of determining a model loss corresponding to the initial mesh model according to the light refraction loss includes: acquiring a light refraction weight corresponding to the light refraction loss and a contour weight corresponding to the object contour loss; and weighting the light refraction loss and the object contour loss according to the light refraction weight and the contour weight to obtain the model loss corresponding to the initial mesh model.
The terminal can obtain the light refraction weight corresponding to the light refraction loss and the contour weight corresponding to the object contour loss; both weights can be determined according to actual application requirements. For example, they may be set based on human experience, or derived from a large amount of model reconstruction data, e.g. through machine learning or statistics gathered during reconstruction.
The terminal can weight the light refraction loss and the object contour loss according to the light refraction weight and the contour weight to obtain the model loss corresponding to the initial mesh model. Specifically, the terminal can adjust the light refraction loss according to the light refraction weight to obtain the adjusted light refraction loss, and adjust the object contour loss according to the contour weight to obtain the adjusted object contour loss; the adjustment may specifically be a product operation. The terminal can then combine the adjusted light refraction loss and the adjusted object contour loss, and determine the result as the model loss corresponding to the initial mesh model; the combination may specifically be a summation.
In some embodiments, the terminal may further obtain a smoothness weight corresponding to the smoothness loss, adjust the smoothness loss according to the smoothness weight to obtain the adjusted smoothness loss, and determine the model loss of the mesh model according to the adjusted smoothness loss, the adjusted light refraction loss, and the adjusted object contour loss. Specifically, the model loss of the mesh model may be expressed as:
$$L=\alpha L_{refract}+\beta L_{silhouette}+\gamma L_{smooth}$$
where L denotes the model loss of the mesh model, α denotes the light refraction weight, L_refract denotes the light refraction loss, β denotes the contour weight, L_silhouette denotes the object contour loss, γ denotes the smoothness weight, and L_smooth denotes the smoothness loss. The light refraction loss, object contour loss, and smoothness loss may be determined as in the embodiments above, and are not described again here. For example, the light refraction weight may specifically be 10^4/(HW), the contour weight may specifically be 0.5·min(H, W), and the smoothness weight may specifically be 10^3·s, where H denotes the pixel height of the object image, W denotes the pixel width of the object image, and s denotes the average edge length of the mesh model.
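Combining the three losses with the example weights quoted above might look as follows (the weight values are the text's illustrative examples, not prescriptive constants):

```python
def model_loss(L_refract, L_silhouette, L_smooth, H, W, s):
    """Weighted model loss L = alpha*L_refract + beta*L_silhouette + gamma*L_smooth."""
    alpha = 1e4 / (H * W)      # light refraction weight (example value)
    beta = 0.5 * min(H, W)     # contour weight (example value)
    gamma = 1e3 * s            # smoothness weight; s is the average edge length
    return alpha * L_refract + beta * L_silhouette + gamma * L_smooth
```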
In this embodiment, the light refraction weight and the contour weight are obtained, and the light refraction loss and the object contour loss are weighted accordingly to obtain the model loss. The light refraction loss and the object contour loss are thus effectively integrated according to their weights, which improves the accuracy of the resulting model loss, allows mesh reconstruction to proceed according to that model loss, and improves the accuracy of mesh reconstruction of the transparent object.
In some embodiments, the mesh reconstruction method for a transparent object in the embodiments of the present application was tested with real transparent objects of a plurality of shapes; the errors of the generated initial mesh model and of the reconstructed target mesh model are shown in the following table:
shape of transparent object Initial mesh model error Target mesh model error
Mouse (A. W. T. W 0.007164 0.003075
Dog 0.004481 0.002065
Monkey 0.005048 0.002244
Hand (W.E.) 0.005001 0.002340
Pig 0.004980 0.002696
Rabbit 0.005639 0.002848
Horse 0.002032 0.001160
Tiger 0.005364 0.003020
where the model error may be determined as the average point-to-point distance between the model surface and the real model surface. As the table shows, reconstructing the mesh model through the model loss markedly reduces the error of the resulting target mesh model and effectively improves its accuracy.
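As an illustration of such a metric, a brute-force sketch of the average closest-point distance between two sampled surfaces; the symmetrization over both directions is an assumption:

```python
import numpy as np

def average_surface_distance(pts_a, pts_b):
    """Average closest-point distance between point sets sampled on the two
    surfaces, symmetrized over both directions (O(N^2) brute force)."""
    def one_way(src, dst):
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
        return d.min(axis=1).mean()
    return 0.5 * (one_way(pts_a, pts_b) + one_way(pts_b, pts_a))
```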
FIG. 9 is a diagram illustrating a comparison of target mesh model results in one embodiment. FIG. 9 includes the real model corresponding to a real transparent object, the target model generated by the mesh reconstruction method for a transparent object in the embodiments of the present application, a conventional model generated by a conventional method, and a partial enlarged view of each model. As can be seen in FIG. 9, the conventional model generated in the conventional manner is over-smoothed and loses the local details of the real model. The target model generated by the present method has more accurate and distinct details, represents the real transparent object accurately, and is closer to the real model of the transparent object; the accuracy of mesh reconstruction of the transparent object is thus effectively improved.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, there is provided a mesh reconstruction apparatus for a transparent object, including: an image acquisition module 1002, a model generation module 1004, a loss determination module 1006, and a model reconstruction module 1008, wherein:
the image acquiring module 1002 is configured to acquire object images of a transparent object at multiple acquiring viewing angles and calibration information corresponding to an image acquiring device, where the image acquiring device is configured to acquire the object images.
The model generating module 1004 is configured to generate an initial mesh model corresponding to the transparent object according to the object images at the multiple collection viewing angles.
And a loss determining module 1006, configured to determine a light refraction loss corresponding to the emergent light of the image acquisition device according to the calibration information, and determine a model loss corresponding to the initial grid model according to the light refraction loss.
And a mesh reconstruction module 1008, configured to reconstruct the initial mesh model according to the model loss, to obtain a target mesh model corresponding to the transparent object.
In an embodiment, the loss determining module 1006 is further configured to determine a first position coordinate corresponding to each of the plurality of emergent rays of the image capturing device according to the calibration information and the object image, where the first position coordinate and the emergent ray have a corresponding relationship; calculating second position coordinates corresponding to the emergent rays according to the initial grid model; and determining the light refraction loss corresponding to the initial grid model according to the coordinate distance between the first position coordinate and the second position coordinate corresponding to the emergent light rays respectively.
In an embodiment, the model generating module 1004 is further configured to extract a plurality of contour images corresponding to the transparent object from the object images at a plurality of collection perspectives; carrying out space carving according to the plurality of outline images to obtain a three-dimensional convex hull corresponding to the transparent object; and acquiring target grid parameters, and carrying out gridding processing on the three-dimensional convex hull according to the target grid parameters to obtain an initial grid model corresponding to the transparent object.
In an embodiment, the mesh reconstruction module 1008 is further configured to return to the step of obtaining the target mesh parameters, perform meshing processing on the reconstructed mesh model according to the obtained target mesh parameters, and record a first return time; and stopping returning to the step of obtaining the target grid parameters when the first returning times reach a first threshold value.
In an embodiment, the mesh reconstruction module 1008 is further configured to return to the step of determining the light refraction loss corresponding to the emergent light of the image acquisition device according to the calibration information, and record a second return time; and when the second return times reach a second threshold value, stopping returning to the step of determining the light refraction loss corresponding to the emergent light of the image acquisition equipment according to the calibration information.
In one embodiment, the above-mentioned loss determining module 1006 is further configured to obtain projection profiles of the initial mesh model at a plurality of acquisition view angles; and determining the object contour loss corresponding to the initial grid model according to the coincidence degree of the projection contour and the corresponding contour image.
In one embodiment, the loss determining module 1006 is further configured to obtain a ray refraction weight corresponding to the ray refraction loss and a contour weight corresponding to the object contour loss; and weighting the light refraction loss and the object outline loss according to the light refraction weight and the outline weight to obtain the model loss corresponding to the initial grid model.
For specific definition of the mesh reconstruction apparatus for the transparent object, reference may be made to the above definition of the mesh reconstruction method for the transparent object, and details are not described herein again. The modules in the mesh reconstruction device for the transparent object can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method for mesh reconstruction of transparent objects. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above-mentioned embodiments of the mesh reconstruction method for transparent objects when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned respective transparent object mesh reconstruction method embodiment.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method for mesh reconstruction of a transparent object, the method comprising:
acquiring object images of a transparent object under a plurality of acquisition visual angles and calibration information corresponding to image acquisition equipment, wherein the image acquisition equipment is used for acquiring the object images;
generating an initial grid model corresponding to the transparent object according to the object images under a plurality of collection visual angles;
determining light refraction loss corresponding to emergent light of the image acquisition equipment according to the calibration information, and determining model loss corresponding to the initial grid model according to the light refraction loss;
and reconstructing the initial grid model according to the model loss to obtain a target grid model corresponding to the transparent object.
2. The method of claim 1, wherein determining the ray refraction loss corresponding to the outgoing ray of the image capturing device based on the calibration information comprises:
determining first position coordinates corresponding to a plurality of emergent rays of the image acquisition equipment according to the calibration information and the object image, wherein the first position coordinates have a corresponding relation with the emergent rays;
calculating second position coordinates corresponding to the emergent rays according to the initial grid model;
and determining the ray refraction loss corresponding to the initial grid model according to the coordinate distance between the first position coordinate and the second position coordinate corresponding to the emergent rays respectively.
3. The method of claim 1, wherein generating an initial mesh model corresponding to the transparent object from the object images at the plurality of acquisition perspectives comprises:
extracting a plurality of contour images corresponding to the transparent object from the object images under the plurality of collection visual angles;
carrying out space carving according to the plurality of outline images to obtain a three-dimensional convex hull corresponding to the transparent object;
and acquiring target grid parameters, and carrying out gridding processing on the three-dimensional convex hull according to the target grid parameters to obtain an initial grid model corresponding to the transparent object.
4. The method of claim 3, wherein after said reconstructing the initial mesh model from the model losses, the method further comprises:
returning to the step of obtaining the target grid parameters, carrying out gridding processing on the reconstructed grid model according to the obtained target grid parameters, and recording the first returning times;
and stopping returning to the step of obtaining the target grid parameters when the first returning times reach a first threshold value.
5. The method according to any of claims 1-4, wherein after said reconstructing the initial mesh model from the model losses, the method further comprises:
returning to the step of determining the light refraction loss corresponding to the emergent light of the image acquisition equipment according to the calibration information, and recording a second return time;
and when the second return times reach a second threshold value, stopping returning to the step of determining the light refraction loss corresponding to the emergent light of the image acquisition equipment according to the calibration information.
6. The method of claim 3, further comprising:
acquiring projection outlines of the initial grid model under a plurality of acquisition visual angles;
and determining the object contour loss corresponding to the initial grid model according to the coincidence degree of the projection contour and the corresponding contour image.
7. The method of claim 6, wherein determining the model loss corresponding to the initial mesh model based on the ray refraction loss comprises:
acquiring a light refraction weight corresponding to the light refraction loss and a contour weight corresponding to the object contour loss;
and according to the light refraction weight and the outline weight, weighting the light refraction loss and the object outline loss to obtain a model loss corresponding to the initial grid model.
8. An apparatus for mesh reconstruction of transparent objects, the apparatus comprising:
the device comprises an image acquisition module, a calibration module and a display module, wherein the image acquisition module is used for acquiring object images of a transparent object under a plurality of acquisition visual angles and calibration information corresponding to image acquisition equipment, and the image acquisition equipment is used for acquiring the object images;
the model generation module is used for generating an initial grid model corresponding to the transparent object according to the object images under a plurality of collection visual angles;
the loss determining module is used for determining the light refraction loss corresponding to the emergent light of the image acquisition equipment according to the calibration information and determining the model loss corresponding to the initial grid model according to the light refraction loss;
and the grid reconstruction module is used for reconstructing the initial grid model according to the model loss to obtain a target grid model corresponding to the transparent object.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011083277.7A 2020-10-12 2020-10-12 Method, device, computer equipment and storage medium for reconstructing grid of transparent object Active CN112348956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011083277.7A CN112348956B (en) 2020-10-12 2020-10-12 Method, device, computer equipment and storage medium for reconstructing grid of transparent object


Publications (2)

Publication Number Publication Date
CN112348956A true CN112348956A (en) 2021-02-09
CN112348956B CN112348956B (en) 2023-07-14

Family

ID=74361670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011083277.7A Active CN112348956B (en) 2020-10-12 2020-10-12 Method, device, computer equipment and storage medium for reconstructing grid of transparent object

Country Status (1)

Country Link
CN (1) CN112348956B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880443A (en) * 2023-02-28 2023-03-31 武汉大学 Method and equipment for reconstructing implicit surface of transparent object

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5560799A (en) * 1993-12-22 1996-10-01 Jacobsen; Gary A. In-line printing production of three dimensional image products incorporating lenticular transparent material
CN107240148A (en) * 2017-04-11 2017-10-10 中国人民解放军国防科学技术大学 Transparent substance three-dimensional surface rebuilding method and device based on background stration technique
CN109118531A (en) * 2018-07-26 2019-01-01 深圳大学 Three-dimensional rebuilding method, device, computer equipment and the storage medium of transparent substance
CN111127633A (en) * 2019-12-20 2020-05-08 支付宝(杭州)信息技术有限公司 Three-dimensional reconstruction method, apparatus, and computer-readable medium


Also Published As

Publication number Publication date
CN112348956B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
AU2017248506B2 (en) Implementation of an advanced image formation process as a network layer and its applications
US9945660B2 (en) Systems and methods of locating a control object appendage in three dimensional (3D) space
US20140307920A1 (en) Systems and methods for tracking occluded objects in three-dimensional space
WO2022001236A1 (en) Three-dimensional model generation method and apparatus, and computer device and storage medium
US9105103B2 (en) Systems and methods of tracking object movements in three-dimensional space
CN111598998A (en) Three-dimensional virtual model reconstruction method and device, computer equipment and storage medium
US10380796B2 (en) Methods and systems for 3D contour recognition and 3D mesh generation
US8670606B2 (en) System and method for calculating an optimization for a facial reconstruction based on photometric and surface consistency
CN114419240B (en) Illumination rendering method and device, computer equipment and storage medium
US11734892B2 (en) Methods for three-dimensional reconstruction of transparent object, computer devices and storage mediums
CN113052976A (en) Single-image large-pose three-dimensional color face reconstruction method based on UV position map and CGAN
US11823321B2 (en) Denoising techniques suitable for recurrent blurs
CN115880443B (en) Implicit surface reconstruction method and implicit surface reconstruction equipment for transparent object
EP3309750B1 (en) Image processing apparatus and image processing method
US11451758B1 (en) Systems, methods, and media for colorizing grayscale images
Xu et al. Hybrid mesh-neural representation for 3d transparent object reconstruction
Liao et al. Indoor scene reconstruction using near-light photometric stereo
CN112348956B (en) Method, device, computer equipment and storage medium for reconstructing grid of transparent object
WO2019042028A1 (en) All-around spherical light field rendering method
JP2005317000A (en) Method for determining set of optimal viewpoint to construct 3d shape of face from 2d image acquired from set of optimal viewpoint
WO2022077146A1 (en) Mesh reconstruction method and apparatus for transparent object, and computer device, and storage medium
CN116824082B (en) Virtual terrain rendering method, device, equipment, storage medium and program product
Bouafif et al. Monocular 3D head reconstruction via prediction and integration of normal vector field
CN115861520B (en) Highlight detection method, highlight detection device, computer equipment and storage medium
WO2019215472A2 (en) Passive marker systems and methods for motion tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant