CN113822994A - Three-dimensional model construction method and device and storage medium - Google Patents

Three-dimensional model construction method and device and storage medium

Info

Publication number
CN113822994A
Authority
CN
China
Prior art keywords
three-dimensional
reconstructed
three-dimensional model
distance value
subspace
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111398841.9A
Other languages
Chinese (zh)
Other versions
CN113822994B (en)
Inventor
张煜
邵志兢
孙伟
吕云
罗栋藩
胡雨森
Current Assignee
Zhuhai Prometheus Vision Technology Co ltd
Original Assignee
Shenzhen Prometheus Vision Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Prometheus Vision Technology Co ltd filed Critical Shenzhen Prometheus Vision Technology Co ltd
Priority to CN202111398841.9A
Publication of CN113822994A
Application granted
Publication of CN113822994B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a three-dimensional model construction method and device. The method comprises the following steps: acquiring image data of an object to be reconstructed; determining a three-dimensional model space corresponding to the object to be reconstructed based on the image data; dividing the three-dimensional model space according to a preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces; calculating a directed distance value corresponding to each corner point coordinate of the three-dimensional subspaces; forming, in the three-dimensional model space, at least one triangular surface where the object to be reconstructed intersects a three-dimensional subspace, based on the directed distance value corresponding to each corner point coordinate; and constructing a three-dimensional model of the object to be reconstructed from the formed triangular surfaces. The method addresses the technical problem that three-dimensional models constructed by existing methods are inaccurate.

Description

Three-dimensional model construction method and device and storage medium
Technical Field
The invention relates to the field of video data processing, in particular to a three-dimensional model construction method, a three-dimensional model construction device and a storage medium.
Background
Three-dimensional models are polygonal representations of objects, usually displayed by computers or other video devices. The displayed objects may be real-world entities or fictional ones, and any physical object, such as a person, an animal, or a natural environment, can be represented by a three-dimensional model.
Existing three-dimensional model construction methods generally build a model from single-view or multi-view images. The information in a single view is incomplete, so empirical knowledge must be used during construction; in multi-view reconstruction, the cameras are calibrated first, and the three-dimensional model is then constructed from the information in the multiple two-dimensional images.
Due to limitations of the algorithms and the characteristics of the raw data, some thin objects (such as golf clubs and skipping ropes) are difficult to reconstruct in three dimensions, so the constructed three-dimensional model is inaccurate.
Therefore, it is desirable to provide a method and an apparatus for constructing a three-dimensional model to solve the above-mentioned technical problems.
Disclosure of Invention
The embodiment of the invention provides a three-dimensional model construction method and a three-dimensional model construction device, which aim to solve the technical problem that a three-dimensional model constructed in the existing method is inaccurate.
The embodiment of the invention provides a three-dimensional model construction method, which comprises the following steps:
acquiring image data of an object to be reconstructed;
determining a three-dimensional model space corresponding to the object to be reconstructed based on the image data;
dividing the three-dimensional model space according to preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces;
calculating a directed distance value corresponding to the corner point coordinates of the three-dimensional subspace;
forming at least one triangular surface of the object to be reconstructed, which is intersected with the three-dimensional subspace, in the three-dimensional model space based on the directed distance value corresponding to each corner point coordinate;
and constructing a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
In the three-dimensional model building method of the present invention, the calculating a directional distance value corresponding to the corner coordinates of the three-dimensional subspace includes:
acquiring a three-dimensional coordinate of the object to be reconstructed under a preset camera;
and calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace according to the three-dimensional coordinate and a preset object mask.
In the three-dimensional model building method of the present invention, the calculating a directed distance value corresponding to a corner coordinate of the three-dimensional subspace according to the three-dimensional coordinate and a preset object mask includes:
determining a focal length and a principal point of a preset camera;
calculating two-dimensional coordinates of the corner points of the three-dimensional subspace in a preset camera plane according to the three-dimensional coordinates, the focal length of a preset camera and the principal points;
and determining the directed distance value of the corner point coordinate of the three-dimensional subspace in the preset camera plane according to the two-dimensional coordinate and the directed distance values of all pixels of the preset camera plane.
In the three-dimensional model building method of the present invention, the determining a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace according to the two-dimensional coordinate and directed distance values of all pixels of a preset camera plane includes:
obtaining a plane directed distance value of the corner point coordinate of the three-dimensional subspace in each preset camera plane according to the two-dimensional coordinate and directed distance values of all pixels of the preset camera plane;
and determining a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace based on the plane directed distance value.
In the three-dimensional model building method of the present invention, the obtaining of the three-dimensional coordinates of the object to be reconstructed under a preset camera includes:
determining a reference three-dimensional coordinate corresponding to a corner point of the three-dimensional subspace;
and transposing the reference three-dimensional coordinate to obtain a three-dimensional coordinate of the corner point of the three-dimensional subspace under a preset camera.
In the three-dimensional model building method of the present invention, the dividing the three-dimensional model space according to a preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces includes:
acquiring preset model reconstruction precision;
and dividing the three-dimensional model space to obtain three-dimensional subspaces with the number corresponding to the model reconstruction precision.
In the three-dimensional model building method of the present invention, the forming at least one triangular surface where the object to be reconstructed intersects with the three-dimensional subspace in the three-dimensional model space based on the directed distance value corresponding to each corner point coordinate includes:
determining the intersection condition of each three-dimensional subspace and the object to be reconstructed based on the directed distance value corresponding to each corner point coordinate;
and forming at least one triangular surface of the object to be reconstructed, which is intersected with the three-dimensional subspace, in the three-dimensional model space according to the condition that each three-dimensional subspace is intersected with the object to be reconstructed.
An embodiment of the present invention further provides a three-dimensional model building apparatus, including:
the acquisition module is used for acquiring image data of an object to be reconstructed;
the determination module is used for determining a three-dimensional model space corresponding to the object to be reconstructed based on the image data;
the dividing module is used for dividing the three-dimensional model space according to preset model reconstruction precision to obtain a plurality of three-dimensional subspaces;
the calculation module is used for calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace;
the forming module is used for forming at least one triangular surface of the object to be reconstructed, which is intersected with the three-dimensional subspace, in the three-dimensional model space based on the directed distance value corresponding to each corner point coordinate;
and the building module builds a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
In the three-dimensional model building apparatus according to the present invention, the calculation module includes:
the acquisition unit is used for acquiring the three-dimensional coordinates of the object to be reconstructed under a preset camera;
and the calculation unit is used for calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace according to the three-dimensional coordinate and a preset object mask.
Embodiments of the present invention also provide a storage medium having stored therein processor-executable instructions, which are loaded by one or more processors to perform the above three-dimensional model building method.
Compared with the prior art, the three-dimensional model construction method and device divide the three-dimensional model space into a plurality of three-dimensional subspaces according to a preset model reconstruction accuracy, and form in the three-dimensional model space at least one triangular surface where the object to be reconstructed intersects a three-dimensional subspace, thereby constructing the three-dimensional model. During construction, the directed distance values determine the intersection between the object to be reconstructed and each three-dimensional subspace, and thus the shape of each part of the object. The constructed three-dimensional model therefore does not lose information about the object to be reconstructed because of the characteristics of the raw data, which effectively solves the technical problem that three-dimensional models constructed by existing methods are inaccurate.
Drawings
FIG. 1 is a schematic illustration of a three-dimensional model;
FIG. 2 is a flow chart of a three-dimensional model construction method of the present invention;
FIG. 3 is a schematic diagram of a spatial octree partitioning method;
FIG. 4 is a schematic diagram of a mask in the three-dimensional model construction method of the present invention;
FIG. 5 is a schematic view of the formation of triangular faces in the three-dimensional model construction method of the present invention;
FIG. 6 is a schematic diagram of a three-dimensional model of an object to be reconstructed in the three-dimensional model construction method of the present invention;
FIG. 7 is another flow chart of a three-dimensional model construction method of the present invention;
FIG. 8 is a schematic view of a three-dimensional model building apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of a three-dimensional model building apparatus according to the present invention;
FIG. 10 is a schematic structural diagram of a computation module of an embodiment of the three-dimensional model building apparatus according to the present invention;
fig. 11 is a schematic view of a working environment structure of an electronic device in which the three-dimensional model building apparatus of the present invention is located.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, embodiments of the invention are described with reference to steps and symbolic representations of operations performed by one or more computers, unless indicated otherwise. It will thus be appreciated that those steps and operations, which are at times referred to as being computer-executed, include the manipulation by the computer's processing unit of electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may reconfigure or otherwise alter the computer's operation in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of the memory that have particular characteristics defined by the data format. However, while the principles of the invention are described in the foregoing terms, this is not meant to be limiting, since those skilled in the art will recognize that various steps and operations described below may also be implemented in hardware.
The three-dimensional model construction method and apparatus of the present invention can be arranged in any electronic device and are used for: obtaining image data of an object to be reconstructed; determining a three-dimensional model space corresponding to the object to be reconstructed based on the image data; dividing the three-dimensional model space according to a preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces; calculating directed distance values corresponding to the corner point coordinates of the three-dimensional subspaces; forming, in the three-dimensional model space, at least one triangular surface where the object to be reconstructed intersects a three-dimensional subspace, based on the directed distance value corresponding to each corner point coordinate; and constructing a three-dimensional model of the object to be reconstructed from the formed triangular surfaces. The electronic devices include, but are not limited to, wearable devices, head-mounted devices, medical health platforms, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, personal digital assistants (PDAs), and media players), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices.
The three-dimensional model construction apparatus is preferably an image processing terminal or server that performs three-dimensional model data processing. It determines a three-dimensional model space corresponding to an object to be reconstructed based on image data, divides the three-dimensional model space according to a preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces, calculates a directed distance value corresponding to each corner point coordinate of the three-dimensional subspaces, forms in the three-dimensional model space at least one triangular surface where the object to be reconstructed intersects a three-dimensional subspace based on those directed distance values, and finally obtains a three-dimensional model. Because the directed distance values determine the intersection between the object to be reconstructed and each three-dimensional subspace, and thus the shape of each part of the object, the constructed model does not lack information about the object to be reconstructed because of the characteristics of the raw data, which effectively improves the accuracy of the constructed three-dimensional model.
At present, a commonly used three-dimensional model construction method is based on the binocular vision algorithm, which works as follows. Binocular vision is a method of passively perceiving distance with a computer by simulating the principle of human vision. An object is observed from two or more viewpoints, images of the same object are acquired under different viewing angles, and, using the triangulation principle, the offset between pixels is calculated from the displacement and matching relation of pixels between the images to obtain the object's three-dimensional information. From the resulting depth-of-field information, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can all be calculated.
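As a concrete illustration of the triangulation principle described above, the depth of a matched feature can be recovered from its pixel offset (disparity). This is a minimal sketch assuming a rectified pinhole stereo pair; the function name and parameters are illustrative, not from the patent:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a scene point from stereo disparity (rectified pinhole pair).

    f_px: focal length in pixels; baseline_m: distance between the two
    camera centers in meters; disparity_px: horizontal pixel offset of
    the same visual feature between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A feature with 64 px disparity, seen by cameras with an 800 px focal
# length and a 0.1 m baseline, lies about 1.25 m from the cameras.
```

The formula also shows why thin objects are hard: if the feature match fails (the disparity is wrong), the recovered depth is wrong in proportion.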
Referring to fig. 1, for the object in region S of fig. 1, the binocular vision algorithm constructs a poor three-dimensional model. The reason is that the core principle of the binocular algorithm is to judge the displacement of the same visual feature between the left-eye and right-eye images, and the same visual feature is computed from the overall information of a small image block; an excessively thin object occupies too small a proportion of the image block and is easily swamped by the background image, so it is difficult to judge it as the same object and to compute its three-dimensional information correctly.
In addition, binocular vision algorithms also have difficulty reconstructing reflective objects. The core principle of the binocular algorithm is to judge the displacement of the same visual feature between the left-eye and right-eye images, but because a reflective object reflects light differently at different angles, the same feature is difficult to find, or wrong matches are easily produced.
To address the difficulty of reconstructing thin objects in three dimensions, such as golf clubs or skipping ropes, the core of the present method is that the three-dimensional model of the object can be reconstructed using only the two-dimensional mask obtained after separating foreground from background. Thin objects that traditional reconstruction methods cannot handle are thus reconstructed well in three dimensions, improving the accuracy of the constructed three-dimensional model.
Referring to fig. 2, fig. 2 is a flowchart illustrating a three-dimensional model building method according to an embodiment of the invention. The three-dimensional model building method of this embodiment may be implemented using the electronic device, and includes:
step 101, obtaining image data of an object to be reconstructed;
step 102, determining a three-dimensional model space corresponding to an object to be reconstructed based on image data;
step 103, dividing the three-dimensional model space according to a preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces;
step 104, calculating a directed distance value corresponding to the corner point coordinates of the three-dimensional subspace;
step 105, forming at least one triangular surface intersecting the object to be reconstructed and the three-dimensional subspace in the three-dimensional model space based on the directed distance value corresponding to each corner point coordinate;
and step 106, constructing a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
The three-dimensional model construction method of the present embodiment is explained in detail below.
In step 101, in order to subsequently perform the three-dimensional model reconstruction, images of the object to be reconstructed must be acquired from multiple angles, to ensure the accuracy of the model to be constructed. A moving image acquisition device may continuously photograph the whole or part of the object to be reconstructed, for example at a preset frequency. Optionally, in some embodiments, image data of the object may instead be acquired simultaneously by several image acquisition devices. The object to be reconstructed may be a person, an animal, an article, a plant, and/or a building, depending on the actual situation. Optionally, a color image and/or a depth image of the object may be acquired; note that the color image is a red-green-blue image, and each pixel of the depth image stores the distance (depth) from the depth camera to the corresponding real point in the scene.
In step 102, the concept of a bounding box is introduced: the space enclosed by the cameras is defined as the bounding box (that is, the three-dimensional model space). The bounding-box algorithm is a method for finding the optimal bounding space of a discrete point set; its basic idea is to approximate a complex geometric object by a slightly larger, geometrically simple volume (the bounding box).
Specifically, an image acquisition region (three-dimensional model space) of the camera may be determined according to the acquired image data; when the number of the cameras is 1, determining the position of the camera corresponding to each acquisition moment, and determining the image acquisition area of the camera; when the number of the cameras is multiple, the image acquisition area of each camera is determined according to the acquisition position of each camera.
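A minimal sketch of determining such a bounding box, assuming the acquisition region is represented by a set of sample points (for example camera positions or back-projected view-frustum corners); `bounding_box` is a hypothetical helper, not a function named in the patent:

```python
def bounding_box(points):
    """Axis-aligned bounding box enclosing a set of 3-D points.

    Returns the (min-corner, max-corner) pair of the smallest box whose
    faces are parallel to the coordinate axes and that contains every
    point, approximating the complex region by a simple volume.
    """
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

In practice the box is usually padded slightly so the object never touches its faces.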
In step 103, the model reconstruction accuracy may be preset by operation and maintenance personnel, a server, or a terminal, as determined by the actual situation. Different model reconstruction accuracies correspond to different space division strategies. The model reconstruction accuracy may be understood as the image resolution of the three-dimensional model; image resolution refers to the amount of information stored in an image, that is, how many pixels are present per inch of the image, measured in PPI (pixels per inch). For example, the division strategy corresponding to 100 PPI may divide the three-dimensional model space into 8 three-dimensional subspaces, while the strategy corresponding to 50 PPI may divide it into 3; the specific correspondence may be adjusted according to the actual situation and is not limited here. That is, step 103 may specifically include:
(11) acquiring preset model reconstruction precision;
(12) and dividing the three-dimensional model space to obtain three-dimensional subspaces with the number corresponding to the model reconstruction precision.
It can be understood that the higher the model reconstruction accuracy, the greater the overhead (such as the memory occupied and the time needed for model reconstruction). Optionally, in some embodiments, the three-dimensional model construction method provided in this embodiment divides the three-dimensional model space into a plurality of three-dimensional subspaces using a space octree algorithm.
The space octree algorithm is a non-uniform spatial grid subdivision algorithm: a cube containing the whole scene is divided along the three axes into eight sub-cube cells, which are organized as an octree. The division recurses to a limited depth, each node splitting into 8 children, so that each cubic cell becomes a node of the octree. Objects are then recorded into leaf nodes according to their positions to form an index table, and the subdivision continues until the number of faces contained in each leaf node of the octree is smaller than a given threshold.
Referring to FIG. 3, assume that the three-dimensional model space (i.e., the bounding box) is a cube with side length a whose center has coordinates (0, 0, 0), so that corner point A has coordinates (a/2, a/2, a/2) and the opposite corner point B has coordinates (-a/2, -a/2, -a/2). The bounding box is recursively divided according to the space octree algorithm; if the recursion depth is 8, the side length of the minimum three-dimensional subspace is a/2^8 = a/256. From this, the corner point coordinates corresponding to each minimum three-dimensional subspace can be determined.
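The recursive octree split of the bounding box can be sketched as follows. This assumes a uniform split to a fixed depth, as in the example; the function names are illustrative:

```python
def octree_centers(center, half_side, depth):
    """Recursively split a cube, given by its center and half side length,
    into 8 children per level; after `depth` levels, return the centers
    of the 8**depth minimum three-dimensional subspaces."""
    if depth == 0:
        return [center]
    h = half_side / 2.0
    cx, cy, cz = center
    out = []
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                out.extend(octree_centers((cx + dx, cy + dy, cz + dz), h, depth - 1))
    return out

def corner_coordinates(center, half_side):
    """The 8 corner point coordinates of one minimum three-dimensional subspace."""
    cx, cy, cz = center
    return [(cx + sx * half_side, cy + sy * half_side, cz + sz * half_side)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
```

With side length a = 1 (half side 0.5) and recursion depth 8 this yields 8^8 cells of side 1/256, matching the example above.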
In step 104, the concept of a directed distance field (SDF, signed distance field) is first introduced: an SDF is a sampling grid of the closest distances to the surface of a (polygonal-model) object. By convention, negative values indicate the inside of the object and positive values the outside. Specifically, referring to fig. 4, the present embodiment uses a mask to calculate the directed distance value corresponding to each corner point coordinate of a three-dimensional subspace; that is, step 104 may specifically include:
(21) acquiring a three-dimensional coordinate of an object to be reconstructed under a preset camera;
(22) and calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace according to the three-dimensional coordinate and a preset object mask.
First, a directed distance value is created for each pixel of the two-dimensional image corresponding to the character model under each preset camera; the creation process is described below, taking the character model of fig. 4 as an example. Specifically, the following formula can be used:
SDF(p) =[ (x’- x)^2 + (y’- y)^2]^(1/2) + SDF(p’) (1)
In the above formula, (x, y) are the two-dimensional coordinates of the current point, (x', y') are the coordinates of a point that already has an SDF value, SDF(p) is the SDF value of the current point, and SDF(p') is the SDF value of the known point p'. During the calculation, a scan line sweeps from the upper-left corner to the lower-right corner; initially, only points on the mask boundary have an SDF value, namely 0. Each point on the scan line is traversed, searching up, down, left, and right within a preset scan radius for all points that already have an SDF value, and Equation (1) is applied; then a second sweep is made from the lower right back to the upper left. Note that the SDF values outside the mask are all positive and those inside the mask are all negative. This yields the SDF value I(x, y, j) for each pixel of each camera plane, where x and y are the pixel's abscissa and ordinate and j indexes the j-th camera; that is, the correspondence between each pixel's two-dimensional coordinates and its SDF value I(x, y, j) is obtained.
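The two-pass sweep described above can be sketched for a binary mask as follows. This is a chamfer-style approximation under the stated conventions (boundary pixels seeded with 0, sign negative inside and positive outside); propagating the unsigned distance and applying the sign at the end is a simplification of Equation (1), and all names here are illustrative:

```python
import math

def mask_sdf(mask):
    """Two-pass chamfer approximation of the directed distance field of a
    binary mask (1 = inside the object, 0 = outside).

    Boundary pixels are seeded with distance 0; a forward sweep
    (upper-left to lower-right) and a backward sweep (lower-right to
    upper-left) then propagate distances to every pixel. Finally the
    sign convention is applied: negative inside the mask, positive outside.
    """
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    d = [[INF] * w for _ in range(h)]
    # Seed: an inside pixel with an outside (or out-of-image) 4-neighbour
    # lies on the mask boundary and gets distance 0.
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                        d[y][x] = 0.0
                        break
    s2 = math.sqrt(2.0)
    fwd = ((-1, -1, s2), (-1, 0, 1.0), (-1, 1, s2), (0, -1, 1.0))
    bwd = tuple((-dy, -dx, step) for dy, dx, step in fwd)

    def sweep(ys, xs, offs):
        for y in ys:
            for x in xs:
                for dy, dx, step in offs:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d[y][x] = min(d[y][x], d[ny][nx] + step)

    sweep(range(h), range(w), fwd)                                # first pass
    sweep(range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)        # second pass
    return [[-d[y][x] if mask[y][x] else d[y][x] for x in range(w)]
            for y in range(h)]
```

For production use, an exact Euclidean distance transform would replace the chamfer approximation, but the two-sweep structure is the same.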
Then, the directed distance value of each corner point coordinate of the three-dimensional subspace in the preset camera plane is determined from each camera's parameters, the three-dimensional coordinates, and the directed distance values of all pixels of the preset camera plane. That is, optionally, the step of calculating a directed distance value corresponding to a corner point coordinate of the three-dimensional subspace according to the three-dimensional coordinates and a preset object mask may specifically include:
(31) determining a focal length and a principal point of a preset camera;
(32) calculating two-dimensional coordinates of corner points of the three-dimensional subspace in a preset camera plane according to the three-dimensional coordinates, the focal length of the preset camera and the principal point;
(33) and determining the directed distance value of the corner point coordinate of the three-dimensional subspace in the preset camera plane according to the two-dimensional coordinate and the directed distance values of all pixels of the preset camera plane.
In the field of image processing, coordinate system conversion is to convert a three-dimensional world coordinate system in space to a two-dimensional pixel coordinate system for image processing, and commonly used coordinate systems include a world coordinate system, a camera coordinate system and an image coordinate system, wherein the world coordinate system (world coordinate) (xw, yw, zw), also called a measurement coordinate system, is a three-dimensional rectangular coordinate system, and can describe the spatial positions of a camera and an object to be measured by taking the coordinate system as a reference; a camera coordinate system (xc, yc, zc) is also a three-dimensional rectangular coordinate system, the origin is located at the optical center of the lens, the xc and yc axes are respectively parallel to two sides of the image plane, and the zc axis is the optical axis of the lens and is perpendicular to the image plane; an image coordinate system (x, y) is a two-dimensional rectangular coordinate system on an image plane. The origin of the image coordinate system is the intersection point (also called principal point) of the lens optical axis and the image plane, its x-axis is parallel to the xc axis of the camera coordinate system, and its y-axis is parallel to the yc axis of the camera coordinate system.
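The camera-to-image conversion using the focal length and principal point can be sketched with the standard pinhole model; the (fx, fy, cx, cy) parameterization and the function name are illustrative assumptions, not notation from the patent:

```python
def camera_to_pixel(p_cam, fx, fy, cx, cy):
    """Project a point (xc, yc, zc), already expressed in the camera
    coordinate system, onto the image plane:
        u = fx * xc / zc + cx,   v = fy * yc / zc + cy,
    where (fx, fy) are the focal lengths in pixels and (cx, cy) is the
    principal point (intersection of the optical axis and the image plane).
    """
    xc, yc, zc = p_cam
    if zc <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * xc / zc + cx, fy * yc / zc + cy)
```

The division by zc is the perspective effect: farther points land closer to the principal point.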
Specifically, after the directed distance values of all pixels of a preset camera plane are obtained through calculation, the minimum three-dimensional subspace is projected onto the plane of each camera; for any pixel point in the image, the correspondence between that pixel point and a point of the camera plane can be determined based on the SDF value, so as to realize the three-dimensional reconstruction of the object. Assume that the space bounding box corresponds to 8 cameras, the minimum three-dimensional subspace is denoted gi, and its eight corner coordinates are Pi (xi, yi, zi), i ∈ [1, 8]. First, Pi in the world coordinate system is converted into coordinates in the camera's three-dimensional coordinate system; then the converted three-dimensional coordinates are converted into the camera's two-dimensional coordinates (two-dimensional coordinates in the image coordinate system); and finally, the directed distance value of the corner coordinates of the three-dimensional subspace in the preset camera plane is determined based on the converted two-dimensional coordinates and the directed distance values of all pixels of the preset camera plane. That is, the step of determining the directed distance value of the corner coordinates of the three-dimensional subspace in the preset camera plane according to the two-dimensional coordinates and the directed distance values of all pixels of the preset camera plane may specifically include:
(41) obtaining a plane directed distance value of the corner point coordinate of the three-dimensional subspace in each preset camera plane according to the two-dimensional coordinate and the directed distance values of all pixels of the preset camera plane;
(42) and determining a directional distance value corresponding to the corner point coordinate of the three-dimensional subspace based on the plane directional distance value.
Specifically, the world three-dimensional coordinate Pi is converted into the camera three-dimensional coordinate Pij by Pij = Rj^T (Pi - Tj), where Rj^T is the transpose of the rotation matrix of the j-th camera and Tj is the displacement vector of the j-th camera. That is, the step "obtaining a three-dimensional coordinate of an object to be reconstructed under a preset camera" may specifically include:
(51) determining a reference three-dimensional coordinate corresponding to a corner point of a three-dimensional subspace;
(52) and transposing the reference three-dimensional coordinates to obtain the three-dimensional coordinates of the corner points of the three-dimensional subspace under the preset camera.
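The rigid transform Pij = Rj^T (Pi - Tj) described above can be sketched as follows. This is an illustrative implementation only, not the patent's own code; the rotation matrix and displacement vector are assumed to be given as plain nested lists.

```python
def world_to_camera(p_world, rotation, translation):
    """Transform a world-space corner point into the j-th camera's frame.

    Implements P_ij = R_j^T (P_i - T_j), where `rotation` is the 3x3
    rotation matrix R_j of camera j and `translation` is its displacement
    vector T_j (both as nested Python lists).
    """
    # Subtract the camera displacement first: d = P_i - T_j.
    d = [p_world[k] - translation[k] for k in range(3)]
    # Multiply by the transpose: row i of R^T is column i of R,
    # so component i of the result is sum_k R[k][i] * d[k].
    return [sum(rotation[k][i] * d[k] for k in range(3)) for i in range(3)]
```

With an identity rotation and zero displacement the camera frame coincides with the world frame, so the point is returned unchanged.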
Specifically, the camera three-dimensional coordinates pij are converted into camera two-dimensional coordinates p'i'j' as follows:
x' = (x / z) * fx + cx
y' = (y / z) * fy + cy
wherein fx and fy are the focal lengths of the camera, cx and cy are the principal point coordinates of the camera, p'i'j' (x', y') is the corner point in camera two-dimensional coordinates, and pij (x, y, z) is the corresponding camera three-dimensional coordinate.
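The pinhole projection above can be sketched directly from the two formulas. This is a minimal illustration under the patent's stated intrinsics (fx, fy, cx, cy); the particular numeric values in the usage example are assumptions, not values from the patent.

```python
def camera_to_pixel(p_cam, fx, fy, cx, cy):
    """Project a camera-space point (x, y, z) to image-plane pixel
    coordinates using x' = (x / z) * fx + cx and y' = (y / z) * fy + cy.

    fx, fy are the camera focal lengths and (cx, cy) is the principal
    point; z is assumed to be nonzero (the point lies in front of the
    camera).
    """
    x, y, z = p_cam
    return (x / z) * fx + cx, (y / z) * fy + cy
```

A point on the optical axis (x = y = 0) projects exactly onto the principal point, which is a quick sanity check for the formula.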
And finally, the SDF value of each corner point of the bounding box in each camera is obtained based on the correspondence I(x, y, j) between the two-dimensional coordinates of each pixel of each camera plane and its SDF value, and the average of the SDF values (plane directed distance values) of the same corner point across the different cameras is then taken as the directed distance value (SDF value) of that corner point of the three-dimensional subspace.
Specifically, the average of the SDF values corresponding to the 8 corner points of the camera bounding box is calculated. Since gi is projected to all n cameras, the value I(gi) is the average of the SDF values I(g_i_j) over all cameras:
I(g_i) = ∑I(g_i_j) / n
For example, if the value of corner point a on camera C1 is X1, its value on camera C2 is X2, and its value on camera C3 is X3, then the SDF mean value of corner point a is (X1 + X2 + X3) / 3.
And calculating the corresponding values of the rest seven angular points of the bounding box in the same way, thereby obtaining the corresponding directed distance value of each angular point in the three-dimensional subspace.
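The per-corner averaging I(g_i) = ∑ I(g_i_j) / n can be sketched as below. The nearest-pixel lookup is an assumption for illustration; the patent does not specify how the projected (x', y') position is sampled from the SDF image.

```python
def corner_sdf(sdf_planes, corner_pixels):
    """Average the SDF samples of one corner point over all n cameras:
    I(g_i) = sum_j I(g_i_j) / n.

    sdf_planes[j] is the 2-D grid of directed distance values I(x, y, j)
    for camera j's image plane; corner_pixels[j] = (x', y') is the
    corner's projected pixel position in that plane. Nearest-pixel
    lookup is used here (an assumed interpolation scheme).
    """
    n = len(sdf_planes)
    total = 0.0
    for j in range(n):
        x, y = corner_pixels[j]
        total += sdf_planes[j][int(round(y))][int(round(x))]
    return total / n
```

Repeating this for all eight corner points of a subspace yields the per-corner directed distance values consumed by the triangulation step.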
In step 105, the manner in which each three-dimensional subspace intersects the object to be reconstructed may be determined based on the directed distance value corresponding to each corner point coordinate; at least one triangular face where the object to be reconstructed intersects the three-dimensional subspace is then formed in the three-dimensional model space according to that intersection manner; and finally, a three-dimensional model of the object to be reconstructed is constructed from the formed triangular faces.
Referring to fig. 5, if the values of the 8 corner points are all greater than 0 or all less than 0, the bounding box is completely inside or completely outside the target model, and no triangular face is generated at all, as shown in the first row, first column of fig. 5. In the remaining cases, the 8 corner points carry mixed positive and negative signs, and the possible intersections with the target model can be grouped into 14 cases (excluding the all-positive and all-negative cases).
The first row, first column of fig. 5 shows the disjoint case, in which no triangular face is formed;
fig. 5 shows a case where the directional distance value of one corner point in the first row and the second column is greater than 0, that is, the target model intersects with three adjacent edges corresponding to the corner point, so as to form a triangular surface intersecting with the three adjacent edges; the triangular face may isolate a corner point having a directional distance value greater than 0 from other corner points.
In fig. 5, when the directed distance values of two adjacent corner points in the first row, third column are greater than 0, the target model intersects the four edges that each touch exactly one of the two corner points (excluding the edge connecting them), forming two triangular faces each intersecting three of the four edges; the two triangular faces isolate the two adjacent corner points having directed distance values greater than 0 from the other corner points.
In fig. 5, in the case that the directional distance value of two nonadjacent but coplanar corner points in the fourth column of the first row is greater than 0, the target model intersects with three adjacent edges corresponding to each corner point, so as to form two triangular surfaces intersecting with the three adjacent edges; the two triangular faces may separate corner points having a directional distance value greater than 0 from other corner points.
In fig. 5, in the case that the directional distance value of the three adjacent coplanar corner points in the first row and the fifth column is greater than 0, the target model intersects all relevant edges except for the edges between the adjacent corner points, and three triangular surfaces intersecting three edges of five edges are formed; the three triangular faces may separate corner points having a directional distance value greater than 0 from other corner points.
In fig. 5, in the case that the directional distance value of the four adjacent coplanar corner points in the second row and the first column is greater than 0, the target model intersects with one of the adjacent sides corresponding to each corner point, and 4 adjacent sides do not intersect, so as to form two triangular surfaces intersecting with three sides of the four sides; the two triangular faces may separate corner points having a directional distance value greater than 0 from other corner points.
In fig. 5, in the case that the directional distance value of four corner points (three adjacent and coplanar corner points) in the second row and the second column is greater than 0, the target model intersects with three adjacent sides of the non-coplanar corner points, and the target model intersects with all relevant sides of the coplanar three corner points except for the sides between the adjacent corner points, so as to form four triangular surfaces intersecting with three sides of eight sides; the four triangular faces may isolate corner points having a directed distance value greater than 0 from other corner points.
In the case that the directional distance value of the four nonadjacent corner points in the second row and the third column of fig. 5 is greater than 0, the target model intersects with three adjacent edges corresponding to each corner point, and four triangular surfaces intersecting with the three adjacent edges are formed; the four triangular faces may isolate corner points having a directed distance value greater than 0 from other corner points.
In fig. 5, in the case that the directed distance values of the four adjacent coplanar corner points in the second row, fourth column are greater than 0, the target model intersects all relevant edges except for the edges between the adjacent corner points, and four triangular faces intersecting three edges of six edges are formed; the four triangular faces may isolate the corner points having a directed distance value greater than 0 from the other corner points.
In fig. 5, in the case that the directional distance value of four corner points (three adjacent and coplanar corner points) in the second row and the fifth column is greater than 0, the target model intersects all relevant edges except for the edges between the adjacent corner points, and four triangular surfaces intersecting three edges of six edges are formed; the four triangular faces may isolate corner points having a directed distance value greater than 0 from other corner points.
In fig. 5, in the case that the directional distance value of two non-adjacent corner points in the third row and the first column is greater than 0, the target model intersects with three adjacent edges corresponding to each corner point, so as to form two triangular surfaces intersecting with the three adjacent edges; the two triangular faces may separate corner points having a directional distance value greater than 0 from other corner points.
Fig. 5 shows a case where the directional distance value of three corner points in the third row and the second column is greater than 0, where two corner points are coplanar and adjacent, the target model intersects with three adjacent sides of the corner points that are not coplanar, and the target model intersects with all relevant sides of the coplanar three corner points except for the side between the adjacent corner points, so as to form three triangular surfaces intersecting with three sides of seven sides; the three triangular faces may separate corner points having a directional distance value greater than 0 from other corner points.
In the case that the directional distance value of three nonadjacent corner points in the third row and the third column of fig. 5 is greater than 0, the target model intersects with three adjacent edges corresponding to each corner point, and three triangular surfaces intersecting with the three adjacent edges are formed; the three triangular faces may separate corner points having a directional distance value greater than 0 from other corner points.
In fig. 5, in the case that the directional distance values of two groups of adjacent corner points (four corner points in total) in the third row and the fourth column are greater than 0, the target model intersects all relevant edges except for the edges between the adjacent corner points, and four triangular surfaces intersecting three edges of eight edges are formed; the four triangular faces may isolate corner points having a directed distance value greater than 0 from other corner points.
In fig. 5, in the case that the directional distance values of two groups of adjacent corner points (four corner points in total) in the third row and the fifth column are greater than 0, the target model intersects all relevant edges except for the edges between the adjacent corner points, and four triangular surfaces intersecting three edges of six edges are formed; the four triangular faces may isolate corner points having a directed distance value greater than 0 from other corner points.
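The case selection described above is conventionally driven by an 8-bit index over the corner signs, in the style of the marching-cubes family of algorithms. The sketch below shows only the index computation (the per-case triangle tables of fig. 5 are not reproduced); the bit ordering of the corners is an assumption.

```python
def cube_case_index(corner_sdf_values):
    """Encode the sign pattern of a subspace's 8 corner points as an index.

    Bit i is set when corner i's directed distance value is greater
    than 0. Index 0 (all non-positive) and index 255 (all positive)
    are the two non-intersecting configurations; every other index
    maps, up to symmetry, onto one of the 14 intersecting cases
    illustrated in fig. 5.
    """
    index = 0
    for i, sdf in enumerate(corner_sdf_values):
        if sdf > 0:
            index |= 1 << i
    return index
```

A lookup table keyed by this index then lists which edges each triangular face crosses, so triangle generation reduces to one table lookup per subspace.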
Finally, the set based on the formed triangular faces is the three-dimensional model of the object to be reconstructed, as shown in fig. 6.
This completes the three-dimensional model building process of the present embodiment.
The three-dimensional model building method of the embodiment determines a three-dimensional model space corresponding to an object to be reconstructed based on image data, divides the three-dimensional model space according to preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces, then calculates a directed distance value corresponding to corner coordinates of the three-dimensional subspaces, further forms at least one triangular surface intersecting the object to be reconstructed and the three-dimensional subspaces in the three-dimensional model space based on the directed distance value corresponding to each corner coordinate, and finally obtains a three-dimensional model.
An embodiment of the present application further provides a three-dimensional model building method, where the three-dimensional model building apparatus is integrated in a cloud, please refer to fig. 7, and a specific process is as follows:
step 201, the cloud acquires image data of an object to be reconstructed;
step 202, the cloud determines a three-dimensional model space corresponding to an object to be reconstructed based on image data;
step 203, the cloud divides a three-dimensional model space according to preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces;
step 204, the cloud calculates a directed distance value corresponding to the corner point coordinates of the three-dimensional subspace;
step 205, the cloud forms at least one triangular surface intersecting the object to be reconstructed and the three-dimensional subspace in the three-dimensional model space based on the directed distance value corresponding to each corner point coordinate;
and step 206, the cloud end constructs a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
In an application scene of three-dimensional panoramic virtual reality, referring to fig. 8, the cloud can acquire image data of the object to be reconstructed captured by the cameras; the cloud then divides the model space of the object to be reconstructed according to the display precision of the image display terminal, calculates the directed distance values corresponding to the corner point coordinates of the three-dimensional subspaces, and finally constructs a three-dimensional model of the object to be reconstructed based on the directed distance value corresponding to each corner point coordinate, which improves the precision of the three-dimensional model while reducing the cost of the terminal.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of the three-dimensional model building apparatus according to the present invention, and the three-dimensional model building apparatus of this embodiment can implement the three-dimensional model building method described above. The three-dimensional model building device 30 of this embodiment includes an obtaining module 301, a determining module 302, a dividing module 303, a calculating module 304, a forming module 305, and a building module 306, which are specifically as follows:
an obtaining module 301, configured to obtain image data of an object to be reconstructed.
A determining module 302, configured to determine, based on the image data, a three-dimensional model space corresponding to the object to be reconstructed.
And the dividing module 303 is configured to divide the three-dimensional model space according to a preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces.
And the calculating module 304 is configured to calculate a directional distance value corresponding to the corner coordinates of the three-dimensional subspace.
And a forming module 305, configured to form at least one triangular surface where the object to be reconstructed intersects with the three-dimensional subspace in the three-dimensional model space based on the directional distance value corresponding to each corner coordinate.
And a building module 306 for building a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
Optionally, in some embodiments, the dividing module 303 may be specifically configured to: and acquiring preset model reconstruction precision, and dividing the three-dimensional model space to obtain three-dimensional subspaces with the number corresponding to the model reconstruction precision.
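The division performed by module 303 can be sketched as a uniform grid split, with the reconstruction precision expressed as the number of cells per axis. This is an illustrative interpretation; the patent does not fix the exact mapping from precision to subspace count, so `cells_per_axis` is an assumed parameterization.

```python
def divide_model_space(bounds_min, bounds_max, cells_per_axis):
    """Split an axis-aligned model space into cells_per_axis**3 subspaces.

    cells_per_axis plays the role of the preset model reconstruction
    precision: a larger value yields smaller subspaces and a finer mesh.
    Returns a list of (min_corner, max_corner) tuples, one per subspace.
    """
    sx = (bounds_max[0] - bounds_min[0]) / cells_per_axis
    sy = (bounds_max[1] - bounds_min[1]) / cells_per_axis
    sz = (bounds_max[2] - bounds_min[2]) / cells_per_axis
    cells = []
    for i in range(cells_per_axis):
        for j in range(cells_per_axis):
            for k in range(cells_per_axis):
                lo = (bounds_min[0] + i * sx,
                      bounds_min[1] + j * sy,
                      bounds_min[2] + k * sz)
                hi = (lo[0] + sx, lo[1] + sy, lo[2] + sz)
                cells.append((lo, hi))
    return cells
```

Each returned cell is one candidate three-dimensional subspace whose eight corner points are then fed to the directed-distance calculation of module 304.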
Referring to fig. 10, fig. 10 is a schematic structural diagram of a computing module of an embodiment of a three-dimensional model building apparatus according to the present invention, where the computing module 304 includes an obtaining unit 3041 and a computing unit 3042.
The acquiring unit 3041 is configured to acquire three-dimensional coordinates of an object to be reconstructed under a preset camera; the calculating unit 3042 is configured to calculate a directional distance value corresponding to the corner coordinates of the three-dimensional subspace according to the three-dimensional coordinates and a preset object mask.
The calculation unit 3042 is specifically configured to: determining a focal length and a principal point of a preset camera; calculating two-dimensional coordinates of corner points of the three-dimensional subspace in a preset camera plane according to the three-dimensional coordinates, the focal length of the preset camera and the principal point; and determining the directed distance value of the corner point coordinate of the three-dimensional subspace in the preset camera plane according to the two-dimensional coordinate and the directed distance values of all pixels of the preset camera plane.
Further, the calculating unit 3042 is specifically further configured to: obtaining a plane directed distance value of the corner point coordinate of the three-dimensional subspace in each preset camera plane according to the two-dimensional coordinate and the directed distance values of all pixels of the preset camera plane; and determining a directional distance value corresponding to the corner point coordinate of the three-dimensional subspace based on the plane directional distance value.
Optionally, in some embodiments, the forming module 305 may be specifically configured to: determining the intersection condition of each three-dimensional subspace and the object to be reconstructed based on the directed distance value corresponding to each corner point coordinate; and forming at least one triangular surface of the object to be reconstructed, which is intersected with the three-dimensional subspace, in the three-dimensional model space according to the condition that each three-dimensional subspace is intersected with the object to be reconstructed.
This completes the process of constructing the three-dimensional model by the three-dimensional model construction apparatus 30 of the present embodiment.
The specific working principle of the three-dimensional model building apparatus of this embodiment is the same as or similar to that described in the embodiment of the three-dimensional model building method, and please refer to the detailed description in the embodiment of the three-dimensional model building method.
The three-dimensional model building device of the embodiment determines a three-dimensional model space corresponding to an object to be reconstructed based on image data, divides the three-dimensional model space according to preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces, then calculates a directed distance value corresponding to corner coordinates of the three-dimensional subspaces, further forms at least one triangular surface intersecting the object to be reconstructed and the three-dimensional subspaces in the three-dimensional model space based on the directed distance value corresponding to each corner coordinate, and finally obtains a three-dimensional model.
As used herein, the terms "component," "module," "system," "interface," "process," and the like are generally intended to refer to a computer-related entity: hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Fig. 11 and the following discussion provide a brief, general description of an operating environment of an electronic device in which the three-dimensional model construction apparatus of the present invention may be implemented. The operating environment of FIG. 11 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example electronic devices 1012 include, but are not limited to, wearable devices, head-mounted devices, medical health platforms, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more electronic devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
Fig. 11 illustrates an example of an electronic device 1012 that includes one or more embodiments of the three-dimensional model construction apparatus of the present invention. In one configuration, electronic device 1012 includes at least one processing unit 1016 and memory 1018. Depending on the exact configuration and type of electronic device, memory 1018 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. This configuration is illustrated in fig. 11 by dashed line 1014.
In other embodiments, electronic device 1012 may include additional features and/or functionality. For example, device 1012 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in FIG. 11 by storage 1020. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 1020. Storage 1020 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 1018 for execution by processing unit 1016, for example.
The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 1018 and storage 1020 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by electronic device 1012. Any such computer storage media may be part of electronic device 1012.
Electronic device 1012 may also include communication connection(s) 1026 that allow electronic device 1012 to communicate with other devices. Communication connection(s) 1026 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting electronic device 1012 to other electronic devices. The communication connection 1026 may comprise a wired connection or a wireless connection. Communication connection(s) 1026 may transmit and/or receive communication media.
The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" may include signals that: one or more of the signal characteristics may be set or changed in such a manner as to encode information in the signal.
Electronic device 1012 may include input device(s) 1024 such as keyboard, mouse, pen, voice input device, touch input device, infrared camera, video input device, and/or any other input device. Output device(s) 1022 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 1012. Input device 1024 and output device 1022 may be connected to electronic device 1012 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another electronic device may be used as input device 1024 or output device 1022 for electronic device 1012.
The components of electronic device 1012 may be connected by various interconnects, such as a bus. Such interconnects may include Peripheral Component Interconnect (PCI), such as PCI Express, Universal Serial Bus (USB), FireWire (IEEE 1394), optical bus structures, and so forth. In another embodiment, components of electronic device 1012 may be interconnected by a network. For example, memory 1018 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, electronic device 1030 accessible via network 1028 may store computer readable instructions to implement one or more embodiments of the present invention. Electronic device 1012 may access electronic device 1030 and download a part or all of the computer readable instructions for execution. Alternatively, electronic device 1012 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at electronic device 1012 and some at electronic device 1030.
Various operations of embodiments are provided herein. In one embodiment, the one or more operations may constitute computer readable instructions stored on one or more computer readable media, which when executed by an electronic device, will cause the computing device to perform the operations. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in each embodiment provided herein.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Each apparatus or system described above may perform the method in the corresponding method embodiment.
In summary, although the present invention has been disclosed in the foregoing embodiments, the serial numbers before the embodiments are used for convenience of description only, and the sequence of the embodiments of the present invention is not limited. Furthermore, the above embodiments are not intended to limit the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, therefore, the scope of the present invention shall be limited by the appended claims.

Claims (10)

1. A method of constructing a three-dimensional model, comprising:
acquiring image data of an object to be reconstructed;
determining a three-dimensional model space corresponding to the object to be reconstructed based on the image data;
dividing the three-dimensional model space according to preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces;
calculating a directed distance value corresponding to the corner point coordinates of the three-dimensional subspace;
forming, in the three-dimensional model space, at least one triangular surface of the object to be reconstructed that intersects the three-dimensional subspace, based on the directed distance value corresponding to each corner point coordinate;
and constructing a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
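The overall flow of claim 1 — divide the model space into subspaces, evaluate a directed (signed) distance at each subspace corner, and keep the subspaces the surface passes through for later triangulation — can be sketched in Python. This is an illustrative sketch only, not the patented implementation; the sphere distance function, the domain bounds, and all names are assumptions:

```python
import itertools

def sphere_sdf(x, y, z, r=1.0):
    # Directed (signed) distance to a sphere of radius r centred at the
    # origin: negative inside the object, positive outside.
    return (x * x + y * y + z * z) ** 0.5 - r

def surface_cells(sdf, lo=-1.5, hi=1.5, n=8):
    """Divide the model space [lo, hi]^3 into n^3 subspaces and return the
    indices of those whose 8 corner distance values change sign, i.e. the
    subspaces that intersect the object's surface."""
    step = (hi - lo) / n
    cells = []
    for i, j, k in itertools.product(range(n), repeat=3):
        corners = [
            sdf(lo + (i + di) * step, lo + (j + dj) * step, lo + (k + dk) * step)
            for di, dj, dk in itertools.product((0, 1), repeat=3)
        ]
        if min(corners) < 0.0 < max(corners):  # surface passes through this cell
            cells.append((i, j, k))
    return cells

cells = surface_cells(sphere_sdf)
print(len(cells))  # number of subspaces intersecting the sphere
```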
2. The method according to claim 1, wherein the calculating the directed distance value corresponding to the corner point coordinates of the three-dimensional subspace comprises:
acquiring a three-dimensional coordinate of the object to be reconstructed under a preset camera;
and calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace according to the three-dimensional coordinate and a preset object mask.
3. The method according to claim 2, wherein the calculating the directed distance value corresponding to the corner point coordinates of the three-dimensional subspace according to the three-dimensional coordinates and a preset object mask comprises:
determining a focal length and a principal point of a preset camera;
calculating two-dimensional coordinates of the corner points of the three-dimensional subspace in a preset camera plane according to the three-dimensional coordinates, the focal length of the preset camera and the principal point;
and determining the directed distance value of the corner point coordinate of the three-dimensional subspace in the preset camera plane according to the two-dimensional coordinate and the directed distance values of all pixels of the preset camera plane.
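The projection in claim 3 is the standard pinhole model: a corner point already expressed in the camera's coordinate frame maps onto the image plane through the focal length and principal point, u = f_x·X/Z + c_x and v = f_y·Y/Z + c_y. A minimal sketch under that assumption (the function name and parameter values are illustrative):

```python
def project_to_camera_plane(point_cam, fx, fy, cx, cy):
    """Project a 3-D point, given in the camera's coordinate frame, onto
    the camera's image plane using the pinhole model:
        u = fx * X / Z + cx,   v = fy * Y / Z + cy
    """
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    return fx * x / z + cx, fy * y / z + cy

u, v = project_to_camera_plane((0.5, -0.25, 2.0), fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# u = 800 * 0.5 / 2 + 320 = 520.0,  v = 800 * (-0.25) / 2 + 240 = 140.0
```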
4. The method according to claim 3, wherein the determining the directed distance value of the corner point coordinate of the three-dimensional subspace according to the two-dimensional coordinates and the directed distance values of all pixels of the preset camera plane comprises:
obtaining a plane directed distance value of the corner point coordinate of the three-dimensional subspace in each preset camera plane according to the two-dimensional coordinate and directed distance values of all pixels of the preset camera plane;
and determining a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace based on the plane directed distance value.
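Claim 4 leaves open how the per-camera plane directed distance values are fused into one value per corner. A common convention in silhouette-based (visual-hull) reconstruction — an assumption here, not stated in the claims — is to take the maximum over views, so that a corner counts as inside the object only if it lies inside the silhouette in every view:

```python
def fuse_plane_distances(plane_distance_values):
    """Fuse the plane directed distance values of one corner across all
    preset camera planes into a single directed distance value.
    Taking the maximum keeps a corner inside (negative) only when it is
    inside the object mask in every view -- the visual-hull convention."""
    return max(plane_distance_values)

print(fuse_plane_distances([-0.4, -0.1, 0.3]))   # 0.3  -> outside in at least one view
print(fuse_plane_distances([-0.4, -0.2, -0.1]))  # -0.1 -> inside in all views
```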
5. The method according to claim 2, wherein the obtaining three-dimensional coordinates of the object to be reconstructed under a preset camera comprises:
determining a reference three-dimensional coordinate corresponding to a corner point of the three-dimensional subspace;
and converting the reference three-dimensional coordinate to obtain a three-dimensional coordinate of the corner point of the three-dimensional subspace under the preset camera.
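In the usual formulation, obtaining a corner's coordinate "under a preset camera" from its reference (world) coordinate is a rigid transform, p_cam = R·p_world + t, where R and t are the camera's extrinsic rotation and translation. A minimal sketch under that assumption (R, t, and the function name are illustrative):

```python
def world_to_camera(point, R, t):
    """Transform a reference (world) three-dimensional coordinate into the
    preset camera's frame: p_cam = R @ p_world + t.
    R is a row-major 3x3 rotation, t a 3-vector translation."""
    return tuple(
        sum(R[r][c] * point[c] for c in range(3)) + t[r]
        for r in range(3)
    )

# Identity rotation, translation of +2 along z: (1, 0, 0) -> (1, 0, 2)
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
p = world_to_camera((1.0, 0.0, 0.0), R, (0.0, 0.0, 2.0))
# p = (1.0, 0.0, 2.0)
```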
6. The method according to any one of claims 1 to 5, wherein the dividing the three-dimensional model space according to the preset model reconstruction accuracy to obtain a plurality of three-dimensional subspaces comprises:
acquiring preset model reconstruction precision;
and dividing the three-dimensional model space to obtain three-dimensional subspaces with the number corresponding to the model reconstruction precision.
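The relationship in claim 6 between the preset reconstruction precision and the number of subspaces can be made concrete under the assumption that the precision denotes a target subspace edge length (all names are illustrative):

```python
import math

def subdivision_count(extent, precision):
    """Number of subspaces along one axis of the model space, given the
    model space's extent and the preset reconstruction precision
    (target cell edge length). Finer precision -> more subspaces."""
    return max(1, math.ceil(extent / precision))

n = subdivision_count(extent=2.0, precision=0.25)  # 8 cells per axis
total = n ** 3                                     # 512 three-dimensional subspaces
```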
7. The method according to any one of claims 1 to 5, wherein the forming at least one triangular surface of the object to be reconstructed intersecting the three-dimensional subspace in the three-dimensional model space based on the directed distance value corresponding to each corner point coordinate comprises:
determining an intersection condition of each three-dimensional subspace with the object to be reconstructed based on the directed distance value corresponding to each corner point coordinate;
and forming, in the three-dimensional model space, at least one triangular surface of the object to be reconstructed that intersects the three-dimensional subspace, according to the intersection condition of each three-dimensional subspace with the object to be reconstructed.
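Within a subspace that intersects the object, marching-cubes-style methods place triangle vertices where the directed distance crosses zero along a cell edge, found by linear interpolation between the two corner values. A minimal sketch of that interpolation step (an assumption about the construction, not quoted from the patent):

```python
def edge_zero_crossing(p0, p1, d0, d1):
    """Return the point on the cell edge p0-p1 where the directed distance
    interpolates to zero (a vertex of a triangular face), or None when both
    corners lie strictly on the same side of the surface."""
    if d0 * d1 > 0:
        return None          # no sign change: the surface misses this edge
    if d0 == d1:             # both corners exactly on the surface
        return tuple(p0)
    t = d0 / (d0 - d1)       # linear interpolation parameter in [0, 1]
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

v = edge_zero_crossing((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), -0.5, 0.5)
# v = (0.5, 0.0, 0.0): the surface crosses the edge at its midpoint
```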
8. A three-dimensional model building apparatus, comprising:
the acquisition module is used for acquiring image data of an object to be reconstructed;
the determination module is used for determining a three-dimensional model space corresponding to the object to be reconstructed based on the image data;
the dividing module is used for dividing the three-dimensional model space according to preset model reconstruction precision to obtain a plurality of three-dimensional subspaces;
the calculation module is used for calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace;
the forming module is used for forming, in the three-dimensional model space, at least one triangular surface of the object to be reconstructed that intersects the three-dimensional subspace, based on the directed distance value corresponding to each corner point coordinate;
and the building module is used for building a three-dimensional model of the object to be reconstructed based on the formed triangular surface.
9. The three-dimensional model building apparatus according to claim 8, wherein the calculation module comprises:
the acquisition unit is used for acquiring the three-dimensional coordinates of the object to be reconstructed under a preset camera;
and the calculation unit is used for calculating a directed distance value corresponding to the corner point coordinate of the three-dimensional subspace according to the three-dimensional coordinate and a preset object mask.
10. A storage medium having stored therein processor-executable instructions, the instructions being loaded by one or more processors to perform a method of building a three-dimensional model according to any one of claims 1 to 7.
CN202111398841.9A 2021-11-24 2021-11-24 Three-dimensional model construction method and device and storage medium Active CN113822994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111398841.9A CN113822994B (en) 2021-11-24 2021-11-24 Three-dimensional model construction method and device and storage medium


Publications (2)

Publication Number Publication Date
CN113822994A true CN113822994A (en) 2021-12-21
CN113822994B CN113822994B (en) 2022-02-15

Family

ID=78918098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111398841.9A Active CN113822994B (en) 2021-11-24 2021-11-24 Three-dimensional model construction method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113822994B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581608A (en) * 2022-03-02 2022-06-03 山东翰林科技有限公司 Three-dimensional model intelligent construction system and method based on cloud platform

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709947A (en) * 2016-12-20 2017-05-24 西安交通大学 RGBD camera-based three-dimensional human body rapid modeling system
CN108334730A (en) * 2017-08-29 2018-07-27 哈尔滨理工大学 A kind of hipbone modeling and simulation method based on muscle group
US20200111250A1 (en) * 2018-07-03 2020-04-09 Shanghai Yiwo Information Technology Co. LTD Method for reconstructing three-dimensional space scene based on photographing
WO2021078179A1 (en) * 2019-10-22 2021-04-29 华为技术有限公司 Image display method and device
CN112927370A (en) * 2021-02-25 2021-06-08 苍穹数码技术股份有限公司 Three-dimensional building model construction method and device, electronic equipment and storage medium
CN113327278A (en) * 2021-06-17 2021-08-31 北京百度网讯科技有限公司 Three-dimensional face reconstruction method, device, equipment and storage medium
CN113470180A (en) * 2021-05-25 2021-10-01 杭州思看科技有限公司 Three-dimensional mesh reconstruction method, device, electronic device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Bing et al., "Boolean Operations on STL Models Based on Spatial Discretization", Journal of Huazhong University of Science and Technology (Natural Science Edition) *


Also Published As

Publication number Publication date
CN113822994B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN112767538B (en) Three-dimensional reconstruction and related interaction and measurement methods, related devices and equipment
CN110807451B (en) Face key point detection method, device, equipment and storage medium
US11748906B2 (en) Gaze point calculation method, apparatus and device
US6434278B1 (en) Generating three-dimensional models of objects defined by two-dimensional image data
JP6573419B1 (en) Positioning method, robot and computer storage medium
US10529119B2 (en) Fast rendering of quadrics and marking of silhouettes thereof
CN111243093A (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN111640180B (en) Three-dimensional reconstruction method and device and terminal equipment
EP3040944B1 (en) Method and device for rebuilding three-dimensional object and terminal
CN110276774B (en) Object drawing method, device, terminal and computer-readable storage medium
WO2020237492A1 (en) Three-dimensional reconstruction method, device, apparatus, and storage medium
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN109979013B (en) Three-dimensional face mapping method and terminal equipment
US20160232420A1 (en) Method and apparatus for processing signal data
CN113936090A (en) Three-dimensional human body reconstruction method and device, electronic equipment and storage medium
CN110567441A (en) Particle filter-based positioning method, positioning device, mapping and positioning method
CN113822994B (en) Three-dimensional model construction method and device and storage medium
JP7432793B1 (en) Mapping methods, devices, chips and module devices based on three-dimensional point clouds
CN113223137B (en) Generation method and device of perspective projection human face point cloud image and electronic equipment
CN111460937B (en) Facial feature point positioning method and device, terminal equipment and storage medium
CN113538682A (en) Model training method, head reconstruction method, electronic device, and storage medium
CN117635875B (en) Three-dimensional reconstruction method, device and terminal
CN108510578A (en) Threedimensional model building method, device and electronic equipment
CN116030216A (en) Industrial image processing method, device, electronic equipment and storage medium
JP2005215724A (en) Texture information generation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220609

Address after: 519000 5-196, floor 5, Yunxi Valley Digital Industrial Park, No. 168, Youxing Road, Xiangzhou District, Zhuhai City, Guangdong Province (block B, Meixi Commercial Plaza) (centralized office area)

Patentee after: Zhuhai Prometheus Vision Technology Co.,Ltd.

Address before: 518000 room 217, R & D building, Founder science and Technology Industrial Park, north of Songbai highway, Longteng community, Shiyan street, Bao'an District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN PROMETHEUS VISION TECHNOLOGY Co.,Ltd.
