CN107358645B - Product three-dimensional model reconstruction method and system - Google Patents
- Publication number
- CN107358645B (granted publication of application CN201710425967.8A)
- Authority
- CN
- China
- Prior art keywords
- product
- depth information
- image
- dimensional model
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 26
- 239000011159 matrix material Substances 0.000 claims abstract description 34
- 238000013507 mapping Methods 0.000 claims abstract description 24
- 238000009877 rendering Methods 0.000 claims abstract description 9
- 238000000354 decomposition reaction Methods 0.000 claims abstract description 6
- 230000033001 locomotion Effects 0.000 claims description 6
- 230000002146 bilateral effect Effects 0.000 claims description 3
- 238000001914 filtration Methods 0.000 claims description 3
- 238000006243 chemical reaction Methods 0.000 claims description 2
- 238000003384 imaging method Methods 0.000 description 2
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
Images
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
A product three-dimensional model reconstruction method and system are provided. After a product image is collected and the homography matrix of the plane at infinity corresponding to the product image is calculated, a mapping matrix between product-image pixel points and coordinate points of the actual scene is obtained by Cholesky decomposition. A network flow corresponding to the product is then established, and the optimal value of an energy function is computed to obtain the depth information of the product surface. Finally, a cube is established from the depth information and the mapping matrix and divided into voxels, and the reconstruction of the three-dimensional model is realized by updating the TSDF of each voxel and then rendering and projecting.
Description
Technical Field
The invention relates to a technology in the field of three-dimensional modeling, and in particular to a product three-dimensional model reconstruction method and system based on two-dimensional images.
Background
Three-dimensional reconstruction builds a three-dimensional model of an object so that a product can be displayed more realistically and intuitively. Image-based three-dimensional reconstruction can break through the real-time bottleneck and has developed well. However, existing three-dimensional reconstruction requires high equipment precision and imposes many constraints during camera calibration. The modeling process is complex and slow, and the reconstruction accuracy does not meet requirements, so existing methods cannot be applied to three-dimensional reconstruction of actual product display models.
Disclosure of Invention
Most prior-art methods perform no optimization of the depth information and do not handle smoothness or occlusion terms, so the modeling results are poor. To address these defects, the invention provides a product three-dimensional model reconstruction method and system that require no calibration object, reduce the influence of imaging distortion, and offer high reliability and robustness.
The invention is realized by the following technical scheme:
The invention relates to a product three-dimensional model reconstruction method: a product image is acquired, the homography matrix of the plane at infinity corresponding to the image is calculated, and a mapping matrix between product-image pixel points and coordinate points of the actual scene is obtained by Cholesky decomposition; a network flow corresponding to the product is then established, and the optimal value of an energy function is computed to obtain the depth information of the product surface; finally, a cube is established from the depth information and the mapping matrix and divided into voxels, and the reconstruction of the three-dimensional model is realized by updating the TSDF (truncated signed distance function) of each voxel and then rendering and projecting.
The mapping matrix is obtained by the following method:
1) establishing the homography matrix H∞ of the plane at infinity and solving for H∞ from the corresponding system of equations;
2) according to H∞ = K R_t K⁻¹, solving the mapping matrix K between product-image pixel points and coordinate points of the actual scene by Cholesky decomposition.
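Because R_t is a rotation, the relation H∞ = K R_t K⁻¹ implies that ω = K·Kᵀ is left invariant by H∞ (H∞ ω H∞ᵀ = ω), and once ω is known, an upper-triangular K can be recovered by a Cholesky-type factorization. A minimal numpy sketch of the factorization step, with made-up intrinsic values (focal lengths, skew and principal point are illustrative only):

```python
import numpy as np

def upper_cholesky(omega):
    """Factor a symmetric positive-definite matrix as U @ U.T with U
    upper-triangular, which is the shape of an intrinsic matrix K."""
    J = np.fliplr(np.eye(omega.shape[0]))   # exchange (anti-identity) matrix
    L = np.linalg.cholesky(J @ omega @ J)   # ordinary lower-triangular factor
    return J @ L @ J                        # flip back: upper-triangular factor

# Hypothetical intrinsics (focal lengths, skew, principal point).
K = np.array([[800.0,   2.0, 320.0],
              [  0.0, 780.0, 240.0],
              [  0.0,   0.0,   1.0]])
omega = K @ K.T          # omega is left invariant by H_inf = K R K^-1
K_rec = upper_cholesky(omega)
print(np.allclose(K_rec, K))   # True
```

In practice ω would first be estimated (up to scale) from the invariance constraint; the sketch only demonstrates how Cholesky recovers K once ω is known.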
The depth information of the product surface is obtained through the following modes:
a) establishing a virtual network of a product, and establishing an energy function according to pixel point coordinates of a product image;
b) assigning values to grids in the virtual network through the similarity cost and the smooth cost to form a network flow;
c) optimizing the energy function with a solution algorithm for the maximum-flow/minimum-cut problem to obtain the depth information of the product surface.
The energy function is:
E(f) = Σ_{p∈P} [I_l(Tran_l(x_p, y_p, f_p)) - I_r(Tran_r(x_p, y_p, f_p))]² + Σ_{(p,q)∈N} u_{p,q} |f_p - f_q|, wherein: I_l and I_r are the pixel matrices of the product images; (x_p, y_p) are the coordinates of a grid point on the base plane; Tran_l and Tran_r are coordinate-system conversion functions; f is the mapping from pixel points to labels; P is the set of all pixels of the image; p and q are single pixels in the pixel set P; N is the set of adjacent pixel pairs; and u_{p,q} is the smoothness weight of the pair (p, q).
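The structure of the energy can be illustrated on a toy one-dimensional image pair, with Tran modeled as a simple horizontal shift by the label (a stand-in for the method's actual coordinate transforms; all pixel values are invented):

```python
import numpy as np

def energy(f, I_l, I_r, u=1.0):
    """E(f): squared data term plus u * |f_p - f_q| over neighbor pairs.
    Tran is modeled as a horizontal shift by the label f_p, a stand-in
    for the patent's coordinate-transform functions Tran_l / Tran_r."""
    data = sum((I_l[p + f[p]] - I_r[p]) ** 2
               for p in range(len(f)) if 0 <= p + f[p] < len(I_l))
    smooth = sum(u * abs(f[p] - f[p + 1]) for p in range(len(f) - 1))
    return data + smooth

I_l = np.array([0.0, 1.0, 5.0, 5.0, 1.0])   # invented left image row
I_r = np.array([1.0, 5.0, 5.0, 1.0, 0.0])   # same scene, shifted by one pixel
print(energy([1, 1, 1, 1], I_l, I_r))       # 0.0  (correct labels)
print(energy([0, 0, 0, 0], I_l, I_r))       # 33.0 (wrong labels cost more)
```

The labeling with zero data cost is exactly the one where every pixel's label matches the true shift, which is what the minimum-cut optimization searches for.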
The reconstruction specifically comprises the following steps:
i) carrying out bilateral filtering on the depth information;
ii) obtaining a depth map according to the depth information, and performing back projection on the depth map to obtain a vertex map and a normal vector of each vertex;
iii) converting the product image pixels to a world coordinate system according to the mapping matrix K;
iv) establishing a cube and performing voxel division, and updating the TSDF for each voxel;
v) rendering and projecting according to the TSDF to generate a three-dimensional model of the product.
The truncated signed distance function (TSDF) of a voxel is the signed distance from that voxel to the nearest surface of the model being built, truncated to a fixed range; the sign encodes whether the voxel lies in front of or behind that surface. Since the reconstruction space is treated as a cube, a voxel (short for volume element) is the minimum unit into which the three-dimensional space is divided, analogous to a pixel of a two-dimensional image; a column of voxels at a given (x, y) coordinate differs only in its z coordinate. A negative TSDF value lies outside the reconstructed object, zero lies on its surface, and a positive value lies inside it.
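The sign convention described above (negative outside, zero on the surface, positive inside, which is the reverse of the convention used in some other TSDF systems) can be sketched in one dimension along a viewing ray, with an illustrative truncation distance:

```python
def tsdf(voxel_depth, surface_depth, trunc=0.1):
    """Truncated signed distance of a voxel along a viewing ray, using
    the sign convention above: negative in front of the surface (outside
    the object), zero on it, positive behind it (inside)."""
    d = voxel_depth - surface_depth          # signed distance along the ray
    return max(-trunc, min(trunc, d))        # truncate to [-trunc, trunc]

# Surface at depth 1.0; sample voxels along the same ray.
print(tsdf(0.5, 1.0))   # -0.1 : outside the object
print(tsdf(1.0, 1.0))   #  0.0 : on the surface
print(tsdf(1.5, 1.0))   #  0.1 : inside the object
```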
The invention relates to a product three-dimensional model reconstruction system, which comprises: camera calibration module, depth information acquisition module and model building module, wherein: the camera calibration module acquires a product image to obtain a mapping matrix between an image pixel point and a coordinate point of an actual scene; the depth information acquisition module acquires the depth information of the product; and the model building module receives the mapping matrix and the depth information and obtains a three-dimensional model of the product through rendering and projection.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The embodiment relates to a product three-dimensional model reconstruction system for realizing the method, which comprises the following steps: camera calibration module, depth information acquisition module and model building module, wherein: the camera calibration module acquires a product image to obtain a mapping matrix between an image pixel point and a coordinate point of an actual scene; the depth information acquisition module acquires the depth information of the product; and the model building module receives the mapping matrix and the depth information and obtains a three-dimensional model of the product through rendering and projection.
As shown in fig. 1, the method for reconstructing a three-dimensional model of a product in the system includes the following steps:
Implementation and settings: for the exhibition vehicle, the camera is controlled to make one translational motion and two random motions, and four photos are taken (each photo is approximately 12 megapixels);
Software and hardware requirements: Intel(R) Core(TM) i5-3210M CPU @ 2.5 GHz, GTX 970 graphics card.
1) Solve the homography matrix H∞ of the plane at infinity corresponding to the product image, and obtain the mapping matrix K between product-image pixel points and coordinate points of the actual scene by Cholesky decomposition. The camera collecting the product images takes the pictures in one translational motion and several random motions.
1.1) Establish the homography matrix H∞ of the plane at infinity.
1.2) Solve for the homography matrix H∞ from the system of equations, wherein: e1 and e2 are the epipoles of the product images after motion; H1 and H2 are homography matrices of spatial planes; a1 and a2 are scalars; and X1 and X2 are column vectors.
1.3) From the solved H∞ and the relation H∞ = K R_t K⁻¹, recover the mapping matrix K by Cholesky decomposition.
2) Establish a network flow corresponding to the product and calculate the optimal value of an energy function to obtain the depth information of the product surface.
2.1) Establish a virtual network of the product. A three-dimensional virtual network is established according to the coordinate position of the product in the world coordinate system, completely enclosing the product to be reconstructed; the foremost tangent plane serves as the base plane of the whole three-dimensional network. The positions of the network points on the base plane are determined, and the section on which each object point on the product surface falls is determined; each section corresponds to a label, so the problem of obtaining depth information is converted into the problem of assigning a depth label to each network point on the base plane of the virtual three-dimensional network.
2.2) Establish an energy function from the pixel coordinates of the product image. The energy function characterizes the image and mainly comprises a data constraint term, a smoothness constraint term and an occlusion term. The data constraint states that when a grid point on the base plane is labeled incorrectly, the pixel information of the image points onto which the hypothetical object point at the wrong depth label projects is inconsistent across views; only when the assigned depth label matches the true depth do the image points reflect the pixel information of the same object point, making the cost of that depth label minimal. The smoothness constraint expresses the relation between adjacent pixel pairs: a large difference between adjacent labels increases the smoothness term and hence the energy, so this term reflects the smoothness of the recovered surface. The occlusion term states that when the data cost of every depth label for a point exceeds a threshold, that surface point is marked as occluded and its smoothness weight is increased, so that its depth is smoothed from the depths of the surrounding object points.
2.3) Assign similarity costs and smoothness costs to the grid of the virtual network to form the network flow. The similarity cost is obtained by SAD (sum of absolute differences) local matching.
2.4) Optimize the energy function with a solution algorithm for the maximum-flow/minimum-cut problem to obtain the depth information of the product surface.
Solution algorithms for the maximum-flow/minimum-cut problem include, but are not limited to, the Push-Relabel method and the Ford-Fulkerson method.
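A minimal sketch of the Ford-Fulkerson approach in its Edmonds-Karp form (shortest augmenting paths found by BFS), run on a toy graph rather than the actual depth-labeling network:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp (Ford-Fulkerson with BFS): repeatedly push flow
    along a shortest augmenting path in the residual graph.
    cap: dict of dicts of residual capacities, modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:               # BFS for a path s -> t
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                        # no augmenting path left
            return flow
        path, v = [], t                            # recover the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)      # bottleneck capacity
        for u, v in path:                          # update residual graph
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug

cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(cap, 's', 't'))   # 5, the capacity of the minimum cut
```

By the max-flow/min-cut theorem, the value returned equals the weight of the minimum cut, which in the depth-labeling network is the minimum of the energy function.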
3) Establish a cube from the depth information and the mapping matrix K, divide it into voxels, and then render and project to obtain the three-dimensional model of the product.
3.1) Apply bilateral filtering to the depth information for noise reduction.
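A compact sketch of bilateral filtering on a depth map, with illustrative window size and σ values; the weights decay with both spatial distance and depth difference, so depth discontinuities are preserved while noise is smoothed:

```python
import numpy as np

def bilateral_depth(depth, w=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter: each pixel becomes a weighted mean of its window,
    with weights falling off with spatial distance AND depth difference,
    so edges (large depth jumps) are not smeared."""
    h, wd = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-w:w + 1, -w:w + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(depth, w, mode='edge')
    for y in range(h):
        for x in range(wd):
            win = pad[y:y + 2*w + 1, x:x + 2*w + 1]
            rng = np.exp(-(win - depth[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out

# Step edge at column 4 plus mild noise: the edge survives filtering.
d = np.where(np.arange(8) < 4, 1.0, 2.0) * np.ones((8, 8))
sm = bilateral_depth(d + 0.01 * np.sin(np.arange(64)).reshape(8, 8))
print(abs(sm[0, 3] - sm[0, 4]) > 0.5)   # True: discontinuity preserved
```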
3.2) Obtain a depth map from the depth information, and back-project the depth map to obtain a vertex map and the normal vector of each vertex.
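Back projection turns each depth pixel into a 3-D vertex via K⁻¹, and per-vertex normals follow from the cross product of neighboring vertex differences; the intrinsic matrix K below is invented for illustration:

```python
import numpy as np

def back_project(depth, K):
    """Vertex map: v(y, x) = depth(y, x) * K^-1 @ [x, y, 1]^T."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)      # (h, w, 3)
    return depth[..., None] * (pix @ np.linalg.inv(K).T)

def normals(V):
    """Normal map from cross products of neighbor differences."""
    dx = V[:, 1:, :] - V[:, :-1, :]
    dy = V[1:, :, :] - V[:-1, :, :]
    n = np.cross(dx[:-1], dy[:, :-1])
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

K = np.array([[500., 0., 4.], [0., 500., 4.], [0., 0., 1.]])
V = back_project(np.full((8, 8), 2.0), K)     # flat plane at depth 2
N = normals(V)
print(np.allclose(np.abs(N[..., 2]), 1.0))    # True: normals along +/-z
```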
3.3) converting the product image pixels to a world coordinate system according to the mapping matrix K.
3.4) Build the cube, divide it into voxels, and update the TSDF of each voxel. For each frame of the product image, each voxel is converted into the camera coordinate system and projected to a product-image coordinate point; if the voxel falls within the projection range, its TSDF is updated.
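The patent does not spell out the per-voxel update rule; a common form, assumed here, is a running weighted average of the per-frame TSDF measurements, with illustrative values:

```python
def fuse(tsdf, weight, tsdf_new, w_new=1.0):
    """Weighted running average, the standard volumetric fusion rule:
    each new frame's truncated distance is blended into the voxel."""
    fused = (tsdf * weight + tsdf_new * w_new) / (weight + w_new)
    return fused, weight + w_new

tsdf, w = 0.0, 0.0                      # empty voxel
for measurement in (0.10, 0.06, 0.08):  # three frames' TSDF samples
    tsdf, w = fuse(tsdf, w, measurement)
print(round(tsdf, 3), w)                # 0.08 3.0
```

Averaging over frames suppresses per-frame depth noise, so the zero-crossing surface extracted for rendering is smoother than any single frame's depth map.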
3.5) Render and project according to the TSDF to generate the three-dimensional model of the product.
Compared with the prior art, the invention needs no calibration object, is highly robust, reduces the influence of imaging distortion, produces a three-dimensional model of high reliability, requires only an ordinary commercial GPU for computation, and improves the modeling speed by 7.3%.
The foregoing embodiments may be modified in many ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims; all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.
Claims (1)
1. A system for reconstructing a three-dimensional model of a product, comprising: camera calibration module, depth information acquisition module and model building module, wherein: the camera calibration module acquires a product image to obtain a mapping matrix between an image pixel point and a coordinate point of an actual scene; the depth information acquisition module acquires the depth information of the product; the model building module receives the mapping matrix and the depth information, and a three-dimensional model of the product is obtained through rendering and projection;
the product three-dimensional model reconstruction system acquires a product image, calculates a homography matrix of an infinite plane corresponding to the product image, and obtains a mapping matrix between a pixel point of the product image and a coordinate point of an actual scene by a Cholesky decomposition method; then establishing a network flow corresponding to the product, and calculating an optimal value of an energy function to obtain depth information of the surface of the product; finally, a cube is established according to the depth information and the mapping matrix, the cube is divided into voxels, and then the TSDF of each cube is updated, rendered and projected, so that the reconstruction of a three-dimensional model is realized;
the mapping matrix is obtained by the following method:
1.1) establishing the homography matrix H∞ of the plane at infinity and solving for H∞ from the system of equations, wherein: e1 and e2 are the epipoles of the product images after motion; H1 and H2 are homography matrices of spatial planes; a1 and a2 are scalars; and X1 and X2 are column vectors;
1.2) according to H∞ = K R_t K⁻¹, solving the mapping matrix K between product-image pixel points and coordinate points of the actual scene by Cholesky decomposition;
the depth information of the product surface is obtained through the following modes:
2.1) establishing a virtual network of the product, and establishing an energy function according to pixel point coordinates of the product image;
2.2) assigning values to grids in the virtual network through similarity cost and smooth cost to form network flow;
2.3) optimizing an energy function by adopting a solution algorithm of the maximum flow/minimum cut problem to obtain depth information of the surface of the product;
the energy function is:
E(f) = Σ_{p∈P} [I_l(Tran_l(x_p, y_p, f_p)) - I_r(Tran_r(x_p, y_p, f_p))]² + Σ_{(p,q)∈N} u_{p,q} |f_p - f_q|, wherein: I_l and I_r are the pixel matrices of the product images; (x_p, y_p) are the coordinates of a grid point on the base plane; Tran_l and Tran_r are coordinate-system conversion functions; f is the mapping from pixel points to labels; P is the set of all pixels of the image; p and q are single pixels in the pixel set P; N is the set of adjacent pixel pairs; and u_{p,q} is the smoothness weight of the pair (p, q);
the solving algorithm of the maximum flow/minimum cut problem comprises the following steps: the Push-Relabel method and the Ford-Fulkerson method;
the reconstruction specifically comprises the following steps:
3.1) carrying out bilateral filtering on the depth information;
3.2) obtaining a depth map according to the depth information, and carrying out back projection on the depth map to obtain a vertex map and a normal vector of each vertex;
3.3) converting the product image pixels into a world coordinate system according to the mapping matrix K;
3.4) establishing a cube, dividing voxels, and updating the TSDF of each voxel;
3.5) rendering and projecting according to the TSDF to generate a three-dimensional model of the product.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710425967.8A CN107358645B (en) | 2017-06-08 | 2017-06-08 | Product three-dimensional model reconstruction method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710425967.8A CN107358645B (en) | 2017-06-08 | 2017-06-08 | Product three-dimensional model reconstruction method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107358645A CN107358645A (en) | 2017-11-17 |
CN107358645B true CN107358645B (en) | 2020-08-11 |
Family
ID=60273557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710425967.8A Active CN107358645B (en) | 2017-06-08 | 2017-06-08 | Product three-dimensional model reconstruction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107358645B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108062788A (en) * | 2017-12-18 | 2018-05-22 | Beijing Ruian Technology Co., Ltd. | Three-dimensional reconstruction method, device, equipment and medium
EP3729376A4 (en) | 2017-12-22 | 2021-01-20 | Magic Leap, Inc. | Method of occlusion rendering using raycast and live depth |
CN108711185B (en) * | 2018-05-15 | 2021-05-28 | Tsinghua University | Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
CN111696145B (en) * | 2019-03-11 | 2023-11-03 | Beijing Horizon Robotics Technology R&D Co., Ltd. | Depth information determining method, depth information determining device and electronic equipment
CN110489834A (en) * | 2019-08-02 | 2019-11-22 | Guangzhou Caigou Network Co., Ltd. | Design system for three-dimensional models of actual products
CN115272542A (en) * | 2021-04-29 | 2022-11-01 | ZTE Corporation | Three-dimensional imaging method, device, equipment and storage medium
CN114241029B (en) * | 2021-12-20 | 2023-04-07 | Beike Technology Co., Ltd. | Image three-dimensional reconstruction method and device
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1395222A (en) * | 2001-06-29 | 2003-02-05 | Samsung Electronics Co., Ltd. | Representation and drawing method of three-D target and method for imaging movable three-D target
CN101262619A (en) * | 2008-03-30 | 2008-09-10 | Shenzhen Huawei Communication Technologies Co., Ltd. | Method and device for capturing view difference
CN101751697A (en) * | 2010-01-21 | 2010-06-23 | Northwestern Polytechnical University | Three-dimensional scene reconstruction method based on statistical model
CN101833786A (en) * | 2010-04-06 | 2010-09-15 | Tsinghua University | Method and system for capturing and rebuilding three-dimensional model
CN101998136A (en) * | 2009-08-18 | 2011-03-30 | Huawei Technologies Co., Ltd. | Homography matrix acquisition method as well as image pickup equipment calibrating method and device
CN102682467A (en) * | 2011-03-15 | 2012-09-19 | Yunnan University | Plane- and straight-based three-dimensional reconstruction method
CN102800081A (en) * | 2012-06-06 | 2012-11-28 | Tianjin University | Expansion algorithm of high-noise resistance speckle-coated phase diagram based on image cutting
CN104899883A (en) * | 2015-05-29 | 2015-09-09 | Beihang University | Indoor object cube detection method for depth image scene
CN105046743A (en) * | 2015-07-01 | 2015-11-11 | Zhejiang University | Super-high-resolution three dimensional reconstruction method based on global variation technology
CN106355621A (en) * | 2016-09-23 | 2017-01-25 | Zou Jiancheng | Method for acquiring depth information on basis of array images
CN106651926A (en) * | 2016-12-28 | 2017-05-10 | East China Normal University | Regional registration-based depth point cloud three-dimensional reconstruction method
CN106709948A (en) * | 2016-12-21 | 2017-05-24 | Zhejiang University | Quick binocular stereo matching method based on superpixel segmentation
CN106803267A (en) * | 2017-01-10 | 2017-06-06 | Xidian University | Indoor scene three-dimensional rebuilding method based on Kinect
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7212201B1 (en) * | 1999-09-23 | 2007-05-01 | New York University | Method and apparatus for segmenting an image in order to locate a part thereof |
CN103198523B (en) * | 2013-04-26 | 2016-09-21 | Tsinghua University | Three-dimensional non-rigid body reconstruction method and system based on multiple depth maps
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | Shenzhen Orbbec Co., Ltd. | Three-dimensional model reconstruction method and system
CN106373153A (en) * | 2016-09-23 | 2017-02-01 | Zou Jiancheng | Array lens-based 3D image replacement technology
2017
- 2017-06-08 CN CN201710425967.8A patent/CN107358645B/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1395222A (en) * | 2001-06-29 | 2003-02-05 | Samsung Electronics Co., Ltd. | Representation and drawing method of three-D target and method for imaging movable three-D target
CN101262619A (en) * | 2008-03-30 | 2008-09-10 | Shenzhen Huawei Communication Technologies Co., Ltd. | Method and device for capturing view difference
CN101998136A (en) * | 2009-08-18 | 2011-03-30 | Huawei Technologies Co., Ltd. | Homography matrix acquisition method as well as image pickup equipment calibrating method and device
CN101751697A (en) * | 2010-01-21 | 2010-06-23 | Northwestern Polytechnical University | Three-dimensional scene reconstruction method based on statistical model
CN101833786A (en) * | 2010-04-06 | 2010-09-15 | Tsinghua University | Method and system for capturing and rebuilding three-dimensional model
CN102682467A (en) * | 2011-03-15 | 2012-09-19 | Yunnan University | Plane- and straight-based three-dimensional reconstruction method
CN102800081A (en) * | 2012-06-06 | 2012-11-28 | Tianjin University | Expansion algorithm of high-noise resistance speckle-coated phase diagram based on image cutting
CN104899883A (en) * | 2015-05-29 | 2015-09-09 | Beihang University | Indoor object cube detection method for depth image scene
CN105046743A (en) * | 2015-07-01 | 2015-11-11 | Zhejiang University | Super-high-resolution three dimensional reconstruction method based on global variation technology
CN106355621A (en) * | 2016-09-23 | 2017-01-25 | Zou Jiancheng | Method for acquiring depth information on basis of array images
CN106709948A (en) * | 2016-12-21 | 2017-05-24 | Zhejiang University | Quick binocular stereo matching method based on superpixel segmentation
CN106651926A (en) * | 2016-12-28 | 2017-05-10 | East China Normal University | Regional registration-based depth point cloud three-dimensional reconstruction method
CN106803267A (en) * | 2017-01-10 | 2017-06-06 | Xidian University | Indoor scene three-dimensional rebuilding method based on Kinect
Non-Patent Citations (1)
Title |
---|
Image-based three-dimensional reconstruction; Feng Shubiao; China Master's Theses Full-text Database, Information Science and Technology Edition; 20130415; main text section 1.2, chapter 4, and section 5.1 *
Also Published As
Publication number | Publication date |
---|---|
CN107358645A (en) | 2017-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107358645B (en) | Product three-dimensional model reconstruction method and system | |
JP7181977B2 (en) | Method and system for detecting and combining structural features in 3D reconstruction | |
US10360718B2 (en) | Method and apparatus for constructing three dimensional model of object | |
CA2687213C (en) | System and method for stereo matching of images | |
EP2080167B1 (en) | System and method for recovering three-dimensional particle systems from two-dimensional images | |
KR101310589B1 (en) | Techniques for rapid stereo reconstruction from images | |
Tung et al. | Simultaneous super-resolution and 3D video using graph-cuts | |
EP3841554A1 (en) | Method and system for reconstructing colour and depth information of a scene | |
US9437034B1 (en) | Multiview texturing for three-dimensional models | |
CN112651881B (en) | Image synthesizing method, apparatus, device, storage medium, and program product | |
Gurdan et al. | Spatial and temporal interpolation of multi-view image sequences | |
JP2018507477A (en) | Method and apparatus for generating initial superpixel label map for image | |
Lu et al. | Depth-based view synthesis using pixel-level image inpainting | |
CN116912405A (en) | Three-dimensional reconstruction method and system based on improved MVSNet | |
KR20140056073A (en) | Method and system for creating dynamic floating window for stereoscopic contents | |
Shalma et al. | A review on 3D image reconstruction on specific and generic objects | |
Hu et al. | 3D map reconstruction using a monocular camera for smart cities | |
Tsiminaki et al. | Joint multi-view texture super-resolution and intrinsic decomposition | |
Yan et al. | Stereoscopic image generation from light field with disparity scaling and super-resolution | |
Fechteler et al. | Articulated 3D model tracking with on-the-fly texturing | |
Chen et al. | A quality controllable multi-view object reconstruction method for 3D imaging systems | |
EP4258221A2 (en) | Image processing apparatus, image processing method, and program | |
Plath et al. | Line-preserving hole-filling for 2d-to-3d conversion | |
Cheng et al. | A novel structure-from-motion strategy for refining depth map estimation and multi-view synthesis in 3DTV | |
Rochette et al. | Human pose manipulation and novel view synthesis using differentiable rendering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240401 Address after: 201203 Pudong New Area, Shanghai, China (Shanghai) free trade trial area, No. 3, 1 1, Fang Chun road. Patentee after: SHANGHAI HANYU BIOLOGICAL SCIENCE & TECHNOLOGY Co.,Ltd. Country or region after: China Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District Patentee before: SHANGHAI JIAO TONG University Country or region before: China |