CN110246212A - Target three-dimensional reconstruction method based on self-supervised learning - Google Patents

Target three-dimensional reconstruction method based on self-supervised learning

Info

Publication number
CN110246212A
Authority
CN
China
Prior art keywords
binary map
point cloud
image
self
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910367420.6A
Other languages
Chinese (zh)
Other versions
CN110246212B (en)
Inventor
孙冉
方志军
高永彬
周恒
严娟
黄漫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN201910367420.6A priority Critical patent/CN110246212B/en
Publication of CN110246212A publication Critical patent/CN110246212A/en
Application granted granted Critical
Publication of CN110246212B publication Critical patent/CN110246212B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a target three-dimensional reconstruction method based on self-supervised learning, comprising: S1, training a point cloud autoencoder network; S2, training a binary map autoencoder network; S3, inputting an RGB image and obtaining a true binary map; S4, extracting the image pose with Pose net; S5, training an image encoder and generating a preliminary point cloud model; S6, generating a transformed point cloud model; S7, training a point cloud encoder and generating a recovered binary map; S8, computing the mean square error between the recovered binary map and the true binary map, and outputting the result if the mean square error is less than a preset threshold, otherwise executing step S9; S9, feeding the mean square error back to the image encoder and returning to step S5. Compared with the prior art, the present invention extracts the image pose with Pose net and adds two-dimensional supervision information, which solves problems such as ambiguous viewing angles in the input image and a lack of supervision information, and improves the accuracy of target three-dimensional reconstruction.

Description

Target three-dimensional reconstruction method based on self-supervised learning
Technical field
The present invention relates to the fields of computer vision and image processing, and in particular to a target three-dimensional reconstruction method based on self-supervised learning.
Background art
As a research direction at the intersection of computer vision and computer graphics, three-dimensional reconstruction uses specific devices and algorithms to rebuild mathematical models of three-dimensional objects in the real world, and has been widely applied in industries such as intelligent unmanned systems, robotics, computer-aided medicine, virtual reality, and augmented reality. Research on traditional three-dimensional reconstruction methods has mostly focused on multi-view geometry, including SFM and SLAM. Although these methods achieve good results in specific scenes, they have drawbacks: 1) multi-view geometry cannot reconstruct parts that are missing from the views, so enough views must be supplied to guarantee the completeness of the reconstructed object; 2) reconstruction from multiple views implies increased computational complexity, making real-time reconstruction difficult. These drawbacks limit the application of multi-view reconstruction, so learning-based methods that reconstruct from a single view are particularly important.
Learning-based single-view image reconstruction currently comprises two main approaches. One approach follows 2D CNN image generation: the shape is generated by a 3D CNN and represented as voxels. The other approach is based on a weakly supervised mechanism that fits the 3D shape with a variational encoder; 3D-RecGAN, a new network built on the GAN structure, can recover the voxel structure of an object directly from a single depth map. The voxel grids obtained by both approaches are 256^3, which places very high demands on hardware, because a voxel representation improves surface accuracy only by increasing the number of voxels in the 3D volume. To balance computational complexity and surface accuracy, reconstruction methods based on meshes and point clouds have recently been proposed. For example, Pixel2Mesh combines the idea of graph convolution to generate a triangle mesh directly from a single color image, and PSG-Net uses a network framework and loss function for unordered point clouds to realize point cloud reconstruction that outperforms voxel representations. However, because single-view reconstruction inherently lacks sufficient supervision information and some input images have ambiguous viewing angles, the reconstructed models suffer from partial missing regions and a lack of detailed surfaces.
Summary of the invention
The object of the present invention is to overcome the above-mentioned drawbacks of the prior art and to provide a target three-dimensional reconstruction method based on self-supervised learning, so as to improve the accuracy of target three-dimensional reconstruction.
The object of the present invention can be achieved by the following technical solution: a target three-dimensional reconstruction method based on self-supervised learning, comprising the following steps:
S1, training a point cloud autoencoder network to obtain a latent point cloud feature, wherein the point cloud autoencoder network comprises a point cloud self-encoder and a point cloud decoder D_P;
S2, training a binary map autoencoder network to obtain a latent binary map feature, wherein the binary map autoencoder network comprises a binary map self-encoder and a binary map decoder D_I;
S3, inputting an RGB image and obtaining a true binary map through binarization;
S4, performing feature extraction on the input RGB image with Pose net to obtain the image pose of the input RGB image;
S5, training an image encoder on the input RGB image to obtain a first spatial feature F_P, and generating a preliminary point cloud model from the input RGB image through the point cloud decoder D_P of step S1;
S6, translating and rotating the preliminary point cloud model according to the image pose to generate a transformed point cloud model;
S7, training a point cloud encoder on the transformed point cloud model to obtain a second spatial feature F_B, and generating a recovered binary map through the binary map decoder D_I of step S2;
S8, computing the mean square error between the recovered binary map and the true binary map; if the mean square error is less than a preset threshold, outputting the transformed point cloud model as the target three-dimensional reconstruction result of the input RGB image, otherwise executing step S9;
S9, feeding the mean square error back to the image encoder and returning to step S5.
Preferably, the detailed process of obtaining the latent point cloud feature in step S1 is as follows:
S11, inputting true point cloud data into the point cloud self-encoder, and obtaining a B × N × 512 feature after 5 layers of 1-dimensional convolution;
S12, passing the B × N × 512 feature of step S11 through a max pooling layer to obtain the B × k latent point cloud feature, wherein k = 512.
Preferably, the point cloud decoder D_P in step S1 comprises three fully connected layers and converts the latent point cloud feature into a B × N × 3 point cloud format.
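For concreteness, a minimal PyTorch sketch of such a point cloud autoencoder is given below. Only the shapes stated above (5 one-dimensional convolutions to B × N × 512, max pooling to B × 512, three fully connected layers back to B × N × 3) come from the text; the intermediate channel widths and the point count N are assumptions.

```python
# Minimal sketch of the S1 point cloud autoencoder (assumed intermediate
# channel widths; only the stated B x N x 512 -> B x 512 -> B x N x 3
# shapes come from the patent text).
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    def __init__(self, k=512):
        super().__init__()
        # 5 layers of 1-D convolution over the N points: B x 3 x N -> B x 512 x N
        dims = [3, 64, 128, 256, 512, k]
        self.convs = nn.ModuleList(
            nn.Conv1d(dims[i], dims[i + 1], kernel_size=1) for i in range(5)
        )
        self.relu = nn.ReLU()

    def forward(self, pts):              # pts: B x N x 3
        x = pts.transpose(1, 2)          # B x 3 x N
        for conv in self.convs:
            x = self.relu(conv(x))       # B x C x N, ending at B x 512 x N
        return x.max(dim=2).values       # max pooling over points -> B x 512

class PointCloudDecoder(nn.Module):
    def __init__(self, n_points=1024, k=512):
        super().__init__()
        self.n_points = n_points
        # three fully connected layers mapping the latent feature to B x N x 3
        self.fc = nn.Sequential(
            nn.Linear(k, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_points * 3),
        )

    def forward(self, latent):           # latent: B x 512
        return self.fc(latent).view(-1, self.n_points, 3)
```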
Preferably, the detailed process of obtaining the latent binary map feature in step S2 is as follows:
S21, binarizing the RGB image to obtain the true binary map, wherein binarization marks the locations where the pixel value is 0 with 0 and the locations where the pixel value is non-zero with 1;
S22, inputting the true binary map into the binary map self-encoder to obtain the latent binary map feature of dimension k, wherein k = 512.
Preferably, the binary map decoder D_I in step S2 uses deconvolution operations to fill in image content, so that the image content is gradually enriched to recover the binary map.
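A minimal sketch of the S21 binarization together with a deconvolution decoder in the spirit of this step; the layer sizes and output resolution are assumptions, and only the 0/non-zero rule and the use of transposed convolutions come from the text.

```python
# Binarization as described in S21, plus an assumed-size transposed-
# convolution decoder that progressively fills in image content.
import torch
import torch.nn as nn

def binarize(rgb):                        # rgb: B x 3 x H x W, non-negative values
    """Mark pixels whose value is 0 with 0 and non-zero pixels with 1."""
    return (rgb.sum(dim=1, keepdim=True) != 0).float()   # B x 1 x H x W

class BinaryMapDecoder(nn.Module):
    def __init__(self, k=512):
        super().__init__()
        self.fc = nn.Linear(k, 256 * 4 * 4)
        # transposed convolutions gradually enrich the image content
        self.deconvs = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, latent):            # latent: B x 512
        x = self.fc(latent).view(-1, 256, 4, 4)
        return self.deconvs(x)            # B x 1 x 32 x 32 recovered binary map
```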
Preferably, in step S4 Pose net regresses the image viewing-angle information using a fully connected layer to obtain the image pose, the image viewing-angle information comprising six parameters (α, β, γ, a, b, c), wherein (α, β, γ) are the deflection angles, namely the yaw, pitch and roll angles, and (a, b, c) is the translation vector;
The image pose is (R, t), wherein R is the rotation matrix and t = (a, b, c); the deflection angles (α, β, γ) are converted to the rotation matrix R as follows, taking the standard yaw-pitch-roll (Z-Y-X) composition:
$$R = R_z(\alpha)\,R_y(\beta)\,R_x(\gamma) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}$$
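A minimal sketch of this conversion, assuming the Z-Y-X yaw-pitch-roll composition written above:

```python
# Euler angles (alpha=yaw, beta=pitch, gamma=roll) to a rotation matrix,
# assuming the composition R = Rz(alpha) @ Ry(beta) @ Rx(gamma).
import numpy as np

def euler_to_rotation(alpha, beta, gamma):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx
```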
Preferably, step S5 specifically comprises the following steps:
S51, inputting the RGB image into the image encoder to obtain the first spatial feature F_P;
S52, forming a first loss function from the first spatial feature F_P and the latent point cloud feature;
S53, generating the preliminary point cloud model from the first loss function and the first spatial feature F_P using the point cloud decoder D_P.
Preferably, generating the transformed point cloud model in step S6 specifically comprises multiplying the image pose with the preliminary point cloud model, thereby transforming the preliminary point cloud model into the camera plane:
$x'_i = R x_i + t, \quad i \in [0, N-1]$
wherein $x_i$ is a point of the preliminary point cloud model, $x'_i$ is a point of the transformed point cloud model, N denotes the number of points contained in the three-dimensional structure, and each point $x_i$ yields $x'_i$ after transformation by the image pose (R, t).
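A minimal sketch of this rigid transformation applied to an N × 3 point cloud (NumPy; the array layout is an assumption):

```python
# Apply x'_i = R @ x_i + t to every point of an N x 3 point cloud.
import numpy as np

def transform_point_cloud(points, R, t):
    """points: N x 3 array; R: 3 x 3 rotation matrix; t: length-3 translation."""
    return points @ R.T + t   # row-vector form of R @ x_i + t for all i
```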
Preferably, step S7 specifically comprises the following steps:
S71, inputting the transformed point cloud model into the point cloud encoder to obtain the second spatial feature F_B;
S72, forming a second loss function from the second spatial feature F_B and the latent binary map feature (a sketch of both loss functions follows this list);
S73, generating the recovered binary map from the second loss function and the second spatial feature F_B using the binary map decoder D_I.
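Both loss functions match an encoder output against the corresponding latent feature from the pre-trained autoencoder. The embodiment below computes a mean square error to form an L1-normalized loss, so a mean-square distance is assumed in this sketch:

```python
# Sketch of the first (F_P vs latent point cloud feature) and second
# (F_B vs latent binary map feature) loss functions: the trainable
# encoder's feature is regressed onto the frozen autoencoder's latent
# feature (mean square distance assumed here).
import torch
import torch.nn.functional as F

def feature_loss(spatial_feature, latent_feature):
    """spatial_feature, latent_feature: B x 512 tensors."""
    return F.mse_loss(spatial_feature, latent_feature.detach())
```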
Compared with the prior art, the present invention has the following advantages:
One, the present invention performs feature extraction on the RGB image with Pose net to obtain the image pose, which gives the network the ability to resolve the image viewing angle, solves the problem of ambiguous viewing angles in the input image, and allows a reasonable point cloud model to be generated through effective constraints.
Two, the present invention translates and rotates the preliminary point cloud model according to the image pose and generates a recovered binary map through the network, so that the binary map information is fully exploited for self-supervision as the supervision information of the generated point cloud model, improving the accuracy of the target three-dimensional reconstruction result.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the method of the present invention;
Fig. 2 is a schematic diagram of the target three-dimensional reconstruction process of the embodiment;
Fig. 3 is a schematic diagram of the network structure of Pose net.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
As shown in Fig. 1, a target three-dimensional reconstruction method based on self-supervised learning comprises the following steps:
S1, training a point cloud autoencoder network to obtain a latent point cloud feature, wherein the point cloud autoencoder network comprises a point cloud self-encoder and a point cloud decoder D_P;
S2, training a binary map autoencoder network to obtain a latent binary map feature, wherein the binary map autoencoder network comprises a binary map self-encoder and a binary map decoder D_I;
S3, inputting an RGB image and obtaining a true binary map through binarization;
S4, performing feature extraction on the input RGB image with Pose net to obtain the image pose of the input RGB image;
S5, training an image encoder on the input RGB image to obtain a first spatial feature F_P, and generating a preliminary point cloud model from the input RGB image through the point cloud decoder D_P of step S1;
S6, translating and rotating the preliminary point cloud model according to the image pose to generate a transformed point cloud model;
S7, training a point cloud encoder on the transformed point cloud model to obtain a second spatial feature F_B, and generating a recovered binary map through the binary map decoder D_I of step S2;
S8, computing the mean square error between the recovered binary map and the true binary map; if the mean square error is less than a preset threshold, outputting the transformed point cloud model as the target three-dimensional reconstruction result of the input RGB image, otherwise executing step S9;
S9, feeding the mean square error back to the image encoder and returning to step S5.
In this embodiment, the network structures of the point cloud self-encoder, the binary map self-encoder, the image encoder, the point cloud encoder, the point cloud decoder D_P and the binary map decoder D_I are shown in Table 1:
Table 1
Wherein, the detailed process of step S1 comprises: feeding a group of true aircraft point cloud data of size B × N × 3 into the point cloud self-encoder, obtaining a B × N × 512 feature after 5 layers of 1-dimensional convolution, and then obtaining the B × 512 latent point cloud feature after a max pooling operation; the point cloud decoder D_P consists of three fully connected layers and finally reshapes the latent point cloud feature into the B × N × 3 point cloud format, yielding the aircraft point cloud model;
The detailed process of step S2 comprises: binarizing a group of aircraft RGB images, i.e. marking the locations where the pixel value is 0 with 0 and the locations where the pixel value is non-zero with 1, to obtain the true binary maps; the true binary maps are then fed into the binary map self-encoder to obtain the latent binary map feature; the binary map decoder D_I fills in image content using deconvolution operations, so that the image content gradually becomes rich enough to recover the binary map of the aircraft RGB image.
This embodiment performs target three-dimensional reconstruction on aircraft images; the process is shown schematically in Fig. 2. Step S3 binarizes the input aircraft RGB image to obtain the true binary map of the aircraft;
The detailed process of step S4 comprises: obtaining the image pose (R, t) of the input RGB image through Pose net, whose network structure is shown in Fig. 3. Its last layer uses a fully connected layer to regress a six-dimensional vector representing the viewing-angle information, which comprises six parameters (α, β, γ, a, b, c): (α, β, γ) are the three deflection angles, namely the yaw, pitch and roll angles, and (a, b, c) is the translation vector. In the image pose (R, t), R is the rotation matrix and t = (a, b, c), and the deflection angles are converted to the rotation matrix R by the formula given above;
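A minimal sketch of such a pose network; the convolutional backbone and its widths are assumptions, and only the final fully connected layer regressing the six-dimensional vector is stated in the text.

```python
# Pose net sketch: convolutional feature extractor (assumed widths) followed
# by the fully connected layer that regresses (alpha, beta, gamma, a, b, c).
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, 6)   # last layer regresses the 6-vector

    def forward(self, rgb):           # rgb: B x 3 x H x W
        x = self.features(rgb).flatten(1)
        return self.fc(x)             # B x 6: (alpha, beta, gamma, a, b, c)
```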
The image encoder needed to generate the preliminary point cloud model in step S5 has the same network structure as the binary map self-encoder, i.e. all layers are convolutional except the last fully connected layer. It extracts deep features from the input RGB image and obtains the 512-dimensional first aircraft spatial feature F_P through the final fully connected layer; the mean square error between the first spatial feature F_P and the latent point cloud feature is computed to form the L1-normalized loss function. The decoder directly adopts the point cloud decoder D_P trained in step S1, yielding the preliminary aircraft point cloud model;
The detailed process of step S6 comprises: using the image pose (R, t) from step S4 to translate and rotate the preliminary point cloud model generated in step S5, i.e. multiplying the pose with the preliminary point cloud, so that the preliminary point cloud model is transformed into the camera plane and the transformed aircraft point cloud model is obtained. This makes each binary map correspond to the 3D aircraft shape in the matching pose, realizing a one-to-one correspondence between point cloud and binary map:
$x'_i = R x_i + t, \quad i \in [0, N-1]$
where N denotes the number of points contained in the three-dimensional structure, and each point $x_i$ of the preliminary point cloud model yields $x'_i$, a point of the transformed point cloud model, after transformation by (R, t);
In step S7, the transformed aircraft point cloud model is fed into the point cloud encoder for encoding, which outputs the second aircraft spatial feature F_B; together with the latent binary map feature it forms the L1-normalized loss function, and the decoder directly adopts the binary map decoder D_I trained in step S2, yielding the recovered binary map of the aircraft;
The detailed process of steps S8 and S9 comprises: computing the mean square error loss between the recovered binary map and the true binary map of the aircraft. If the mean square error loss is less than the preset threshold, the transformed aircraft point cloud model generated in step S6 is output as the target three-dimensional reconstruction result of the aircraft; otherwise the mean square error loss is fed back to the image encoder and steps S5 to S9 are repeated until the optimal aircraft point cloud model is obtained as the target three-dimensional reconstruction result of the aircraft.
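Putting steps S5 to S9 together, the refinement loop of this embodiment could be sketched as follows, reusing the hypothetical modules from the sketches above (image_encoder, point_decoder, pc_encoder, binary_decoder, pose_net, euler_to_rotation); the threshold, learning rate and iteration cap are assumptions.

```python
# Sketch of the S5-S9 self-supervised loop: only the image encoder is
# optimized; the encoders/decoders pre-trained in S1 and S2 stay frozen.
import torch
import torch.nn.functional as F

def reconstruct(rgb, true_binary, image_encoder, point_decoder,
                pc_encoder, binary_decoder, pose_net,
                threshold=1e-3, max_iters=1000, lr=1e-4):
    optimizer = torch.optim.Adam(image_encoder.parameters(), lr=lr)
    with torch.no_grad():
        pose = pose_net(rgb)[0]                   # S4: (alpha, beta, gamma, a, b, c)
        R = torch.as_tensor(
            euler_to_rotation(*(float(v) for v in pose[:3])),
            dtype=rgb.dtype)
        t = pose[3:]
    transformed = None
    for _ in range(max_iters):
        f_p = image_encoder(rgb)                  # S5: first spatial feature F_P
        points = point_decoder(f_p)               # preliminary point cloud model
        transformed = points @ R.T + t            # S6: x'_i = R x_i + t
        f_b = pc_encoder(transformed)             # S7: second spatial feature F_B
        recovered = binary_decoder(f_b)           # recovered binary map
        mse = F.mse_loss(recovered, true_binary)  # S8: compare with true binary map
        if mse.item() < threshold:                # below threshold: stop and output
            break
        optimizer.zero_grad()                     # S9: feed the error back to the
        mse.backward()                            # image encoder and repeat S5-S9
        optimizer.step()
    return transformed.detach()                   # transformed point cloud model
```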

Claims (9)

1. A target three-dimensional reconstruction method based on self-supervised learning, characterized by comprising the following steps:
S1, training a point cloud autoencoder network to obtain a latent point cloud feature, wherein the point cloud autoencoder network comprises a point cloud self-encoder and a point cloud decoder D_P;
S2, training a binary map autoencoder network to obtain a latent binary map feature, wherein the binary map autoencoder network comprises a binary map self-encoder and a binary map decoder D_I;
S3, inputting an RGB image and obtaining a true binary map through binarization;
S4, performing feature extraction on the input RGB image with Pose net to obtain the image pose of the input RGB image;
S5, training an image encoder on the input RGB image to obtain a first spatial feature F_P, and generating a preliminary point cloud model from the input RGB image through the point cloud decoder D_P of step S1;
S6, translating and rotating the preliminary point cloud model according to the image pose to generate a transformed point cloud model;
S7, training a point cloud encoder on the transformed point cloud model to obtain a second spatial feature F_B, and generating a recovered binary map through the binary map decoder D_I of step S2;
S8, computing the mean square error between the recovered binary map and the true binary map; if the mean square error is less than a preset threshold, outputting the transformed point cloud model as the target three-dimensional reconstruction result of the input RGB image, otherwise executing step S9;
S9, feeding the mean square error back to the image encoder and returning to step S5.
2. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that the detailed process of obtaining the latent point cloud feature in step S1 is as follows:
S11, inputting true point cloud data into the point cloud self-encoder, and obtaining a B × N × 512 feature after 5 layers of 1-dimensional convolution;
S12, passing the B × N × 512 feature of step S11 through a max pooling layer to obtain the B × k latent point cloud feature, wherein k = 512.
3. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that the point cloud decoder D_P in step S1 comprises three fully connected layers and converts the latent point cloud feature into a B × N × 3 point cloud format.
4. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that the detailed process of obtaining the latent binary map feature in step S2 is as follows:
S21, binarizing the RGB image to obtain the true binary map, wherein binarization marks the locations where the pixel value is 0 with 0 and the locations where the pixel value is non-zero with 1;
S22, inputting the true binary map into the binary map self-encoder to obtain the latent binary map feature of dimension k, wherein k = 512.
5. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that the binary map decoder D_I in step S2 uses deconvolution operations to fill in image content, so that the image content is gradually enriched to recover the binary map.
6. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that in step S4 Pose net regresses the image viewing-angle information using a fully connected layer to obtain the image pose, the image viewing-angle information comprising six parameters (α, β, γ, a, b, c), wherein (α, β, γ) are the deflection angles, namely the yaw, pitch and roll angles, and (a, b, c) is the translation vector;
The image pose is (R, t), wherein R is the rotation matrix and t = (a, b, c); the deflection angles (α, β, γ) are converted to the rotation matrix R as follows (yaw-pitch-roll composition):
$$R = R_z(\alpha)\,R_y(\beta)\,R_x(\gamma) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}$$
7. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that step S5 specifically comprises the following steps:
S51, inputting the RGB image into the image encoder to obtain the first spatial feature F_P;
S52, forming a first loss function from the first spatial feature F_P and the latent point cloud feature;
S53, generating the preliminary point cloud model from the first loss function and the first spatial feature F_P using the point cloud decoder D_P.
8. The target three-dimensional reconstruction method based on self-supervised learning according to claim 1, characterized in that step S7 specifically comprises the following steps:
S71, inputting the transformed point cloud model into the point cloud encoder to obtain the second spatial feature F_B;
S72, forming a second loss function from the second spatial feature F_B and the latent binary map feature;
S73, generating the recovered binary map from the second loss function and the second spatial feature F_B using the binary map decoder D_I.
9. The target three-dimensional reconstruction method based on self-supervised learning according to claim 6, characterized in that generating the transformed point cloud model in step S6 specifically comprises multiplying the image pose with the preliminary point cloud model, thereby transforming the preliminary point cloud model into the camera plane:
$x'_i = R x_i + t, \quad i \in [0, N-1]$
wherein $x_i$ is a point of the preliminary point cloud model, $x'_i$ is a point of the transformed point cloud model, N denotes the number of points contained in the three-dimensional structure, and each point $x_i$ yields $x'_i$ after transformation by the image pose (R, t).
CN201910367420.6A 2019-05-05 2019-05-05 Target three-dimensional reconstruction method based on self-supervision learning Active CN110246212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910367420.6A CN110246212B (en) 2019-05-05 2019-05-05 Target three-dimensional reconstruction method based on self-supervision learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910367420.6A CN110246212B (en) 2019-05-05 2019-05-05 Target three-dimensional reconstruction method based on self-supervision learning

Publications (2)

Publication Number Publication Date
CN110246212A true CN110246212A (en) 2019-09-17
CN110246212B CN110246212B (en) 2023-02-07

Family

ID=67883786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910367420.6A Active CN110246212B (en) 2019-05-05 2019-05-05 Target three-dimensional reconstruction method based on self-supervision learning

Country Status (1)

Country Link
CN (1) CN110246212B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311664A (en) * 2020-03-03 2020-06-19 上海交通大学 Joint unsupervised estimation method and system for depth, pose and scene stream
CN112767468A (en) * 2021-02-05 2021-05-07 中国科学院深圳先进技术研究院 Self-supervision three-dimensional reconstruction method and system based on collaborative segmentation and data enhancement
CN113438481A (en) * 2020-03-23 2021-09-24 富士通株式会社 Training method, image coding method, image decoding method and device
CN113592913A (en) * 2021-08-09 2021-11-02 中国科学院深圳先进技术研究院 Method for eliminating uncertainty of self-supervision three-dimensional reconstruction

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1747559A (en) * 2005-07-29 2006-03-15 北京大学 Three-dimensional geometric mode building system and method
US20100309304A1 (en) * 2007-10-24 2010-12-09 Bernard Chalmond Method and Device for the Reconstruction of the Shape of an Object from a Sequence of Sectional Images of Said Object
CN107481313A (en) * 2017-08-18 2017-12-15 深圳市唯特视科技有限公司 A kind of dense three-dimensional object reconstruction method based on study available point cloud generation
CN108665499A (en) * 2018-05-04 2018-10-16 北京航空航天大学 A kind of low coverage aircraft pose measuring method based on parallax method
CN108694741A (en) * 2017-04-07 2018-10-23 杭州海康威视数字技术股份有限公司 A kind of three-dimensional rebuilding method and device
CN109087325A (en) * 2018-07-20 2018-12-25 成都指码科技有限公司 A kind of direct method point cloud three-dimensional reconstruction and scale based on monocular vision determines method
CN109583304A (en) * 2018-10-23 2019-04-05 宁波盈芯信息科技有限公司 A kind of quick 3D face point cloud generation method and device based on structure optical mode group

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1747559A (en) * 2005-07-29 2006-03-15 北京大学 Three-dimensional geometric mode building system and method
US20100309304A1 (en) * 2007-10-24 2010-12-09 Bernard Chalmond Method and Device for the Reconstruction of the Shape of an Object from a Sequence of Sectional Images of Said Object
CN108694741A (en) * 2017-04-07 2018-10-23 杭州海康威视数字技术股份有限公司 A kind of three-dimensional rebuilding method and device
CN107481313A (en) * 2017-08-18 2017-12-15 深圳市唯特视科技有限公司 A kind of dense three-dimensional object reconstruction method based on study available point cloud generation
CN108665499A (en) * 2018-05-04 2018-10-16 北京航空航天大学 A kind of low coverage aircraft pose measuring method based on parallax method
CN109087325A (en) * 2018-07-20 2018-12-25 成都指码科技有限公司 A kind of direct method point cloud three-dimensional reconstruction and scale based on monocular vision determines method
CN109583304A (en) * 2018-10-23 2019-04-05 宁波盈芯信息科技有限公司 A kind of quick 3D face point cloud generation method and device based on structure optical mode group

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANDRÉ HENN, GERHARD GRÖGER, VIKTOR STROH, LUTZ PLÜMER: "Model driven reconstruction of roofs from sparse LIDAR point clouds", ISPRS Journal of Photogrammetry and Remote Sensing *
THOMAS KRIJNEN, JAKOB BEETZ: "An IFC schema extension and binary serialization format to efficiently integrate point cloud data into building models", Advanced Engineering Informatics *
黄煜: "Robot vision autonomous localization method based on target features" (基于靶标特征的机器人视觉自主寻位方法), China Masters' Theses Full-text Database (Information Science and Technology) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311664A (en) * 2020-03-03 2020-06-19 上海交通大学 Joint unsupervised estimation method and system for depth, pose and scene stream
CN111311664B (en) * 2020-03-03 2023-04-21 上海交通大学 Combined unsupervised estimation method and system for depth, pose and scene flow
CN113438481A (en) * 2020-03-23 2021-09-24 富士通株式会社 Training method, image coding method, image decoding method and device
CN113438481B (en) * 2020-03-23 2024-04-12 富士通株式会社 Training method, image encoding method, image decoding method and device
CN112767468A (en) * 2021-02-05 2021-05-07 中国科学院深圳先进技术研究院 Self-supervision three-dimensional reconstruction method and system based on collaborative segmentation and data enhancement
WO2022166412A1 (en) * 2021-02-05 2022-08-11 中国科学院深圳先进技术研究院 Self-supervised three-dimensional reconstruction method and system based on collaborative segmentation and data enhancement
CN112767468B (en) * 2021-02-05 2023-11-03 中国科学院深圳先进技术研究院 Self-supervision three-dimensional reconstruction method and system based on collaborative segmentation and data enhancement
CN113592913A (en) * 2021-08-09 2021-11-02 中国科学院深圳先进技术研究院 Method for eliminating uncertainty of self-supervision three-dimensional reconstruction
CN113592913B (en) * 2021-08-09 2023-12-26 中国科学院深圳先进技术研究院 Method for eliminating uncertainty of self-supervision three-dimensional reconstruction

Also Published As

Publication number Publication date
CN110246212B (en) 2023-02-07

Similar Documents

Publication Publication Date Title
CN110246212A (en) A kind of target three-dimensional rebuilding method based on self-supervisory study
Mandikal et al. Dense 3d point cloud reconstruction using a deep pyramid network
Le et al. Pointgrid: A deep network for 3d shape understanding
CN110288695B (en) Single-frame image three-dimensional model surface reconstruction method based on deep learning
CN102592275B (en) Virtual viewpoint rendering method
CN111862101A (en) 3D point cloud semantic segmentation method under aerial view coding visual angle
CN112396703A (en) Single-image three-dimensional point cloud model reconstruction method
Zhang et al. Research on image processing technology of computer vision algorithm
CN108280858A (en) A kind of linear global camera motion method for parameter estimation in multiple view reconstruction
CN110827295A (en) Three-dimensional semantic segmentation method based on coupling of voxel model and color information
CN107993255A (en) A kind of dense optical flow method of estimation based on convolutional neural networks
CN102306386A (en) Method for quickly constructing third dimension tree model from single tree image
CN110889901B (en) Large-scene sparse point cloud BA optimization method based on distributed system
CN114463511A (en) 3D human body model reconstruction method based on Transformer decoder
CN110443883A (en) A kind of individual color image plane three-dimensional method for reconstructing based on dropblock
CN113077554A (en) Three-dimensional structured model reconstruction method based on any visual angle picture
CN106127743B (en) The method and system of automatic Reconstruction bidimensional image and threedimensional model accurate relative location
CN115951784A (en) Dressing human body motion capture and generation method based on double nerve radiation fields
CN106251281A (en) A kind of image morphing method based on shape interpolation
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning
CN116134491A (en) Multi-view neuro-human prediction using implicit differentiable renderers for facial expression, body posture morphology, and clothing performance capture
Liu et al. Facial image inpainting using attention-based multi-level generative network
Fang et al. One is all: Bridging the gap between neural radiance fields architectures with progressive volume distillation
CN108230431B (en) Human body action animation generation method and system of two-dimensional virtual image
CN103971397B (en) The global illumination method for drafting reduced based on virtual point source and sparse matrix

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant