CN107085824A - Polar view extraction method for a three-dimensional model - Google Patents

Polar view extraction method for a three-dimensional model Download PDF

Info

Publication number
CN107085824A
Authority
CN
China
Prior art keywords
point cloud
dimensional point
cloud model
pole
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710148535.7A
Other languages
Chinese (zh)
Inventor
邓杭
周燕
曾凡智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN201710148535.7A priority Critical patent/CN107085824A/en
Publication of CN107085824A publication Critical patent/CN107085824A/en
Pending legal-status Critical Current

Links

Classifications

    • G06T3/067
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects

Abstract

The present invention provides a polar view extraction method for a three-dimensional model. First, the three-dimensional point cloud model is pre-processed: the centroid and scale of the model are computed, and the model is translated and scaled in the rectangular coordinate system so that it is normalized in that system. Second, the scaled model is converted from the rectangular coordinate system to spherical coordinates, yielding the direction and distance attributes of each point of the model. Third, the spherical coordinates of the point set are mapped onto the pixel positions of the polar view, and the maximum distance of each pixel's sampled-distance set is computed as the ray sampling value of that direction interval. Finally, the maximum distances of all pixel sampled-distance sets are arranged into a two-dimensional sampling image, which is the polar view. The invention avoids the heavy computation, view redundancy, and large memory footprint caused by traditional multi-view extraction methods, and the extracted polar view can express the global spatial geometric features of the three-dimensional model.

Description

Polar view extraction method for a three-dimensional model
Technical field
The present invention relates to the field of three-dimensional model processing, and more specifically to a polar view extraction method for a three-dimensional model.
Background technology
With the rapid development of computer hardware and software, of three-dimensional modeling techniques, and of high-speed GPU graphics processing, the number of three-dimensional models has grown sharply, driving the fast development of three-dimensional model processing technology. As the fourth multimedia data type, three-dimensional models find ever wider application, and are widely used in industrial product design, medical biology, simulation, virtual reality, 3D games, 3D film and television, multimedia education systems, and other fields.
With the sharp increase in the number of three-dimensional models and their growing application demand, how to represent three-dimensional models of solid structure, extract their features, and analyze and process them accurately and scientifically has become an urgent problem. In recent years, view-based three-dimensional model analysis has been widely used: the solid structure of the model is analyzed, two-dimensional views of the model are extracted, and the views are then analyzed and processed. Traditional view extraction usually requires views from many angles; a large amount of computing resources and time must be spent during extraction to guarantee that the multiple two-dimensional views represent the model completely, and the multiple views carry a large amount of redundant information. Current view extraction methods are therefore far from sufficient for processing three-dimensional models in large batches, so improving the existing methods, or combining them with new theory to devise new view extraction methods, is of great significance and value for solving the current problems of three-dimensional model analysis and feature extraction.
The content of the invention
The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a polar view extraction method for a three-dimensional model. The method avoids the heavy computation, view redundancy, and large memory footprint caused by traditional multi-view extraction methods, and the extracted polar view can express the global spatial geometric features of the three-dimensional model, providing a good view-feature basis for three-dimensional model analysis.
In order to achieve the above object, the technical scheme of the present invention is as follows: a polar view extraction method for a three-dimensional model, characterized in that:
First, the three-dimensional point cloud model is pre-processed: the centroid and scale of the model are computed, and the model is translated and scaled in the rectangular coordinate system, so that the model is normalized in that system;
Secondly, spherical coordinates is transformed into by the three-dimensional point cloud model of scaling by being fastened in rectangular co-ordinate, and obtain three-dimensional point The direction of each point of cloud model and distance property;
Third, the spherical coordinates of the point set are mapped onto the pixel positions of the polar view, and the maximum distance of each pixel's sampled-distance set is computed as the ray sampling value of that direction interval;
Finally, the maximum distances of all pixel sampled-distance sets are arranged into a two-dimensional sampling image, which is the extracted polar view.
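As a concrete illustration, the four stages above can be sketched in a few lines of NumPy. This is a hedged sketch rather than the patented implementation: the formula images in the source are not legible, so the unit scaling uses the plain Euclidean norm, the polar angle θ is taken as 0 on the negative Z semi-axis as the text states, and the direction-to-pixel mapping is assumed to be a uniform quantization of [0, 2π) × [0, π]; the grid size n_u × n_v is an arbitrary choice.

```python
import numpy as np

def extract_polar_view(P: np.ndarray, nu: int = 64, nv: int = 32) -> np.ndarray:
    """Sketch of the four stages: normalize, convert to spherical
    coordinates, bin directions into pixels, keep the max distance.
    P is assumed to be an (N, 3) array of (x, y, z) points."""
    # Stage 1: normalize (centroid to origin, unit scale)
    P = P - P.mean(axis=0)
    P = P / np.linalg.norm(P, axis=1).max()
    # Stage 2: spherical coordinates (theta, phi, r), theta = 0 on -Z
    r = np.linalg.norm(P, axis=1)
    theta = np.arccos(np.clip(-P[:, 2] / np.where(r == 0, 1.0, r), -1.0, 1.0))
    phi = np.mod(np.arctan2(P[:, 1], P[:, 0]), 2 * np.pi)
    # Stage 3: map directions to pixel positions (u, v)
    u = np.minimum((phi / (2 * np.pi) * nu).astype(int), nu - 1)
    v = np.minimum((theta / np.pi * nv).astype(int), nv - 1)
    # Stage 4: per-pixel maximum sampled distance forms the polar view I
    I = np.zeros((nv, nu))
    np.maximum.at(I, (v, u), r)
    return I
```

Pixels that receive no point keep the value 0, matching the case where no spherical coordinate point maps onto a pixel position.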
In the above scheme, the polar view extraction method for a three-dimensional model of the present invention avoids the heavy computation, view redundancy, and large memory footprint caused by traditional multi-view extraction methods. The method extracts the view from a model whose centroid coincides with the coordinate origin; on the spherical coordinates, the directions of the points are mapped to polar view coordinates and the maximum sampling value is extracted. The polar view extracted by this method can express the global spatial structure of the three-dimensional model, providing a good view-feature basis for three-dimensional model analysis. Here, the polar view is the two-dimensional sampling image formed by the distances from the centroid to the intersections of the model with a group of sampling rays emitted outward from the centroid of the model.
Specifically, the method comprises the following steps:
Step S101: input the three-dimensional point cloud model P = {p_i(x_i, y_i, z_i) | i = 1, 2, ..., N};
Step S102: compute the centroid g(g_x, g_y, g_z) of the three-dimensional point cloud model, g_x = (1/N)∑_{i=1}^{N} x_i, g_y = (1/N)∑_{i=1}^{N} y_i, g_z = (1/N)∑_{i=1}^{N} z_i, and translate the model in the rectangular coordinate system by the centroid, p'_i = p_i − g, i = 1, 2, ..., N; after the translation the centroid of the model p'_i lies at the origin of the rectangular coordinate system;
Step S103: compute the zoom factor s = max_i(||p'_i||_2^2) of the three-dimensional point cloud model and scale the model to unit scale, p''_i = p'_i / s, i = 1, 2, ..., N;
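Steps S102 and S103 amount to centering and rescaling the point cloud. A minimal sketch, assuming the cloud is an (N, 3) NumPy array; note that the plain Euclidean norm is used for the zoom factor here, which yields a unit-scale model, whereas the formula in claim 2 reads as a squared norm.

```python
import numpy as np

def normalize_point_cloud(P: np.ndarray) -> np.ndarray:
    """Steps S102-S103: translate the centroid to the origin, then
    scale so the farthest point lies on the unit sphere."""
    g = P.mean(axis=0)                          # centroid g = (1/N) * sum(p_i)
    P_shift = P - g                             # p'_i = p_i - g
    s = np.linalg.norm(P_shift, axis=1).max()   # zoom factor (plain 2-norm assumed)
    return P_shift / s                          # p''_i = p'_i / s
```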
Step S104: convert the scaled three-dimensional point cloud model from the rectangular coordinate system to spherical coordinates Q = {q_i(θ_i, φ_i, r_i) | i = 1, 2, ..., N}; the conversion formula is r_i = ||p''_i||_2, θ_i = arccos(−z''_i / r_i), φ_i = arctan2(y''_i, x''_i), where θ ∈ [0, π] and the elevation angle θ is 0 on the negative semi-axis of the Z axis;
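The conversion of step S104 can be sketched as follows; the exact formula image is not legible in the source, so this is the standard Cartesian-to-spherical conversion, chosen to satisfy the stated convention that θ ∈ [0, π] with θ = 0 on the negative Z semi-axis.

```python
import numpy as np

def to_spherical(P: np.ndarray):
    """Step S104: convert normalized (N, 3) points to spherical
    coordinates (theta, phi, r), theta = 0 along -Z, theta = pi along +Z."""
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    r = np.linalg.norm(P, axis=1)
    # guard r == 0 to avoid division by zero; clip guards rounding error
    theta = np.arccos(np.clip(-z / np.where(r == 0, 1.0, r), -1.0, 1.0))
    phi = np.mod(np.arctan2(y, x), 2 * np.pi)   # azimuth in [0, 2*pi)
    return theta, phi, r
```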
Step S105: map the spherical coordinates Q onto the pixel positions (u, v) of the polar view; the mapping quantizes the directions (θ_i, φ_i) uniformly into the pixel grid, u = ⌊φ_i · n_u / (2π)⌋, v = ⌊θ_i · n_v / π⌋, where n_u and n_v are the width and height of the polar view, respectively; a pixel position (u, v) may receive one spherical coordinate point, several spherical coordinate points, or none at all;
Step S106: the sampled-distance set of each pixel (u, v) is D(u, v) = {r_i | the direction of p''_i maps to (u, v)}; the maximum of each pixel's sampled-distance set is taken as the pixel sampling value of the polar view, I(u, v) = max D(u, v), and the values are arranged into a two-dimensional sampling image as the polar view I;
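Steps S105 and S106 can be sketched as a single binning-and-maximum pass. The binning formula is an assumption, since the original mapping equation is not legible: directions are quantized uniformly into an n_v × n_u grid, and np.maximum.at keeps the largest sampled distance per pixel; empty pixels stay at 0.

```python
import numpy as np

def polar_view(theta, phi, r, nu: int = 64, nv: int = 32) -> np.ndarray:
    """Steps S105-S106: bin each point's direction into an nv x nu pixel
    grid (nu = width over phi, nv = height over theta) and keep, per
    pixel, the maximum sampled distance."""
    u = np.minimum((phi / (2 * np.pi) * nu).astype(int), nu - 1)
    v = np.minimum((theta / np.pi * nv).astype(int), nv - 1)
    I = np.zeros((nv, nu))            # pixels that receive no point stay 0
    np.maximum.at(I, (v, u), r)       # I(u, v) = max of the distance set
    return I
```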
The above pre-processing requires translating and scaling the three-dimensional model, guaranteeing that the model is normalized and standardized in scale. Converting the model point cloud to the spherical coordinate system makes it easy to map the spherical coordinate points to the corresponding pixel positions of the two-dimensional polar view; through this mapping, the maximum of the distance set of the points at each pixel position is computed, and the maximum sampling values form the two-dimensional sampling image, which is the new polar view of the three-dimensional model.
Compared with the prior art, the present invention has the following advantages and beneficial effects: the polar view extraction method avoids the heavy computation, view redundancy, and large memory footprint caused by traditional multi-view extraction methods; the polar view extracted by this method can express the global spatial geometric features of the three-dimensional model, providing a good view-feature basis for three-dimensional model analysis.
Brief description of the drawings
Fig. 1 is the flow chart of the polar view extraction method for a three-dimensional model of the present invention;
Fig. 2 is a schematic diagram of extracting a polar view from a three-dimensional model in the polar view extraction method of the present invention;
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
Embodiment
As shown in Fig. 1 and Fig. 2, the embodiment carries out the polar view extraction method for a three-dimensional model exactly as described above: the point cloud model is normalized (steps S101 to S103), converted to spherical coordinates (step S104), mapped onto the pixel positions of the polar view (step S105), and the per-pixel maximum sampled distances are arranged into the two-dimensional sampling image that constitutes the polar view (step S106).
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included within the protection scope of the present invention.

Claims (2)

1. A polar view extraction method for a three-dimensional model, characterized in that:
First, the three-dimensional point cloud model is pre-processed: the centroid and scale of the model are computed, and the model is translated and scaled in the rectangular coordinate system, so that the model is normalized in that system;
Second, the scaled three-dimensional point cloud model is converted from the rectangular coordinate system to spherical coordinates, yielding the direction and distance attributes of each point of the model;
Third, the spherical coordinates of the point set are mapped onto the pixel positions of the polar view, and the maximum distance of each pixel's sampled-distance set is computed as the ray sampling value of that direction interval;
Finally, the maximum distances of all pixel sampled-distance sets are arranged into a two-dimensional sampling image, which is the extracted polar view.
2. The polar view extraction method for a three-dimensional model according to claim 1, characterized in that it comprises the following steps:
Step S101: input the three-dimensional point cloud model P = {p_i(x_i, y_i, z_i) | i = 1, 2, ..., N};
Step S102: compute the centroid g(g_x, g_y, g_z) of the three-dimensional point cloud model according to the following formula, and translate the model in the rectangular coordinate system by the centroid, p'_i = p_i − g, i = 1, 2, ..., N; after the translation the centroid of the model p'_i lies at the origin of the rectangular coordinate system;
$$g_x = \frac{1}{N}\sum_{i=1}^{N} x_i,\qquad g_y = \frac{1}{N}\sum_{i=1}^{N} y_i,\qquad g_z = \frac{1}{N}\sum_{i=1}^{N} z_i;$$
Step S103: compute the zoom factor s of the three-dimensional point cloud model and scale the model to unit scale, p''_i = p'_i / s, i = 1, 2, ..., N, where the zoom factor s is
$$s = \max_i\left(\|p'_i\|_2^2\right);$$
Step S104: convert the scaled three-dimensional point cloud model from the rectangular coordinate system to spherical coordinates Q = {q_i(θ_i, φ_i, r_i) | i = 1, 2, ..., N}; the conversion formula is r_i = ||p''_i||_2, θ_i = arccos(−z''_i / r_i), φ_i = arctan2(y''_i, x''_i), where θ ∈ [0, π] and the elevation angle θ is 0 on the negative semi-axis of the Z axis;
Step S105: map the spherical coordinates Q onto the pixel positions (u, v) of the polar view, u = ⌊φ_i · n_u / (2π)⌋, v = ⌊θ_i · n_v / π⌋, where n_u and n_v are the width and height of the polar view, respectively; a pixel position (u, v) may receive one spherical coordinate point, several spherical coordinate points, or none at all;
Step S106: the sampled-distance set of each pixel (u, v) is D(u, v) = {r_i | the direction of p''_i maps to (u, v)}; the maximum of each pixel's sampled-distance set is taken as the pixel sampling value of the polar view, I(u, v) = max D(u, v), and the values are arranged into a two-dimensional sampling image as the polar view I;
CN201710148535.7A 2017-03-14 2017-03-14 Polar view extraction method for a three-dimensional model Pending CN107085824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710148535.7A CN107085824A (en) 2017-03-14 2017-03-14 Polar view extraction method for a three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710148535.7A CN107085824A (en) 2017-03-14 2017-03-14 Polar view extraction method for a three-dimensional model

Publications (1)

Publication Number Publication Date
CN107085824A true CN107085824A (en) 2017-08-22

Family

ID=59614574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710148535.7A Pending CN107085824A (en) 2017-03-14 2017-03-14 Polar view extraction method for a three-dimensional model

Country Status (1)

Country Link
CN (1) CN107085824A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230242A (en) * 2018-01-10 2018-06-29 大连理工大学 Method for converting a panoramic laser point cloud into a video stream
WO2019042028A1 (en) * 2017-09-01 2019-03-07 叠境数字科技(上海)有限公司 All-around spherical light field rendering method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411794A (en) * 2011-07-29 2012-04-11 南京大学 Output method of two-dimensional (2D) projection of three-dimensional (3D) model based on spherical harmonic transform
CN105243637A (en) * 2015-09-21 2016-01-13 武汉海达数云技术有限公司 Panorama image stitching method based on three-dimensional laser point cloud

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411794A (en) * 2011-07-29 2012-04-11 南京大学 Output method of two-dimensional (2D) projection of three-dimensional (3D) model based on spherical harmonic transform
CN105243637A (en) * 2015-09-21 2016-01-13 武汉海达数云技术有限公司 Panorama image stitching method based on three-dimensional laser point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAOGUANG SHI et al.: "DeepPano: Deep Panoramic Representation for 3-D Shape Recognition", IEEE Signal Processing Letters *
冯毅攀: "Research on View-based 3D Model Retrieval Technology", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019042028A1 (en) * 2017-09-01 2019-03-07 叠境数字科技(上海)有限公司 All-around spherical light field rendering method
GB2584753A (en) * 2017-09-01 2020-12-16 Plex Vr Digital Tech Shanghai Co Ltd All-around spherical light field rendering method
US10909752B2 (en) 2017-09-01 2021-02-02 Plex-Vr Digital Technology (Shanghai) Co., Ltd. All-around spherical light field rendering method
GB2584753B (en) * 2017-09-01 2021-05-26 Plex Vr Digital Tech Shanghai Co Ltd All-around spherical light field rendering method
CN108230242A (en) * 2018-01-10 2018-06-29 Method for converting a panoramic laser point cloud into a video stream

Similar Documents

Publication Publication Date Title
CN109544677B (en) Indoor scene main structure reconstruction method and system based on depth image key frame
CN103729885B Three-dimensional modeling method for freehand-drawn scenes combining multi-view projection registration
CN101599181A Real-time rendering method for algebraic B-spline surfaces
Komorkiewicz et al. Efficient hardware implementation of the Horn-Schunck algorithm for high-resolution real-time dense optical flow sensor
Ma et al. Pyramid ALKNet for semantic parsing of building facade image
Li et al. RGB-D image processing algorithm for target recognition and pose estimation of visual servo system
Li et al. A novel OpenMVS-based texture reconstruction method based on the fully automatic plane segmentation for 3D mesh models
CN104299255A (en) Three-dimensional terrain model rendering method
CN107085824A (en) Polar view extraction method for a three-dimensional model
Chen et al. Transmission line vibration damper detection using deep neural networks based on uav remote sensing image
EP3855386A2 (en) Method, apparatus, device and storage medium for transforming hairstyle and computer program product
Zhang et al. A method of optimizing terrain rendering using digital terrain analysis
Zheng et al. A Multi-Scale Rebar Detection Network with an Embedded Attention Mechanism
Lee et al. Performance evaluation of ground AR anchor with WebXR device API
Qi et al. Research on an insulator defect detection method based on improved yolov5
Teng et al. Pose estimation for straight wing aircraft based on consistent line clustering and planes intersection
Bi et al. 3-Dimensional modeling and simulation of the cloud based on cellular automata and particle system
Liu et al. Texture-cognition-based 3D building model generalization
Hu et al. Salient Preprocessing: Robotic ICP Pose Estimation Based on SIFT Features
Chen et al. 3d fast object detection based on discriminant images and dynamic distance threshold clustering
Zhao et al. A smooth transition algorithm for adjacent panoramic viewpoints using matched delaunay triangular patches
Lee et al. Hardware-based adaptive terrain mesh using temporal coherence for real-time landscape visualization
Oh et al. The Design of a 2D Graphics Accelerator for Embedded Systems
Zhil et al. One-shot Learning Classification and Recognition of Gesture Expression From the Egocentric Viewpoint in Intelligent Human-computer Interaction [J]
Chen et al. EPGNet: Enhanced point cloud generation for 3D object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170822