CN107146287A - Two-dimensional projection image to three-dimensional model mapping method - Google Patents
Two-dimensional projection image to three-dimensional model mapping method
- Publication number
- CN107146287A CN107146287A CN201710172933.2A CN201710172933A CN107146287A CN 107146287 A CN107146287 A CN 107146287A CN 201710172933 A CN201710172933 A CN 201710172933A CN 107146287 A CN107146287 A CN 107146287A
- Authority
- CN
- China
- Prior art keywords
- three-dimensional model
- axis
- bounding box
- positive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
Abstract
The invention discloses a method for mapping a two-dimensional projection image to a three-dimensional model, including: Step 1, obtaining the three-dimensional model with a 3D scanner; Step 2, preprocessing the three-dimensional model to obtain a preprocessed model; Step 3, normalizing the preprocessed model to obtain its bounding box; Step 4, constructing the OBB bounding box of the three-dimensional model; Step 5, establishing the correspondence between the two-dimensional projection image and the three-dimensional model. The invention overcomes the prior-art shortcoming that a three-dimensional model and its projection image are treated as largely independent of each other, effectively establishing the link between the two; it also overcomes the problem that both the three-dimensional model and the two-dimensional projection image can be scaled, resolving the coupling between image scaling and model scaling.
Description
Technical field
The invention belongs to the technical field of three-dimensional models and relates to a method for mapping a two-dimensional image to its corresponding three-dimensional model. With the present invention, operations on the two-dimensional image can be mapped directly onto the corresponding three-dimensional model itself.
Background technology
A three-dimensional geometric model reproduces the shape of a real-world object from a geometric point of view. With advances in 3D modeling, digitization, and visualization technology, research on three-dimensional geometric models has attracted growing attention, driven by application demands in industrial product design, film and entertainment, e-commerce, cultural-relic protection, urban planning, and other fields. Digital geometric information is regarded as the fourth class of digital media, after digital audio, images, and video. As digital production and consumption develop, ever more digital geometric information is generated, and there is an urgent need to process, analyze, and recognize this information so that it can be understood, used, and even reused to further increase productivity.
Two-dimensional projection methods are common in three-dimensional model retrieval and in related application fields. In three-dimensional model retrieval, the features of a three-dimensional model are often described by its two-dimensional projection images taken from different angles: if the multi-scale, multi-angle two-dimensional projections of two three-dimensional models have the same or similar feature vectors, the two models are generally considered identical or similar. This method and line of thought are by now highly mature. When a three-dimensional model changes, its two-dimensional projections change accordingly; however, changes made to the two-dimensional image obtained by projection have received little study.
The bounding-box technique for three-dimensional models is commonly used in collision detection. Its basic idea is to replace a complex, irregular solid with a simple geometric body and first test the bounding boxes of the objects coarsely: when the bounding boxes intersect, the enclosed solids may intersect; when the bounding boxes do not intersect, the enclosed solids certainly do not. In this way a large number of solids and geometric positions that cannot possibly intersect are excluded, so the intersecting geometric positions are found quickly.
The present invention mainly uses the bounding-box technique and two-dimensional projection to solve the problem of mapping a two-dimensional projection image to a three-dimensional model.
The content of the invention
It is an object of the present invention to provide a method for mapping a two-dimensional projection image to a three-dimensional model, solving the prior-art problem that two-dimensional projections and three-dimensional models lack any established link.
Existing, mature projection techniques for three-dimensional models merely project the model in various ways, while the connection between the projected two-dimensional image and the original three-dimensional model receives little attention.
This method performs related image operations on the two-dimensional projection image of a three-dimensional model and, through a series of matrix transformations, maps the corresponding image operations back onto the three-dimensional model. It effectively establishes the link between the two-dimensional image and the three-dimensional model, laying a foundation for further operations that simplify the three-dimensional model.
The technical solution adopted by the present invention is a method for mapping a two-dimensional projection image to a three-dimensional model, characterized by comprising the following steps:
Step 1, obtain the three-dimensional model with a 3D scanner;
Step 2, preprocess the three-dimensional model to obtain a preprocessed model;
Step 3, normalize the preprocessed three-dimensional model to obtain the bounding box of the model;
Step 4, take the center of the bounding box of the three-dimensional model as the origin O, the longitudinal direction of the bounding box as the X-axis, the transverse direction as the Y-axis, and the direction perpendicular to the XOY plane as the Z-axis, and establish the global coordinate system;
Move the XOY plane in the positive and negative directions of the Z-axis until it is tangent to the bounding box of the three-dimensional model, obtaining the two faces of the bounding box in the positive and negative Z directions;
Move the XOZ plane in the positive and negative directions of the Y-axis until it is tangent to the bounding box, obtaining the two faces of the bounding box in the positive and negative Y directions;
Move the YOZ plane in the positive and negative directions of the X-axis until it is tangent to the bounding box, obtaining the two faces of the bounding box in the positive and negative X directions;
The two faces in the positive and negative Z directions, the two faces in the positive and negative Y directions, and the two faces in the positive and negative X directions constitute the six faces of the OBB bounding box;
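Sliding each coordinate plane until it is tangent to the model amounts to taking the minimum and maximum vertex coordinates along the corresponding axis. A minimal sketch under that reading (the vertex data and function name are illustrative):

```python
def bounding_box_faces(vertices):
    """Return the six tangent planes of the box as {axis: (min, max)}.

    Moving the XOY plane along +/-Z until tangent to the model yields the
    planes z = min(z_i) and z = max(z_i); likewise for the other two axes.
    """
    faces = {}
    for axis, name in enumerate("xyz"):
        coords = [v[axis] for v in vertices]
        faces[name] = (min(coords), max(coords))
    return faces

# Toy model: the box faces lie at the coordinate extremes of its vertices.
verts = [(-1.0, 0.0, 2.0), (3.0, -2.0, 0.5), (1.0, 4.0, -1.0)]
faces = bounding_box_faces(verts)
assert faces["x"] == (-1.0, 3.0)
assert faces["y"] == (-2.0, 4.0)
assert faces["z"] == (-1.0, 2.0)
```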
Step 5, obtain the scale coordinates between the projection image and the three-dimensional model from the projection images of the six faces of the OBB bounding box, including:
Step 51, take any vertex of the OBB bounding box as the new coordinate origin O' and translate the origin of the global coordinate system to O', obtaining the bounding-box coordinate system;
Step 52, choose any face of the OBB bounding box as the current bounding-box face and project it onto the XO'Y plane of the bounding-box coordinate system, obtaining the projection image;
Step 53, choose any point M(x_m, y_m) on the projection image and obtain its scale coordinates (x_k, y_k) by formula (2):
x_k = x_m / W, y_k = y_m / L (2)
In formula (2), W represents the number of pixels in the X direction of the projection image, and L represents the number of pixels in its Y direction;
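Formula (2) normalizes a pixel position to resolution-independent scale coordinates in [0, 1]. The extracted formula shows W in both denominators, but since L is defined as the Y-direction pixel count, the sketch below divides y_m by L; that reading is an interpretive assumption.

```python
def scale_coords(x_m, y_m, width, length):
    """Map a pixel (x_m, y_m) on a width x length projection image to
    scale coordinates (x_k, y_k) in [0, 1]^2, per formula (2)."""
    return x_m / width, y_m / length

# A mark point at pixel (320, 120) on a 640 x 480 projection image:
assert scale_coords(320, 120, 640, 480) == (0.5, 0.25)
```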
Step 54, take any point on the projection image as a mark point and map the mark point of the projection image onto the three-dimensional model by formula (3):
P_i(x_p, y_p, z_i) | x_p = l_x * x_k, y_p = l_y * y_k, 0 ≤ z_i ≤ l_z (3)
In formula (3), P_i(x_p, y_p, z_i) are all the points satisfying the scale coordinates of the mark point in the projection image;
l_x is the length of the three-dimensional model along the X-axis under the global coordinate system, l_y its length along the Y-axis, and l_z its length along the Z-axis.
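Because the projection collapses the axis perpendicular to the chosen face, formula (3) maps a mark point not to a single 3D point but to a segment through the model: x and y are fixed by the scale coordinates, while z ranges over the model's extent. A sketch that samples that segment (the sampling count is an illustrative choice):

```python
def map_to_model(x_k, y_k, lx, ly, lz, samples=5):
    """Sample the points P_i of formula (3): x_p = lx * x_k and
    y_p = ly * y_k are fixed; z_i sweeps the model's Z extent [0, lz]."""
    xp, yp = lx * x_k, ly * y_k
    return [(xp, yp, i * lz / (samples - 1)) for i in range(samples)]

pts = map_to_model(0.5, 0.25, lx=2.0, ly=4.0, lz=1.0)
assert pts[0] == (1.0, 1.0, 0.0)    # point on the near face of the model
assert pts[-1] == (1.0, 1.0, 1.0)   # point on the far face of the model
```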
Further, the center point of the bounding box of the normalized three-dimensional model is obtained by formula (1):
μ = (1/(3n)) * Σ_{i=1}^{n} (x_i + y_i + z_i) (1)
In formula (1), n is the number of triangles on the normalized three-dimensional model, and x_i, y_i, z_i are the vertex coordinates of the i-th triangle, i = 1, 2, ..., n.
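As written, formula (1) averages all 3n coordinate values into a single scalar; the per-axis mean, which yields an actual center point, is shown alongside it for comparison. Both are sketches of the patent's statistic, not authoritative implementations.

```python
def formula_1(vertices):
    """mu = (1/(3n)) * sum_i (x_i + y_i + z_i): the grand mean of all
    coordinate values, read literally from formula (1)."""
    n = len(vertices)
    return sum(x + y + z for x, y, z in vertices) / (3 * n)

def barycenter(vertices):
    """Per-axis mean: the usual notion of a bounding-box center point."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

verts = [(0.0, 0.0, 0.0), (2.0, 4.0, 6.0)]
assert formula_1(verts) == 2.0
assert barycenter(verts) == (1.0, 2.0, 3.0)
```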
Further, the preprocessing of the three-dimensional model described in step 2 includes denoising.
The beneficial effects of the invention are:
(1) The mapping method of the present invention overcomes the prior-art shortcoming that a three-dimensional model and its projection image are largely independent of each other, effectively establishing the link between the two;
(2) The mapping method of the present invention overcomes the problem that both the three-dimensional model and its two-dimensional projection image can be scaled, resolving the coupling between image scaling and model scaling.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2(a) is the model before denoising; Fig. 2(b) is the model after denoising;
Fig. 3 is the three-dimensional model after normalization;
Fig. 4 is a schematic diagram of the OBB bounding box;
Fig. 5 is a schematic diagram of the bounding-box coordinate system;
Fig. 6(a) is the projection image; Fig. 6(b) is the image after marking;
Fig. 7(a) is an overall view of the experimental result; Fig. 7(b) is a partial enlarged view of the experimental result.
Embodiment
The invention will be further described below with reference to the drawings and an embodiment.
The present invention performs related image operations on the two-dimensional projection image of a three-dimensional model and, through a series of matrix transformations, maps the corresponding image operations back onto the three-dimensional model. It effectively establishes the link between the two-dimensional image and the three-dimensional model, laying a foundation for further operations that simplify the three-dimensional model.
Embodiment 1
This embodiment provides a method for mapping a two-dimensional projection image to a three-dimensional model, as shown in Fig. 1, comprising the following steps:
Step 1, obtain the three-dimensional model with a 3D scanner;
In this embodiment, the object is scanned with a Handyscan3D handheld three-dimensional laser scanner to obtain the three-dimensional model information;
Step 2, preprocess the three-dimensional model to obtain the preprocessed model;
The acquired three-dimensional model is denoised, as shown in Fig. 2(a) and Fig. 2(b).
Step 3, normalize the preprocessed three-dimensional model to obtain the bounding box of the model, as shown in Fig. 3.
Let the vertex set be V and the normal vector set be N.
Step 31, compute the barycenter of the three-dimensional model.
The barycenter is obtained by averaging the vertex coordinates of the model along the three axes.
Step 32, realization of translation invariance.
Translation invariance is realized by moving the model so that its barycenter C coincides with the coordinate origin. The model after translation is:
V1 = V - C, N1 = N
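Steps 31 and 32 together can be sketched as follows (a minimal plain-Python reading of V1 = V - C):

```python
def translate_to_origin(vertices):
    """Compute the barycenter C (step 31) and shift every vertex by -C
    (step 32), so the model's barycenter lands on the coordinate origin."""
    n = len(vertices)
    c = tuple(sum(v[i] for v in vertices) / n for i in range(3))
    v1 = [tuple(v[i] - c[i] for i in range(3)) for v in vertices]
    return v1, c

v1, c = translate_to_origin([(1.0, 1.0, 1.0), (3.0, 5.0, 7.0)])
assert c == (2.0, 3.0, 4.0)
assert v1 == [(-1.0, -2.0, -3.0), (1.0, 2.0, 3.0)]
```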
Step 33, realization of rotational invariance.
Rotational invariance is realized mainly through statistics over the model's vertices, from which a rotation matrix is obtained and then applied to the model.
First the influence matrix M is computed, where A is the total area of the model mesh and m is the number of model vertices. The three eigenvalues and eigenvectors of M are then found; combining the eigenvectors in order of decreasing eigenvalue gives the rotation matrix RM. Rotating the model with this matrix yields the vertex and normal sets:
V2 = RM * V1, N2 = RM * N1
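The exact area-weighted influence matrix M is not reproduced in this text, so the sketch below substitutes the plain vertex covariance matrix, an assumption, and otherwise follows step 33: take the eigenvectors, order them by decreasing eigenvalue, and stack them into a rotation matrix.

```python
import numpy as np

def rotation_matrix(vertices):
    """RM with rows = eigenvectors of the vertex covariance matrix, sorted
    by decreasing eigenvalue; applying it aligns the model's principal
    axes with the coordinate axes. (Plain covariance stands in for the
    patent's area-weighted influence matrix M.)"""
    v = np.asarray(vertices, dtype=float)
    cov = np.cov(v.T)                       # 3x3 symmetric covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]       # reorder to decreasing
    return eigvecs[:, order].T

rm = rotation_matrix([(0, 0, 0), (4, 1, 0), (8, -1, 1), (2, 0.5, 0.5)])
assert rm.shape == (3, 3)
assert np.allclose(rm @ rm.T, np.eye(3))    # rows form an orthonormal basis
```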
Step 34, realization of scaling invariance.
Scaling invariance is realized by statistics over the distribution of distances between the vertices of the model (after the above operations) and the barycenter, which yield the scale factor S_x; S_y and S_z are obtained analogously. With the resulting scaling matrix S, the zoom operation is applied to the model:
V3 = S * V2, N3 = N2
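The S_x formula itself is missing from this text; one plausible reading, shown purely as an assumption, scales each axis by the reciprocal of the mean absolute coordinate, so that the average vertex-to-barycenter distance along each axis becomes 1 (the model is already centered after step 32):

```python
def scale_factors(vertices):
    """Hypothetical per-axis scales (S_x, S_y, S_z): reciprocal of the
    mean absolute coordinate along each axis of the centered model."""
    n = len(vertices)
    return tuple(
        1.0 / (sum(abs(v[i]) for v in vertices) / n) for i in range(3)
    )

# Centered toy model: mean |x| = 2, mean |y| = 1, mean |z| = 4.
s = scale_factors([(-2.0, -1.0, -4.0), (2.0, 1.0, 4.0)])
assert s == (0.5, 1.0, 0.25)
```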
After normalization, the coordinate origin is the center of the bounding box;
Step 4, take the center of the bounding box of the three-dimensional model as the origin O, the longitudinal direction of the bounding box as the X-axis, the transverse direction as the Y-axis, and the direction perpendicular to the XOY plane as the Z-axis, and establish the global coordinate system;
The center point of the bounding box of the normalized three-dimensional model is obtained by formula (1):
μ = (1/(3n)) * Σ_{i=1}^{n} (x_i + y_i + z_i) (1)
In formula (1), n is the number of triangles on the normalized three-dimensional model, and x_i, y_i, z_i are the vertex coordinates of the i-th triangle, i = 1, 2, ..., n.
Move the XOY plane in the positive and negative directions of the Z-axis until it is tangent to the bounding box of the three-dimensional model, obtaining the two faces of the bounding box in the positive and negative Z directions;
Move the XOZ plane in the positive and negative directions of the Y-axis until it is tangent to the bounding box, obtaining the two faces of the bounding box in the positive and negative Y directions;
Move the YOZ plane in the positive and negative directions of the X-axis until it is tangent to the bounding box, obtaining the two faces of the bounding box in the positive and negative X directions;
The two faces in the positive and negative Z directions, the two faces in the positive and negative Y directions, and the two faces in the positive and negative X directions constitute the six faces of the OBB bounding box, as shown in Fig. 4;
Step 5, obtain the scale coordinates between the projection image and the three-dimensional model from the projection images of the six faces of the OBB bounding box, including:
Step 51, take any vertex of the OBB bounding box as the new coordinate origin O' and translate the origin of the global coordinate system to O', obtaining the bounding-box coordinate system, as shown in Fig. 5;
Step 52, choose any face of the OBB bounding box as the current bounding-box face and project it onto the XO'Y plane of the bounding-box coordinate system, obtaining the projection image, as shown in Fig. 6(a);
Step 53, choose any point M(x_m, y_m) on the projection image and obtain its scale coordinates (x_k, y_k) by formula (2):
x_k = x_m / W, y_k = y_m / L (2)
In formula (2), W represents the number of pixels in the X direction of the projection image, and L represents the number of pixels in its Y direction.
Step 54, take any point on the projection image as a mark point and map the mark point of the projection image onto the three-dimensional model by formula (3):
P_i(x_p, y_p, z_i) | x_p = l_x * x_k, y_p = l_y * y_k, 0 ≤ z_i ≤ l_z (3)
In formula (3), P_i(x_p, y_p, z_i) are all the points satisfying the scale coordinates of the mark point in the projection image;
l_x is the length of the three-dimensional model along the X-axis under the global coordinate system, l_y its length along the Y-axis, and l_z its length along the Z-axis.
Claims (3)
1. A method for mapping a two-dimensional projection image to a three-dimensional model, characterized by comprising the following steps:
Step 1, obtain the three-dimensional model with a 3D scanner;
Step 2, preprocess the three-dimensional model to obtain a preprocessed model;
Step 3, normalize the preprocessed three-dimensional model to obtain the bounding box of the model;
Step 4, take the center of the bounding box of the three-dimensional model as the origin O, the longitudinal direction of the bounding box as the X-axis, the transverse direction as the Y-axis, and the direction perpendicular to the XOY plane as the Z-axis, and establish the global coordinate system;
Move the XOY plane in the positive and negative directions of the Z-axis until it is tangent to the bounding box of the three-dimensional model, obtaining the two faces of the bounding box in the positive and negative Z directions;
Move the XOZ plane in the positive and negative directions of the Y-axis until it is tangent to the bounding box, obtaining the two faces of the bounding box in the positive and negative Y directions;
Move the YOZ plane in the positive and negative directions of the X-axis until it is tangent to the bounding box, obtaining the two faces of the bounding box in the positive and negative X directions;
The two faces in the positive and negative Z directions, the two faces in the positive and negative Y directions, and the two faces in the positive and negative X directions constitute the six faces of the OBB bounding box;
Step 5, obtain the scale coordinates between the projection image and the three-dimensional model from the projection images of the six faces of the OBB bounding box, including:
Step 51, take any vertex of the OBB bounding box as the new coordinate origin O' and translate the origin of the global coordinate system to O', obtaining the bounding-box coordinate system;
Step 52, choose any face of the OBB bounding box as the current bounding-box face and project it onto the XO'Y plane of the bounding-box coordinate system, obtaining the projection image;
Step 53, choose any point M(x_m, y_m) on the projection image and obtain its scale coordinates (x_k, y_k) by formula (2):
x_k = x_m / W, y_k = y_m / L (2)
In formula (2), W represents the number of pixels in the X direction of the projection image, and L represents the number of pixels in its Y direction;
Step 54, take any point on the projection image as a mark point and map the mark point of the projection image onto the three-dimensional model by formula (3):
P_i(x_p, y_p, z_i) | x_p = l_x * x_k, y_p = l_y * y_k, 0 ≤ z_i ≤ l_z (3)
In formula (3), P_i(x_p, y_p, z_i) are all the points satisfying the scale coordinates of the mark point in the projection image;
l_x is the length of the three-dimensional model along the X-axis under the global coordinate system, l_y its length along the Y-axis, and l_z its length along the Z-axis.
2. The method for mapping a two-dimensional projection image to a three-dimensional model as claimed in claim 1, characterized in that the center point of the bounding box of the normalized three-dimensional model is obtained by formula (1):
μ = (1/(3n)) * Σ_{i=1}^{n} (x_i + y_i + z_i) (1)
In formula (1), n is the number of triangles on the normalized three-dimensional model, and x_i, y_i, z_i are the coordinates of the vertices of the i-th triangle, i = 1, 2, ..., n.
3. The method for mapping a two-dimensional projection image to a three-dimensional model as claimed in claim 1, characterized in that the preprocessing of the three-dimensional model in step 2 includes denoising.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710172933.2A CN107146287B (en) | 2017-03-22 | 2017-03-22 | Two-dimensional projection image to three-dimensional model mapping method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107146287A true CN107146287A (en) | 2017-09-08 |
CN107146287B CN107146287B (en) | 2019-08-02 |
Family
ID=59783587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710172933.2A Active CN107146287B (en) | 2017-03-22 | 2017-03-22 | Two-dimensional projection image to three-dimensional model mapping method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107146287B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254343A (en) * | 2011-07-01 | 2011-11-23 | 浙江理工大学 | Convex hull and OBB (Oriented Bounding Box)-based three-dimensional grid model framework extracting method |
CN103116904A (en) * | 2012-10-23 | 2013-05-22 | 北京航空航天大学深圳研究院 | Two-dimensional feature extraction system and two-dimensional feature extraction method of three-dimensional model |
CN103729877A (en) * | 2013-12-18 | 2014-04-16 | 珠海金山网络游戏科技有限公司 | Method and system for constructing directed bounding box of three-dimensional model |
CN106503347A (en) * | 2016-10-25 | 2017-03-15 | 福州大学 | One kind is based on AABB and the adaptive compound OBBs algorithms of OBB |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108961785A (en) * | 2018-07-10 | 2018-12-07 | 杭州利艾智能科技有限公司 | A kind of system and method adjusting traffic control signal |
CN109631799A (en) * | 2019-01-09 | 2019-04-16 | 王红军 | A kind of intelligentized measurement and labeling method |
CN110766692A (en) * | 2019-11-05 | 2020-02-07 | 北京智拓视界科技有限责任公司 | Method, terminal and computer-readable storage medium for detecting protrusion in intestinal tract |
CN111127314A (en) * | 2019-11-05 | 2020-05-08 | 北京智拓视界科技有限责任公司 | Method, apparatus and storage medium for three-dimensional data planarization of brain |
CN113129423A (en) * | 2019-12-30 | 2021-07-16 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring three-dimensional model of vehicle, electronic equipment and storage medium |
CN113129423B (en) * | 2019-12-30 | 2023-08-11 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring three-dimensional model of vehicle, electronic equipment and storage medium |
CN112540711A (en) * | 2020-11-30 | 2021-03-23 | 国机工业互联网研究院(河南)有限公司 | Control method, device and equipment for selecting three-dimensional space object at webpage end |
CN113112535A (en) * | 2021-06-15 | 2021-07-13 | 四川大学 | Scale point cloud registration method and device based on directed bounding box |
Also Published As
Publication number | Publication date |
---|---|
CN107146287B (en) | 2019-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107146287A (en) | Two-dimensional projection image to three-dimensional model mapping method | |
Zhu et al. | Vpfnet: Improving 3d object detection with virtual point based lidar and stereo data fusion | |
JP4785880B2 (en) | System and method for 3D object recognition | |
CN102855663B (en) | Method for building CSG (Constructive Solid Geometry) model according to laser radar grid point cloud | |
CN104809755B (en) | Three-dimensional rebuilding method based on single image cultural relic ruins | |
CN107341846B (en) | Method and device for displaying large-scale three-dimensional reconstruction scene in real time | |
CN104574432B (en) | Three-dimensional face reconstruction method and three-dimensional face reconstruction system for automatic multi-view-angle face auto-shooting image | |
CN105046743A (en) | Super-high-resolution three dimensional reconstruction method based on global variation technology | |
CN107680154A (en) | Voxel geometric parameter extracting method based on view | |
CN102385750B (en) | Line matching method and line matching system on basis of geometrical relationship | |
WO2023093739A1 (en) | Multi-view three-dimensional reconstruction method | |
CN105243137A (en) | Draft-based three-dimensional model retrieval viewpoint selection method | |
CN102111562A (en) | Projection conversion method for three-dimensional model and device adopting same | |
CN108010084A (en) | A kind of depth camera is rebuild and method, system, the equipment of automatic Calibration | |
Zimny et al. | Multiplanenerf: Neural radiance field with non-trainable representation | |
CN104036547B (en) | Method and device for three-dimensional brush to generate network module in real time based on Leap Motion control | |
Wu et al. | Variational mannequin approximation using spheres and capsules | |
CN102110299A (en) | Method and device for projecting application distortion in three-dimensional model | |
Zhang et al. | Kinect-based universal range sensor for laboratory experiments | |
CN113432558A (en) | Device and method for measuring irregular object surface area based on laser | |
CN109191365B (en) | Three-dimensional space handwritten character dimension reduction method based on directed cuboid | |
CN102110298A (en) | Method and device for projecting three-dimensional model in virtual studio system | |
Lu et al. | 3D Reconstruction of Grabbed Objects Using a Single Image Based on Palletizing Task | |
Xia | Application of augmented reality technology in carton packaging structure design | |
Yang et al. | Mmwave radar and vision fusion for semantic 3D reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||