CN106846485A - Indoor three-dimensional modeling method and device - Google Patents

Indoor three-dimensional modeling method and device

Info

Publication number
CN106846485A
CN106846485A (application CN201611269656.9A)
Authority
CN
China
Prior art keywords
three-dimensional model
indoor
action
route
UAV
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611269656.9A
Other languages
Chinese (zh)
Inventor
冯子钜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201611269656.9A priority Critical patent/CN106846485A/en
Publication of CN106846485A publication Critical patent/CN106846485A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention applies to the field of virtual reality and in particular relates to an indoor three-dimensional modeling method and device, comprising: collecting UAV positioning points, and building a partial three-dimensional map from the positioning points; shooting indoor images, and generating a three-dimensional model of the visible part from the indoor images; computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification; computing an action route from the complete three-dimensional model and the partial three-dimensional map; and driving the UAV to move automatically along the action route and computing the three-dimensional model of the next region. In embodiments of the present invention, because the UAV moves, maps, and models automatically, users can reconstruct three-dimensional models of large numbers of buildings more quickly; moreover, because manual surveying and modeling are avoided, labor costs are saved and the convenience of three-dimensional modeling is improved.

Description

Indoor three-dimensional modeling method and device
Technical field
The invention belongs to the field of virtual reality and in particular relates to an indoor three-dimensional modeling method and device.
Background technology
Indoor three-dimensional modeling technology has been widely used in fields such as architecture, civil engineering, tourism, and digital maps, and can greatly improve people's understanding of interior spaces. For example, in the real-estate domain, compared with several planar photographs or floor plans, a complete three-dimensional simulation model lets clients perceive the overall structure and every detail of a house more realistically, improving the efficiency of, and clients' satisfaction with, house purchasing.
However, traditional indoor three-dimensional modeling usually requires professionals to carry out large amounts of shooting and measurement of the interior space with expensive, custom-built equipment, assisted by manual work. This is not only inefficient, making it hard to model large numbers of interior buildings quickly, but also costly in both equipment and labor.
The content of the invention
The purpose of the embodiments of the present invention is to provide an indoor three-dimensional modeling method and device, intended to solve the problem that existing three-dimensional modeling techniques model slowly because of their low level of automation.
An embodiment of the present invention is realized as an indoor three-dimensional modeling method, comprising:
collecting UAV positioning points, and building a partial three-dimensional map from the positioning points;
shooting indoor images, and generating a three-dimensional model of the visible part from the indoor images;
computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification;
computing an action route from the complete three-dimensional model and the partial three-dimensional map;
driving the UAV to move automatically along the action route, and computing the three-dimensional model of the next region.
Another object of the embodiments of the present invention is to provide an indoor three-dimensional modeling device, comprising:
a map generation module, for collecting UAV positioning points and building a partial three-dimensional map from the positioning points;
a visible model generation module, for shooting indoor images and generating a three-dimensional model of the visible part from the indoor images;
a partial model generation module, for computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification;
an action route setting module, for computing an action route from the complete three-dimensional model and the partial three-dimensional map;
a global model generation module, for driving the UAV to move automatically along the action route and computing the three-dimensional model of the next region.
In embodiments of the present invention, because the UAV moves, maps, and models automatically, users can reconstruct three-dimensional models of large numbers of buildings more quickly; moreover, because manual surveying and modeling are avoided, labor costs are saved and the convenience of three-dimensional modeling is improved.
Brief description of the drawings
Fig. 1 is a flow chart of the indoor three-dimensional modeling method provided by an embodiment of the present invention;
Fig. 2 is a detailed flow chart of step S101 of the indoor three-dimensional modeling method provided by an embodiment of the present invention;
Fig. 3 is a detailed flow chart of step S102 of the indoor three-dimensional modeling method provided by an embodiment of the present invention;
Fig. 4 is a detailed flow chart of step S103 of the indoor three-dimensional modeling method provided by an embodiment of the present invention;
Fig. 5 is a structural block diagram of the indoor three-dimensional modeling device provided by an embodiment of the present invention.
Specific embodiment
In order to make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and are not intended to limit it.
Fig. 1 shows the flow of the indoor three-dimensional modeling method provided by an embodiment of the present invention, detailed as follows:
In S101, UAV positioning points are collected, and a partial three-dimensional map is built from the positioning points.
From start-up at its initial position until modeling ends, the UAV continually collects its own positioning points. From these points - point to line, line to surface, surface to volume - the three-dimensional space the UAV moves through can be sketched out. Construction starts from a very small partial three-dimensional space; as the UAV moves, the constructed space grows step by step, eventually yielding the three-dimensional map of the room it is in.
Fig. 2 shows the detailed flow of step S101 of the indoor three-dimensional modeling method provided by an embodiment of the present invention, as follows:
In S201, multiple positioning signals are collected.
In embodiments of the present invention, considering the positioning inaccuracies a single sensor exhibits in practice, as well as the possibility of sensor failure, positioning signals can be collected with multiple sensors. These may include a WiFi signal sensor, an inertial measurement unit, and a stereo camera. Where the budget allows, an optical lidar can be installed in place of the stereo camera. In summary, in this embodiment the UAV generally receives, at any given moment, at least two different positioning signals collected by its sensors, which improves positioning accuracy.
In S202, noise reduction is applied to the multiple positioning signals.
The raw positioning signals inevitably contain noise, which makes positioning inaccurate and in turn degrades subsequent route planning and modeling accuracy.
In embodiments of the present invention, the collected positioning signals are denoised by Gaussian filtering, that is, by smoothing with a Gaussian. The process first applies a Gaussian smoothing filter to suppress noise, then takes the second derivative, whose zero crossings determine edges. The Gaussian filter is the mathematical model through which the positioning signal undergoes an energy transform; the low-energy part, to which the noise belongs, is excluded.
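The Gaussian smoothing described above can be sketched as follows; the kernel size, sigma, and the sample track are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    # Discrete Gaussian kernel, normalized so its weights sum to 1.
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def denoise(signal: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    # Convolve the raw positioning samples with the Gaussian kernel;
    # mode="same" keeps the output length equal to the input length.
    return np.convolve(signal, gaussian_kernel(size, sigma), mode="same")

# Noisy 1-D positioning track (e.g. the x-coordinate of the UAV over time).
raw = np.array([0.0, 1.1, 1.9, 3.2, 3.9, 5.1, 6.0])
smooth = denoise(raw)
```

A real system would apply this per axis to each sensor's stream before the fusion step of S203.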
In S203, the denoised positioning signals are fused across sensors to generate a positioning point.
In embodiments of the present invention, since multiple positioning signals are used, multi-sensor fusion is needed to generate a single accurate positioning point.
In embodiments of the present invention, sensor fusion uses a Kalman filtering algorithm, merging the results of the multiple sensors to compute more accurate positioning and map information.
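A minimal one-dimensional illustration of the Kalman measurement update used in such fusion is sketched below; the sensor variances and measurements are hypothetical, and a real implementation would track full state vectors and covariance matrices:

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.
    x, p: current state estimate and its variance.
    z, r: new measurement and its variance."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected estimate pulled toward z
    p = (1 - k) * p          # uncertainty shrinks with each fusion
    return x, p

# Fuse a coarse WiFi fix and a finer IMU-derived fix into one anchor point.
x, p = 0.0, 1e6                          # uninformative prior
x, p = kalman_update(x, p, 10.4, 4.0)    # WiFi: z = 10.4 m, variance 4
x, p = kalman_update(x, p, 10.0, 1.0)    # IMU:  z = 10.0 m, variance 1
```

After both updates the estimate lies between the two measurements, weighted toward the more certain sensor, and its variance is below either sensor's alone.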
In S102, indoor images are shot, and a three-dimensional model of the visible part is generated from the indoor images.
In embodiments of the present invention, a depth map is first generated by passive ranging, and the three-dimensional model of the visible part is then generated by mesh reconstruction.
Fig. 3 shows the detailed flow of step S102 of the indoor three-dimensional modeling method provided by an embodiment of the present invention, as follows:
In S301, indoor images are shot from more than one position, generating more than one original image.
In embodiments of the present invention, three stereo cameras are mounted on each UAV and shoot the same target area; at any given moment, three original images of that target area are produced by the cameras.
In S302, a depth map is generated from the gray-level information of the original images and the geometric relations between them.
The embodiment of the present invention uses passive ranging: the three original images are produced by the three stereo cameras receiving light energy emitted from the scene, so the three images at each moment reflect the distribution of the scene's light energy - they are gray-level images - and the scene's depth information must then be recovered on the basis of these images.
Depth is recovered by generating a depth map from the gray-level information and imaging geometry of the three original images; depth information can also be estimated indirectly from the shading, texture, and motion features of the gray-level images.
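The patent does not spell out the ranging formula. Under the standard stereo-triangulation assumption (depth Z = f·B/d, for focal length f in pixels, baseline B, and disparity d between matched points in two of the images), a minimal sketch is:

```python
def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    # Standard stereo relation: Z = f * B / d.
    # disparity_px: horizontal offset of the same scene point between
    # two rectified images; focal_px: focal length in pixels;
    # baseline_m: distance between the two camera centers in meters.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 50 px disparity, 500 px focal length, 10 cm baseline -> 1 m depth.
z = depth_from_disparity(disparity_px=50.0, focal_px=500.0, baseline_m=0.1)
```

Running this per matched pixel over a rectified image pair yields the depth map used in S303; the specific numbers here are illustrative.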
In S303, the depth map is processed by mesh reconstruction, generating a three-dimensional model of the visible part.
The generated depth map is first converted into a point cloud, i.e. a set of image points on the target surface. The conversion method is as follows:
Suppose a world coordinate point M is (Xw, Yw, Zw) in the depth map; it is mapped to the image point m(u, v) by:
Zc·[u, v, 1]^T = K·[R | T]·[Xw, Yw, Zw, 1]^T, where K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]
where u, v are coordinates in the image coordinate system; u0, v0 is the image center; Xw, Yw, Zw is the three-dimensional point in world coordinates; Zc is the z-value in camera coordinates, i.e. the distance from the target to the camera; fx and fy are the focal lengths in pixels; and R and T are the rotation matrix and translation matrix of the extrinsic parameters.
Setting of the extrinsic matrix: since the world coordinate origin and the camera origin coincide, there is no rotation or translation, so R = I and T = 0.
The coordinate origins of the camera and world coordinate systems coincide, so the same object has the same depth in both frames, i.e. Zc = Zw. The transformation from image point [u, v]^T to world coordinate point [Xw, Yw, Zw]^T can then be computed as:
Xw = (u − u0)·Zc/fx,  Yw = (v − v0)·Zc/fy,  Zw = Zc.
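The back-projection above, under the text's assumption that camera and world frames coincide (R = I, T = 0, so Zw = Zc), can be sketched as follows; the intrinsics fx, fy, u0, v0 and the tiny depth map are illustrative values, not parameters from the patent:

```python
import numpy as np

def pixel_to_world(u, v, z_c, fx, fy, u0, v0):
    # Back-project image point (u, v) with depth z_c to a world point,
    # using Xw = (u - u0) * Zc / fx, Yw = (v - v0) * Zc / fy, Zw = Zc.
    x_w = (u - u0) * z_c / fx
    y_w = (v - v0) * z_c / fy
    return np.array([x_w, y_w, z_c])

def depth_map_to_cloud(depth, fx, fy, u0, v0):
    # Convert every valid (positive-depth) pixel into a 3-D point,
    # producing the point cloud handed to mesh reconstruction.
    h, w = depth.shape
    pts = [pixel_to_world(u, v, depth[v, u], fx, fy, u0, v0)
           for v in range(h) for u in range(w) if depth[v, u] > 0]
    return np.array(pts)

depth = np.array([[0.0, 2.0],            # tiny 2x2 depth map; 0 = no depth
                  [2.0, 2.0]])
cloud = depth_map_to_cloud(depth, fx=100.0, fy=100.0, u0=1.0, v0=1.0)
```

The pixel at the image center (u0, v0) with depth 2 maps to the point (0, 0, 2), as the formula predicts.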
After the point cloud is obtained, its points are classified according to the geometric and shape features of each vertex neighborhood, and the neighborhoods are adaptively adjusted so that each data point's neighborhood approximates the corresponding topological neighborhood of that point in the original model.
Finally, the mesh model is reconstructed in order from the first-class points to the second-class and third-class points, and then the boundary points.
In S103, a complete three-dimensional model is computed from the partial visible three-dimensional model by pattern classification.
Fig. 4 shows the detailed flow of step S103 of the indoor three-dimensional modeling method provided by an embodiment of the present invention, as follows:
In S401, features of the three-dimensional model of the visible part are extracted.
First, feature vectors are extracted from the already-constructed three-dimensional model of the visible part; in this embodiment, principal component analysis (PCA) performs feature extraction and dimensionality reduction on the visible-part model.
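A minimal numpy-only PCA sketch of this feature-extraction step is given below; the point set is an invented example, and a production system would run this on the reconstructed model's vertices:

```python
import numpy as np

def pca_features(points: np.ndarray, k: int = 2) -> np.ndarray:
    # Project a point set onto its top-k principal axes:
    # center the data, eigendecompose the covariance matrix,
    # and keep the k eigenvectors with the largest eigenvalues.
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    w, v = np.linalg.eigh(cov)       # eigenvalues in ascending order
    axes = v[:, ::-1][:, :k]         # top-k eigenvectors as columns
    return centered @ axes           # reduced-dimension features

# Vertices of a (hypothetical) visible-part model, mostly spread along x.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0],
                [2.0, -0.1, 0.0], [3.0, 0.0, 0.0]])
feats = pca_features(pts, k=2)
```

The first feature column carries the most variance, which is what makes the reduced vectors useful as classifier input in S403.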
In S402, prior information is called up and used as training data.
Prior information refers to the features of common indoor elements such as beds, cabinets, and windows. These features are called up and used as the training data for pattern classification.
In S403, a complete three-dimensional model is generated from the features of the visible-part model and the training data by pattern classification techniques.
In this embodiment, a support vector machine (SVM) performs pattern classification on the three-dimensional model of the visible part, so that the whole object can be determined from one of its parts, yielding a complete object - that is, a complete three-dimensional model.
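As a dependency-free illustration of this matching step, the sketch below substitutes a nearest-centroid rule for the SVM named in the text; the prior feature vectors for bed, cabinet, and window are invented for the example, and a real system would train an actual SVM on such priors:

```python
import numpy as np

# Hypothetical prior features for common indoor elements (the patent's
# "prior information"); in practice these would come from the PCA step.
PRIORS = {
    "bed":     np.array([2.0, 1.5, 0.5]),
    "cabinet": np.array([0.6, 0.4, 1.8]),
    "window":  np.array([1.2, 0.1, 1.0]),
}

def classify(feature: np.ndarray) -> str:
    # Match a visible-part feature vector to the closest prior element
    # (nearest-centroid stand-in for the SVM decision).
    return min(PRIORS, key=lambda name: np.linalg.norm(PRIORS[name] - feature))

label = classify(np.array([1.9, 1.4, 0.6]))
```

Once the partial model is labeled, the full prior shape for that element can be substituted in, completing the object.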
In S104, an action route is computed from the complete three-dimensional model and the partial three-dimensional map.
By the formula:
L(r) = Σ_{o∈O} Σ_{p∈o} [ Φ(p) + (1/(|r_p|·(|r_p|−1))) · Σ_{i,j∈r_p} C_ij(p) ] + Ψ(r)
the action route is computed, where L(r) denotes the cost of the action route r; O denotes the set of complete three-dimensional models computed so far; p denotes a feature point on a complete three-dimensional model; Φ(p) denotes the constraint function of p; r_p denotes the set of shooting positions on the action route from which p is visible; C_ij(p) denotes the consistency evaluation function of the original images shot at positions i and j; and Ψ(r) denotes the path constraint function.
Here C_ij(p) can be defined as
C_ij(p) = ρ( I_i(Ω(π_i(p))), I_j(Ω(π_j(p))) )
where ρ(f, g) is the similarity function of two vectors, π_i(p) is the projection of p on image i, Ω(x) is the image region surrounding point x, and I_i(x) is the function that extracts a feature vector from an image region.
By optimizing the cost function L, the optimal action route under the current situation can be computed.
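The cost L(r) defined above can be evaluated for a candidate route as in the sketch below; all the callables (phi, c, psi, visible_from) and the toy numbers are illustrative stand-ins for the patent's Φ, C_ij, Ψ, and r_p, and a real planner would minimize this over candidate routes:

```python
from itertools import permutations

def route_cost(objects, phi, c, psi, visible_from, route):
    # Evaluate L(r): per-point constraint phi(p), pairwise view-consistency
    # c(i, j, p) averaged over ordered position pairs in r_p, and path
    # constraint psi(route). visible_from(p, route) returns r_p, the
    # shooting positions on the route from which feature point p is visible.
    total = psi(route)
    for obj in objects:                        # o in O
        for p in obj:                          # p in o
            total += phi(p)
            rp = visible_from(p, route)
            if len(rp) > 1:
                norm = 1.0 / (len(rp) * (len(rp) - 1))
                total += norm * sum(c(i, j, p)
                                    for i, j in permutations(rp, 2))
    return total

# Toy instance: one object with two feature points, both visible from the
# whole two-position route; all terms constant (purely illustrative).
cost = route_cost(objects=[[0, 1]],
                  phi=lambda p: 1.0,
                  c=lambda i, j, p: 2.0,
                  psi=lambda r: 3.0,
                  visible_from=lambda p, r: r,
                  route=[0, 1])
```

The |r_p|·(|r_p|−1) divisor matches the number of ordered position pairs, so the consistency term is an average rather than growing with route length.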
In S105, the UAV is driven to move automatically along the action route, and the three-dimensional model of the next region is computed.
In this embodiment, after a complete three-dimensional model of one part is computed, the UAV moves along the computed optimal action route to the next position, where another complete partial model is computed. As the UAV flies, the complete three-dimensional models are superimposed one by one until the three-dimensional models of all the objects in the room have been constructed. The UAV stops flying once the indoor global three-dimensional model has been generated.
Corresponding to the indoor three-dimensional modeling method described in the foregoing embodiments, Fig. 5 shows the structural block diagram of the indoor three-dimensional modeling device provided by an embodiment of the present invention.
Referring to Fig. 5, the device includes:
a map generation module 501, for collecting UAV positioning points and building a partial three-dimensional map from the positioning points;
a visible model generation module 502, for shooting indoor images and generating a three-dimensional model of the visible part from the indoor images;
a partial model generation module 503, for computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification;
an action route setting module 504, for computing an action route from the complete three-dimensional model and the partial three-dimensional map;
a movement computation module 505, for driving the UAV to move automatically along the action route and computing the three-dimensional model of the next region.
Further, the map generation module includes:
a signal collection submodule, for collecting multiple positioning signals;
a noise reduction submodule, for applying noise reduction to the multiple positioning signals;
a positioning generation submodule, for fusing the denoised positioning signals across sensors to generate positioning points.
Further, the visible model generation module includes:
a shooting submodule, for shooting indoor images from more than one position, generating more than one original image;
a drawing submodule, for generating a depth map from the gray-level information of, and the geometric relations between, the original images;
a modeling submodule, for processing the depth map by mesh reconstruction to generate a three-dimensional model of the visible part.
Further, the partial model generation module includes:
a first computation subunit, for extracting features of the three-dimensional model of the visible part;
a calling subunit, for calling up prior information and using it as training data;
a second computation subunit, for generating a complete three-dimensional model from the features of the visible-part model and the training data by pattern classification techniques.
Further, the action route setting module computes, by the formula:
L(r) = Σ_{o∈O} Σ_{p∈o} [ Φ(p) + (1/(|r_p|·(|r_p|−1))) · Σ_{i,j∈r_p} C_ij(p) ] + Ψ(r)
the movement trajectory, where L(r) denotes the cost of the action route r; O denotes the set of complete three-dimensional models computed so far; p denotes a feature point on a complete three-dimensional model; Φ(p) denotes the constraint function of p; r_p denotes the set of shooting positions on the action route from which p is visible; C_ij(p) denotes the consistency evaluation function of the original images shot at positions i and j; and Ψ(r) denotes the path constraint function.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It is clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiment's solution.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash disk, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the claims.

Claims (10)

1. An indoor three-dimensional modeling method, characterized by comprising:
collecting UAV positioning points, and building a partial three-dimensional map from the positioning points;
shooting indoor images, and generating a three-dimensional model of the visible part from the indoor images;
computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification;
computing an action route from the complete three-dimensional model and the partial three-dimensional map;
driving the UAV to move automatically along the action route, and computing the three-dimensional model of the next region.
2. The method of claim 1, characterized in that collecting UAV positioning points comprises:
collecting multiple positioning signals;
applying noise reduction to the multiple positioning signals;
fusing the denoised positioning signals across sensors to generate a positioning point.
3. The method of claim 2, characterized in that shooting indoor images and generating a three-dimensional model of the visible part from the indoor images comprises:
shooting indoor images from more than one position, generating more than one original image;
generating a depth map from the gray-level information of, and the geometric relations between, the original images;
processing the depth map by mesh reconstruction to generate a three-dimensional model of the visible part.
4. The method of claim 1, characterized in that computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification comprises:
extracting features of the three-dimensional model of the visible part;
calling up prior information, and using the prior information as training data;
generating a complete three-dimensional model from the features of the visible-part model and the training data by pattern classification techniques.
5. The method of claim 1, characterized in that computing an action route from the complete three-dimensional model and the partial three-dimensional map comprises:
by the formula:
L(r) = Σ_{o∈O} Σ_{p∈o} [ Φ(p) + (1/(|r_p|·(|r_p|−1))) · Σ_{i,j∈r_p} C_ij(p) ] + Ψ(r)
computing the action route, where L(r) denotes the cost of the action route r; O denotes the set of complete three-dimensional models computed so far; p denotes a feature point on a complete three-dimensional model; Φ(p) denotes the constraint function of p; r_p denotes the set of shooting positions on the action route from which p is visible; C_ij(p) denotes the consistency evaluation function of the original images shot at positions i and j; and Ψ(r) denotes the path constraint function.
6. An indoor three-dimensional modeling device, characterized by comprising:
a map generation module, for collecting UAV positioning points and building a partial three-dimensional map from the positioning points;
a visible model generation module, for shooting indoor images and generating a three-dimensional model of the visible part from the indoor images;
a partial model generation module, for computing a complete three-dimensional model from the partial visible three-dimensional model by pattern classification;
an action route setting module, for computing an action route from the complete three-dimensional model and the partial three-dimensional map;
a movement computation module, for driving the UAV to move automatically along the action route and computing the three-dimensional model of the next region.
7. The device of claim 6, characterized in that the map generation module comprises:
a signal collection submodule, for collecting multiple positioning signals;
a noise reduction submodule, for applying noise reduction to the multiple positioning signals;
a positioning generation submodule, for fusing the denoised positioning signals across sensors to generate positioning points.
8. The device of claim 6, characterized in that the visible model generation module comprises:
a shooting submodule, for shooting indoor images from more than one position, generating more than one original image;
a drawing submodule, for generating a depth map from the gray-level information of, and the geometric relations between, the original images;
a modeling submodule, for processing the depth map by mesh reconstruction to generate a three-dimensional model of the visible part.
9. The device of claim 6, characterized in that the partial model generation module comprises:
a first computation subunit, for extracting features of the three-dimensional model of the visible part;
a calling subunit, for calling up prior information and using the prior information as training data;
a second computation subunit, for generating a complete three-dimensional model from the features of the visible-part model and the training data by pattern classification techniques.
10. The device of claim 6, characterized in that the action route setting module computes, by the formula:
L(r) = Σ_{o∈O} Σ_{p∈o} [ Φ(p) + (1/(|r_p|·(|r_p|−1))) · Σ_{i,j∈r_p} C_ij(p) ] + Ψ(r)
the movement trajectory, where L(r) denotes the cost of the action route r; O denotes the set of complete three-dimensional models computed so far; p denotes a feature point on a complete three-dimensional model; Φ(p) denotes the constraint function of p; r_p denotes the set of shooting positions on the action route from which p is visible; C_ij(p) denotes the consistency evaluation function of the original images shot at positions i and j; and Ψ(r) denotes the path constraint function.
CN201611269656.9A 2016-12-30 2016-12-30 Indoor three-dimensional modeling method and device Pending CN106846485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611269656.9A CN106846485A (en) 2016-12-30 2016-12-30 Indoor three-dimensional modeling method and device

Publications (1)

Publication Number Publication Date
CN106846485A true CN106846485A (en) 2017-06-13

Family

ID=59116991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611269656.9A Pending CN106846485A (en) Indoor three-dimensional modeling method and device

Country Status (1)

Country Link
CN (1) CN106846485A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341851A (en) * 2017-06-26 2017-11-10 深圳珠科创新技术有限公司 Real-time three-dimensional modeling method and system based on unmanned plane image data
CN108230361A (en) * 2016-12-22 2018-06-29 Tcl集团股份有限公司 Enhance target tracking method and system with unmanned plane detector and tracker fusion
CN108898657A (en) * 2018-05-14 2018-11-27 肇庆学院 A kind of robot three-dimensional based on planar grid model builds drawing method and system
CN109064545A (en) * 2018-06-06 2018-12-21 链家网(北京)科技有限公司 Method and device for data acquisition and model generation of a house
CN109635834A (en) * 2018-11-02 2019-04-16 中铁上海工程局集团有限公司 A kind of method and system that grid model intelligence is inlayed
CN109657403A (en) * 2019-01-07 2019-04-19 南京工业职业技术学院 A kind of three-dimensional live bridge modeling optimization method based on unmanned plane oblique photograph
CN110045750A (en) * 2019-05-13 2019-07-23 南京邮电大学 A kind of indoor scene building system and its implementation based on quadrotor drone
CN110926479A (en) * 2019-12-20 2020-03-27 杜明利 Method and system for automatically generating indoor three-dimensional navigation map model
CN111179413A (en) * 2019-12-19 2020-05-19 中建科技有限公司深圳分公司 Three-dimensional reconstruction method and device, terminal equipment and readable storage medium
US10872467B2 (en) 2018-06-06 2020-12-22 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
US11127202B2 (en) 2017-12-18 2021-09-21 Parthiv Krishna Search and rescue unmanned aerial system
WO2021202340A1 (en) * 2020-04-01 2021-10-07 Nec Laboratories America, Inc. Infrastructure-free tracking and response

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714262A (en) * 2009-12-10 2010-05-26 北京大学 Method for reconstructing three-dimensional scene of single image
CN103389103A (en) * 2013-07-03 2013-11-13 北京理工大学 Geographical environmental characteristic map construction and navigation method based on data mining
CN103926933A (en) * 2014-03-29 2014-07-16 北京航空航天大学 Indoor simultaneous locating and environment modeling method for unmanned aerial vehicle
CN104236548A (en) * 2014-09-12 2014-12-24 清华大学 Indoor autonomous navigation method for micro unmanned aerial vehicle
US20160360428A1 (en) * 2015-04-14 2016-12-08 ETAK Systems, LLC 3d modeling of cell sites to detect configuration and site changes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
连晓峰 (Lian Xiaofeng): "移动机器人及室内环境三维模型重建技术" [Mobile Robot and Indoor Environment 3D Model Reconstruction Technology], 31 August 2010, 国防工业出版社 (National Defense Industry Press) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230361A (en) * 2016-12-22 2018-06-29 Tcl集团股份有限公司 Enhance target tracking method and system with unmanned plane detector and tracker fusion
CN108230361B (en) * 2016-12-22 2022-01-18 Tcl科技集团股份有限公司 Method and system for enhancing target tracking by fusing unmanned aerial vehicle detector and tracker
CN107341851A (en) * 2017-06-26 2017-11-10 深圳珠科创新技术有限公司 Real-time three-dimensional modeling method and system based on unmanned plane image data
US11127202B2 (en) 2017-12-18 2021-09-21 Parthiv Krishna Search and rescue unmanned aerial system
CN108898657A (en) * 2018-05-14 2018-11-27 肇庆学院 A kind of robot three-dimensional based on planar grid model builds drawing method and system
CN108898657B (en) * 2018-05-14 2019-04-16 肇庆学院 A kind of robot three-dimensional based on planar grid model builds drawing method and system
US10872467B2 (en) 2018-06-06 2020-12-22 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
WO2019233445A1 (en) * 2018-06-06 2019-12-12 贝壳找房(北京)科技有限公司 Data collection and model generation method for house
CN109064545B (en) * 2018-06-06 2020-07-07 贝壳找房(北京)科技有限公司 Method and device for data acquisition and model generation of house
CN109064545A (en) * 2018-06-06 2018-12-21 链家网(北京)科技有限公司 Method and device for data acquisition and model generation of a house
CN109635834A (en) * 2018-11-02 2019-04-16 中铁上海工程局集团有限公司 A kind of method and system that grid model intelligence is inlayed
CN109657403A (en) * 2019-01-07 2019-04-19 南京工业职业技术学院 A kind of three-dimensional live bridge modeling optimization method based on unmanned plane oblique photograph
CN110045750A (en) * 2019-05-13 2019-07-23 南京邮电大学 A kind of indoor scene building system and its implementation based on quadrotor drone
CN111179413A (en) * 2019-12-19 2020-05-19 中建科技有限公司深圳分公司 Three-dimensional reconstruction method and device, terminal equipment and readable storage medium
CN111179413B (en) * 2019-12-19 2023-10-31 中建科技有限公司深圳分公司 Three-dimensional reconstruction method, device, terminal equipment and readable storage medium
CN110926479A (en) * 2019-12-20 2020-03-27 杜明利 Method and system for automatically generating indoor three-dimensional navigation map model
CN110926479B (en) * 2019-12-20 2023-04-28 杜明利 Method and system for automatically generating indoor three-dimensional navigation map model
WO2021202340A1 (en) * 2020-04-01 2021-10-07 Nec Laboratories America, Inc. Infrastructure-free tracking and response

Similar Documents

Publication Publication Date Title
CN106846485A (en) A kind of indoor three-dimensional modeling method and device
US11244189B2 (en) Systems and methods for extracting information about objects from scene information
WO2022188379A1 (en) Artificial intelligence system and method serving electric power robot
CN104376596B (en) A kind of three-dimensional scene structure modeling and register method based on single image
CN108225348A (en) Map building and the method and apparatus of movement entity positioning
CN108537876A (en) Three-dimensional rebuilding method, device, equipment based on depth camera and storage medium
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
CN104715471B (en) Target locating method and its device
CN109544677A (en) Indoor scene main structure method for reconstructing and system based on depth image key frame
CN107462892A (en) Mobile robot synchronous superposition method based on more sonacs
CN105856243A (en) Movable intelligent robot
CN107730519A (en) A kind of method and system of face two dimensional image to face three-dimensional reconstruction
CN106471544A (en) The system and method that threedimensional model produces
CN104346608A (en) Sparse depth map densing method and device
CN109064549B (en) Method for generating mark point detection model and method for detecting mark point
CN103413352A (en) Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN105911988A (en) Automatic drawing device and method
CN105989625A (en) Data processing method and apparatus
CN112833892B (en) Semantic mapping method based on track alignment
CN112818925A (en) Urban building and crown identification method
CN113568435B (en) Unmanned aerial vehicle autonomous flight situation perception trend based analysis method and system
CN111710040B (en) High-precision map construction method, system, terminal and storage medium
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
CN107622525A (en) Threedimensional model preparation method, apparatus and system
CN105243375A (en) Motion characteristics extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170613