CN105023291A - Criminal scene reconstructing apparatus and method based on stereoscopic vision - Google Patents

Criminal scene reconstructing apparatus and method based on stereoscopic vision

Info

Publication number
CN105023291A
CN105023291A (application CN201510266373.8A)
Authority
CN
China
Prior art keywords
scene
algorithm
point
picture
host computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510266373.8A
Other languages
Chinese (zh)
Inventor
张立国
赵会宾
董旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN201510266373.8A priority Critical patent/CN105023291A/en
Publication of CN105023291A publication Critical patent/CN105023291A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a crime scene reconstruction apparatus and method based on stereoscopic vision. The apparatus is composed of a 360-degree panoramic pan-tilt head, a digital camera and a host computer. In the method, the digital camera captures the scene and uploads the pictures to the host computer, which processes them with a series of algorithms. CMVS and PMVS algorithms are used to handle large numbers of pictures: they divide the pictures into small subsets according to image characteristics, model each subset accurately, and finally fuse the subset models into the final reconstructed scene. The apparatus and method can provide accurate reconstruction information as required and model special positions and objects precisely, so as to build an intuitive, continuous and integral crime scene and provide a basis for solving criminal cases.

Description

Crime scene reconstruction apparatus and method based on stereoscopic vision
Technical field
The present invention relates to the field of image reconstruction, and in particular to an image reconstruction apparatus and method based on stereoscopic vision applied to crime scenes.
Background technology
Crime scene reconstruction is a new research topic in forensic scene investigation and has gradually attracted the attention of investigation circles in many countries since the 1990s. In practice, public security departments have applied scene reconstruction methods to the investigation and prosecution of cases and have successfully solved several major cases. Scene reconstruction is mainly a static description and presentation of the overall layout of the scene: the spatial environment and the positions and states of the objects in it. Capturing the state of a crime scene provides first-hand information for investigation and plays an extremely important role in determining the direction of case solving.
The stereoscopic-vision approach performs, through hardware and software design, a stereo reconstruction of the crime scene and builds an intuitive, continuous and complete model of the scene environment to support case solving. Most relatively mature crime scene reconstruction methods are still at the photographic stage: they rely on analysing photos collected on site and manually simulating the scene. The photos so collected are usually scattered rather than intuitive, the clues they provide are very limited, the operation is complicated, and expensive equipment and professionals are required. With the development of science and technology, virtual reality has been widely applied: a computer can generate and reproduce a three-dimensional virtual case environment and scene, giving investigators an immersive feeling. However, both of the above methods only reconstruct the overall on-site environment; they do not reproduce the real crime scene or accurately model key positions, so they cannot provide an accurate basis for solving the case, which hampers the investigation.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and to provide a stereoscopic-vision-based crime scene reconstruction apparatus and method that fuse stereoscopic vision and three-dimensional reconstruction to rebuild a three-dimensional model of the crime scene.
To achieve the above object, the following technical scheme is adopted:
The apparatus of the present invention is composed of a 360-degree panoramic pan-tilt head, a digital camera and a host computer. The digital camera is mounted on the pan-tilt head and connected to the host computer by a USB data cable. The camera captures a panoramic sphere of pictures, which are transferred to the host computer in real time; the host computer processes the pictures and reconstructs the scene.
In the method of the present invention, the digital camera captures a panoramic sphere of pictures, which are sent to the host computer in real time. On the host computer, a picture matching algorithm is applied to the acquired pictures: SIFT features are detected and matched, and the result is refined by bundle adjustment. After the camera parameters have been calibrated by bundle adjustment, the three-dimensional position of each feature point is computed; a three-dimensional reconstruction algorithm is then applied to these points to generate a dense point cloud, which is triangulated into a point cloud model, reconstructing the scene.
The picture matching algorithm uses SIFT feature detection and matching: exploiting the scale-invariant feature transform, it extracts the key points in each picture and then matches features between pairs of images by a ratio test on their similarity.
The three-dimensional reconstruction algorithm consists of the PMVS and CMVS algorithms. A dense point cloud is generated by PMVS/CMVS densification: using the matched points obtained by stereo matching and the relation between the world coordinate system and the camera coordinate system, the two-dimensional points are back-projected through bundle adjustment to three-dimensional coordinates in space, and a large number of matched points finally yields a three-dimensional point cloud model. The PMVS algorithm is based on patch expansion; the CMVS algorithm densifies the computed sparse point cloud.
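The back-projection of a matched two-dimensional point pair to a three-dimensional coordinate can be sketched with linear (DLT) triangulation from two camera projection matrices; the camera matrices and point below are illustrative values, not parameters from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : matched pixel coordinates (u, v) in each image.
    Returns the 3D point in non-homogeneous coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value (last row of Vt in NumPy's convention).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_rec = triangulate(P1, P2, x1, x2)
```

With noise-free projections the reconstructed point matches the original; real matches require the iterative refinement described above.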
The bundle adjustment iteratively runs a structure-from-motion (SfM) step that computes the camera parameters from the pixel correspondences obtained by SIFT matching, jointly refining the estimated camera parameters and the three-dimensional point coordinates.
The working process is roughly as follows:
The digital camera is mounted on the 360-degree pan-tilt head and used to photograph the crime scene in any desired direction; several pictures are taken of particular key positions for precise reconstruction. The acquired pictures are transferred to the host computer over the USB data cable, where SIFT features are detected and matched and the result is refined by bundle adjustment. Once the camera parameters have been calibrated, the three-dimensional position of each feature point is computed; the three-dimensional reconstruction algorithm then generates a dense point cloud from these points, the cloud is triangulated into a point cloud model, and the scene is reconstructed. In addition, parameters such as the rotation speed of the pan-tilt head can be controlled to select the baseline length.
Compared with the prior art, the present invention has the following advantages:
1. The stereoscopic-vision approach, through hardware and software design, performs a stereo reconstruction of the crime scene and builds an intuitive, continuous and complete scene environment, providing a more accurate basis for solving cases.
2. When processing the matching information, redundant pictures and mismatched points can be removed as required, which saves time and effectively improves accuracy.
Brief description of the drawings
Fig. 1 is a schematic wiring block diagram of the apparatus of the present invention.
Fig. 2 is the overall algorithm flow chart of the method of the present invention.
Fig. 3 is the stereo matching algorithm flow chart of the method of the present invention.
Fig. 4 is the flow chart of the bundle-adjustment core of the method of the present invention.
Fig. 5 is the flow chart of the patch-based PMVS algorithm of the method of the present invention.
Fig. 6 is the flow chart of the divide-and-reconstruct algorithm based on image-set division of the method of the present invention.
Fig. 7 shows experimental results for a simulated crime scene of the present invention.
Detailed description
The present invention is further described below with reference to the accompanying drawings:
As shown in the schematic wiring block diagram of Fig. 1, the apparatus of the present invention is composed of a 360-degree panoramic pan-tilt head, a digital camera and a host computer. The digital camera is mounted on the pan-tilt head and connected to the host computer by a USB data cable; the camera captures a panoramic sphere of pictures, which are transferred to the host computer in real time, and the host computer processes the pictures and reconstructs the scene.
In the method of the present invention, the digital camera captures a panoramic sphere of pictures, which are sent to the host computer in real time. On the host computer, a picture matching algorithm is applied to the acquired pictures: SIFT features are detected and matched, and the result is refined by bundle adjustment. After the camera parameters have been calibrated by bundle adjustment, the three-dimensional position of each feature point is computed; a three-dimensional reconstruction algorithm is then applied to these points to generate a dense point cloud, which is triangulated into a point cloud model, reconstructing the scene.
The picture matching algorithm uses SIFT feature detection and matching: exploiting the scale-invariant feature transform, it extracts the key points in each picture and then matches features between pairs of images by a ratio test on their similarity.
The three-dimensional reconstruction algorithm consists of the PMVS and CMVS algorithms. A dense point cloud is generated by PMVS/CMVS densification: using the matched points obtained by stereo matching and the relation between the world coordinate system and the camera coordinate system, the two-dimensional points are back-projected through bundle adjustment to three-dimensional coordinates in space, and a large number of matched points finally yields a three-dimensional point cloud model. The PMVS algorithm is based on patch expansion; the CMVS algorithm densifies the computed sparse point cloud.
The bundle adjustment iteratively runs a structure-from-motion (SfM) step that computes the camera parameters from the pixel correspondences obtained by SIFT matching, jointly refining the estimated camera parameters and the three-dimensional point coordinates. In this problem the coordinates of the three-dimensional points are unknown and the camera parameters are unknown; what is known is where each spatial point is imaged in the pictures. The algorithm therefore iteratively optimises the camera parameters and the three-dimensional point information simultaneously, minimising the total reprojection error of all three-dimensional points over their visible pictures to estimate the camera poses and point coordinates.
Since a crime scene is a three-dimensional space, in this invention the camera is mounted on a pan-tilt head so that scenes at arbitrary angles and in arbitrary directions can be photographed as the head rotates. To deal with problems such as repeated reconstruction and camera distortion that may occur in the scene, the algorithm is improved to eliminate them.
The invention is described further below with reference to the drawings.
Fig. 1 is the schematic wiring block diagram of the apparatus: the digital camera photographs the scene from multiple angles, taking any number of pictures, which are transferred over the USB data cable to the host computer for the next processing step.
Fig. 2 is the algorithm flow chart of the method: on the host computer, SIFT feature matching is applied to the pictures acquired by the digital camera and refined with bundle adjustment, the three-dimensional position of each feature point is computed, a dense point cloud is generated by the CMVS and PMVS densification algorithms, and the dense cloud is then triangulated into a point cloud model.
Fig. 3 is the stereo matching algorithm flow chart of the method: Fig. 3(a) is a stereo model, Fig. 3(b) shows the algorithm steps, and Fig. 3(c) a simple experimental result. Stereo matching finds the feature points in the left and right images obtained by two cameras and matches these features.
The first step of the SIFT feature extraction algorithm is to search the picture for features that exist stably across different scale spaces and to obtain their positions and correspondences.
The second step is to locate the extreme points precisely. The third step computes the orientation parameter of each key point: to achieve rotation invariance, the SIFT algorithm assigns each detected key point a principal direction, to which the feature description vector can later be aligned.
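The principal-direction assignment can be sketched as a histogram of gradient orientations over a patch around the key point. This is a simplified illustration (full SIFT also applies Gaussian weighting and peak interpolation, which are omitted here); the patch data is synthetic:

```python
import numpy as np

def dominant_orientation(patch, nbins=36):
    """Estimate a key point's principal direction from a grayscale patch
    as the peak of a 36-bin histogram of gradient orientations,
    weighted by gradient magnitude (simplified SIFT orientation step)."""
    gy, gx = np.gradient(patch.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # orientation in [0, 2*pi)
    hist, edges = np.histogram(ang, bins=nbins,
                               range=(0.0, 2 * np.pi), weights=mag)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])    # bin centre, in radians

# A patch whose intensity increases along x has gradients pointing along +x,
# so the dominant orientation should be near 0 radians.
patch = np.tile(np.arange(16, dtype=float), (16, 1))
theta = dominant_orientation(patch)
```

Rotating the descriptor grid by `theta` before sampling is what makes the final descriptor rotation-invariant.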
The final step of feature extraction computes the feature description vector. To realise rotation invariance, the coordinate system must be aligned to the key point's direction: the descriptor coordinates and gradient directions are rotated according to the key point's principal direction, and finally the description vector is normalised, serving as the basis for selecting features during matching.
After features have been extracted from every picture, the SIFT matching algorithm finds the matched points between images. SIFT uses the Euclidean distance as the criterion for matching image features, together with a ratio test: for each feature point of one image, the two nearest feature points in the other image are found, and if the ratio of the smallest distance to the second smallest is below a certain threshold, the pair is considered a match.
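The ratio test can be sketched as follows. The toy descriptors and the 0.8 threshold are illustrative (the patent does not state a threshold value):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Match two descriptor sets with the ratio test: accept a match only
    when the nearest neighbour is closer than `ratio` times the second
    nearest. Brute-force Euclidean distances; fine for small sets."""
    # Pairwise distances, shape (len(desc1), len(desc2)).
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]     # nearest and second nearest
        if row[j1] < ratio * row[j2]:
            matches.append((i, j1))
    return matches

# Toy descriptors: desc1[0] is unambiguously close to desc2[1], while
# desc1[1] has two near-equidistant neighbours and is rejected.
desc1 = np.array([[1.0, 0.0], [0.0, 1.0]])
desc2 = np.array([[0.5, 0.55], [1.0, 0.1], [0.5, 0.45]])
matches = ratio_test_matches(desc1, desc2)
```

Rejecting ambiguous nearest neighbours in this way is what removes most of the mismatched points mentioned in advantage 2 above.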
Fig. 4 is the flow chart of the bundle-adjustment core of the method. The coordinates of the spatial three-dimensional points are unknown and the camera parameters are unknown; what is known is where each spatial point is imaged in the pictures. Bundle adjustment therefore iteratively optimises the camera parameters and the three-dimensional point information simultaneously, minimising the total reprojection error of all three-dimensional points over their visible pictures to estimate the camera poses and the point coordinates.
Suppose there are m images and n three-dimensional points. If point j is visible to camera i, the projection relation gives:
x_ij = P_i · X_j,  i = 1, …, m,  j = 1, …, n
The projection matrix P_i in the formula is composed of the camera's intrinsic and extrinsic parameters. The extrinsic parameters correspond to the camera centre C and the orientation R of the principal plane, six unknowns in total. For the intrinsic parameters, the pixel aspect ratio is assumed to be 1 and the skew 0, so only the focal length f (in pixel units) remains variable. In the present invention the set of all camera parameters is represented by a vector C over the m cameras, and the set of all three-dimensional point coordinates by a vector X over the n points; q_ij denotes the pixel coordinate in image i of the feature corresponding to point j. The objective function is then:
g(C, X) = Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij · ||q_ij − P(C_i, X_j)||²
where w_ij = 1 indicates that point j is visible in camera i (and w_ij = 0 otherwise). The core idea of bundle adjustment is to iteratively optimise the camera parameters C and the point coordinates X so that the above objective function is minimised.
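The objective g(C, X) can be written out directly. For illustration only, each camera is represented here by a full 3×4 projection matrix rather than the (f, R, C) parameterisation above, and the data is synthetic:

```python
import numpy as np

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def g(cams, points, q, w):
    """Objective g(C, X) = sum_ij w_ij * ||q_ij - P(C_i, X_j)||^2.
    cams  : list of 3x4 projection matrices (standing in for C).
    points: (n, 3) array of 3D points X_j.
    q     : (m, n, 2) observed pixel coordinates q_ij.
    w     : (m, n) visibility weights (1 if point j is seen by camera i)."""
    total = 0.0
    for i, P in enumerate(cams):
        for j, X in enumerate(points):
            if w[i, j]:
                total += np.sum((q[i, j] - project(P, X)) ** 2)
    return total

# With noise-free observations the objective is zero at the true parameters;
# a bundle adjuster iteratively perturbs cams and points to drive g down.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
pts = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 5.0]])
q = np.array([[project(P, X) for X in pts] for P in [P1, P2]])
w = np.ones((2, 2))
residual = g([P1, P2], pts, q, w)
```

In practice this nonlinear least-squares problem is solved with a sparse Levenberg-Marquardt style solver rather than evaluated point by point.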
Fig. 5 is the flow chart of the patch-based PMVS algorithm of the method. PMVS divides the original image set into several small image sets that are computed separately. The algorithm partitions the image plane into 2×2 cells, and the goal of patch expansion is to reconstruct at least one patch in every pixel cell C_i(x, y): new patches are generated in neighbouring cells by tracking, in the visible pictures, the information of patches already reconstructed. The flow of the PMVS patch expansion algorithm is as follows.
(1) Find the image cells to be used for patch expansion. For a reconstructed patch p, look for neighbouring cells in all of its visible pictures to filter out the candidate expansion cell set C(p):
C(p) = { C_i(x', y') | p ∈ Q_i(x, y), |x − x'| + |y − y'| = 1 }
(2) Execute the patch expansion to generate a new patch. First initialise the new patch p': let n(p') = n(p), R(p') = R(p) and V(p') = V(p), and initialise its centre c(p'); then optimise c(p') and n(p'), add the newly visible pictures to V(p') and update V*(p'), which completes the expansion. Here c(p) is the patch centre, n(p) its unit normal vector towards the camera view, R(p) its reference image, V(p) its set of visible pictures, and Q_i(x, y) the set of patches in cell (x, y) of image i.
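The candidate-cell rule C(p) can be sketched as follows. The (image, x, y) cell encoding and the sample data are illustrative, not PMVS's actual data structures:

```python
def expansion_candidates(patch_cell, visible_images, reconstructed):
    """Candidate cells C(p) for patch expansion: the four neighbours
    (|x - x'| + |y - y'| = 1) of the cell holding patch p, in every image
    where p is visible, skipping cells that already hold a reconstructed
    patch. Cells are (image_index, x, y) triples; `reconstructed` is a
    set of such triples."""
    x, y = patch_cell
    candidates = set()
    for i in visible_images:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cell = (i, x + dx, y + dy)
            if cell not in reconstructed:
                candidates.add(cell)
    return candidates

# Patch in cell (3, 5), visible in images 0 and 1; one neighbour in
# image 0 is already reconstructed, leaving 7 of the 8 candidate cells.
cands = expansion_candidates((3, 5), visible_images=[0, 1],
                             reconstructed={(0, 4, 5)})
```

Each candidate cell then gets a new patch initialised and optimised as in step (2), so the reconstruction spreads outward from the initial sparse matches.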
Fig. 6 is the flow chart of the divide-and-reconstruct algorithm based on image-set division. A very large photo collection cannot be reconstructed all at once because of memory and similar limits, so the present invention divides the large set of photos into groups with little overlap, applies the reconstruction algorithm to each group, and then merges the results to obtain the final model. The image-set division of CMVS is based on the sparse point cloud computed by bundle adjustment; the basic idea is that after division the number of pictures in each subset is below an upper limit N_max and every sparse point P_i can be reconstructed in at least one subset. The input of the algorithm is the corrected image collection; the structure-from-motion reconstruction computes the camera poses and a sparse point cloud for each cluster, which the CMVS densification then processes. The aim of clustering is to find overlapping image sets such that every reconstruction point can be accurately rebuilt from some cluster, which is an advantage of this algorithm. The division follows three criteria: first, no subset contains redundant images, i.e. the cluster total Σ_k |C_k| is minimised; second, the number of pictures in each subset is small enough to meet the requirements of running PMVS; third, merging the reconstruction results of all subsets loses as little as possible compared with reconstructing the whole image set.
The overlapping clustering of the present invention is defined as follows:
minimise Σ_k |C_k|, subject to the constraints:
· ∀k: |C_k| ≤ α
· every sparse point P_i can be accurately reconstructed from at least one cluster C_k
For crime scene reconstruction it is sometimes necessary to recover accurate information about some part of the scene while the number of pictures may be very large; the above method makes such accurate modelling possible.
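The division under the constraints above can be sketched with a greedy pass over the sparse points' visibility sets. Real CMVS merges and splits clusters on the structure-from-motion visibility graph, so this is only an illustration of the size cap α and the coverage requirement; the data is synthetic:

```python
def split_with_overlap(point_visibility, alpha):
    """Greedy sketch of the image-set division: each sparse point's set of
    visible images is assigned to a cluster as a unit, and a new cluster
    is opened whenever adding the images would push the current cluster
    past the size cap `alpha`. Every point is covered by at least one
    cluster, and clusters overlap where consecutive points share images.
    Assumes each point's own visibility set fits within `alpha`.
    point_visibility: list of sets of image indices, one set per point."""
    clusters = [set()]
    for images in point_visibility:
        if len(clusters[-1] | images) <= alpha:
            clusters[-1] |= images      # still fits: extend current cluster
        else:
            clusters.append(set(images))  # open a new cluster for this point
    return clusters

# Five sparse points seen by overlapping image pairs, cap of 3 images.
vis = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 5}]
clusters = split_with_overlap(vis, alpha=3)
```

Each resulting cluster is then small enough to hand to PMVS independently, and the per-cluster dense clouds are merged into the final model.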
Fig. 7 shows the experimental results for a simulated crime scene. 54 views were used for the three-dimensional reconstruction: Fig. 7(a) shows the 54 pictures taken of the simulated scene; Fig. 7(b) the sparse spatial point cloud generated from the two-dimensional image points, containing 10,894 points; and Fig. 7(c) the dense point cloud generated by CMVS densification, a 48.8 MB cloud of 776,376 points. Finally, the combined CMVS and PMVS algorithms produce a dense three-dimensional model whose point cloud is 110 MB and contains 1,345,992 points. Fig. 7(d) to Fig. 7(m) are pictures of the generated model captured from ten different angles. The results show that the output is a three-dimensional stereo scene that can be observed from any angle, providing an intuitive, continuous stereo scene and a basis for solving the case.
The embodiment described above merely illustrates the preferred mode of the present invention and does not limit its scope. Without departing from the design spirit of the present invention, all variations and improvements that a person of ordinary skill in the art makes to the technical scheme of the present invention shall fall within the scope of protection determined by the claims of the present invention.

Claims (5)

1. A crime scene reconstruction apparatus based on stereoscopic vision, characterised in that: the apparatus is composed of a 360-degree panoramic pan-tilt head, a digital camera and a host computer; the digital camera is mounted on the pan-tilt head and connected to the host computer by a USB data cable; the digital camera captures a panoramic sphere of pictures, the pictures are transferred to the host computer in real time, and the host computer processes the pictures and reconstructs the scene.
2. A crime scene reconstruction method based on stereoscopic vision, characterised in that: a digital camera captures a panoramic sphere of pictures, which are sent to a host computer in real time; on the host computer a picture matching algorithm is applied to the acquired pictures, SIFT features are detected and matched, and the result is refined by bundle adjustment; after the camera parameters have been calibrated by bundle adjustment, the three-dimensional position of each feature point is computed; a three-dimensional reconstruction algorithm is applied to these points to generate a dense point cloud, which is triangulated into a point cloud model, reconstructing the scene.
3. The crime scene reconstruction method based on stereoscopic vision according to claim 2, characterised in that: the picture matching algorithm uses SIFT feature detection and matching, exploiting the scale-invariant feature transform to extract the key points in each picture and then matching features between pairs of images by a ratio test on their similarity.
4. The crime scene reconstruction method based on stereoscopic vision according to claim 2, characterised in that: the three-dimensional reconstruction algorithm consists of the PMVS and CMVS algorithms; a dense point cloud is generated by PMVS/CMVS densification, in which the matched points obtained by stereo matching and the relation between the world and camera coordinate systems are used to back-project two-dimensional points through bundle adjustment to three-dimensional coordinates in space, and a large number of matched points finally yields a three-dimensional point cloud model; the PMVS algorithm is based on patch expansion, and the CMVS algorithm densifies the computed sparse point cloud.
5. The crime scene reconstruction method based on stereoscopic vision according to claim 2 or 4, characterised in that: the bundle adjustment iteratively runs a structure-from-motion (SfM) step that computes the camera parameters from the pixel correspondences obtained by SIFT matching, jointly refining the estimated camera parameters and the three-dimensional point coordinates.
CN201510266373.8A 2015-05-22 2015-05-22 Criminal scene reconstructing apparatus and method based on stereoscopic vision Pending CN105023291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510266373.8A CN105023291A (en) 2015-05-22 2015-05-22 Criminal scene reconstructing apparatus and method based on stereoscopic vision

Publications (1)

Publication Number Publication Date
CN105023291A true CN105023291A (en) 2015-11-04

Family

ID=54413231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510266373.8A Pending CN105023291A (en) 2015-05-22 2015-05-22 Criminal scene reconstructing apparatus and method based on stereoscopic vision

Country Status (1)

Country Link
CN (1) CN105023291A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130129154A1 (en) * 2011-11-18 2013-05-23 Hailin Jin Methods and Apparatus for Detecting Poorly Conditioned Points in Bundle Adjustment
US20140270484A1 (en) * 2013-03-14 2014-09-18 Nec Laboratories America, Inc. Moving Object Localization in 3D Using a Single Camera
CN103533235A (en) * 2013-09-17 2014-01-22 北京航空航天大学 Quick digital panoramic device based on linear array charge coupled device (CCD) for great case/event scene

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
何豫航, 岳俊: "Research and implementation of multi-view dense matching based on CMVS/PMVS" (基于CMVS/PMVS多视角密集匹配方法的研究与实现), 《测绘地理信息》 *
张峰 et al.: "An image-based automatic 3D reconstruction system for large indoor scenes" (一种基于图像的室内大场景自动三维重建系统), 《自动化学报》 *
戴嘉境: "Research on the theory and algorithms of 3D reconstruction from multiple images" (基于多幅图像的三维重建理论及算法研究), master's thesis, 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507274A (en) * 2017-08-30 2017-12-22 北京图航科技有限公司 A kind of quick restoring method of public security criminal-scene three-dimensional live based on cloud computing
CN107655459A (en) * 2017-09-07 2018-02-02 南京理工大学 A kind of measurement of field rock texture surface roughness and computational methods
CN108734773A (en) * 2018-05-18 2018-11-02 中国科学院光电研究院 A kind of three-dimensional rebuilding method and system for mixing picture
CN110942511A (en) * 2019-11-20 2020-03-31 中国电子科技集团公司电子科学研究院 Indoor scene model reconstruction method and device
CN110942511B (en) * 2019-11-20 2022-12-16 中国电子科技集团公司电子科学研究院 Indoor scene model reconstruction method and device
CN112926362A (en) * 2019-12-06 2021-06-08 邓继红 Information analysis and storage system
CN111695424A (en) * 2020-05-07 2020-09-22 广东康云科技有限公司 Crime scene restoration method, crime scene restoration system and storage medium based on three-dimensional real scene
CN111986265A (en) * 2020-08-04 2020-11-24 禾多科技(北京)有限公司 Method, apparatus, electronic device and medium for calibrating camera
CN114390270A (en) * 2020-10-16 2022-04-22 中国移动通信集团设计院有限公司 Real-time intelligent site panoramic surveying method and device and electronic equipment
CN114390270B (en) * 2020-10-16 2023-08-15 中国移动通信集团设计院有限公司 Real-time intelligent site panorama exploration method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN105023291A (en) Criminal scene reconstructing apparatus and method based on stereoscopic vision
CN111063021B (en) Method and device for establishing three-dimensional reconstruction model of space moving target
Agarwal et al. Building rome in a day
Agarwal et al. Reconstructing rome
Gao et al. View-based 3D object retrieval: challenges and approaches
US9047706B1 (en) Aligning digital 3D models using synthetic images
Tao et al. Massive stereo-based DTM production for Mars on cloud computers
US20200057778A1 (en) Depth image pose search with a bootstrapped-created database
JP2020205048A (en) Object detection method based on deep learning network, apparatus, and electronic device
CN108734773A (en) A kind of three-dimensional rebuilding method and system for mixing picture
Cheng et al. Extracting three-dimensional (3D) spatial information from sequential oblique unmanned aerial system (UAS) imagery for digital surface modeling
Chen et al. Research on 3D reconstruction based on multiple views
Gupta et al. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones
CN102708589A (en) Three-dimensional target multi-viewpoint view modeling method on basis of feature clustering
Zhu et al. Large-scale architectural asset extraction from panoramic imagery
Li et al. Primitive fitting using deep geometric segmentation
Dong et al. Learning stratified 3D reconstruction
Chen et al. 3d reconstruction of spatial non cooperative target based on improved traditional algorithm
Ren et al. A SKETCH-BASED 3D MODELING METHOD FOR 3D CRIME SCENE PRESENTATION
Bandyopadhyay et al. RectiNet-v2: A stacked network architecture for document image dewarping
Hlubik et al. Advanced point cloud estimation based on multiple view geometry
Blanc et al. A semi-automatic tool to georeference historical landscape images
Sarkar et al. Feature-augmented Trained Models for 6DOF Object Recognition and Camera Calibration.
Flagg et al. Direct sampling of multiview line drawings for document retrieval
Ewerth et al. Using depth features to retrieve monocular video shots

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151104

WD01 Invention patent application deemed withdrawn after publication