CN110021065A - Indoor environment reconstruction method based on a monocular camera - Google Patents

Indoor environment reconstruction method based on a monocular camera

Info

Publication number
CN110021065A
CN110021065A
Authority
CN
China
Prior art keywords
point
matching
image
characteristic point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910172508.2A
Other languages
Chinese (zh)
Inventor
杨晓春
王斌
席冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910172508.2A
Publication of CN110021065A
Status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an indoor environment reconstruction method based on a monocular camera. Photos of an indoor scene are taken from different angles and positions with a monocular camera, and the features of each image are extracted with the Harris corner detection algorithm to obtain the feature points of each image. Feature point matching is performed on pictures taken from nearby camera positions to obtain the matched feature point pair set of all images. Because mismatched feature point pairs exist in the matching, the mismatched pairs are eliminated from the matched set. Through structure from motion (SFM), a sparse point cloud is reconstructed from the photos after mismatch elimination. Dense point cloud reconstruction is then performed on the sparse point cloud to reconstruct all points in the scene and recover the indoor scene environment. By using a fast feature extraction algorithm, the invention reduces the complexity of common feature extraction methods; feature points are matched via a kd-tree, a sparse three-dimensional point cloud is reconstructed by geometric methods, and a dense reconstruction algorithm is applied afterwards, achieving a good reconstruction effect.

Description

Indoor environment reconstruction method based on a monocular camera
Technical field
The present invention relates to the field of three-dimensional reconstruction, and in particular to an indoor environment reconstruction method based on a monocular camera.
Background art
People perceive the world through three-dimensional information, yet from an image we can only recognize the two-dimensional information of things and cannot obtain their stereoscopic information. With the rapid development of virtual reality, augmented reality and autonomous driving, the technology of reconstructing three-dimensional perception from two-dimensional images is becoming more and more important: superimposing virtual three-dimensional models on the real environment, or automatically obtaining depth information from video capture in autonomous driving, are all inseparable from 2D-to-3D reconstruction technology.
Existing three-dimensional reconstruction technologies can be roughly divided into two classes: reconstruction based on geometric methods and reconstruction based on learning methods. Geometry-based three-dimensional reconstruction can be further divided into many methods such as monocular camera reconstruction, binocular camera reconstruction and depth camera reconstruction. Learning-based reconstruction currently relies mainly on depth estimation with convolutional neural networks (CNN) to obtain a three-dimensional point cloud. The limitation of geometry-based reconstruction, however, is its very large amount of computation, which makes real-time reconstruction performance mediocre; moreover, due to the complexity of indoor environments, the reconstruction effect is often not ideal.
Summary of the invention
The purpose of the invention is to address the deficiencies of the prior art by providing an indoor environment reconstruction method based on a monocular camera.
To achieve the above objective, the present invention is implemented according to the following technical scheme:
An indoor environment reconstruction method based on a monocular camera, comprising the following steps:
S1, photos of the indoor scene are taken from different angles and positions with a monocular camera, and the features of each image are extracted with the Harris corner detection algorithm to obtain the feature points of each image;
S2, feature point matching is performed on any two pictures taken from nearby camera positions to obtain the matched feature point pair set of all images;
S3, because mismatched feature point pairs exist in the matching, the mismatched feature point pairs are eliminated from the matched set;
S4, through structure from motion (SFM), a sparse point cloud is reconstructed from the photos after mismatch elimination;
S5, dense point cloud reconstruction is performed on the sparse point cloud, all points in the scene are reconstructed, and the indoor scene environment is recovered.
Further, the specific steps of S2 are as follows: for any two pictures taken from nearby camera positions, take the first picture as the reference picture and build the feature descriptors of the first picture into a kd-tree structure; then match the feature descriptors of the feature points of the second picture against the kd-tree of the first picture with the NCC matching algorithm; when the correlation coefficient of two feature points is greater than a set threshold, the two feature points are considered successfully matched.
Further, the specific steps of S3 are as follows:
S31, using the RANSAC algorithm, repeat M samplings of the matched feature point pairs in the matched set;
S32, select random samples composed of 8 groups of correspondences and compute the fundamental matrix F;
S33, compute the distance d for each group of hypothesized correspondences;
S34, determine the correspondences according to d, and then count the inliers consistent with F;
S35, select the F with the largest number of inliers; when the numbers are equal, select the F with the smallest standard deviation of the inliers. Matched feature point pairs that satisfy F are kept, and those that do not are discarded as mismatched points.
Further, the specific steps of S4 are as follows:
S41, epipolar geometry: let x and x′ be the pixels of a matched feature point pair in the two images; the pinhole model of the camera gives two equations:
s1x = KX,
s2x′ = K(RX + t),
Combining the two equations yields the fundamental matrix F and the essential matrix E, where
F = K⁻ᵀEK⁻¹,
E = t∧R,
Decomposing the essential matrix E gives the rotation matrix R and the translation matrix t between the two images, which determines the positional relationship between the two images;
S42, triangulation: from the rotation matrix R and the translation matrix t obtained in S41, determine the matrix [R|t]; the three-dimensional point in space and its corresponding projected point in each image give the equations:
s1x = K[I|0]X,
s2x′ = K[R|t]X,
Since everything except the variable X is known, combining the two equations yields the coordinates of the space point X;
S43, bundle adjustment: the error is optimized with the bundle adjustment algorithm, i.e. the computed three-dimensional coordinates are reprojected onto the images; because of the error they do not coincide with the actual pixel coordinates on the images. The difference between the reprojected coordinates and the original pixel coordinates is the optimization target of the system; the error is continually reduced by gradient descent, so that the obtained result is optimal.
Further, the specific steps of S5 are as follows:
S51, epipolar line search: for any pixel in the first image, denote the line connecting the point and the optical center of the camera as l; in the second image, the plane formed by the optical center of the second image and the line l intersects the second image in a straight line, which is the epipolar line. The feature point corresponding to the pixel of the first image should be searched along the epipolar line in the second image: traverse the epipolar line from one end to the other, and the point most similar to the pixel of the first image is recorded as the corresponding projected pixel of the same three-dimensional space point;
S52, block matching: take a w × w window around the pixel in the first image, then also take w × w windows along the epipolar line, and match the pixels within the windows to build the dense point cloud, thereby reconstructing all points in the scene and recovering the indoor scene environment.
Compared with the prior art: the traditional feature extraction method, the SIFT algorithm, often requires a large amount of computation, while the Harris corner detection algorithm adopted by the invention requires far less computation with a similar final effect. Combining nearest-neighbour distance with NCC matching yields a better matching effect, and with the subsequent mismatch-elimination step the matches can be regarded as essentially correct, so isolated errors do not affect the final result. The SFM implementation is close to existing methods and exploits the pinhole camera model and the principles of visual geometry. In the final dense reconstruction, to avoid the huge computation brought by brute-force matching, epipolar line search and block matching reconstruct all pixels well.
In summary, the invention reduces the complexity of common feature extraction methods by improving the feature extraction, performs feature point matching via a kd-tree, reconstructs a sparse three-dimensional point cloud by geometric methods, and afterwards applies a dense reconstruction algorithm, achieving a good reconstruction effect.
Brief description of the drawings
Fig. 1 is the flow chart of the invention.
Fig. 2 shows the window moving process in the Harris corner detection algorithm.
Fig. 3 shows the epipolar geometry in structure from motion (SFM).
Fig. 4 shows the epipolar line search in dense point cloud reconstruction.
Fig. 5 shows the data matching effect of the practical example.
Fig. 6 shows the sparse reconstruction effect of the practical example.
Fig. 7 shows the dense reconstruction effect of the practical example.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the embodiments. The specific embodiments described here only explain the present invention and are not intended to limit it.
As shown in Fig. 1, the indoor environment reconstruction method based on a monocular camera of this embodiment comprises the following steps:
S1, photos of the indoor scene are taken from different angles and positions with a monocular camera, and the features of each image are extracted with the Harris corner detection algorithm to obtain the feature points of each image;
S2, feature point matching is performed on any two pictures taken from nearby camera positions to obtain the matched feature point pair set of all images;
S3, because mismatched feature point pairs exist in the matching, the mismatched feature point pairs are eliminated from the matched set;
S4, through structure from motion (SFM), a sparse point cloud is reconstructed from the photos after mismatch elimination;
S5, dense point cloud reconstruction is performed on the sparse point cloud, all points in the scene are reconstructed, and the indoor scene environment is recovered.
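Before the individual techniques are detailed, the following Python outline strings steps S1 to S5 together. It is an illustrative sketch only: it relies on the helper functions sketched in the implementation sections below (harris_keypoints, match_features, filter_matches_ransac, relative_pose, triangulate), and the pairing of consecutive views and the known intrinsic matrix K are assumptions, not requirements stated by the patent.

    import cv2
    import numpy as np

    def reconstruct_sparse(image_paths, K):
        # S1: extract the Harris feature points of every image
        grays = [cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2GRAY) for p in image_paths]
        corners = [harris_keypoints(p) for p in image_paths]
        clouds = []
        for (g1, c1), (g2, c2) in zip(zip(grays, corners), zip(grays[1:], corners[1:])):
            pairs = match_features(g1, c1, g2, c2)             # S2: kd-tree + NCC
            pts1 = np.float32([p for p, _ in pairs])
            pts2 = np.float32([q for _, q in pairs])
            F, pts1, pts2 = filter_matches_ransac(pts1, pts2)  # S3: RANSAC
            R, t = relative_pose(F, K, pts1, pts2)             # S4: SFM pose
            clouds.append(triangulate(K, R, t, pts1, pts2))    # S4: sparse cloud
        return np.vstack(clouds)  # S5 then densifies this cloud pixel by pixel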
Implementation of the image feature extraction technique:
Features are extracted with the Harris corner algorithm. Create a small window and let it move over the image; the window delimits a small region of the image. If moving the window in any direction causes a large change of the gray values inside the window, the position before the move contains a corner, as in Fig. 2, where the third picture shows the window at a corner; the pixel there is a feature point.
For an image I(x, y), the self-similarity at a point (x, y) after a translation (Δx, Δy) can be given by the autocorrelation function:
c(x, y; Δx, Δy) = Σ w(u, v)·(I(u, v) − I(u + Δx, v + Δy))² (formula 1.1)
where w(u, v) is a weighting function; it can be a constant or a Gaussian weighting function.
According to the Taylor expansion, a first-order approximation of the image I(x, y) after the translation (Δx, Δy) is:
I(u + Δx, v + Δy) = I(u, v) + Ix(u, v)Δx + Iy(u, v)Δy + O(Δx², Δy²) ≈ I(u, v) + Ix(u, v)Δx + Iy(u, v)Δy (formula 1.2)
Combining formulas 1.1 and 1.2 yields a matrix M:
c(x, y; Δx, Δy) ≈ [Δx, Δy] M [Δx, Δy]ᵀ, with M = Σ w(u, v)·[Ix², IxIy; IxIy, Iy²] (formula 1.3)
From formula 1.3 the two eigenvalues λ1 and λ2 of the matrix M are obtained, which gives the following criteria. For a straight line in the image, one eigenvalue is large and the other small, λ1 >> λ2 or λ2 >> λ1: the function value is large in one direction and small in the other. For a flat region in the image, both eigenvalues are small and approximately equal: the function value is small in every direction. For a corner in the image, both eigenvalues are large and approximately equal, and the function increases in every direction. With this algorithm the feature points of all pictures can be extracted.
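To illustrate the criterion above, here is a minimal Python sketch that extracts Harris corners. The patent does not prescribe a library, so the use of OpenCV and the blockSize, ksize, k and threshold values are assumptions made for illustration.

    import cv2
    import numpy as np

    def harris_keypoints(image_path, block_size=2, ksize=3, k=0.04):
        img = cv2.imread(image_path)
        gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
        # Corner response built from the matrix M of formula 1.3:
        # R = det(M) - k * trace(M)^2, large at corners
        response = cv2.cornerHarris(gray, block_size, ksize, k)
        # Keep pixels whose response is a sizeable fraction of the maximum
        ys, xs = np.where(response > 0.01 * response.max())
        return np.stack([xs, ys], axis=1)  # (N, 2) corner coordinates (x, y)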
Implementation of the image feature point matching technique:
First construct the kd-tree of the feature descriptors of the reference picture: build the root node, corresponding to the hyperrectangular region that contains all sample points in the k-dimensional space; then recursively split the k-dimensional space to generate child nodes. On the hyperrectangular region (node), select a coordinate axis and a split point on this axis, and determine a hyperplane passing through the selected split point and perpendicular to the selected axis; this hyperplane cuts the current hyperrectangular region into a left and a right subregion (child nodes), and the samples are assigned to the two subregions. The process terminates when a subregion contains no samples (the node is then a leaf node). During the process, the samples are stored on the corresponding nodes.
The traditional method directly performs similarity matching (NCC) between each of the m feature corners of the first picture and each of the n corners of the second picture; the time complexity of this search is O(m·n), which is very high. Another option is the kd-tree nearest-neighbour matching algorithm, but matching by distance alone carries a certain error. Therefore the method combines kd-tree search with NCC matching (formula 1.4): the nearest point is found via the kd-tree, and the correlation coefficient of the two feature points is then checked with the NCC matching algorithm. When the vector distance between the two feature points is below a certain threshold and, at the same time, the correlation coefficient is above a certain threshold, the two feature points are considered successfully matched.
Nearest-neighbour search in the kd-tree: first search the binary tree (compare the value of the query point with that of the split node in the split dimension, descending into the left subtree branch if it is smaller or equal and into the right subtree branch otherwise, until a leaf node is reached); along this search path an approximate nearest neighbour is found quickly, namely the leaf node of the subspace containing the query point. Then backtrack along the search path and check whether the other child-node spaces of the nodes on the path may contain a data point closer to the query point; if so, move into the other child-node space to search (adding the other child nodes to the search path). Repeat this process until the search path is empty.
NCC(A, B) = Σ A(i, j)·B(i, j) / √(Σ A(i, j)² · Σ B(i, j)²) (formula 1.4), where · denotes element-wise multiplication of the values at corresponding positions.
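A minimal Python sketch of this combined kd-tree and NCC strategy follows; the flattened w × w gray patches used as feature descriptors, the patch size and both thresholds are assumptions, since the patent does not fix them.

    import numpy as np
    from scipy.spatial import cKDTree

    def ncc(a, b):
        # Formula 1.4: element-wise product, normalized by the patch energies
        return np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12)

    def patch_descriptors(gray, corners, w=7):
        r = w // 2
        descs, kept = [], []
        for x, y in corners:
            patch = gray[y - r:y + r + 1, x - r:x + r + 1]
            if patch.shape == (w, w):  # skip corners too close to the border
                descs.append(patch.ravel().astype(np.float32))
                kept.append((x, y))
        return np.array(descs), kept

    def match_features(gray1, corners1, gray2, corners2,
                       dist_thresh=50.0, ncc_thresh=0.9):
        d1, k1 = patch_descriptors(gray1, corners1)
        d2, k2 = patch_descriptors(gray2, corners2)
        tree = cKDTree(d1)                 # kd-tree over the reference descriptors
        dists, idxs = tree.query(d2, k=1)  # nearest neighbour of each query point
        pairs = []
        for j, (d, i) in enumerate(zip(dists, idxs)):
            # Accept only if the vector distance is small AND the NCC is high
            if d < dist_thresh and ncc(d1[i], d2[j]) > ncc_thresh:
                pairs.append((k1[i], k2[j]))
        return pairs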
Implementation of the technique for eliminating mismatched point pairs:
Through the first two steps, feature point extraction and similar feature point matching, a point pair match set is obtained. However, due to the influence of noise or matching errors, mismatched pairs inevitably arise, and the correctness of the matches directly affects the subsequent reconstruction effect, so obtaining a correct match set is extremely important. The mismatch elimination technique used by this system is the RANSAC algorithm, which is widely used in the computer vision field and achieves good results.
The procedure of the RANSAC algorithm as realized in this embodiment:
(1) extract the feature points of each image with the Harris algorithm;
(2) compute, via the matching technique, the set of matched point pairs between all images;
(3) RANSAC robust estimation: repeat M samplings, where M is determined by the adaptive method of the RANSAC algorithm;
(4) select random samples composed of 8 groups of correspondences and compute the fundamental matrix F;
(5) compute the distance d for each group of hypothesized correspondences;
(6) determine the correspondences according to d, and then count the inliers consistent with F;
(7) select the F with the largest number of inliers; when the numbers are equal, select the F with the smallest standard deviation of the inliers.
In short, the idea of RANSAC is to fit the largest sample subset rather than all the samples. In this embodiment, for example, only one fundamental matrix is needed among the M samplings: select the F matrix that the largest number of matched point pairs satisfy; the matched point pairs that satisfy F are kept, and those that do not are discarded as mismatched points.
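A condensed sketch of this mismatch-elimination step; delegating the repeated 8-point sampling, the distance test and the inlier counting to OpenCV's RANSAC-based estimator is an assumption made for brevity, and the 1.0-pixel threshold and 0.99 confidence are illustrative parameters.

    import cv2
    import numpy as np

    def filter_matches_ransac(pts1, pts2):
        pts1, pts2 = np.float32(pts1), np.float32(pts2)
        # FM_RANSAC repeatedly samples 8 correspondences, fits F, measures the
        # point-to-epipolar-line distance d and keeps the F with most inliers
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        inliers = mask.ravel().astype(bool)
        # Pairs consistent with F remain; the rest are discarded as mismatches
        return F, pts1[inliers], pts2[inliers]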
Implementation of the SFM (structure from motion) technique:
Step 1: epipolar geometry.
Through the preceding three techniques the system has already obtained good matched point pairs between any two pictures. A point pair in the two images satisfies the relationship shown in Fig. 3. The positional relationship between pictures 1 and 2 in the figure also represents the positional relationship between the cameras: the motion from photo 1 to photo 2 can be expressed by a rotation matrix R and a translation matrix t. Now consider a pixel feature point x on image 1 whose corresponding pixel feature point on image 2 is x′. These two pixels form one of the matched feature point pairs obtained before; if it is a correct match, the two points are also the projections of the same space point onto the two images. Since c and c′ are the optical centers of the two cameras, the rays cx and c′x′ in three-dimensional space ideally intersect at a point X, and c, c′ and X together form a triangular plane. From their geometric relationship, the relative position of the two frames and the three-dimensional space coordinates corresponding to the matched feature points between them can be computed, so the matched point pairs can be reconstructed well.
Concretely, for the pixels x and x′ in the two images, the pinhole model of the camera gives two equations:
s1x = KX (formula 1.5)
s2x′ = K(RX + t) (formula 1.6)
Combining formulas 1.5 and 1.6 yields the fundamental matrix F and the essential matrix E, where
F = K⁻ᵀEK⁻¹ (formula 1.7)
E = t∧R (formula 1.8)
with t∧ denoting the antisymmetric matrix of t. The rotation matrix R and translation matrix t between the two images are obtained by decomposing the essential matrix E, which determines the positional relationship between the two images.
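A minimal sketch of this decomposition step, assuming the intrinsic matrix K is known from calibration; cv2.recoverPose stands in for the decomposition of E described above.

    import cv2
    import numpy as np

    def relative_pose(F, K, pts1, pts2):
        E = K.T @ F @ K  # invert formula 1.7: E = K^T F K
        # recoverPose decomposes E into the candidate (R, t) pairs and keeps
        # the one placing the triangulated points in front of both cameras
        _, R, t, _ = cv2.recoverPose(E, np.float32(pts1), np.float32(pts2), K)
        return R, t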
Step 2: triangulation.
The matrix [R|t] is determined by the epipolar geometry of the previous step. The three-dimensional point in space and its corresponding projected point in each image give the relationships
s1x = K[I|0]X (formula 1.9)
s2x′ = K[R|t]X (formula 1.10)
Since everything except the variable X is known, combining the two equations yields the coordinates of the space point X. Because of the presence of noise, the R and t estimated in the previous step are not necessarily exact values, so the least squares method can also be used to obtain an optimal solution.
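A sketch of the triangulation, with P1 = K[I|0] and P2 = K[R|t]; OpenCV's triangulation solves formulas 1.9 and 1.10 jointly in a least-squares sense, matching the remark above.

    import cv2
    import numpy as np

    def triangulate(K, R, t, pts1, pts2):
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera: K[I|0]
        P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second camera: K[R|t]
        # Solves the two projection equations jointly for each point pair
        Xh = cv2.triangulatePoints(P1, P2, np.float32(pts1).T, np.float32(pts2).T)
        return (Xh[:3] / Xh[3]).T  # homogeneous -> (N, 3) space points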
Step 3: bundle adjustment.
Because of the presence of inevitable noise, the values calculated in steps 1 and 2 are not exact and all contain a certain error, so the system optimizes them by reducing the error. Bundle adjustment is used here to optimize the error: the computed three-dimensional coordinates are reprojected onto the images; because of the error, they do not coincide with the actual pixel coordinates on the images. The difference between the reprojected coordinates and the original pixel coordinates is the optimization target of the system; the error is continually reduced by gradient descent, so that the obtained result is optimal.
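The reprojection-error minimization can be sketched as below. As a simplifying assumption, only the three-dimensional points are refined and SciPy's least_squares stands in for the gradient descent mentioned in the text; a full bundle adjustment would also refine the camera poses.

    import numpy as np
    from scipy.optimize import least_squares

    def reprojection_residuals(X_flat, P1, P2, pts1, pts2):
        X = np.hstack([X_flat.reshape(-1, 3), np.ones((len(pts1), 1))])
        r = []
        for P, pts in ((P1, pts1), (P2, pts2)):
            proj = (P @ X.T).T
            proj = proj[:, :2] / proj[:, 2:]  # reproject and divide by depth
            r.append((proj - pts).ravel())    # reprojection error per pixel
        return np.concatenate(r)

    def refine_points(X0, P1, P2, pts1, pts2):
        # Iteratively reduce the total reprojection error, as the text describes
        res = least_squares(reprojection_residuals, X0.ravel(),
                            args=(P1, P2, np.float32(pts1), np.float32(pts2)))
        return res.x.reshape(-1, 3)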
Implementation of the dense point cloud reconstruction technique:
The feature points obtained by the feature extraction algorithm are pixels with distinctive characteristics, and such pixels account for only a small part of all the pixels of an image, so the three-dimensional point cloud reconstructed from feature points is sparse and cannot reflect the three-dimensional scene well; the dense point cloud reconstruction step is therefore indispensable.
Step 1: epipolar line search.
As shown in Fig. 4, the epipolar search process for two images 1 and 2 is as follows. For any pixel p1 in image 1, denote the line connecting the point and the optical center of the camera as l. In image 2, the plane formed by the optical center O2 of image 2 and the straight line l intersects the second image in a straight line, which is the epipolar line. The feature point corresponding to the pixel p1 should be searched along the epipolar line in image 2, which greatly reduces the huge computational overhead brought by traversing and searching the whole of image 2. Traverse the epipolar line from one end to the other; the point most similar to p1 is recorded as the corresponding projected pixel of the same three-dimensional space point as p1.
If there is no corresponding pixel, no intersection is produced.
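For a fundamental matrix F, the epipolar line of a pixel x in image 1 is l′ = Fx in image 2; a minimal sketch of this computation:

    import numpy as np

    def epipolar_line(F, x, y):
        # l' = F x: coefficients (a, b, c) of the line a*u + b*v + c = 0 in image 2
        l = F @ np.array([x, y, 1.0])
        return l / np.linalg.norm(l[:2])  # normalize so (a, b) is a unit normal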
Step 2: block matching.
One remaining problem is how to perform the matching in the first step: matching a single pixel is highly coincidental, because a pixel generally has many identical or similar pixels around it. In this system a w × w window is taken around the pixel, and w × w windows are also taken along the epipolar line; the pixels within the windows are then matched to build the dense point cloud, so that all points in the scene are reconstructed and the indoor scene environment is recovered.
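A sketch of the two dense-matching steps combined: slide a w × w window along the epipolar line in image 2 and keep the position with the highest NCC score against the reference window from image 1. The window size of 11 and the one-pixel step are assumptions made for illustration.

    import numpy as np

    def ncc(a, b):  # as in the matching sketch above
        return np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12)

    def match_along_epiline(gray1, gray2, x, y, line, w=11):
        r = w // 2
        ref = gray1[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
        a, b, c = line
        best_score, best_uv = -1.0, None
        for u in range(r, gray2.shape[1] - r):  # traverse the epipolar line
            if abs(b) < 1e-6:                   # near-vertical line: out of scope here
                break
            v = int(round(-(a * u + c) / b))    # v coordinate of the line at column u
            if v < r or v >= gray2.shape[0] - r:
                continue
            cand = gray2[v - r:v + r + 1, u - r:u + r + 1].astype(np.float32)
            score = ncc(ref, cand)
            if score > best_score:              # keep the most similar window
                best_score, best_uv = score, (u, v)
        return best_uv, best_score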
Specific practical example
Data pictures: the system test uses pictures of one corner of a desk in an indoor environment, shooting this corner from various angles with a camera. In general, the more pictures taken the better; since rotation and translation must be present, the system does not support photos shot from a single spot.
The effect of the feature extraction and matching operations is shown in Fig. 5.
In the sparse reconstruction, i.e. when the structure from motion algorithm is applied, only feature points are reconstructed, so the scene cannot be reconstructed to a truly realistic effect, as shown in Fig. 6.
Finally, dense reconstruction is performed, rebuilding every pixel, so the reconstructed model reflects the real three-dimensional scene well, as shown in Fig. 7.
The technical solution of the present invention is not limited to the above specific embodiments; every technical variation made according to the technical solution of the present invention falls within the protection scope of the present invention.

Claims (5)

1. An indoor environment reconstruction method based on a monocular camera, characterized by comprising the following steps:
S1, photos of the indoor scene are taken from different angles and positions with a monocular camera, and the features of each image are extracted with the Harris corner detection algorithm to obtain the feature points of each image;
S2, feature point matching is performed on any two pictures taken from nearby camera positions to obtain the matched feature point pair set of all images;
S3, because mismatched feature point pairs exist in the matching, the mismatched feature point pairs are eliminated from the matched set;
S4, through structure from motion (SFM), a sparse point cloud is reconstructed from the photos after mismatch elimination;
S5, dense point cloud reconstruction is performed on the sparse point cloud, all points in the scene are reconstructed, and the indoor scene environment is recovered.
2. The indoor environment reconstruction method based on a monocular camera according to claim 1, characterized in that the specific steps of S2 are as follows: for any two pictures taken from nearby camera positions, take the first picture as the reference picture and build the feature descriptors of the first picture into a kd-tree structure; then match the feature descriptors of the feature points of the second picture against the kd-tree of the first picture with the NCC matching algorithm; when the correlation coefficient of two feature points is greater than a set threshold, the two feature points are considered successfully matched.
3. The indoor environment reconstruction method based on a monocular camera according to claim 2, characterized in that the specific steps of S3 are as follows:
S31, using the RANSAC algorithm, repeat M samplings of the matched feature point pairs in the matched set;
S32, select random samples composed of 8 groups of correspondences and compute the fundamental matrix F;
S33, compute the distance d for each group of hypothesized correspondences;
S34, determine the correspondences according to d, and then count the inliers consistent with F;
S35, select the F with the largest number of inliers; when the numbers are equal, select the F with the smallest standard deviation of the inliers; matched feature point pairs that satisfy F are kept, and those that do not are discarded as mismatched points.
4. The indoor environment reconstruction method based on a monocular camera according to claim 3, characterized in that the specific steps of S4 are as follows:
S41, epipolar geometry: let x and x′ be the pixels of a matched feature point pair in the two images; the pinhole model of the camera gives two equations:
s1x = KX,
s2x′ = K(RX + t),
Combining the two equations yields the fundamental matrix F and the essential matrix E, where
F = K⁻ᵀEK⁻¹,
E = t∧R,
Decomposing the essential matrix E gives the rotation matrix R and the translation matrix t between the two images, which determines the positional relationship between the two images;
S42, triangulation: from the rotation matrix R and the translation matrix t obtained in S41, determine the matrix [R|t]; the three-dimensional point in space and its corresponding projected point in each image give the equations:
s1x = K[I|0]X,
s2x′ = K[R|t]X,
Since everything except the variable X is known, combining the two equations yields the coordinates of the space point X;
S43, bundle adjustment: the error is optimized with the bundle adjustment algorithm, i.e. the computed three-dimensional coordinates are reprojected onto the images; because of the error they do not coincide with the actual pixel coordinates on the images. The difference between the reprojected coordinates and the original pixel coordinates is the optimization target of the system; the error is continually reduced by gradient descent, so that the obtained result is optimal.
5. The indoor environment reconstruction method based on a monocular camera according to claim 4, characterized in that the specific steps of S5 are as follows:
S51, epipolar line search: for any pixel in the first image, denote the line connecting the point and the optical center of the camera as l; in the second image, the plane formed by the optical center of the second image and the line l intersects the second image in a straight line, which is the epipolar line. The feature point corresponding to the pixel of the first image should be searched along the epipolar line in the second image: traverse the epipolar line from one end to the other, and the point most similar to the pixel of the first image is recorded as the corresponding projected pixel of the same three-dimensional space point;
S52, block matching: take a w × w window around the pixel in the first image, then also take w × w windows along the epipolar line, and match the pixels within the windows to build the dense point cloud, thereby reconstructing all points in the scene and recovering the indoor scene environment.
CN201910172508.2A 2019-03-07 2019-03-07 Indoor environment reconstruction method based on a monocular camera Pending CN110021065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910172508.2A CN110021065A (en) 2019-03-07 2019-03-07 Indoor environment reconstruction method based on a monocular camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910172508.2A CN110021065A (en) 2019-03-07 2019-03-07 Indoor environment reconstruction method based on a monocular camera

Publications (1)

Publication Number Publication Date
CN110021065A (en) 2019-07-16

Family

ID=67189345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910172508.2A Pending CN110021065A (en) Indoor environment reconstruction method based on a monocular camera

Country Status (1)

Country Link
CN (1) CN110021065A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100315412A1 (en) * 2009-06-15 2010-12-16 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes
CN102435188A (en) * 2011-09-15 2012-05-02 南京航空航天大学 Monocular vision/inertia autonomous navigation method for indoor environment
CN102496183A (en) * 2011-11-03 2012-06-13 北京航空航天大学 Multi-view stereo reconstruction method based on Internet photo gallery
CN103646391A (en) * 2013-09-30 2014-03-19 浙江大学 Real-time camera tracking method for dynamically-changed scene
CN106097436A (en) * 2016-06-12 2016-11-09 广西大学 A kind of three-dimensional rebuilding method of large scene object
CN107274483A (en) * 2017-06-14 2017-10-20 广东工业大学 A kind of object dimensional model building method
CN107945268A (en) * 2017-12-15 2018-04-20 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910431A (en) * 2019-10-15 2020-03-24 西安理工大学 Monocular camera-based multi-view three-dimensional point set recovery method
CN111144478B (en) * 2019-12-25 2022-06-14 电子科技大学 Automatic detection method for through lens
CN111144478A (en) * 2019-12-25 2020-05-12 电子科技大学 Automatic detection method for through lens
CN111798505A (en) * 2020-05-27 2020-10-20 大连理工大学 Monocular vision-based dense point cloud reconstruction method and system for triangularized measurement depth
CN112102504A (en) * 2020-09-16 2020-12-18 成都威爱新经济技术研究院有限公司 Three-dimensional scene and two-dimensional image mixing method based on mixed reality
CN112634306A (en) * 2021-02-08 2021-04-09 福州大学 Automatic detection method for indoor available space
CN112837419B (en) * 2021-03-04 2022-06-24 浙江商汤科技开发有限公司 Point cloud model construction method, device, equipment and storage medium
JP2023519466A (en) * 2021-03-04 2023-05-11 チョーチアン センスタイム テクノロジー デベロップメント カンパニー,リミテッド POINT CLOUD MODEL CONSTRUCTION METHOD, APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM
CN112837419A (en) * 2021-03-04 2021-05-25 浙江商汤科技开发有限公司 Point cloud model construction method, device, equipment and storage medium
WO2022183657A1 (en) * 2021-03-04 2022-09-09 浙江商汤科技开发有限公司 Point cloud model construction method and apparatus, electronic device, storage medium, and program
KR20220125714A (en) * 2021-03-04 2022-09-14 저지앙 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Methods, devices, electronic devices, storage media and programs for building point cloud models
KR102638632B1 (en) * 2021-03-04 2024-02-20 저지앙 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Methods, devices, electronic devices, storage media and programs for building point cloud models
WO2022193976A1 (en) * 2021-03-16 2022-09-22 华为技术有限公司 Image depth prediction method and electronic device
CN113052880A (en) * 2021-03-19 2021-06-29 南京天巡遥感技术研究院有限公司 SFM sparse reconstruction method, system and application
CN113052880B (en) * 2021-03-19 2024-03-08 南京天巡遥感技术研究院有限公司 SFM sparse reconstruction method, system and application
CN113178005A (en) * 2021-05-26 2021-07-27 国网河南省电力公司南阳供电公司 Efficient photographing modeling method and device for power equipment
CN114492652A (en) * 2022-01-30 2022-05-13 广州文远知行科技有限公司 Outlier removing method and device, vehicle and storage medium
CN114492652B (en) * 2022-01-30 2024-05-28 广州文远知行科技有限公司 Outlier removing method and device, vehicle and storage medium
CN115063485A (en) * 2022-08-19 2022-09-16 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and computer-readable storage medium
CN115063485B (en) * 2022-08-19 2022-11-29 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and computer-readable storage medium
CN115115847B (en) * 2022-08-31 2022-12-16 海纳云物联科技有限公司 Three-dimensional sparse reconstruction method and device and electronic device
CN115115847A (en) * 2022-08-31 2022-09-27 海纳云物联科技有限公司 Three-dimensional sparse reconstruction method and device and electronic device

Similar Documents

Publication Publication Date Title
CN110021065A (en) Indoor environment reconstruction method based on a monocular camera
CN111968129B (en) Instant positioning and map construction system and method with semantic perception
CN111707281B (en) SLAM system based on luminosity information and ORB characteristics
CN109934862A (en) A kind of binocular vision SLAM method that dotted line feature combines
CN112288627B (en) Recognition-oriented low-resolution face image super-resolution method
CN110570522B (en) Multi-view three-dimensional reconstruction method
CN107329962B (en) Image retrieval database generation method, and method and device for enhancing reality
CN111724439A (en) Visual positioning method and device in dynamic scene
CN110580720B (en) Panorama-based camera pose estimation method
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN112562081B (en) Visual map construction method for visual layered positioning
CN108648264A (en) Underwater scene method for reconstructing based on exercise recovery and storage medium
CN106530407A (en) Three-dimensional panoramic splicing method, device and system for virtual reality
CN112418288A (en) GMS and motion detection-based dynamic vision SLAM method
CN114020953B (en) Multi-image retrieval method and device for appearance design product
CN110517309A (en) A kind of monocular depth information acquisition method based on convolutional neural networks
CN116664782A (en) Neural radiation field three-dimensional reconstruction method based on fusion voxels
CN112102504A (en) Three-dimensional scene and two-dimensional image mixing method based on mixed reality
CN115063542A (en) Geometric invariant prediction and model construction method and system
CN112200917A (en) High-precision augmented reality method and system
Zenati et al. Dense stereo matching with application to augmented reality
CN116228992A (en) Visual positioning method for different types of images based on visual positioning system model
CN116704205A (en) Visual positioning method and system integrating residual error network and channel attention
CN116109778A (en) Face three-dimensional reconstruction method based on deep learning, computer equipment and medium
CN113850293B (en) Positioning method based on multisource data and direction prior combined optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20190716