CN110319776A - SLAM-based three-dimensional space distance measurement method and device - Google Patents
Info
- Publication number
- CN110319776A (application number CN201910596753.6A)
- Authority
- CN
- China
- Prior art keywords
- camera
- point
- true
- frame
- depth value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S11/00—Systems for determining distance or velocity not using reflection or reradiation
- G01S11/12—Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20228—Disparity calculation for image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
This application provides a SLAM-based three-dimensional space distance measurement method and device, wherein the method includes: obtaining the intrinsic parameters of a camera; performing de-jitter processing on a video to be processed to obtain a processed video; for the processed video, calculating an initial depth value of the true three-dimensional point corresponding to the camera image; obtaining the extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value; and calculating the spatial distance of the true three-dimensional point according to the extrinsic parameters. The SLAM-based three-dimensional space distance measurement method and device provided by this application can effectively solve the problems of inaccurate depth estimation in existing distance measurement methods and unstable inter-frame features caused by video jitter.
Description
Technical field
This application relates to the technical field of computer vision, and in particular to a SLAM-based three-dimensional space distance measurement method and device.
Background technique
Existing ranging methods generally use a calibration object in the scene to resolve the monocular scale, which requires a marker to be present in the scene; this is often inconvenient in practice. To eliminate the scale ambiguity of monocular SLAM (Simultaneous Localization and Mapping), ORB-SLAM (SLAM based on ORB, i.e. Oriented FAST and Rotated BRIEF, features) is commonly used at present; however, its front end must compute ORB features for every keyframe, which is very time-consuming, and the three-thread structure of ORB-SLAM also places a heavy burden on the CPU. Therefore, LSD-SLAM (Large-Scale Direct Monocular Simultaneous Localization and Mapping, a direct method that does not rely on ORB features) is now more often used to achieve semi-dense scene reconstruction and reduce the time spent on pose estimation.
When estimating the depth map, LSD-SLAM initializes the image depth with random numbers and then continuously updates the reference frame and the depth map through incremental stereo matching; this random initialization of depth leads to inaccurate depth estimation. Meanwhile, in practical applications video jitter causes instability of inter-frame features, which degrades the feature-based back-end optimization of SLAM.
Summary of the invention
This application provides a SLAM-based three-dimensional space distance measurement method and device, to solve the problems of inaccurate depth estimation in existing ranging methods and unstable features caused by video jitter.
In a first aspect, this application provides a SLAM-based three-dimensional space distance measurement method, the method including:
obtaining the intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera image and its camera coordinates;
performing de-jitter processing on a video to be processed to obtain a processed video, the video to be processed being a video obtained based on feature matching;
for the processed video, calculating an initial depth value of the true three-dimensional point corresponding to the camera image;
obtaining the extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value, the extrinsic parameters being the pose parameters of the camera;
calculating the spatial distance of the true three-dimensional point according to the extrinsic parameters, the spatial distance being the distance from the true three-dimensional point to the optical center of the camera.
Optionally, obtaining the intrinsic parameters of the camera includes:
obtaining 15 to 20 calibration pictures of a calibration board shot by the camera from different angles;
performing corner feature detection and feature matching on each calibration picture to obtain the intrinsic parameters of the camera.
Optionally, performing de-jitter processing on the video to be processed to obtain the processed video includes:
matching feature points between adjacent frames of the camera images using the SIFT feature matching method to obtain matched points;
rejecting erroneous points among the matched points using the random sample consensus (RANSAC) method to obtain effective matched points;
calculating the average number of effective matched points between the frame to be processed and its two adjacent frames;
determining the maximum point number and the minimum point number among these numbers;
if the ratio of the minimum point number to the maximum point number is less than or equal to a preset jitter threshold, determining that the frame to be processed is a jitter frame;
rejecting all jitter frames from the video to be processed to obtain the processed video.
Optionally, for the processed video, calculating the initial depth value of the true three-dimensional point corresponding to the camera image includes:
determining an initial reference frame and a keyframe sequence from the processed video;
matching the initial reference frame against each feature point in the keyframe sequence to obtain a matching result;
calculating, according to the matching result, the disparity of each feature point between adjacent keyframes;
calculating the depth value corresponding to each feature point according to the disparity, the focal length of the camera and the baseline distance between the two adjacent frames, the depth value being the distance from the true three-dimensional point to the optical center of the camera;
obtaining the initial depth value corresponding to the depth value of each feature point using the least squares method.
Optionally, calculating the spatial distance of the true three-dimensional point according to the extrinsic parameters includes:
calculating the coordinates of the true three-dimensional point in the world coordinate system according to the following formula (reconstructed here from the symbol definitions; the original formula image is not reproduced),
Z_C [u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T, with K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]],
where u and v represent the pixel coordinates of the projection of the true three-dimensional point in the camera image, f_x represents the horizontal scale focal length of the camera, f_y represents the vertical scale focal length of the camera, u_0 and v_0 represent the principal point coordinates of the camera, R and t represent the extrinsic parameters of the camera, and X_W, Y_W, Z_W represent the coordinates of the true three-dimensional point in the world coordinate system;
calculating the spatial distance of the true three-dimensional point according to the following formula,
D = || R [X_W, Y_W, Z_W]^T + t ||,
where D represents the distance from the true three-dimensional point to the optical center of the camera.
In a second aspect, this application provides a SLAM-based three-dimensional space distance measurement device, the device including:
an intrinsic parameter acquiring unit, configured to obtain the intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera image and its camera coordinates;
a de-jitter processing unit, configured to perform de-jitter processing on a video to be processed to obtain a processed video, the video to be processed being a video obtained based on feature matching;
an initial depth value computing unit, configured to calculate, for the processed video, an initial depth value of the true three-dimensional point corresponding to the camera image;
an extrinsic parameter computing unit, configured to obtain the extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value, the extrinsic parameters being the pose parameters of the camera;
a spatial distance computing unit, configured to calculate the spatial distance of the true three-dimensional point according to the extrinsic parameters, the spatial distance being the distance from the true three-dimensional point to the optical center of the camera.
Optionally, the intrinsic parameter acquiring unit includes:
a calibration picture acquiring unit, configured to obtain 15 to 20 calibration pictures of a calibration board shot by the camera from different angles;
an intrinsic parameter determining unit, configured to perform corner feature detection and feature matching on each calibration picture to obtain the intrinsic parameters of the camera.
Optionally, the de-jitter processing unit includes:
a matched point obtaining unit, configured to match feature points between adjacent frames of the camera images using the SIFT feature matching method to obtain matched points;
an effective matched point determining unit, configured to reject erroneous points among the matched points using the random sample consensus method to obtain effective matched points;
a point number computing unit, configured to calculate the average number of effective matched points between the frame to be processed and its two adjacent frames;
a particular point number determining unit, configured to determine the maximum point number and the minimum point number among these numbers;
a jitter frame determining unit, configured to determine that the frame to be processed is a jitter frame if the ratio of the minimum point number to the maximum point number is less than or equal to a preset jitter threshold;
a jitter frame removing unit, configured to reject all jitter frames from the video to be processed to obtain the processed video.
Optionally, the initial depth value computing unit includes:
a special frame determining unit, configured to determine an initial reference frame and a keyframe sequence from the processed video;
a matching result computing unit, configured to match the initial reference frame against each feature point in the keyframe sequence to obtain a matching result;
a disparity computing unit, configured to calculate, according to the matching result, the disparity of each feature point between adjacent keyframes;
a depth value computing unit, configured to calculate the depth value corresponding to each feature point according to the disparity, the focal length of the camera and the baseline distance between the two adjacent frames, the depth value being the distance from the true three-dimensional point to the optical center of the camera;
an initial depth value obtaining unit, configured to obtain, using the least squares method, the initial depth value corresponding to the depth value of each feature point.
Optionally, the spatial distance computing unit includes:
a world coordinate computing unit, configured to calculate the coordinates of the true three-dimensional point in the world coordinate system according to the following formula (reconstructed here from the symbol definitions; the original formula image is not reproduced),
Z_C [u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T, with K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]],
where u and v represent the pixel coordinates of the projection of the true three-dimensional point in the camera image, f_x represents the horizontal scale focal length of the camera, f_y represents the vertical scale focal length of the camera, u_0 and v_0 represent the principal point coordinates of the camera, R and t represent the extrinsic parameters of the camera, and X_W, Y_W, Z_W represent the coordinates of the true three-dimensional point in the world coordinate system;
a spatial distance obtaining unit, configured to calculate the spatial distance of the true three-dimensional point according to the following formula,
D = || R [X_W, Y_W, Z_W]^T + t ||,
where D represents the distance from the true three-dimensional point to the optical center of the camera.
It can be seen from the above that this application provides a SLAM-based three-dimensional space distance measurement method and device, wherein the method includes: obtaining the intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera image and its camera coordinates; performing de-jitter processing on a video to be processed to obtain a processed video; for the processed video, calculating an initial depth value of the true three-dimensional point corresponding to the camera image; obtaining the extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value; and calculating the spatial distance of the true three-dimensional point according to the extrinsic parameters. In use, images of the true three-dimensional point are first taken with the camera, and the intrinsic parameters of the camera are determined from these images; then, the video composed of the camera images is de-jittered to obtain the processed video, and the initial depth value of the true three-dimensional point is calculated from the intrinsic parameters and the processed video; finally, the intrinsic parameters and the initial depth value are used to calculate the extrinsic parameters of the camera, that is, the pose parameters of the camera, and the spatial distance of the true three-dimensional point is accurately calculated from the extrinsic parameters. The SLAM-based three-dimensional space distance measurement method and device provided by this application can effectively solve the problems of inaccurate depth estimation in existing ranging methods and unstable features caused by video jitter.
Detailed description of the invention
To explain the technical solution of the application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be apparent that those of ordinary skill in the art can obtain other drawings based on these drawings without any creative labor.
Fig. 1 is a flowchart of a SLAM-based three-dimensional space distance measurement method provided by the embodiments of the present application;
Fig. 2 is a flowchart of a method for obtaining the intrinsic parameters of a camera provided by the embodiments of the present application;
Fig. 3 is a flowchart of a method for video de-jitter processing provided by the embodiments of the present application;
Fig. 4 is a flowchart of a method for calculating an initial depth value provided by the embodiments of the present application;
Fig. 5 is a schematic diagram of a SLAM-based three-dimensional space distance measurement device provided by the embodiments of the present application.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flowchart of a SLAM-based three-dimensional space distance measurement method provided by the embodiments of the present application, the method includes:
S1, obtaining the intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera image and its camera coordinates.
For the problem of measuring the distance from a camera to a three-dimensional space point, the present invention aims to propose a SLAM-based three-dimensional space distance measurement method that can eliminate the scale ambiguity of monocular SLAM without requiring a marker in the scene, eliminate the influence of feature point distribution on ranging accuracy, weaken the influence of video jitter on the back-end optimization of SLAM, and estimate the depth map more accurately.
The intrinsic parameters of the camera are obtained using Zhang Zhengyou's calibration method.
Specifically, as shown in Fig. 2, which is a flowchart of a method for obtaining the intrinsic parameters of a camera provided by the embodiments of the present application, the method includes:
S101, obtaining 15 to 20 calibration pictures of a calibration board shot by the camera from different angles;
S102, performing corner feature detection and feature matching on each calibration picture to obtain the intrinsic parameters of the camera.
After corner feature detection and feature matching are performed on each calibration picture, the mapping relation between the pixel coordinates of the camera image and the camera coordinates can be calculated as
Z_C [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] [X_C, Y_C, Z_C]^T,
where u and v represent the pixel coordinates of the projection of the true three-dimensional point in the camera image, f_x represents the horizontal scale focal length of the camera, f_y represents the vertical scale focal length of the camera, u_0 and v_0 represent the principal point coordinates of the camera, and X_C, Y_C, Z_C represent the coordinates of the true three-dimensional point in the camera coordinate system.
It should be noted that this step can be skipped if intrinsic parameters calibrated in advance are available.
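The intrinsic mapping above can be sketched numerically. The following is a minimal illustration of the pinhole relation, not the patent's implementation; the intrinsic values and the test point are made-up assumptions:

```python
import numpy as np

def project_to_pixel(K, p_cam):
    """Project a 3D point in camera coordinates to pixel coordinates
    using the pinhole model: Z_C * [u, v, 1]^T = K @ [X_C, Y_C, Z_C]^T."""
    uvw = K @ p_cam          # homogeneous pixel coordinates, scaled by Z_C
    return uvw[:2] / uvw[2]  # divide by depth Z_C to recover (u, v)

# Hypothetical intrinsics: f_x = f_y = 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A point 2 m in front of the camera, 0.4 m to the right, 0.2 m up
u, v = project_to_pixel(K, np.array([0.4, -0.2, 2.0]))
# u = 500*0.4/2 + 320 = 420, v = 500*(-0.2)/2 + 240 = 190
```

Inverting this same relation, given a depth value Z_C, is what later allows pixel coordinates to be lifted back into 3D.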
S2, performing de-jitter processing on the video to be processed to obtain the processed video, the video to be processed being a video obtained based on feature matching.
Specifically, as shown in Fig. 3, which is a flowchart of a method for video de-jitter processing provided by the embodiments of the present application, the method includes:
S201, matching feature points between adjacent frames of the camera images using the SIFT feature matching method to obtain matched points;
S202, rejecting erroneous points among the matched points using the random sample consensus method to obtain effective matched points;
S203, calculating the average number of effective matched points between the frame to be processed and its two adjacent frames;
S204, determining the maximum point number and the minimum point number among these numbers;
S205, if the ratio of the minimum point number to the maximum point number is less than or equal to a preset jitter threshold, determining that the frame to be processed is a jitter frame;
S206, rejecting all jitter frames from the video to be processed to obtain the processed video.
For an input video sequence of n frames, the feature points of adjacent frames are matched using the SIFT (Scale-Invariant Feature Transform) feature matching method; afterwards, the erroneous points among all matched points are rejected using the random sample consensus method to obtain effective matched points. Suppose the frame to be processed is the i-th frame; its adjacent frames are then the (i-1)-th frame and the (i+1)-th frame (it should be noted that a head or tail frame has only one adjacent frame). The resulting average number of effective matched points is denoted s_i; the minimum point number among these is s_min and the maximum point number is s_max, so the ratio of the minimum point number to the maximum point number is s_min/s_max. Suppose the preset jitter threshold is s_a. If s_min/s_max <= s_a, the i-th frame is a jitter frame, and jitter frames will not further participate in the keyframe selection and loop closure detection processes of the LSD-SLAM algorithm; if s_min/s_max > s_a, the i-th frame is not a jitter frame and is retained.
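The jitter test in S203 to S205 admits more than one reading; one plausible sketch, under the assumption that each interior frame is judged by comparing the RANSAC-filtered match counts it shares with its two neighbours, is shown below (the counts and the threshold s_a = 0.6 are hypothetical):

```python
def find_jitter_frames(pair_counts, s_a):
    """pair_counts[i] = number of effective matched points (after RANSAC)
    between frame i and frame i+1. An interior frame whose two neighbouring
    match counts are badly imbalanced (s_min/s_max <= s_a) is a jitter frame."""
    n_frames = len(pair_counts) + 1
    jitter = []
    for i in range(1, n_frames - 1):   # head and tail frames have one neighbour only
        s_prev, s_next = pair_counts[i - 1], pair_counts[i]
        s_min, s_max = min(s_prev, s_next), max(s_prev, s_next)
        if s_min / s_max <= s_a:
            jitter.append(i)
    return jitter

# Five frames; matching between frames 1 and 2 collapses, suggesting shake there
pair_counts = [100, 30, 95, 98]
bad = find_jitter_frames(pair_counts, s_a=0.6)
```

Frames flagged here would simply be skipped when feeding keyframes to LSD-SLAM, as the text describes.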
S3, for the processed video, calculating the initial depth value of the true three-dimensional point corresponding to the camera image.
Specifically, as shown in Fig. 4, which is a flowchart of a method for calculating an initial depth value provided by the embodiments of the present application, the method includes:
S301, determining an initial reference frame and a keyframe sequence from the processed video;
S302, matching the initial reference frame against each feature point in the keyframe sequence to obtain a matching result;
S303, calculating, according to the matching result, the disparity of each feature point between adjacent keyframes;
S304, calculating the depth value corresponding to each feature point according to the disparity, the focal length of the camera and the baseline distance between the two adjacent frames, the depth value being the distance from the true three-dimensional point to the optical center of the camera;
S305, obtaining the initial depth value corresponding to the depth value of each feature point using the least squares method.
An initial reference frame f_r and a keyframe sequence F_im are determined from the processed video, and the set of feature points in the keyframes is F. A pair of initial keyframes can then be rectified according to the matching result, and the disparity of each feature point P_j between adjacent keyframes is calculated. The depth follows from the formula
Z = f T / x_d
where Z represents the depth value, f represents the focal length of the camera, T represents the baseline distance between the two adjacent frames, and x_d represents the disparity.
Finally, the initial depth value of each feature point can be calculated using the least squares method.
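The depth formula and its least-squares fusion can be sketched as follows. All numbers are made-up assumptions for illustration, and the "least squares" step here is the simplest case: for repeated observations Z_i = Z + noise, the least-squares estimate reduces to the mean:

```python
import numpy as np

def depth_from_disparity(f, T, x_d):
    """Z = f * T / x_d: depth from focal length f (pixels), baseline T
    (metres between the two adjacent frames) and disparity x_d (pixels)."""
    return f * T / x_d

# Single observation: f = 500 px, baseline 0.1 m, disparity 25 px
Z = depth_from_disparity(500.0, 0.1, 25.0)   # 2.0 m

# Several keyframe pairs observe the same point; fuse them by least squares
observations = [depth_from_disparity(500.0, T, xd)
                for T, xd in [(0.10, 25.0), (0.12, 30.0), (0.08, 20.0)]]
Z0 = float(np.mean(observations))  # initial depth value for this feature point
```

This Z0 is the kind of per-feature initial depth that replaces LSD-SLAM's random depth initialization.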
S4, obtaining the extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value, the extrinsic parameters being the pose parameters of the camera.
The video sequence is input to LSD-SLAM, which completes the processes of tracking, map construction and loop closure detection, yielding the pose estimate of the camera, that is, the extrinsic parameters of the camera, together with a semi-dense scene reconstruction.
S5, calculating the spatial distance of the true three-dimensional point according to the extrinsic parameters, the spatial distance being the distance from the true three-dimensional point to the optical center of the camera.
Specifically, the coordinates of the true three-dimensional point in the world coordinate system are calculated according to the following formula (reconstructed here from the symbol definitions; the original formula image is not reproduced),
Z_C [u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T, with K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]],
where u and v represent the pixel coordinates of the projection of the true three-dimensional point in the camera image, f_x represents the horizontal scale focal length of the camera, f_y represents the vertical scale focal length of the camera, u_0 and v_0 represent the principal point coordinates of the camera, R and t represent the extrinsic parameters of the camera, and X_W, Y_W, Z_W represent the coordinates of the true three-dimensional point in the world coordinate system;
the spatial distance of the true three-dimensional point is calculated according to the following formula,
D = || R [X_W, Y_W, Z_W]^T + t ||,
where D represents the distance from the true three-dimensional point to the optical center of the camera.
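Since the patent's formula images are not reproduced in this text, the sketch below assumes the standard pinhole relations: a pixel (u, v) with known depth Z_C is lifted to camera coordinates, mapped to world coordinates through the extrinsics (R, t), and the distance D to the optical center is the norm of the camera-frame point. All numeric values are hypothetical:

```python
import numpy as np

def backproject_and_distance(K, R, t, uv, Z_C):
    """From Z_C*[u,v,1]^T = K (R @ X_W + t): recover world coordinates X_W
    and the distance D = ||R @ X_W + t|| = ||X_C|| to the optical center."""
    u, v = uv
    # camera-frame coordinates from the inverse pinhole model
    X_C = Z_C * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # world coordinates: X_W = R^T (X_C - t)
    X_W = R.T @ (X_C - t)
    D = float(np.linalg.norm(X_C))  # distance to the camera optical center
    return X_W, D

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)      # identity pose: world frame coincides with camera frame
t = np.zeros(3)
X_W, D = backproject_and_distance(K, R, t, uv=(420.0, 190.0), Z_C=2.0)
# X_C = 2 * [(420-320)/500, (190-240)/500, 1] = [0.4, -0.2, 2.0]
```

With a non-identity (R, t) from the SLAM back end, the same two lines give the world position and the camera-to-point range reported by the method.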
Referring to Fig. 5, which is a schematic diagram of a SLAM-based three-dimensional space distance measurement device provided by the embodiments of the present application, the device includes:
an intrinsic parameter acquiring unit 1, configured to obtain the intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera image and its camera coordinates;
a de-jitter processing unit 2, configured to perform de-jitter processing on a video to be processed to obtain a processed video, the video to be processed being a video obtained based on feature matching;
an initial depth value computing unit 3, configured to calculate, for the processed video, an initial depth value of the true three-dimensional point corresponding to the camera image;
an extrinsic parameter computing unit 4, configured to obtain the extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value, the extrinsic parameters being the pose parameters of the camera;
a spatial distance computing unit 5, configured to calculate the spatial distance of the true three-dimensional point according to the extrinsic parameters, the spatial distance being the distance from the true three-dimensional point to the optical center of the camera.
Optionally, the intrinsic parameter acquiring unit 1 includes: a calibration picture acquiring unit, configured to obtain 15 to 20 calibration pictures of a calibration board shot by the camera from different angles; and an intrinsic parameter determining unit, configured to perform corner feature detection and feature matching on each calibration picture to obtain the intrinsic parameters of the camera.
Optionally, the de-jitter processing unit 2 includes: a matched point obtaining unit, configured to match feature points between adjacent frames of the camera images using the SIFT feature matching method to obtain matched points; an effective matched point determining unit, configured to reject erroneous points among the matched points using the random sample consensus method to obtain effective matched points; a point number computing unit, configured to calculate the average number of effective matched points between the frame to be processed and its two adjacent frames; a particular point number determining unit, configured to determine the maximum point number and the minimum point number among these numbers; a jitter frame determining unit, configured to determine that the frame to be processed is a jitter frame if the ratio of the minimum point number to the maximum point number is less than or equal to a preset jitter threshold; and a jitter frame removing unit, configured to reject all jitter frames from the video to be processed to obtain the processed video.
Optionally, the initial depth value computing unit 3 includes: a special frame determining unit, configured to determine an initial reference frame and a keyframe sequence from the processed video; a matching result computing unit, configured to match the initial reference frame against each feature point in the keyframe sequence to obtain a matching result; a disparity computing unit, configured to calculate, according to the matching result, the disparity of each feature point between adjacent keyframes; a depth value computing unit, configured to calculate the depth value corresponding to each feature point according to the disparity, the focal length of the camera and the baseline distance between the two adjacent frames, the depth value being the distance from the true three-dimensional point to the optical center of the camera; and an initial depth value obtaining unit, configured to obtain, using the least squares method, the initial depth value corresponding to the depth value of each feature point.
Optionally, the spatial distance computing unit 5 includes: a world coordinate computing unit, configured to calculate the coordinates of the true three-dimensional point in the world coordinate system according to the following formula (reconstructed here from the symbol definitions; the original formula image is not reproduced),
Z_C [u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T, with K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]],
where u and v represent the pixel coordinates of the projection of the true three-dimensional point in the camera image, f_x represents the horizontal scale focal length of the camera, f_y represents the vertical scale focal length of the camera, u_0 and v_0 represent the principal point coordinates of the camera, R and t represent the extrinsic parameters of the camera, and X_W, Y_W, Z_W represent the coordinates of the true three-dimensional point in the world coordinate system; and a spatial distance obtaining unit, configured to calculate the spatial distance of the true three-dimensional point according to the following formula,
D = || R [X_W, Y_W, Z_W]^T + t ||,
where D represents the distance from the true three-dimensional point to the optical center of the camera.
It is worth noting that, in a specific implementation, the present invention also provides a computer storage medium. The computer storage medium may store a program which, when executed, may include some or all of the steps of the embodiments of the method provided by the present invention. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Those skilled in the art can clearly understand that the technology in the embodiments of the present invention can be realized by means of software plus a general hardware platform. Based on this understanding, the technical solution in the embodiments of the present invention, or the part of it that contributes over the existing technology, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention or in certain parts of the embodiments.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily conceive of other embodiments of the invention. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include common knowledge or conventional techniques in the art not disclosed by the present invention. The description and examples are to be considered illustrative only; the true scope and spirit of the invention are pointed out by the following claims.
It should be understood that the application is not limited to the precise structure described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.
Claims (10)
1. A SLAM-based three-dimensional space distance measurement method, characterized in that the method comprises:
obtaining intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera picture and the camera coordinates;
performing de-jitter processing on a video to be processed to obtain a processed video, the video to be processed being a video obtained based on feature matching;
for the processed video, calculating an initial depth value of the true three-dimensional point corresponding to the camera image;
obtaining extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value, the extrinsic parameters being the pose parameters of the camera;
calculating the space distance of the true three-dimensional point according to the extrinsic parameters, the space distance being the distance from the true three-dimensional point to the camera's optical center.
2. The method according to claim 1, characterized in that obtaining the intrinsic parameters of the camera comprises:
obtaining 15-20 calibration pictures of a calibration board shot by the camera from different angles;
performing corner feature detection and feature matching on each calibration picture to obtain the intrinsic parameters of the camera.
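The calibration of claim 2 yields the intrinsic matrix that encodes the pixel-coordinate-to-camera-coordinate mapping of claim 1. As an illustrative sketch only (the focal lengths and principal point below are assumed example values, not taken from the patent), the mapping can be applied like this:

```python
import numpy as np

# Assumed example intrinsic parameters: fx, fy are the horizontal/vertical
# scale focal lengths in pixels, (u0, v0) is the principal point.
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Map a point (Xc, Yc, Zc) in camera coordinates to pixel coordinates (u, v)."""
    Xc, Yc, Zc = point_cam
    uv1 = K @ np.array([Xc / Zc, Yc / Zc, 1.0])  # perspective division, then K
    return float(uv1[0]), float(uv1[1])

u, v = project((0.1, -0.05, 2.0))  # a point 2 m in front of the optical center
print(u, v)
```

In practice the 15-20 checkerboard views would typically be fed to a calibration routine (for example OpenCV's `calibrateCamera`) to estimate fx, fy, u0 and v0 from the detected corner features.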
3. The method according to claim 1, characterized in that performing de-jitter processing on the video to be processed to obtain the processed video comprises:
matching feature points between adjacent frames of the camera picture using a SIFT feature matching method to obtain match points;
rejecting erroneous points among the match points using a random sample consensus (RANSAC) method to obtain effective match points;
calculating the numbers of average effective match points between the frame to be processed and its two adjacent frames;
determining the maximum point number and the minimum point number among said numbers;
if the ratio of the minimum point number to the maximum point number is less than or equal to a preset jitter threshold, determining that the frame to be processed is a jitter frame;
rejecting all jitter frames from the video to be processed to obtain the processed video.
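The jitter-frame decision of claim 3 can be sketched as follows. The SIFT matching and RANSAC filtering are assumed to have been performed already (for example with OpenCV); the functions below implement only the min/max ratio test on the resulting effective-match counts, and the threshold value is an assumed example:

```python
def is_jitter_frame(n_prev, n_next, threshold=0.5):
    """Jitter test of claim 3: compare the effective match counts between the
    frame to be processed and its two adjacent frames; if the ratio of the
    smaller count to the larger is at or below the threshold, flag the frame.

    n_prev, n_next: effective (RANSAC-filtered) SIFT match counts with the
    previous and next frames. threshold: preset jitter threshold (assumed 0.5).
    """
    n_min, n_max = min(n_prev, n_next), max(n_prev, n_next)
    if n_max == 0:  # no matches at all: treat the frame as jitter
        return True
    return n_min / n_max <= threshold

def remove_jitter_frames(match_counts, threshold=0.5):
    """match_counts[i] holds the effective match count between frames i and i+1.
    Returns the indices of interior frames that survive jitter rejection."""
    return [i for i in range(1, len(match_counts))
            if not is_jitter_frame(match_counts[i - 1], match_counts[i], threshold)]

kept = remove_jitter_frames([120, 118, 30, 115])  # a shaken frame loses matches
print(kept)
```

A shaken frame blurs its features, so both of its adjacent-frame match counts become unbalanced relative to stable neighbors, which is what the ratio test detects.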
4. The method according to claim 1, characterized in that, for the processed video, calculating the initial depth value of the true three-dimensional point corresponding to the camera image comprises:
determining an initial reference frame and a key-frame sequence from the processed video;
matching the initial reference frame against each feature point in the key-frame sequence to obtain a matching result;
calculating, according to the matching result, the disparity of each feature point across adjacent key frames;
calculating the depth value corresponding to each feature point according to the disparity, the camera focal length, and the baseline distance between the two adjacent frames, the depth value being the distance from the true three-dimensional point to the camera's optical center;
obtaining, using a least squares method, the initial depth value corresponding to the depth values of each feature point.
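The depth computation of claim 4 follows the standard triangulation relation Z = f·B/d, and combining a feature's per-key-frame estimates by least squares under a constant-depth model reduces to their mean. A minimal numpy sketch, with all numeric values assumed for illustration:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline):
    """Depth of a feature from one key-frame pair: Z = f * B / d, with the
    disparity d in pixels, focal length f in pixels, and baseline B between
    the two adjacent frames."""
    return focal_px * baseline / disparity

def initial_depth(disparities, focal_px, baseline):
    """Initial depth value of claim 4: fit a single depth Z0 to the per-pair
    estimates by least squares. For the constant model A @ [Z0] = depths,
    with A a column of ones, the solution is the arithmetic mean."""
    depths = np.array([depth_from_disparity(d, focal_px, baseline)
                       for d in disparities])
    A = np.ones((len(depths), 1))
    z0, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return float(z0[0])

z = initial_depth([40.0, 39.0, 41.0], focal_px=800.0, baseline=0.1)
print(z)  # close to 2 m
```

Using several key-frame pairs and averaging in the least-squares sense damps the noise in any single disparity measurement.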
5. The method according to claim 1, characterized in that calculating the space distance of the true three-dimensional point according to the extrinsic parameters comprises:
calculating the coordinates of the true three-dimensional point in the world coordinate system according to the following formula,
Zc·[u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]·(R·[XW, YW, ZW]^T + t),
wherein u and v denote the pixel coordinates at which the true three-dimensional point projects into the camera image, Zc denotes the depth of the point along the camera's optical axis, fx denotes the horizontal scale focal length of the camera, fy denotes the vertical scale focal length of the camera, u0 and v0 denote the principal-point coordinates of the camera, R and t denote the extrinsic parameters of the camera, and XW, YW, ZW denote the coordinates of the true three-dimensional point in the world coordinate system;
calculating the space distance of the true three-dimensional point according to the following formula,
D = ||R·[XW, YW, ZW]^T + t||,
wherein D denotes the distance from the true three-dimensional point to the camera's optical center.
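The two formulas of claim 5 can be sketched together: back-project a pixel with its recovered depth through the intrinsic matrix, move between camera and world frames with the extrinsic parameters R and t, and take D as the norm of the point's camera-frame coordinates (the optical center is the origin of the camera frame). All numeric values below are assumed for illustration:

```python
import numpy as np

# Assumed intrinsic and extrinsic parameters (illustrative, not from the patent).
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, u0], [0.0, fy, v0], [0.0, 0.0, 1.0]])
R = np.eye(3)                  # camera rotation
t = np.array([0.0, 0.0, 0.5])  # camera translation

def world_point(u, v, depth):
    """Solve Zc*[u,v,1]^T = K (R @ Pw + t) for Pw given the depth Zc:
    back-project the pixel to camera coordinates, then map to the world frame."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # (Xc, Yc, Zc)
    return np.linalg.inv(R) @ (p_cam - t)                        # (XW, YW, ZW)

def space_distance(p_world):
    """D: distance from the true 3D point to the camera's optical center,
    i.e. the norm of its camera-frame coordinates R @ Pw + t."""
    return float(np.linalg.norm(R @ p_world + t))

pw = world_point(360.0, 220.0, 2.0)
print(pw, space_distance(pw))
```

Note that with R and t applied, D measures the distance to the moving camera's optical center, not to the world origin.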
6. A SLAM-based three-dimensional space distance measurement device, characterized in that the device comprises:
an intrinsic-parameter acquiring unit, configured to obtain intrinsic parameters of a camera, the intrinsic parameters being the mapping relation between the pixel coordinates of a true three-dimensional point on the camera picture and the camera coordinates;
a de-jitter processing unit, configured to perform de-jitter processing on a video to be processed to obtain a processed video, the video to be processed being a video obtained based on feature matching;
an initial-depth-value computing unit, configured to calculate, for the processed video, an initial depth value of the true three-dimensional point corresponding to the camera image;
an extrinsic-parameter computing unit, configured to obtain extrinsic parameters of the camera according to the intrinsic parameters and the initial depth value, the extrinsic parameters being the pose parameters of the camera;
a space-distance computing unit, configured to calculate the space distance of the true three-dimensional point according to the extrinsic parameters, the space distance being the distance from the true three-dimensional point to the camera's optical center.
7. The device according to claim 6, characterized in that the intrinsic-parameter acquiring unit comprises:
a calibration-picture acquiring unit, configured to obtain 15-20 calibration pictures of a calibration board shot by the camera from different angles;
an intrinsic-parameter determination unit, configured to perform corner feature detection and feature matching on each calibration picture to obtain the intrinsic parameters of the camera.
8. The device according to claim 6, characterized in that the de-jitter processing unit comprises:
a match-point obtaining unit, configured to match feature points between adjacent frames of the camera picture using a SIFT feature matching method to obtain match points;
an effective-match-point determination unit, configured to reject erroneous points among the match points using a random sample consensus (RANSAC) method to obtain effective match points;
a point-number calculating unit, configured to calculate the numbers of average effective match points between the frame to be processed and its two adjacent frames;
a particular-point-number decision unit, configured to determine the maximum point number and the minimum point number among said numbers;
a jitter-frame determination unit, configured to determine that the frame to be processed is a jitter frame if the ratio of the minimum point number to the maximum point number is less than or equal to a preset jitter threshold;
a jitter-frame clearing unit, configured to reject all jitter frames from the video to be processed to obtain the processed video.
9. The device according to claim 6, characterized in that the initial-depth-value computing unit comprises:
a special-frame determination unit, configured to determine an initial reference frame and a key-frame sequence from the processed video;
a matching-result computing unit, configured to match the initial reference frame against each feature point in the key-frame sequence to obtain a matching result;
a disparity computing unit, configured to calculate, according to the matching result, the disparity of each feature point across adjacent key frames;
a depth-value computing unit, configured to calculate the depth value corresponding to each feature point according to the disparity, the camera focal length, and the baseline distance between the two adjacent frames, the depth value being the distance from the true three-dimensional point to the camera's optical center;
an initial-depth-value obtaining unit, configured to obtain, using a least squares method, the initial depth value corresponding to the depth values of each feature point.
10. The device according to claim 6, characterized in that the space-distance computing unit comprises:
a world-coordinate computing unit, configured to calculate the coordinates of the true three-dimensional point in the world coordinate system according to the following formula,
Zc·[u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]·(R·[XW, YW, ZW]^T + t),
wherein u and v denote the pixel coordinates at which the true three-dimensional point projects into the camera image, Zc denotes the depth of the point along the camera's optical axis, fx denotes the horizontal scale focal length of the camera, fy denotes the vertical scale focal length of the camera, u0 and v0 denote the principal-point coordinates of the camera, R and t denote the extrinsic parameters of the camera, and XW, YW, ZW denote the coordinates of the true three-dimensional point in the world coordinate system;
a space-distance obtaining unit, configured to calculate the space distance of the true three-dimensional point according to the following formula,
D = ||R·[XW, YW, ZW]^T + t||,
wherein D denotes the distance from the true three-dimensional point to the camera's optical center.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910596753.6A CN110319776B (en) | 2019-07-03 | 2019-07-03 | SLAM-based three-dimensional space distance measuring method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110319776A true CN110319776A (en) | 2019-10-11 |
CN110319776B CN110319776B (en) | 2021-05-07 |
Family
ID=68122500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910596753.6A Active CN110319776B (en) | 2019-07-03 | 2019-07-03 | SLAM-based three-dimensional space distance measuring method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110319776B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106289071A (en) * | 2016-08-18 | 2017-01-04 | 温州大学 | A kind of structure three-dimensional displacement monocular photographing measurement method |
EP3385916A1 (en) * | 2017-04-05 | 2018-10-10 | Testo SE & Co. KGaA | Measuring tool and corresponding measuring method |
CN108648240A (en) * | 2018-05-11 | 2018-10-12 | 东南大学 | Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration |
CN109737874A (en) * | 2019-01-17 | 2019-05-10 | 广东省智能制造研究所 | Dimension of object measurement method and device based on 3D vision technology |
CN109855822A (en) * | 2019-01-14 | 2019-06-07 | 中山大学 | A kind of high-speed rail bridge based on unmanned plane vertically moves degree of disturbing measurement method |
Non-Patent Citations (1)
Title |
---|
CAO Pulin et al.: "Lightning-strike fault location results based on lightning records and traveling-wave data", Automation of Electric Power Systems * |
Also Published As
Publication number | Publication date |
---|---|
CN110319776B (en) | 2021-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11668571B2 (en) | Simultaneous localization and mapping (SLAM) using dual event cameras | |
CN110568447B (en) | Visual positioning method, device and computer readable medium | |
KR101532864B1 (en) | Planar mapping and tracking for mobile devices | |
KR101689923B1 (en) | Online reference generation and tracking for multi-user augmented reality | |
WO2021196941A1 (en) | Method and apparatus for detecting three-dimensional target | |
US20140253679A1 (en) | Depth measurement quality enhancement | |
US20150138193A1 (en) | Method and device for panorama-based inter-viewpoint walkthrough, and machine readable medium | |
CN111127524A (en) | Method, system and device for tracking trajectory and reconstructing three-dimensional image | |
WO2020119467A1 (en) | High-precision dense depth image generation method and device | |
CN110648363A (en) | Camera posture determining method and device, storage medium and electronic equipment | |
KR102443551B1 (en) | Point cloud fusion method, apparatus, electronic device and computer storage medium | |
CN107851331B (en) | Smoothing three-dimensional models of objects to mitigate artifacts | |
JP2020067978A (en) | Floor detection program, floor detection method, and terminal device | |
CN110567441B (en) | Particle filter-based positioning method, positioning device, mapping and positioning method | |
JP6061770B2 (en) | Camera posture estimation apparatus and program thereof | |
CN104966307B (en) | A kind of AR method based on real-time tracking | |
CN110738703A (en) | Positioning method and device, terminal and storage medium | |
CN110310331A (en) | A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature | |
CN112991441A (en) | Camera positioning method and device, electronic equipment and storage medium | |
CN113012224B (en) | Positioning initialization method and related device, equipment and storage medium | |
CN107274448B (en) | Variable weight cost aggregation stereo matching algorithm based on horizontal tree structure | |
US11475629B2 (en) | Method for 3D reconstruction of an object | |
CN110319776A (en) | A kind of three-dimensional space distance measurement method and device based on SLAM | |
CN112598736A (en) | Map construction based visual positioning method and device | |
CN109389032B (en) | Picture authenticity determining method and device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||