CN107392964B - Indoor SLAM method based on the combination of indoor feature points and structure lines - Google Patents
Indoor SLAM method based on the combination of indoor feature points and structure lines
- Publication number
- CN107392964B CN107392964B CN201710552072.0A CN201710552072A CN107392964B CN 107392964 B CN107392964 B CN 107392964B CN 201710552072 A CN201710552072 A CN 201710552072A CN 107392964 B CN107392964 B CN 107392964B
- Authority
- CN
- China
- Prior art keywords
- point
- image
- key frame
- structure lines
- line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Abstract
The present invention relates to a visual SLAM algorithm combining indoor feature points and structure lines, comprising: S1, calibrating the camera intrinsics; S2, extracting feature points and structure lines from the video frame data obtained by the camera; S3, tracking the feature points and structure lines according to the extracted feature points and structure lines, and selecting key frames; S4, mapping the surrounding environment points and space lines and optimizing the platform positioning according to the tracking information of the feature points and structure lines; S5, judging whether the platform motion trajectory forms a closed loop, obtaining correct closed-loop key frames, and globally optimizing all image poses and the map; S6, outputting the result. The present invention is real-time and efficient: matched feature points and structure lines are used to estimate the image poses and map the surrounding environment, loop closure detection is performed, structure lines are fully exploited to reduce drift error, and loop closure detection finally yields better mobile robot platform positioning results together with the structural features of the surrounding environment.
Description
Technical field
The invention belongs to the fields of photogrammetry and computer vision, and more particularly relates to a visual SLAM algorithm combining indoor feature points and structure lines.
Background art
With the development of computer vision and photogrammetry, graph-optimization SLAM (Simultaneous Localization and Mapping) has attracted increasing attention from visual SLAM researchers. It introduces motion estimation and bundle adjustment into SLAM, treating motion estimation, that is, solving for the robot position and the surrounding environment features, as a global optimization problem: feature points, structure lines and the like are extracted from images, features are tracked, observation error equations are established, and linear or nonlinear optimization minimizes the observation error to compute the optimal robot location and environment features. In the past, motion estimation was too time-consuming in feature extraction and matching and in the subsequent optimization, so self-positioning and map construction could not be completed online in real time, and only offline pose refinement and three-dimensional reconstruction were possible. In recent years, computer vision researchers have gradually reduced the computation time through sparse matrices, multithreading and similar methods, so that graph-optimization SLAM can also run online.
The image features used by visual SLAM mainly include image pixels, point features, line features and planar features.
Among them, visual SLAM based on image point features has been a research hotspot, and its algorithms are relatively mature. Point-feature SLAM algorithms are centered on image feature points and can perform feature tracking, mapping and closed-loop detection in real time, completing the whole process of simultaneous localization and mapping; they are among the leading algorithms in the current visual SLAM field. However, point features are easily disturbed by illumination and noise, the constructed three-dimensional map points are sparse and cannot express the true scene structure, and tracking easily fails in regions where point features are scarce. Therefore, line features and planar features, which can represent the environmental structure, have also gradually attracted researchers' attention, especially in regions with many man-made structures such as streets and indoor environments. Compared with point features, line features are less affected by the environment and can better express higher-level semantic information about the environment. Hence, SLAM algorithms based on structure lines are also a hot research topic.
Since feature extraction and matching consume considerable computation, visual SLAM based directly on image pixels has also gradually attracted researchers' attention. Pixel-based visual SLAM algorithms track images directly using grayscale information, without feature extraction and description, tracking and optimizing directly on image gradients. This improves the continuity of visual SLAM in feature-poor regions, but because such methods rely entirely on image pixels, they perform poorly in regions with strong illumination changes, and relying only on image gradients yields comparatively low positioning and mapping accuracy.
Summary of the invention
In order to solve the above technical problems and obtain real-time positioning and mapping results for an indoor mobile robot quickly, efficiently and accurately, a visual positioning and mapping algorithm based on indoor feature points and structure lines is provided herein.
The technical scheme adopted by the invention is an indoor SLAM method based on the combination of indoor feature points and structure lines, characterized by comprising the following steps:
Step 1, calibrating the camera intrinsics, where the intrinsics include the principal point, the focal length and the distortion coefficients;
Step 2, extracting feature points and structure lines on each image frame of the video frame data obtained by the camera on the mobile robot platform;
Step 3, tracking the feature points and structure lines according to the extracted feature points and structure lines, and selecting key frames, specifically comprising the following sub-steps;
Step 3.1, tracking feature points using the feature-descriptor distance, according to the feature points obtained in step 2, to obtain good tracked point pairs;
Step 3.2, tracking structure lines using the line parameters, according to the structure lines obtained in step 2, to obtain good tracked line pairs, where the parameters of a structure line include its midpoint, length and direction angle;
Step 3.3, preliminarily optimizing the current image pose using the currently tracked point pairs and line pairs;
Step 3.4, judging, from the tracked point pairs of step 3.1 and the image pose optimized in step 3.3, whether the current image frame is selected as a key frame; if it is judged not to be a key frame, reading a new image frame and returning to step 2; if it is judged to be a key frame, sending it to step 4 for processing;
Step 4, mapping the surrounding environment points and space lines and optimizing the platform positioning according to the tracking information of the feature points and structure lines;
Step 5, judging whether the platform motion trajectory forms a closed loop, obtaining correct closed-loop key frames, and globally optimizing all image poses and the map;
Step 6, outputting the result.
Further, step 2 is implemented as follows,
Step 2.1, building a pyramid for each acquired video frame, partitioning each pyramid image into blocks of a certain window size, and extracting and describing feature points in each block separately with the ORB feature extraction algorithm;
Step 2.2, applying Gaussian filtering to the video frame and extracting structure lines with the LSD operator;
Step 2.3, partitioning the image after point and line extraction, assigning the feature points and structure lines to their respective image regions.
Further, step 3.1 is implemented as follows,
Step 3.1.1, judging from the feature points on the current image whether a candidate matching frame exists; if a candidate matching frame has been selected, opening a suitably sized image window on it and taking the feature points inside as candidate matching points; otherwise taking the previous frame of the current frame as the candidate matching frame and selecting candidate matching points accordingly;
Step 3.1.2, computing the descriptor distance between the feature points on the current image and the candidate matching points, sorting the feature points by ascending descriptor distance, and taking the first 60 percent as best matching points;
Step 3.1.3, filtering out erroneous points among the best matching points with RANSAC;
Step 3.1.4, projecting the best matching points using the pose of the current frame and the pose of the candidate matching frame, computing the back-projection error, and filtering out the matched point pairs with large error.
Further, step 3.2 is implemented as follows,
Step 3.2.1, using the midpoint of each structure line on the current image, opening a suitably sized image window on the key frame to be matched, and taking the structure lines whose midpoints fall in that window as candidate matching lines;
Step 3.2.2, computing the parameter differences between the structure lines on the current image and the candidate matching lines, reconstructing the three-dimensional space lines using the pose of the current frame and the pose of the key frame to be matched, and matching structure lines by the overlap of the corresponding observed three-dimensional space line segments;
Step 3.2.3, comparing the angle and position similarity of the space 3D line segments by means of the corresponding line segments on multiple images, choosing the image line pairs of the most similar space line segments as candidate matched line pairs, then back-projecting the candidate matched line pairs onto the multiple corresponding visible images, computing on each image the error between the projection of the space three-dimensional segment and its corresponding original two-dimensional segment, and removing matched lines whose error exceeds a certain threshold.
Further, step 3.3 is implemented as follows,
Step 3.3.1, let the total energy equation be
E = Ep(θ) + λEl(θ)
where Ep(θ) is the energy term of the feature points, El(θ) is the energy term of the structure lines, and λ is a weight balancing the scale difference between the two;
Step 3.3.2, setting the energy term Ep(θ) of the feature points, computed as
Ep(θ) = Σ_{i=1..K} Σ_{j=1..M} v_ij ‖f(T_j, P_i) − p_ij‖²
where K is the number of space 3D points, M is the number of images, v_ij indicates whether space point i is visible on current image j, P_i is the parameter of space 3D point i, p_ij is the corresponding original 2D matching point of space point i on current image j, T_j is the parameter of image j, and f(T_j, P_i) is the projected position of space point i on current image j;
Step 3.3.3, setting the energy term El(θ) of the structure lines, computed as
El(θ) = Σ_{i=1..N} Σ_{j=1..M} v_ij d(Q(T_j, L_i), l_ij)²
where N is the number of space 3D lines, M is the number of images, v_ij indicates whether space line i is visible on current image j, L_i is the parameter of space 3D line i, l_ij is the parameter of the corresponding original 2D matching line of space line i on current image j, T_j is the parameter of image j, Q(T_j, L_i) is the projection of space line i on image j, and d(Q(T_j, L_i), l_ij)² is the squared distance error from the original 2D line l_ij to the projection Q(T_j, L_i);
Step 3.3.4, adjusting the initial image pose with the Ceres optimization library until the energy E is minimized, obtaining the optimal image pose.
Further, step 4 is implemented as follows,
Step 4.1, computing the group of image key frames that are co-visible with the current key frame: using the space three-dimensional points reconstructed from the two-dimensional matching points on the current key frame, counting for each other key frame how many of these three-dimensional points it can see, and sorting the current image key-frame group in descending order;
Step 4.2, matching feature points and structure lines between the current key frame and the key frames in the co-visible group, and filtering out three-dimensional points visible in too few images;
Step 4.3, generating three-dimensional space points and corresponding three-dimensional space lines from the newly added matched points and matched lines of step 4.2;
Step 4.4, further optimizing the current image pose using the generated three-dimensional space points and corresponding three-dimensional space lines;
Step 4.5, for an optimized image key frame, if 90% of its corresponding space three-dimensional points are seen by other co-visible key frames, filtering out that key frame.
Further, step 5 is implemented as follows,
Step 5.1, describing the image feature points with the DBoW2 dictionary, and selecting candidate closed-loop key frames by the number of shared visual words;
Step 5.2, matching the current key frame against each candidate closed-loop key frame, and using the number of matched points to confirm whether the candidate closed-loop key frame is correct;
Step 5.3, readjusting the image poses using the confirmed candidate key frames, so as to remove the accumulated drift error.
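As an illustration of step 5.1, candidate closed-loop key frames can be ranked by the number of visual words they share with the current key frame. The sketch below is independent of any bag-of-words library (the embodiment uses DBoW2); the `min_shared` and `skip_recent` parameters are illustrative assumptions, not values from the patent:

```python
def loop_candidates(cur_words, keyframe_words, min_shared=3, skip_recent=2):
    """Rank earlier key frames by visual words shared with the current
    frame (step 5.1); geometric verification by feature matching
    (step 5.2) would then confirm or reject each candidate.
    min_shared / skip_recent are illustrative, not from the patent."""
    cur = set(cur_words)
    scored = []
    # Skip the most recent key frames, which trivially share words.
    for idx, words in enumerate(keyframe_words[: len(keyframe_words) - skip_recent]):
        shared = len(cur & set(words))
        if shared >= min_shared:
            scored.append((shared, idx))
    return [idx for shared, idx in sorted(scored, reverse=True)]

# Key frame 0 shares four words with the current frame, key frame 2 shares three:
kfs = [[1, 2, 3, 4], [9, 9, 9], [1, 2, 3], [5, 6], [7, 8]]
cands = loop_candidates([1, 2, 3, 4, 5], kfs)
```

Each surviving candidate would then undergo the matching check of step 5.2 before the pose readjustment of step 5.3.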
Further, the camera intrinsic calibration of step 1 is implemented by acquiring checkerboard images of fixed size from multiple viewpoints with the camera, and then computing the camera intrinsics from the acquired checkerboard images with Zhang Zhengyou's camera calibration method, obtaining the camera calibration result.
Compared with popular existing indoor SLAM algorithms, the beneficial effects of the present invention are: it can position the mobile platform well according to the indoor structured environment and build surrounding environment features carrying structural information, and it obtains high-precision results on a variety of experimental data sets. It is real-time and efficient: matched feature points and structure lines are used to estimate the image poses and map the surrounding environment, and loop closure detection is performed, so that structure lines are fully exploited to reduce drift error while loop closure detection finally yields better mobile robot platform positioning results together with the structural features of the surrounding environment.
Moreover, the line structure characteristics of the indoor environment used by the invention alleviate the drift error in image tracking well, and loop closure detection can better remove the accumulated drift error of the images, so that higher-precision positioning results can be obtained indoors or in regions with many man-made structures.
Detailed description of the invention
Fig. 1 is the flow chart of the method of the embodiment of the present invention.
Fig. 2 is the point three-dimensional reconstruction schematic diagram of the method of the embodiment, where C1-C6 represent the camera centers corresponding to the image sequence, P is the reconstructed space 3D point, and p1-p6 represent the two-dimensional matching points of the 3D point P on the original images.
Fig. 3 is the line three-dimensional reconstruction schematic diagram of the method of the embodiment, where C1-C6 represent the camera centers corresponding to the image sequence, together with the reconstructed space 3D line.
Fig. 4 is the image pose optimization schematic diagram of the method of the embodiment, where the points and lines are respectively the three-dimensional space points and line segments after pose optimization, and the rectangles indicate the positions of the image key frames in space.
Fig. 5 is the closed-loop detection schematic diagram of the method of the embodiment, where the left figure is the camera trajectory before closed-loop detection and the right figure is the corresponding camera trajectory after closed-loop detection.
Specific embodiment
The invention will be further described below with reference to the accompanying drawings and a specific embodiment.
The technical scheme adopted by the invention is an indoor SLAM method based on the combination of indoor feature points and structure lines; the implementation flow chart is shown in Fig. 1, and the method mainly comprises the following steps:
Step 1: calibrating the camera intrinsics (principal point, focal length and distortion coefficients);
Step 1.1: acquiring checkerboard images of fixed size from multiple viewpoints with the camera;
Step 1.2: computing the camera intrinsics from the acquired checkerboard images with Zhang Zhengyou's camera calibration method, obtaining the camera calibration result.
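To illustrate what the calibrated intrinsics of step 1 are used for, the pinhole projection with radial distortion can be sketched as follows. This is a minimal sketch in NumPy, not part of the patent; the function name and the two-coefficient radial model are illustrative assumptions:

```python
import numpy as np

def project(point_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point in camera coordinates to pixel coordinates using
    the intrinsics from step 1: focal lengths (fx, fy), principal point
    (cx, cy) and radial distortion coefficients (k1, k2)."""
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z                   # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    return np.array([fx * d * x + cx, fy * d * y + cy])

# A point on the optical axis projects exactly to the principal point:
uv = project((0.0, 0.0, 2.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

The same projection model underlies the reprojection errors minimized later in steps 3.3 and 4.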
Step 2: extracting feature points and structure lines from the video frame data obtained by the camera on the mobile robot platform;
Step 2.1: first building a pyramid for each acquired video frame, then partitioning each pyramid image into blocks of a certain window size, and extracting and describing feature points in each block with an existing feature extraction algorithm; considering that the ORB algorithm is fast and yields many points, the feature points selected in the present invention are ORB feature points;
Step 2.2: applying Gaussian filtering to the video frame image, then extracting structure lines with the LSD operator;
Step 2.3: partitioning the image after point and line extraction, assigning the feature points and structure lines to their respective image regions; the reference for partitioning the structure lines is the center point of the image.
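The pyramid-and-block partitioning of step 2.1 can be sketched as below. The pyramid depth, scale factor and 64-pixel window are illustrative assumptions (the patent does not fix these values); per-block ORB and LSD extraction would then run inside each window:

```python
def pyramid_blocks(h, w, levels=3, scale=0.5, block=64):
    """Enumerate (level, y0, x0, y1, x1) windows covering every pyramid
    level of an h-by-w frame. Feature extraction (ORB per step 2.1,
    LSD per step 2.2) would run independently inside each window so
    that features are spread over the whole image."""
    blocks = []
    for lvl in range(levels):
        lh, lw = int(h * scale ** lvl), int(w * scale ** lvl)
        for y0 in range(0, lh, block):
            for x0 in range(0, lw, block):
                blocks.append((lvl, y0, x0, min(y0 + block, lh), min(x0 + block, lw)))
    return blocks

blocks = pyramid_blocks(480, 640)  # a VGA frame, three pyramid levels
```

Extracting per block rather than globally prevents all features from clustering in one textured corner of the frame.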
Step 3: tracking the feature points and structure lines according to the extracted feature points and structure lines, and selecting key frames;
Step 3.1: tracking feature points using the feature-descriptor distance, according to the feature points obtained in step 2, to obtain good tracked point pairs;
Step 3.1.1: judging from the feature points on the current image whether a candidate matching frame exists; if a candidate matching frame has been selected, opening a suitably sized image window on it and taking the feature points inside as candidate matching points; otherwise taking the previous frame of the current frame as the candidate matching frame and selecting candidate matching points accordingly;
Step 3.1.2: computing the descriptor distance between the feature points on the current image and the candidate matching points, sorting the feature points by ascending descriptor distance, and taking the first 60 percent as best matching points;
Step 3.1.3: filtering out erroneous points among the best matching points with RANSAC;
Step 3.1.4: projecting the best matching points using the pose of the current frame and the pose of the candidate matching frame, and computing the back-projection error, as detailed in Fig. 2, where C1-C6 represent the camera centers corresponding to the image sequence, P is the reconstructed space 3D point, and p1-p6 represent the two-dimensional matching points of the 3D point P on the original images; the matched point pairs with large error are filtered out.
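Steps 3.1.1 to 3.1.2 can be sketched with binary descriptors and Hamming distance (the metric ORB descriptors use); the data and the 60% retention fraction mirror step 3.1.2, while everything else here is illustrative. RANSAC and back-projection filtering (steps 3.1.3 and 3.1.4) would follow:

```python
import numpy as np

def track_points(desc_cur, desc_cand, keep_frac=0.6):
    """Match each current descriptor to its nearest candidate by Hamming
    distance, sort matches by ascending distance, and keep the best
    60 percent (step 3.1.2). Descriptors are rows of uint8 bytes."""
    # Hamming distance between all row pairs: XOR, then count set bits.
    d = np.unpackbits(desc_cur[:, None, :] ^ desc_cand[None, :, :], axis=2).sum(axis=2)
    best = d.argmin(axis=1)
    dist = d[np.arange(len(desc_cur)), best]
    order = np.argsort(dist, kind="stable")
    kept = order[: max(1, int(len(order) * keep_frac))]
    return [(int(i), int(best[i])) for i in kept]

rng = np.random.default_rng(0)
desc = rng.integers(0, 256, size=(10, 32), dtype=np.uint8)  # 10 ORB-sized descriptors
matches = track_points(desc, desc)  # identical frames: each point matches itself
```

With identical descriptor sets every point matches itself at distance zero, and 6 of the 10 matches (60%) are retained.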
Step 3.2: tracking structure lines using their parameters (midpoint, length and direction angle), according to the structure lines obtained in step 2, to obtain good tracked line pairs;
Step 3.2.1: using the midpoint of each structure line on the current image, opening a suitably sized image window on the key frame to be matched, and taking the structure lines whose midpoints fall in that window as candidate matching lines;
Step 3.2.2: computing the parameter differences between the structure lines on the current image and the candidate matching lines, such as line length and direction angle, reconstructing the three-dimensional space lines using the pose of the current frame and the pose of the key frame to be matched, and matching structure lines by the overlap of the corresponding three-dimensional space line segments;
Step 3.2.3: comparing the angle and position similarity of the space 3D line segments by means of the corresponding line segments on multiple images, choosing the image line pairs of the most similar space line segments as candidate matched line pairs, then back-projecting the candidate matched line pairs onto the multiple corresponding visible images, and computing on each image the error between the projection of the space three-dimensional segment and its corresponding original two-dimensional segment; if the error exceeds a certain threshold, usually 3 pixels, the match is removed, as detailed in Fig. 3, where C1-C6 represent the camera centers corresponding to the image sequence, l1-l6 represent the structure lines extracted on the sequence images, together with the reconstructed space 3D line.
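The line parameters of step 3.2 (midpoint, length, direction angle) can be computed from the two endpoints an LSD-style detector returns, and a candidate test assembled from them. This is a minimal sketch; the thresholds `win`, `dlen` and `dang` are illustrative assumptions, not values specified by the patent:

```python
import numpy as np

def line_params(p0, p1):
    """Midpoint, length and direction angle of a 2D segment (step 3.2)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    mid = (p0 + p1) / 2.0
    v = p1 - p0
    length = float(np.hypot(v[0], v[1]))
    angle = float(np.arctan2(v[1], v[0])) % np.pi  # direction, orientation-free
    return mid, length, angle

def lines_compatible(la, lb, win=20.0, dlen=10.0, dang=np.radians(5.0)):
    """Candidate-match test (steps 3.2.1-3.2.2): midpoints within the
    search window and similar length and angle. Thresholds illustrative."""
    (ma, La, aa), (mb, Lb, ab) = line_params(*la), line_params(*lb)
    ang_diff = min(abs(aa - ab), np.pi - abs(aa - ab))  # wrap-around angle difference
    return bool(np.linalg.norm(ma - mb) < win and abs(La - Lb) < dlen and ang_diff < dang)

# A nearly horizontal segment matches a slightly perturbed copy of itself:
ok = lines_compatible(((0, 0), (100, 0)), ((2, 1), (101, 2)))
```

Surviving candidates would then be verified by reconstructing the 3D line and checking segment overlap, as step 3.2.2 describes.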
Step 3.3: preliminarily optimizing the current image pose using the currently tracked point pairs and line pairs;
Step 3.3.1: the total energy equation is:
E = Ep(θ) + λEl(θ)
where Ep(θ) is the energy term of the feature points, El(θ) is the energy term of the structure lines, and λ is a weight balancing the scale difference between the two, typically set to 1;
Step 3.3.2: setting the energy term Ep(θ) of the feature points, computed as:
Ep(θ) = Σ_{i=1..K} Σ_{j=1..M} v_ij ‖f(T_j, P_i) − p_ij‖²
where K is the number of space 3D points, M is the number of images, v_ij indicates whether space point i is visible on current image j, P_i is the parameter of space 3D point i, p_ij is the corresponding original 2D matching point of space point i on current image j, T_j is the parameter of image j, and f(T_j, P_i) is the projected position of space point i on current image j;
Step 3.3.3: setting the energy term El(θ) of the structure lines, computed as:
El(θ) = Σ_{i=1..N} Σ_{j=1..M} v_ij d(Q(T_j, L_i), l_ij)²
where N is the number of space 3D lines, M is the number of images, v_ij indicates whether space line i is visible on current image j, L_i is the parameter of space 3D line i (e.g. endpoints, midpoint), l_ij is the parameter of the corresponding original 2D matching line of space line i on current image j (e.g. endpoints, midpoint), T_j is the parameter of image j, Q(T_j, L_i) is the projection of space line i on image j, and d(Q(T_j, L_i), l_ij)² is the squared distance error from the original 2D line l_ij to the projection Q(T_j, L_i);
Step 3.3.4: adjusting the initial image pose with the Ceres optimization library until the energy E is minimized, which yields the optimal image pose.
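The embodiment minimizes E = Ep(θ) + λEl(θ) with Ceres (a C++ library). To illustrate the same idea in a self-contained way, the toy below solves a translation-only 2D "pose" by linear least squares, combining point residuals with point-to-line residuals at λ = 1 as in step 3.3.1. All data, names and the translation-only restriction are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def solve_translation(pts_src, pts_obs, lines_n, lines_c, lines_src, lam=1.0):
    """Least-squares 2D translation t minimizing
       E = sum_i ||(x_i + t) - p_i||^2 + lam * sum_j (n_j . (y_j + t) - c_j)^2,
    i.e. point reprojection terms plus line distance terms, mirroring
    E = Ep + lam*El of step 3.3 for a toy translation-only pose."""
    A, b = [], []
    for x, p in zip(pts_src, pts_obs):            # point terms: I * t = p - x
        A += [[1.0, 0.0], [0.0, 1.0]]
        b += [p[0] - x[0], p[1] - x[1]]
    s = np.sqrt(lam)
    for n, c, y in zip(lines_n, lines_c, lines_src):  # line terms: n . t = c - n . y
        A.append([s * n[0], s * n[1]])
        b.append(s * (c - np.dot(n, y)))
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t

pts = np.array([[0.0, 0.0], [1.0, 0.0]])
obs = pts + np.array([0.3, -0.2])                    # points observed after a shift
# one consistent line constraint: normal (0,1), offset -0.2, anchored at the origin
t = solve_translation(pts, obs, [np.array([0.0, 1.0])], [-0.2], [np.array([0.0, 0.0])])
```

A full pose (rotation plus translation, reprojection through the camera model) makes the problem nonlinear, which is why the embodiment uses an iterative solver such as Ceres rather than a single linear solve.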
Step 3.4: judging, from the optimized image pose and the accurate matched point pairs, whether the current image frame is selected as a key frame.
Step 3.4.1: reconstructing three-dimensional space points from the tracked point pairs and poses obtained in steps 3.1 and 3.3, and judging whether 90% or more of the newly built three-dimensional space points are repeated among the three-dimensional space points generated by previous key frames; if so, the frame is set as a non-key frame, a new image frame is read, the flow returns to step 2, and point tracking, line tracking and key-frame judgment are performed again; otherwise the frame enters step 3.4.2 as a candidate key frame;
Step 3.4.2: as shown in Fig. 2, C1-C6 represent the camera centers corresponding to the image sequence, P is the reconstructed space 3D point, and p1-p6 represent the two-dimensional matching points of the 3D point P on the original images; it is judged whether, for most (80%) of the three-dimensional space points corresponding to the current image frame, the angle between the current viewing ray C1P and the matched viewing ray C2P is greater than a certain threshold (the angle is usually taken as 1°); if the angle is greater than the given threshold, the frame is set as a key frame image;
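The parallax test of step 3.4.2 (the angle between the rays C1→P and C2→P for most tracked points exceeding about 1°) can be sketched as follows; the function names are illustrative, while the 80% fraction and 1° threshold come from the embodiment:

```python
import numpy as np

def parallax_deg(c1, c2, P):
    """Angle in degrees between viewing rays C1->P and C2->P (step 3.4.2)."""
    r1 = np.asarray(P, float) - np.asarray(c1, float)
    r2 = np.asarray(P, float) - np.asarray(c2, float)
    cosang = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def enough_parallax(angles, frac=0.8, thresh_deg=1.0):
    """Keyframe condition: at least 80% of the tracked points exceed
    the 1-degree parallax threshold of the embodiment."""
    angles = np.asarray(angles, float)
    return bool(np.mean(angles > thresh_deg) >= frac)

# A point 5 units ahead, cameras 1 unit apart, gives ample parallax:
a = parallax_deg([0, 0, 0], [1, 0, 0], [0.5, 0, 5.0])
```

Points with too little parallax triangulate poorly, which is why frames that add no baseline are rejected as key frames.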
Step 3.4.3: if the current frame is set as a key frame, it is sent to the image mapping stage for surrounding environment mapping and image pose readjustment and optimization;
Step 4: according to the tracking information of the characteristic point of acquisition and structure lines, carrying out surrounding ambient point and space line system
Figure and platform positioning and optimizing, space map and positioning schematic diagram after pose refinement are shown in Fig. 4, wherein point, line are respectively posture
Optimize later three-dimensional space point, line segment, rectangle indicates image key frame in the positional relationship in space in figure;
Step 4.1: calculating the image key frame group for having total view key frame with current key frame.Utilize on current key frame two
The space three-dimensional point that match point is rebuild is tieed up, calculating other key frames can be seen the number of corresponding three-dimensional points to be chosen, and drop
Sequence sequence current image key frame group;
Step 4.2: current key frame regards the key frame in key frame group together and carries out characteristic point and structure lines matching, and filters
Except the less three-dimensional point of visual image;
Step 4.2.1: compute the feature descriptor distances between the current key frame and its covisible key frames; the existing dictionary method is used to layer the feature points, and feature point matching is then carried out within the corresponding layer;
Step 4.2.2: compute the parameter distances between the structure lines of the current key frame and those of its covisible key frames, and carry out structure line matching in the same way as step 3.2;
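Since the patent describes a structure line by its midpoint, length and angular direction (step 3.2), the parameter distance used here can be sketched as a weighted sum over those parameters; the weights and the greedy nearest-parameter matching strategy are illustrative assumptions:

```python
import math

def line_param_distance(l1, l2, w_mid=1.0, w_len=0.5, w_ang=2.0):
    """l = (mid_x, mid_y, length, angle_rad). Weighted distance between the
    midpoint, length and direction of two 2D line segments (weights are
    assumptions, not values from the patent)."""
    dm = math.hypot(l1[0] - l2[0], l1[1] - l2[1])
    dl = abs(l1[2] - l2[2])
    da = abs(l1[3] - l2[3]) % math.pi   # a line's direction is pi-periodic
    da = min(da, math.pi - da)
    return w_mid * dm + w_len * dl + w_ang * da

def match_lines(cur_lines, cand_lines, max_dist=10.0):
    """Greedy matching of each current line to its nearest candidate."""
    matches = []
    for i, l in enumerate(cur_lines):
        dists = [line_param_distance(l, c) for c in cand_lines]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```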
Step 4.2.3: count the number of key frames in which each three-dimensional space point is visible. The three-dimensional point is projected onto each image; if the projection falls within the image range, the key frame is regarded as a candidate visible key frame, and a feature point with high descriptor similarity to the current three-dimensional point is then searched for on that key frame; if one is found, the key frame is regarded as an actual key frame. The ratio of the number of actual key frames to the number of candidate visible key frames is then computed, and if it is below a certain threshold (0.8 in this algorithm), the three-dimensional point is filtered out;
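The candidate-visibility test and the 0.8 culling ratio of step 4.2.3 can be sketched as follows, assuming a pinhole camera with intrinsics K and pose (R, t); the helper names are hypothetical:

```python
import numpy as np

def in_image(K, T, point, width, height):
    """A key frame is a candidate observer when the world point, transformed
    by pose T = (R, t) and projected with intrinsics K, lands in the image."""
    R, t = T
    pc = R @ point + t
    if pc[2] <= 0:                      # behind the camera
        return False
    u, v, w = K @ pc
    u, v = u / w, v / w
    return 0 <= u < width and 0 <= v < height

def cull_ratio(n_matched, candidate_frames, min_ratio=0.8):
    """Cull the point when the matched/candidate visibility ratio drops
    below min_ratio (0.8 in the patent)."""
    n_cand = len(candidate_frames)
    return n_cand == 0 or n_matched / n_cand < min_ratio
```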
Step 4.3: according to the feature match points and matched lines newly added in step 4.2, generate three-dimensional space points and the corresponding three-dimensional space lines;
Step 4.3.1: using the triangulation principle and the key frame poses, generate new three-dimensional space points, and compute the back-projection (reprojection) error to remove wrongly matched point pairs;
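Step 4.3.1 combines triangulation with reprojection-error filtering. The linear (DLT) two-view triangulation below is one standard way to realize the triangulation principle mentioned above; the patent does not specify the exact method, so this is a sketch under that assumption:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one 3D point from two 3x4 projection matrices
    and the corresponding pixel observations x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)         # solution = right null vector of A
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_error(P, X, x):
    """Pixel distance between the reprojected point and its observation;
    large values flag wrong matches to be removed."""
    p = P @ np.append(X, 1.0)
    return np.linalg.norm(p[:2] / p[2] - x)
```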
Step 4.3.2: using the key frame poses and the matched structure lines, generate three-dimensional space lines, and remove wrongly matched lines by the similarity of the space lines;
Step 4.4: using the generated three-dimensional space points and corresponding three-dimensional space lines, further optimize the current image pose; this step follows step 3.3;
Step 4.5: for an image key frame after optimization, if 90% of its corresponding three-dimensional space points are also observed by other covisible key frames, the key frame is filtered out.
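The 90% redundancy rule of step 4.5 reduces to a set computation; a minimal sketch, assuming map points are identified by ids:

```python
def is_redundant_keyframe(own_points, covis_points_sets, ratio=0.9):
    """A key frame is filtered out when >= ratio (90% in the patent) of its
    3D points are also observed by other covisible key frames."""
    if not own_points:
        return True
    seen_elsewhere = set().union(*covis_points_sets) if covis_points_sets else set()
    shared = sum(1 for p in own_points if p in seen_elsewhere)
    return shared / len(own_points) >= ratio
```

Culling such frames keeps the covisibility graph compact without losing map coverage, since almost everything the frame sees is observed elsewhere.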
Step 5: loop closure detection is carried out, i.e., it is judged whether the motion platform has moved into a region it visited before; if a previously visited region is found, the motion trajectory forms a closed loop, and the closed loop is used to remove the image drift error over a wide range; the final loop closure detection diagram is shown in Fig. 5, where the left side is the camera trajectory before loop closure detection and the right figure is the corresponding camera trajectory after detection.
Step 5.1: the image feature points are described with the DBoW2 dictionary, and candidate loop closure key frames are selected by the number of shared words;
Step 5.1.1: compute the dictionary vector similarity scores between the current frame and its covisible key frames, and take the smallest score S as the key frame screening threshold;
Step 5.1.2: in the key frame data set excluding the current covisible key frames, search for the key frame set K that has words in common with the current key frame, and compute the maximum number of shared words;
Step 5.1.3: from the common-word key frame set, select the key frames whose shared-word count exceeds 80% of the maximum, compute their scores with the current key frame, and take those whose score exceeds the minimum score S as the initial candidate key frame set I;
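Steps 5.1.2-5.1.3 select candidates by shared visual words and by the minimum covisible score S; a minimal sketch, assuming bag-of-words vectors are reduced to word-id sets with precomputed similarity scores (DBoW2 itself is a C++ library; this only mirrors the selection logic):

```python
def loop_candidates(cur_words, score_min, db_words, db_scores, frac=0.8):
    """cur_words: set of visual words in the current key frame.
    db_words: {kf: word set} over key frames NOT covisible with the current
    one; db_scores: {kf: BoW similarity score to the current frame}.
    Keeps key frames whose shared-word count exceeds frac (80%) of the
    maximum and whose score exceeds the minimum covisible score S."""
    common = {kf: len(cur_words & w) for kf, w in db_words.items()}
    common = {kf: n for kf, n in common.items() if n > 0}
    if not common:
        return []
    max_common = max(common.values())
    return [kf for kf, n in common.items()
            if n > frac * max_common and db_scores[kf] > score_min]
```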
Step 5.1.4: for each key frame Ki of the initial candidate key frame set I, compute the scores with the current key frame of those of its covisible key frames whose shared-word count exceeds 80% of the maximum; if a covisible key frame has a score greater than that of Ki, it is regarded as the best key frame; the scores of the covisible key frames are accumulated as the total score of the best key frame, and the maximum total score is recorded;
Step 5.1.5: to ensure the correctness of the loop closure key frame, this embodiment requires that the candidate key frame sets of three consecutive newly added key frames share common images among their covisible key frame sets. That is, the covisible key frame set of the candidate key frames of the current key frame is computed; when a new key frame is added, the covisible key frame set of its candidate key frames is computed and checked for common image key frames with the covisible sets of the previous candidates; then another new key frame is added and the same computation is performed. If the candidate key frames of three consecutive new key frames find connected covisible key frames, the initial candidate key frame is regarded as the final candidate key frame.
Step 5.2: the current key frame is matched against the candidate loop closure key frames, and the number of matched points is computed to confirm whether a candidate key frame is correct;
Step 5.2.1: first match the candidate key frames with the current key frame, where the matching strategy uses dictionary matching constrained by the descriptor distance values;
Step 5.2.2: according to the 3D space points corresponding to the match points, this embodiment establishes the correspondence between the three-dimensional space points, i.e., the pose correspondence between a candidate key frame and the current key frame, and computes the relative pose, including the rotation matrix, translation vector and scale parameter. RANSAC is applied repeatedly to select multiple groups of space points on the candidate key frame, the pose transformation is computed, and its correctness is verified by the number of inliers; if there are enough inliers, a suitable loop closure key frame is considered to have been found initially. The pose is then optimized using the point correspondences, more point correspondences are selected through the optimized pose, and the relative pose is optimized again with these correspondences;
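The relative pose with rotation, translation and scale in step 5.2.2 is the classical 3D-3D similarity alignment. Below is a closed-form (Umeyama/Horn-style) sketch of the estimation step that the patent wraps in RANSAC; the closed-form estimator is an assumption, since the patent only states that RANSAC point groups are used:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t such that
    dst_i ~ s * R @ src_i + t, from matched 3D points (one per row)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Within RANSAC, this estimator would be run on minimal point subsets and scored by counting inliers under a reprojection or 3D-distance threshold.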
Step 5.2.3: after the relative pose optimization, the space points corresponding to the loop closure key frame are reprojected onto the current image and matched, adding corresponding point pairs; finally it is judged whether the number of matched point pairs found exceeds a specified threshold, usually set to 30; if so, the candidate loop closure key frame is confirmed as a true loop closure key frame.
Step 5.3: using the correct candidate key frame obtained, the image poses are readjusted to remove the accumulated drift error;
Step 5.3.1: first acquire the loop closure key frame and the local loop three-dimensional map points visible from its covisible key frames; according to the acquired relative pose between the current key frame and the loop closure key frame, modify the image poses of the current image frame and its covisible key frames;
Step 5.3.2: project the local loop three-dimensional map points onto the current image frame and the covisible key frame images, and merge the point clouds to find more point cloud correspondences;
Step 5.3.3: for the current key frame and its covisible key frames, according to the number of loop map points they observe, update the visible key frame conditions of these key frames, establish new loop visibility connections, and simultaneously remove their original visible key frame connections;
Step 5.3.4: since visibility connections have been established between the current key frame together with its visible key frame image set and the original image key frames in the map, all key frames in the map now carry newly added loop connections in addition to their previous visibility connections. To speed up the loop merging, the poses of the key frame data set in the map are first optimized with the G2O optimization library according to the relative poses between these visible key frames, and the accumulated error is distributed over the key frames; the resulting optimal image pose adjustment preliminarily realizes the loop closure function;
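Step 5.3.4 distributes the accumulated drift over the key frames via g2o pose-graph optimization. As a toy stand-in for that optimization, the translation error revealed at loop closure can be spread linearly along the trajectory; this linear spreading is an illustration of the idea only, not the actual graph optimization:

```python
import numpy as np

def distribute_drift(positions, loop_correction):
    """Toy stand-in for pose-graph relaxation: each key frame along the
    trajectory absorbs a linearly growing share of the translation error
    revealed at loop closure (first frame fixed, last fully corrected)."""
    n = len(positions)
    return [np.asarray(p) + (i / (n - 1)) * np.asarray(loop_correction)
            for i, p in enumerate(positions)]
```

A real pose-graph solver minimizes relative-pose residuals over all loop and covisibility edges jointly, but the net effect on a single loop is similar: drift is amortized over the whole chain instead of being dumped onto the last frame.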
Step 5.3.5: after the image poses complete the loop closure, the new image map features are first reset according to the correspondence between the map points and the images; then, according to the new image poses and the surrounding environment map features, a new thread is started to globally optimize the global image poses and the map;
Step 5.3.6: after the new image poses and map are optimized, the original space features and poses are modified accordingly, and the subsequently added new features are modified as well; in this way, the detection and optimization of the overall image loop closure are completed;
Step 6: output the results.
The specific embodiments described herein are merely examples of the spirit of the invention. Those skilled in the art to which the invention belongs can make various modifications or additions to the described embodiments or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.
Claims (7)
1. An indoor SLAM method based on the combination of indoor feature points and structure lines, characterized by comprising the following steps:
Step 1, camera intrinsic calibration is carried out, the camera intrinsics including the principal point, focal length and distortion coefficients;
Step 2, for the video frame image data obtained by the camera on the mobile robot platform, the feature points and structure lines on each image frame are extracted;
Step 3, according to the acquired feature points and structure lines, feature point and structure line tracking is carried out, and key frame extraction is performed, specifically comprising the following sub-steps;
Step 3.1, according to the feature points obtained in step 2, feature point tracking is carried out using the feature descriptor distance to obtain good tracking feature point pairs, implemented as follows,
Step 3.1.1, using the feature points on the current image, judge whether a candidate matching frame exists; if a candidate matching frame is selected, open an imaging window of appropriate size on it and take the feature points therein as candidate match points; otherwise take the frame preceding the current frame as the candidate matching frame and select the corresponding candidate match points;
Step 3.1.2, compute the descriptor distances between the feature points on the current image and the candidate match points, sort the feature points by descriptor distance in ascending order, and take the first 60 percent as optimal match points;
Step 3.1.3, use RANSAC to filter out erroneous points from the obtained optimal match points;
Step 3.1.4, project the optimal match points using the pose of the current frame image and the pose of the candidate matching frame, compute the back-projection error, and filter out matched point pairs with large errors;
Step 3.2, according to the structure lines obtained in step 2, structure line tracking is carried out using the parameters of the structure lines to obtain good tracking structure line pairs, where the parameters of a structure line include its midpoint, length and angular direction;
Step 3.3, using the current tracking feature point pairs and tracking structure line pairs, the current image pose is preliminarily optimized;
Step 3.4, according to the tracking feature point pairs in step 3.1 and the image pose optimized in step 3.3, judge whether the current image frame is chosen as a key frame; if not, re-read a new image frame and return to step 2; if it is a key frame, send it to step 4 for processing;
Step 4, according to the tracking information of the acquired feature points and structure lines, carry out surrounding environment point and space line mapping as well as platform positioning optimization;
Step 5, judge whether the platform motion trajectory forms a closed loop, obtain the correct loop closure key frame, and carry out global optimization of the global image poses and map;
Step 6, output the results.
2. The indoor SLAM method based on the combination of indoor feature points and structure lines according to claim 1, characterized in that: step 2 is implemented as follows,
Step 2.1, build a pyramid from the acquired video frame image, partition each pyramid image into blocks to obtain image regions of a certain window size, and carry out feature point extraction and description within each block using the ORB feature point extraction algorithm;
Step 2.2, apply Gaussian filtering to the video frame, and extract structure lines using the LSD operator;
Step 2.3, partition the image after feature point and structure line extraction into blocks, assigning the feature points and structure lines to the different image regions.
3. The indoor SLAM method based on the combination of indoor feature points and structure lines according to claim 2, characterized in that: step 3.2 is implemented as follows,
Step 3.2.1, using the centre of each structure line on the current image, open an imaging window of appropriate size on the key frame to be matched, and take the structure lines whose centres fall within the window as candidate matching lines;
Step 3.2.2, compute the parameter differences between the structure lines on the current image and the candidate matching lines, reconstruct the three-dimensional space lines using the pose of the current frame and the pose of the key frame to be matched, and carry out structure line matching by examining the overlap of the corresponding three-dimensional space line segments;
Step 3.2.3, using the lines of the same name on multiple images, compare the angle and position similarity of the 3D space line segments, and select the matching image line pairs of the most similar space line segments as candidate image matched line pairs; then back-project the candidate image matched line pairs onto the multiple corresponding visible images, compute on each image the error between the projected line of the space three-dimensional line segment and the corresponding original two-dimensional line segment, and remove matched lines whose error exceeds a certain threshold.
4. The indoor SLAM method based on the combination of indoor feature points and structure lines according to claim 3, characterized in that: step 3.3 is implemented as follows,
Step 3.3.1, the total energy equation is set as
E = Ep(θ) + λEl(θ)
where Ep(θ) is the energy term of the feature points, El(θ) is the energy term of the structure lines, and λ is the weighting coefficient that balances the scales of the two terms;
Step 3.3.2, the energy term of the feature points Ep(θ) is set, computed as
Ep(θ) = Σ_{i=1..K} Σ_{j=1..M} v_ij · ‖f(T_j, P_i) − p_ij‖²
where K is the number of 3D space points, M is the number of images, v_ij indicates whether space point i is visible on image j, P_i is the parameter of 3D space point i, p_ij is the original 2D match point of space point i on image j, T_j is the parameter of image j, and f(T_j, P_i) is the projected position of space point i on image j;
Step 3.3.3, the energy term of the structure lines El(θ) is set, computed as
El(θ) = Σ_{i=1..N} Σ_{j=1..M} v_ij · d(Q(T_j, L_i), l_ij)²
where N is the number of 3D space lines, M is the number of images, v_ij indicates whether space line i is visible on image j, L_i is the parameter of 3D space line i, l_ij is the parameter of the original 2D matched line of space line i on image j, T_j is the parameter of image j, Q(T_j, L_i) is the projected line of space line i on image j, and d(Q(T_j, L_i), l_ij)² is the squared distance error from the original 2D line l_ij to the projected line Q(T_j, L_i);
Step 3.3.4, the initial image poses are adjusted using the Ceres optimization library until the energy term E is minimized, yielding the optimal image poses.
5. The indoor SLAM method based on the combination of indoor feature points and structure lines according to claim 4, characterized in that: step 4 is implemented as follows,
Step 4.1, calculate the group of image key frames sharing covisible key frames with the current key frame: using the three-dimensional space points reconstructed from the two-dimensional match points on the current key frame, count how many of the chosen three-dimensional points each other key frame can observe, and sort the current image key frame group in descending order of that count;
Step 4.2, match the current key frame against the key frames in the covisibility group for feature points and structure lines, and filter out three-dimensional points visible in too few images;
Step 4.3, according to the feature match points and matched lines newly added in step 4.2, generate three-dimensional space points and the corresponding three-dimensional space lines;
Step 4.4, using the generated three-dimensional space points and corresponding three-dimensional space lines, further optimize the current image pose;
Step 4.5, for an image key frame after optimization, if 90% of its corresponding three-dimensional space points are also observed by other covisible key frames, filter out that key frame.
6. The indoor SLAM method based on the combination of indoor feature points and structure lines according to claim 5, characterized in that: step 5 is implemented as follows,
Step 5.1, describe the image feature points with the DBoW2 dictionary, and select candidate loop closure key frames by the number of shared words;
Step 5.2, match the current key frame against the candidate loop closure key frames and count the matched points to confirm whether a candidate loop closure key frame is correct;
Step 5.3, using the correct candidate key frame obtained, readjust the image poses to remove the accumulated drift error.
7. The indoor SLAM method based on the combination of indoor feature points and structure lines according to claim 1, characterized in that: the camera intrinsic calibration in step 1 is implemented by acquiring, with the camera, checkerboard image data of fixed size from multiple different viewing angles, and then computing the camera intrinsics from the acquired checkerboard image data by Zhang Zhengyou's camera calibration method to obtain the camera calibration result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710552072.0A CN107392964B (en) | 2017-07-07 | 2017-07-07 | The indoor SLAM method combined based on indoor characteristic point and structure lines |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107392964A CN107392964A (en) | 2017-11-24 |
CN107392964B true CN107392964B (en) | 2019-09-17 |
Family
ID=60335376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710552072.0A Active CN107392964B (en) | 2017-07-07 | 2017-07-07 | The indoor SLAM method combined based on indoor characteristic point and structure lines |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107392964B (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090958B (en) * | 2017-12-06 | 2021-08-27 | 上海阅面网络科技有限公司 | Robot synchronous positioning and map building method and system |
CN108108716A (en) * | 2017-12-29 | 2018-06-01 | 中国电子科技集团公司信息科学研究院 | A kind of winding detection method based on depth belief network |
CN108510516A (en) * | 2018-03-30 | 2018-09-07 | 深圳积木易搭科技技术有限公司 | A kind of the three-dimensional line segment extracting method and system of dispersion point cloud |
CN108692720B (en) | 2018-04-09 | 2021-01-22 | 京东方科技集团股份有限公司 | Positioning method, positioning server and positioning system |
CN108682027A (en) * | 2018-05-11 | 2018-10-19 | 北京华捷艾米科技有限公司 | VSLAM realization method and systems based on point, line Fusion Features |
CN108776478B (en) * | 2018-06-05 | 2021-05-07 | 北京智行者科技有限公司 | Planning method of operation path |
CN109166149B (en) * | 2018-08-13 | 2021-04-02 | 武汉大学 | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU |
CN109556596A (en) * | 2018-10-19 | 2019-04-02 | 北京极智嘉科技有限公司 | Air navigation aid, device, equipment and storage medium based on ground texture image |
CN109544632B (en) * | 2018-11-05 | 2021-08-03 | 浙江工业大学 | Semantic SLAM object association method based on hierarchical topic model |
CN109739227B (en) * | 2018-12-27 | 2021-11-05 | 驭势(上海)汽车科技有限公司 | System and method for constructing driving track |
CN109978919B (en) * | 2019-03-22 | 2021-06-04 | 广州小鹏汽车科技有限公司 | Monocular camera-based vehicle positioning method and system |
CN110335319B (en) * | 2019-06-26 | 2022-03-18 | 华中科技大学 | Semantic-driven camera positioning and map reconstruction method and system |
CN112148815B (en) * | 2019-06-27 | 2022-09-27 | 浙江商汤科技开发有限公司 | Positioning method and device based on shared map, electronic equipment and storage medium |
CN112149471B (en) * | 2019-06-28 | 2024-04-16 | 北京初速度科技有限公司 | Loop detection method and device based on semantic point cloud |
CN110489501B (en) * | 2019-07-24 | 2021-10-22 | 西北工业大学 | SLAM system quick relocation algorithm based on line characteristics |
CN110490085B (en) * | 2019-07-24 | 2022-03-11 | 西北工业大学 | Quick pose estimation algorithm of dotted line feature vision SLAM system |
CN110458897B (en) * | 2019-08-13 | 2020-12-01 | 北京积加科技有限公司 | Multi-camera automatic calibration method and system and monitoring method and system |
CN111311588B (en) * | 2020-02-28 | 2024-01-05 | 浙江商汤科技开发有限公司 | Repositioning method and device, electronic equipment and storage medium |
CN111539982B (en) * | 2020-04-17 | 2023-09-15 | 北京维盛泰科科技有限公司 | Visual inertial navigation initialization method based on nonlinear optimization in mobile platform |
CN111858996B (en) | 2020-06-10 | 2023-06-23 | 北京百度网讯科技有限公司 | Indoor positioning method and device, electronic equipment and storage medium |
CN111928861B (en) * | 2020-08-07 | 2022-08-09 | 杭州海康威视数字技术股份有限公司 | Map construction method and device |
CN112200850B (en) * | 2020-10-16 | 2022-10-04 | 河南大学 | ORB extraction method based on mature characteristic points |
CN112418288B (en) * | 2020-11-17 | 2023-02-03 | 武汉大学 | GMS and motion detection-based dynamic vision SLAM method |
CN112665575B (en) * | 2020-11-27 | 2023-12-29 | 重庆大学 | SLAM loop detection method based on mobile robot |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105856230A (en) * | 2016-05-06 | 2016-08-17 | 简燕梅 | ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot |
CN105957005A (en) * | 2016-04-27 | 2016-09-21 | 湖南桥康智能科技有限公司 | Method for bridge image splicing based on feature points and structure lines |
CN106909877A (en) * | 2016-12-13 | 2017-06-30 | 浙江大学 | A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101750340B1 (en) * | 2010-11-03 | 2017-06-26 | 엘지전자 주식회사 | Robot cleaner and controlling method of the same |
CN103984037B (en) * | 2014-04-30 | 2017-07-28 | 深圳市墨克瑞光电子研究院 | The mobile robot obstacle detection method and device of view-based access control model |
2017-07-07: CN CN201710552072.0A patent CN107392964B/en (status: Active)
Non-Patent Citations (1)
Title |
---|
Raul Mur-Artal et al., "ORB-SLAM: A Versatile and Accurate Monocular SLAM System", IEEE Transactions on Robotics, vol. 31, no. 5, 2015, pp. 1147-1163 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107392964B (en) | The indoor SLAM method combined based on indoor characteristic point and structure lines | |
Yu et al. | DS-SLAM: A semantic visual SLAM towards dynamic environments | |
CN111080627B (en) | 2D +3D large airplane appearance defect detection and analysis method based on deep learning | |
CN104484648B (en) | Robot variable visual angle obstacle detection method based on outline identification | |
CN109558879A (en) | A kind of vision SLAM method and apparatus based on dotted line feature | |
CN108648240A (en) | Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration | |
CN108805906A (en) | A kind of moving obstacle detection and localization method based on depth map | |
CN109740665A (en) | Shielded image ship object detection method and system based on expertise constraint | |
CN109712172A (en) | A kind of pose measuring method of initial pose measurement combining target tracking | |
CN110378997A (en) | A kind of dynamic scene based on ORB-SLAM2 builds figure and localization method | |
CN109919141A (en) | A kind of recognition methods again of the pedestrian based on skeleton pose | |
CN112560741A (en) | Safety wearing detection method based on human body key points | |
CN110555408B (en) | Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation | |
CN108628306B (en) | Robot walking obstacle detection method and device, computer equipment and storage medium | |
CN110008913A (en) | The pedestrian's recognition methods again merged based on Attitude estimation with viewpoint mechanism | |
CN113077519B (en) | Multi-phase external parameter automatic calibration method based on human skeleton extraction | |
CN111998862B (en) | BNN-based dense binocular SLAM method | |
CN107677274A (en) | Unmanned plane independent landing navigation information real-time resolving method based on binocular vision | |
CN109087323A (en) | A kind of image three-dimensional vehicle Attitude estimation method based on fine CAD model | |
CN109766758A (en) | A kind of vision SLAM method based on ORB feature | |
CN112379773B (en) | Multi-person three-dimensional motion capturing method, storage medium and electronic equipment | |
CN114529605A (en) | Human body three-dimensional attitude estimation method based on multi-view fusion | |
CN113393439A (en) | Forging defect detection method based on deep learning | |
CN110910349A (en) | Wind turbine state acquisition method based on aerial photography vision | |
Zhang et al. | Dense 3d mapping for indoor environment based on feature-point slam method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||