CN112507778A - Loop detection method of improved bag-of-words model based on line characteristics - Google Patents

Loop detection method of improved bag-of-words model based on line characteristics

Info

Publication number
CN112507778A
CN112507778A (application CN202011111454.8A)
Authority
CN
China
Prior art keywords
loop
visual
bag
words
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011111454.8A
Other languages
Chinese (zh)
Other versions
CN112507778B (en
Inventor
孟庆浩
史佳豪
戴旭阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202011111454.8A priority Critical patent/CN112507778B/en
Publication of CN112507778A publication Critical patent/CN112507778A/en
Application granted granted Critical
Publication of CN112507778B publication Critical patent/CN112507778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention relates to a loop detection method for an improved bag-of-words model based on line features, comprising the following steps. LSD (Line Segment Detector) features are extracted from an offline image data set and the corresponding LBD (Line Band Descriptor) descriptors are computed; these descriptors serve as the raw data for clustering into a dictionary. An LSD-feature bag-of-words model is built with an improved construction method, producing a visual dictionary tree with adaptive branches. Images are then converted into bag-of-words model vectors, and the visual word weights are optimized. Similarity calculation: using the visual bag-of-words vectors of the current frame and the historical key frames, similarity is computed with the L1 norm to obtain an appearance similarity score between images. Loop candidate frames are acquired and grouped, and isolated candidates with similar appearance are rejected. Continuity verification retains only loops that are detected continuously; these are regarded as reliable loop candidates. Finally, geometric consistency is verified.

Description

Loop detection method of improved bag-of-words model based on line characteristics
Technical Field
The invention relates to the field of visual SLAM (Simultaneous Localization and Mapping), in particular to a visual SLAM loop detection method based on line features with an improved bag-of-words model.
Background
Loop detection is an indispensable part of visual SLAM: it eliminates the accumulated error produced by the visual odometry front end, allowing a globally consistent map to be built. Loop detection based on the bag-of-words model is the current mainstream approach; it builds a bag-of-words model and judges whether a loop exists by comparing similarity between images. The bag-of-words model originated in text analysis, where the similarity of texts is judged by comparing the frequency of words occurring in them. Correspondingly, the visual bag-of-words model measures the similarity between two images by comparing the frequency with which "visual words" appear in them.
Cummins et al. (Cummins M, Newman P. FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance [M]. Sage Publications, Inc., 2008.) proposed in 2008 a bag-of-words model based on SURF (Speeded-Up Robust Features) and a Chow-Liu tree, achieving good appearance-based camera place recognition. However, their bag-of-words vector is binary: it records only whether a visual word appears in the image, not how often different words appear.
In 2011, Galvez-Lopez et al. (Galvez-Lopez D, Tardos J D. Real-time loop detection with bags of binary words [C]. International Conference on Intelligent Robots and Systems, 2011: 25-30.) adopted FAST (Features from Accelerated Segment Test) key points and BRIEF (Binary Robust Independent Elementary Features) binary descriptors for point feature extraction and description, and introduced the k-d tree data structure for dictionary construction. A binary-descriptor visual bag-of-words model based on point features was built with hierarchical K-means clustering. The k-d tree dictionary structure, however, forces every K-means clustering during dictionary construction to use the same parameter k, and clustering all data with one fixed k does not yield the best clustering result for every data set.
Subsequently, the ORB-SLAM proposed in 2015 by Mur-Artal et al. (Mur-Artal R, Montiel J M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system [J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.) constructed a visual bag-of-words model based on ORB (Oriented FAST and Rotated BRIEF) point features. ORB features give FAST key points rotation and scale invariance and performed well in experiments. However, the visual dictionary still uses K-means clustering and the k-d tree dictionary structure; the bag-of-words construction process itself was not improved.
The detection performance of a point-feature bag-of-words model depends on the number of point features extracted from the environment. When not enough point features can be extracted, or when the extracted points cluster together, the bag-of-words vector of a video frame cannot reliably reflect the appearance similarity between frames.
In a structured low-texture environment, although enough point features often cannot be extracted, abundant line features are available in such scenes.
Lee et al. (Lee J H, Zhang G, Lim J, et al. Place recognition using straight lines for vision-based SLAM [C]. 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013, pp. 3799-3806.) proposed a bag-of-words model based on the MSLD (Mean-Standard deviation Line Descriptor) line feature descriptor and obtained good experimental results. However, the MSLD descriptor lacks scale invariance and is computationally expensive, which hinders real-time operation.
Linrimong et al. (Linrimong, Wangmei. Binocular vision SLAM algorithm with improved point-line features [J]. Computer Measurement and Control, 2019(9): 156-.) also built a loop detection scheme on line features.
Patent 201811250049.7 (a tightly-coupled binocular visual-inertial SLAM method with point-line feature fusion) constructs a point-feature bag-of-words model and a line-feature bag-of-words model separately, computes point-feature and line-feature similarity scores between two frames, and takes their weighted sum as the final similarity score. Both methods use line features to build a bag-of-words model, but still rely on K-means clustering and the k-d tree dictionary structure, with no essential difference from the construction processes above, so a better visual word clustering result cannot be obtained. Moreover, the word weights in the bag-of-words model are computed with TF-IDF (Term Frequency-Inverse Document Frequency), which considers the frequency of a visual word in the current image and its importance on the training data set, but not its importance on the loop detection query data set.
In summary, line features are local features that can replace point features in a structured environment. A real-time visual SLAM loop detection algorithm built on a line-feature bag-of-words model can effectively solve the problem that point-feature-based loop detection fails to detect loops in a structured low-texture environment. The proposed line-feature-based algorithm improves both the bag-of-words model construction process and the visual word weighting.
Disclosure of Invention
The invention provides a loop detection method of an improved bag-of-words model based on line features, aimed at the difficulty of extracting sufficient point features for visual SLAM loop detection in a structured low-texture environment. The algorithm uses the abundant line features of a structured environment as local visual features to realize vision-based loop detection, and improves the precision and recall of loop detection through an improved bag-of-words construction method and an improved visual word weight calculation method. The technical scheme is as follows:
a loop detection method of an improved bag-of-words model based on line features comprises the following steps:
Step 1: extracting LSD (Line Segment Detector) features from an offline image data set, calculating the corresponding LBD (Line Band Descriptor) descriptors, and taking the descriptors as raw data for clustering into a dictionary.
Step 2: constructing an LSD-feature bag-of-words model with the improved construction method: before each clustering while building the dictionary tree, the optimal clustering value k' for the current data is determined, and the current data is then clustered into k' classes. This process repeats until a visual dictionary tree with adaptive branches is finally constructed.
Step 3: bag-of-words model vector conversion: LSD-LBD line features are extracted from the image and, using the constructed LBD-descriptor bag-of-words model and the Hamming distance between line feature descriptors and visual words, each line feature in the image is quantized into its corresponding visual word, thereby converting the whole image into a numerical vector.
Step 4: visual word weight optimization: a weight optimization parameter φ_i is introduced into loop detection; the visual word weights in the bag-of-words model vector are optimized according to the distribution of visual words on the historical key frame data set, the weight optimization parameter of each visual word is calculated, and it is combined with the word weight calculated by the TF-IDF method to obtain the weight-optimized visual bag-of-words vector.
Step 5: similarity calculation: using the visual bag-of-words vectors of the current frame and the historical key frames, similarity is computed with the L1 norm to obtain an appearance similarity score between images.
Step 6: loop candidate frames are acquired and grouped: historical key frames meeting the similarity threshold are set as loop candidate frames and grouped, with candidates close in time placed in one group; isolated candidates with similar appearance are then rejected according to the whole group's similarity score and a given threshold.
Step 7: continuity verification: at this stage, it is checked whether a loop can be detected continuously over a period of time among the loop candidate frames. Only a loop that is detected continuously is regarded as a reliable loop candidate and retained.
Step 8: geometric consistency verification: to ensure the accuracy of the loop, the visual word distribution of the current frame and the loop candidate frame is verified; the two frames are considered to form a loop only if the line feature distributions corresponding to the visual words agree.
Currently, visual SLAM mainly uses point features as visual features; compared with point-feature loop detection, this method uses line features, which are more abundant in structured environments, as the local visual features. The key points of the invention are: 1) a visual dictionary tree with an adaptive branch number is built from line features, improving the discrimination of visual words and reducing the quantization error of converting local features into visual words; 2) according to the distribution of visual words in the loop detection query data set, a weight optimization parameter is computed for each word and used to optimize the visual bag-of-words vector, so that the similarity computed between bag-of-words vectors is more discriminative. Compared with an unoptimized visual bag-of-words model, the method obtains a higher recall at 100% precision, showing that it detects loops more accurately and effectively.
Drawings
FIG. 1 is a flow chart of improved bag-of-words model construction
FIG. 2 is a diagram of the LSD line feature extraction result in a low-texture structured environment
FIG. 3 is a diagram of the ORB point feature extraction result in a low-texture structured environment
FIG. 4 is a schematic diagram of LBD descriptor construction of visual dictionary in accordance with an embodiment of the present invention
Detailed Description
The invention is further illustrated below with reference to specific examples. It should be noted that the described embodiments are only intended to facilitate the understanding of the invention and do not have any limiting effect thereon.
Step 1: for a large amount of image data acquired offline, line segment features are extracted by using an LSD algorithm, and descriptors of the line features are calculated by using an LBD algorithm. The LSD algorithm can quickly realize the detection of line characteristics, the LBD descriptor can generate a binary descriptor, and can realize quick matching, and the real-time requirement of loopback detection can be met by adopting the line characteristic extraction and description method.
Step 2: and constructing a line feature visual bag-of-words model with the adaptive branch number. The traditional bag-of-words construction method adopts a data structure of a K-means clustering algorithm and a K-d tree, so that the defect that the K-means algorithm artificially specifies a K value is kept. Before each clustering of the dictionary tree is constructed, a link for determining the optimal k value of the current data clustering is added, and the optimal k value of the clustering is determined by introducing a clustering evaluation index contour coefficient. Firstly, calculating the contour coefficient of the current data under different k values (k is 5-15), selecting the most reasonable k value under the current data according to the contour coefficient, namely the k value k 'corresponding to the maximum contour coefficient, and then taking k' as the clustering k value of the current node of the finally constructed visual dictionary tree. The above steps are repeated in a circulating mode until the 5 th layer of the bag-of-words model is built.
The silhouette coefficient, a clustering evaluation index, combines two factors: intra-class cohesion a(i) and inter-class separation b(i). Assuming the current data has been grouped into k classes, the silhouette coefficient S is obtained as follows.
First, the silhouette coefficient of each element after clustering is calculated:
s(i)=(b(i)-a(i))/(max{a(i),b(i)}) (1)
where the intra-class cohesion a(i) is the average distance from the current element m_i to the other elements m_j of its class, and the inter-class separation b(i) is the minimum, over the other clusters, of the average distance from m_i to that cluster. Thus s(i) ∈ [-1, 1]: s(i) close to 1 indicates that sample element m_i is clustered reasonably; s(i) close to -1 indicates that m_i should be assigned to another cluster; s(i) close to 0 indicates that m_i lies on the boundary between two clusters.
After the silhouette coefficient of each element is calculated, the average over all elements is used as the silhouette coefficient of the current clustering result, i.e. S = (1/k) Σ_{0<i≤k} s(i).
And finally, according to the contour coefficient S calculated under different k values, selecting the k value corresponding to the maximum contour coefficient as the final clustering branch number of the current node in the visual dictionary tree. By analogy, the optimal clustering branch number of all intermediate nodes in the visual dictionary building process is calculated, so that a relatively good clustering effect is obtained, and visual words are more distinctive.
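The adaptive branch selection described above can be sketched in plain Python. This is an illustrative sketch, not the patent's implementation: `kmeans`, `silhouette`, and `best_k` are hypothetical helper names, the K-means here is a simple Lloyd's iteration, and the k range is kept small for the example (the patent scores k = 5-15).

```python
import math
import random

def dist(p, q):
    # Euclidean distance between two points given as tuples.
    return math.dist(p, q)

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means; returns one cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist(p, centers[c]))
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels

def silhouette(points, labels):
    """Mean of s(i) = (b(i) - a(i)) / max(a(i), b(i)) over all elements."""
    clusters = set(labels)
    if len(clusters) < 2:
        return -1.0  # degenerate clustering: worst possible score
    scores = []
    for i, p in enumerate(points):
        same = [q for q, l in zip(points, labels) if l == labels[i] and q is not p]
        if not same:
            continue  # singleton clusters contribute no score
        a = sum(dist(p, q) for q in same) / len(same)          # cohesion a(i)
        b = min(                                               # separation b(i)
            sum(dist(p, q) for q in grp) / len(grp)
            for c in clusters if c != labels[i]
            for grp in [[q for q, l in zip(points, labels) if l == c]]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def best_k(points, k_min=2, k_max=5):
    """Pick the branch count k' that maximises the silhouette coefficient."""
    return max(range(k_min, k_max + 1),
               key=lambda k: silhouette(points, kmeans(points, k)))
```

In the dictionary tree, `best_k` would be called once per node before clustering that node's descriptors, yielding a different branch count at each node.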
Step 3: bag-of-words model vector conversion. Using the constructed LBD-descriptor bag-of-words model and the Hamming distance between each line feature descriptor and the visual words, every line feature in the image is quantized into its corresponding visual word, converting the whole image into a numerical vector:
v_a = {(w_1, η_1), (w_2, η_2), ..., (w_N, η_N)} (2)
where w_i denotes the ith visual word and η_i its corresponding weight, computed with the TF-IDF method, i.e. η_i = TF_i * IDF_i. In practice, each image contains only a small fraction of the visual words in the dictionary, so most η_i in the numerical vector are 0, i.e. v_a is a sparse vector.
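The quantization and sparse-vector conversion just described can be sketched as follows. This is an illustrative sketch only: it assumes a flat list of binary visual words rather than the patent's dictionary tree, and `hamming`, `quantize`, and `bow_vector` are hypothetical helper names.

```python
from collections import Counter

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def quantize(descriptor: int, words: list[int]) -> int:
    """Map one LBD descriptor to the index of the nearest visual word."""
    return min(range(len(words)), key=lambda i: hamming(descriptor, words[i]))

def bow_vector(descriptors: list[int], words: list[int],
               idf: list[float]) -> dict[int, float]:
    """Sparse TF-IDF bag-of-words vector {word index: weight} for one image."""
    counts = Counter(quantize(d, words) for d in descriptors)
    total = sum(counts.values())
    # eta_i = TF_i * IDF_i; absent words are simply omitted (weight 0).
    return {i: (n / total) * idf[i] for i, n in counts.items()}
```

Representing the vector as a dictionary keeps only the nonzero entries, matching the observation that v_a is sparse.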
Step 4: visual word weight optimization. The TF-IDF weight considers the frequency of a visual word in the current image and its importance on the training data set, but not its importance on the loop detection historical key frame data set. TF-IDF assumes that the fewer texts a word occurs in, the greater its ability to distinguish different classes of text. By the same logic, the fewer historical key frames a visual word occurs in, the greater its ability to distinguish the images in the historical key frame data set.
Therefore, a repetition factor φ_i is introduced into the weight calculation. For each visual word, the number I_i of historical key frames in which it appears is counted, and the parameter φ_i is made to decrease as I_i increases, thereby reducing the weight of visual words that recur during the query process. The specific steps are as follows:
1) During loop detection, while building the mutual index between visual words and key frames, the number of key frames in which each visual word appears is counted, and the repetition factor φ_i of the visual word is calculated from that count, where n is the number of key frames in the historical key frame data set and I_i is the number of key frames in which visual word w_i appears.
2) Combining the repetition factor φ_i with TF-IDF yields the new weight of visual word w_i, η'_i = φ_i * TF_i * IDF_i, from which a new bag-of-words model vector v'_a is generated:
v'_a = {(w_1, η'_1), (w_2, η'_2), ..., (w_N, η'_N)} (3)
where η'_i is the optimized weight of visual word i, φ_i the weight optimization parameter of word i, TF_i the term frequency of word i in the current image, and IDF_i the inverse document frequency of word i on the training data set.
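The weight optimization can be sketched as below. Note the hedging: the patent defines the repetition factor only in a formula image not reproduced in this text, so the IDF-like form `log((1 + n) / (1 + I_i))` used here is purely an illustrative assumption that satisfies the stated requirement (φ_i decreases as I_i grows); combining φ_i with TF-IDF by multiplication is likewise an assumption.

```python
import math

def repetition_factor(n_keyframes: int, n_containing: int) -> float:
    # ASSUMED IDF-like form: smaller when the word appears in more key frames.
    # The patent's exact formula is given only as an image and may differ.
    return math.log((1 + n_keyframes) / (1 + n_containing))

def optimized_weight(tf: float, idf: float,
                     n_keyframes: int, n_containing: int) -> float:
    # eta'_i = phi_i * TF_i * IDF_i (product combination is an assumption).
    return repetition_factor(n_keyframes, n_containing) * tf * idf
```

Under this sketch, a word occurring in every historical key frame receives a factor of 0 and thus no weight, while a rare word keeps most of its TF-IDF weight.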
Step 5: image similarity calculation. The new bag-of-words model vectors computed for the current image and the historical key frames are used to calculate image similarity. For the bag-of-words vectors v_1, v_2 of any two images, similarity is evaluated with the L1 norm:
s(v_1, v_2) = 1 - (1/2) * || v_1/|v_1| - v_2/|v_2| ||_1 (4)
The similarity score lies between 0 and 1: when the two images are completely unrelated the score is 0; when they are identical the score is 1.
Step 6: loop candidate frames are obtained and grouped. Among the historical key frames, any key frame whose similarity to the current key frame satisfies a threshold α is set as a loop candidate frame. After all loop candidate frames are obtained, they are grouped: candidates close in time are put into one group and a group similarity score is computed. For each candidate group, let I_1, I_2, I_3, ..., I_n denote the key frames in it and s_1, s_2, s_3, ..., s_n their similarities to the current key frame; the group similarity score is the sum of these similarities:
S_group = Σ_{k=1}^{n} s_k = Σ_{k=1}^{n} s(v_c, v_k) (5)
where v_k is the bag-of-words model vector of the kth key frame in the group and v_c that of the current key frame.
With the loop candidate frames grouped and the corresponding group similarity scores obtained, candidate key frames whose group score falls below a given threshold β are rejected. Since the key frames temporally adjacent to a correct loop key frame also have high similarity to the current key frame and likewise belong to the loop candidates, this step excludes some incorrect isolated candidates.
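The grouping and group-score filtering can be sketched as follows. This is an illustrative sketch: `group_candidates`, `filter_by_group_score`, and the temporal `gap` parameter are hypothetical names, and the patent does not specify its exact grouping window.

```python
def group_candidates(frames, scores, gap=3):
    """Group loop candidate frame ids that are close in time (within `gap`)."""
    order = sorted(zip(frames, scores))      # (frame id, similarity) pairs
    groups, current = [], [order[0]]
    for item in order[1:]:
        if item[0] - current[-1][0] <= gap:  # temporally adjacent candidate
            current.append(item)
        else:                                # gap too large: start a new group
            groups.append(current)
            current = [item]
    groups.append(current)
    return groups

def filter_by_group_score(groups, beta):
    """Keep groups whose summed similarity (the group score) reaches beta."""
    return [g for g in groups if sum(s for _, s in g) >= beta]
```

A lone candidate with a moderate score is thus dropped, while a run of temporally adjacent candidates accumulates a group score above β and survives.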
Step 7: continuity verification. At this stage, a loop candidate is retained only when the loop is detected over several consecutive frames, in which case the loop is considered reliable.
Step 8: geometric consistency verification. Because the visual bag-of-words model ignores the spatial information of visual features, in this final stage geometric consistency between the loop candidate frame and the current key frame must be verified to ensure the accuracy of loop detection.
Line features matched between the current frame and the loop candidate frame are used to compute the line feature reprojection error, and the pose transformation between the two frames is solved by local BA (bundle adjustment) optimization. Whether this pose transformation is reasonable is judged by counting the line feature inliers under it, which decides whether the loop candidate passes geometric consistency verification.
A loop candidate frame is extracted when the image appearance similarity reaches the threshold α; after it passes the series of verification steps that ensure loop accuracy, a loop is judged to have occurred, and the global map is corrected and updated according to the detected loop.

Claims (1)

1. A loop detection method of an improved bag-of-words model based on line features comprises the following steps:
step 1: extracting LSD (Line Segment Detector) features from an offline image data set, calculating the corresponding LBD (Line Band Descriptor) descriptors, and taking the descriptors as raw data for clustering into a dictionary;
step 2: constructing an LSD-feature bag-of-words model with the improved construction method: before each clustering while building the dictionary tree, determining the optimal clustering value k' for the current data, and then clustering the current data into k' classes; these steps repeat until a visual dictionary tree with adaptive branches is finally constructed;
step 3: bag-of-words model vector conversion: extracting LSD-LBD line features from the image and, using the constructed LBD-descriptor bag-of-words model and the Hamming distance between line feature descriptors and visual words, quantizing each line feature in the image into its corresponding visual word, thereby converting the whole image into a numerical vector;
step 4: visual word weight optimization: introducing a weight optimization parameter φ_i into loop detection, optimizing the visual word weights in the bag-of-words model vector according to the distribution of visual words on the historical key frame data set, calculating the weight optimization parameter of each visual word, and combining it with the word weight calculated by the TF-IDF method to obtain the weight-optimized visual bag-of-words vector;
step 5: similarity calculation: calculating similarity with the L1 norm from the visual bag-of-words vectors of the current frame and the historical key frames to obtain an appearance similarity score between the images;
step 6: loop candidate frames are acquired and grouped: setting the historical key frames meeting the similarity threshold as loop candidate frames and grouping them, with candidates close in time placed in one group, then rejecting isolated loop candidate frames with similar appearance according to the whole group's similarity score and a given threshold;
step 7: continuity verification: at this stage, detecting whether a loop can be detected continuously for a period of time among the loop candidate frames; only a continuously detected loop is considered a reliable loop candidate and retained;
step 8: geometric consistency verification: to ensure the accuracy of the loop, verifying the visual word distribution of the current frame and the loop candidate frame; the two frames are considered to form a loop only if the line feature distributions corresponding to the visual words are the same.
CN202011111454.8A 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics Active CN112507778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011111454.8A CN112507778B (en) 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011111454.8A CN112507778B (en) 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics

Publications (2)

Publication Number Publication Date
CN112507778A true CN112507778A (en) 2021-03-16
CN112507778B CN112507778B (en) 2022-10-04

Family

ID=74953814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011111454.8A Active CN112507778B (en) 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics

Country Status (1)

Country Link
CN (1) CN112507778B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991448A (en) * 2021-03-22 2021-06-18 华南理工大学 Color histogram-based loop detection method and device and storage medium
CN115240115A (en) * 2022-07-27 2022-10-25 河南工业大学 Visual SLAM loop detection method combining semantic features and bag-of-words model
CN117409388A (en) * 2023-12-11 2024-01-16 天津中德应用技术大学 Intelligent automobile vision SLAM closed-loop detection method for improving word bag model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously
CN108682027A (en) * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM realization method and systems based on point, line Fusion Features
CN109409418A (en) * 2018-09-29 2019-03-01 中山大学 A kind of winding detection method based on bag of words
CN109656545A (en) * 2019-01-17 2019-04-19 云南师范大学 A kind of software development activity clustering method based on event log
CN109886065A (en) * 2018-12-07 2019-06-14 武汉理工大学 A kind of online increment type winding detection method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RUIFANG DONG ET AL: "A Novel Loop Closure Detection Method Using Line Features", 《IEEE ACCESS》 *
程瑞营: "Research on loop closure detection methods in visual SLAM based on integrated point-line features", China Masters' Theses Full-text Database, Information Science and Technology series *
马冰: "Design and implementation of a computer-vision-based enhanced airport ground-handling detection system", China Masters' Theses Full-text Database, Engineering Science and Technology II series *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991448A (en) * 2021-03-22 2021-06-18 华南理工大学 Color histogram-based loop detection method and device and storage medium
CN112991448B (en) * 2021-03-22 2023-09-26 华南理工大学 Loop detection method, device and storage medium based on color histogram
CN115240115A (en) * 2022-07-27 2022-10-25 河南工业大学 Visual SLAM loop detection method combining semantic features and bag-of-words model
CN117409388A (en) * 2023-12-11 2024-01-16 天津中德应用技术大学 Intelligent automobile vision SLAM closed-loop detection method for improving word bag model

Also Published As

Publication number Publication date
CN112507778B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
Song et al. Region-based quality estimation network for large-scale person re-identification
CN110163258B (en) Zero sample learning method and system based on semantic attribute attention redistribution mechanism
CN112507778B (en) Loop detection method of improved bag-of-words model based on line characteristics
CN109670528B (en) Data expansion method facing pedestrian re-identification task and based on paired sample random occlusion strategy
CN112069940B (en) Cross-domain pedestrian re-identification method based on staged feature learning
CN111161315B (en) Multi-target tracking method and system based on graph neural network
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
CN108230291B (en) Object recognition system training method, object recognition method, device and electronic equipment
CN111967343A (en) Detection method based on simple neural network and extreme gradient lifting model fusion
CN110827265B (en) Image anomaly detection method based on deep learning
JP2017062778A (en) Method and device for classifying object of image, and corresponding computer program product and computer-readable medium
CN115937655B (en) Multi-order feature interaction target detection model, construction method, device and application thereof
CN112364791B (en) Pedestrian re-identification method and system based on generation of confrontation network
CN112801019B (en) Method and system for eliminating re-identification deviation of unsupervised vehicle based on synthetic data
CN111062278A (en) Abnormal behavior identification method based on improved residual error network
CN111723600B (en) Pedestrian re-recognition feature descriptor based on multi-task learning
Yang et al. Multi-scale bidirectional fcn for object skeleton extraction
CN112784929A (en) Small sample image classification method and device based on double-element group expansion
Du et al. Convolutional neural network-based data anomaly detection considering class imbalance with limited data
CN112613474B (en) Pedestrian re-identification method and device
CN117373062A (en) Real-time end-to-end cross-resolution pedestrian re-identification method based on joint learning
CN111291785A (en) Target detection method, device, equipment and storage medium
CN110968735A (en) Unsupervised pedestrian re-identification method based on spherical similarity hierarchical clustering
Han et al. Adapting dynamic appearance for robust visual tracking
Lin et al. Dual-mode iterative denoiser: tackling the weak label for anomaly detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant