CN112507778B - Loop detection method of improved bag-of-words model based on line characteristics - Google Patents


Info

Publication number
CN112507778B
CN112507778B
Authority
CN
China
Prior art keywords
visual
bag
loop
words
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011111454.8A
Other languages
Chinese (zh)
Other versions
CN112507778A (en)
Inventor
Meng Qinghao (孟庆浩)
Shi Jiahao (史佳豪)
Dai Xuyang (戴旭阳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202011111454.8A priority Critical patent/CN112507778B/en
Publication of CN112507778A publication Critical patent/CN112507778A/en
Application granted granted Critical
Publication of CN112507778B publication Critical patent/CN112507778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention relates to a loop detection method of an improved bag-of-words model based on line features, which comprises the following steps. LSD (Line Segment Detector) features are extracted from an offline image data set and the corresponding LBD descriptors are computed; these serve as the raw data for clustering into a dictionary. An LSD feature bag-of-words model is constructed with the improved construction method, yielding a visual dictionary tree with adaptive branches. Images are converted into bag-of-words model vectors. The visual word weights are optimized. Similarity calculation: the similarity between the visual bag-of-words vectors of the current frame and the historical key frames is calculated with the L1 norm, yielding an appearance similarity score between images. Loop candidate frames are acquired and grouped, and isolated, merely appearance-similar candidates are removed. Continuity verification: a loop is considered a reliable candidate, and retained, only if it is detected continuously. Finally, geometric consistency is verified.

Description

Loop detection method of improved bag-of-words model based on line characteristics
Technical Field
The invention relates to the field of visual SLAM (Simultaneous Localization And Mapping), in particular to a visual SLAM loop detection method for improving a bag-of-words model based on line characteristics.
Background
Loop detection is an indispensable part of visual SLAM: it eliminates the accumulated error produced by the visual odometry stage, so that a globally consistent map can be constructed. Loop detection based on the bag-of-words model is the current mainstream approach: a bag-of-words model is built and used to compare the similarity between images to judge whether a loop exists. The bag-of-words model originally comes from text analysis, where the similarity of texts is judged by comparing the frequency with which words occur in them. Correspondingly, the visual bag-of-words model measures the similarity between two images by comparing the frequency with which "visual words" appear in them.
Cummins et al. (Cummins M, Newman P. FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance [M]. Sage Publications, Inc., 2008) proposed in 2008 a bag-of-words model based on SURF (Speeded Up Robust Features) and the Chow-Liu tree, and realized appearance-based camera place recognition well through the bag-of-words model. However, their bag-of-words vector is a binary vector: it only records whether a visual word appears in the image, not the different frequencies with which different words appear.
In 2011, Galvez-Lopez et al. (Galvez-Lopez D, Tardos J D. Real-time loop detection with bags of binary words [C]. International Conference on Intelligent Robots and Systems, 2011) constructed a point-feature-based binary-descriptor visual bag-of-words model using hierarchical K-means clustering. The K-d tree dictionary structure forces every K-means clustering step in the dictionary construction to use the same parameter K; however, no single K value gives the best clustering result for all data.
Subsequently, in ORB-SLAM, proposed in 2015 by Mur-Artal et al. (Mur-Artal R, Montiel J M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system [J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163), a visual bag-of-words model based on ORB (Oriented FAST and Rotated BRIEF) point features was constructed. ORB point features address the rotation and scale invariance problems of FAST key points and achieved better results in experiments. However, the visual dictionary still uses K-means clustering with a K-d tree dictionary structure, and the bag-of-words model construction process is not improved.
The detection performance of a point feature bag-of-words model depends on the number of point features extracted from the environment. When not enough point features can be extracted, or when the point features cluster together in a pile, the bag-of-words vector of a video frame cannot reliably capture the appearance similarity between frames.
In a structured low-texture environment, although often not enough point features can be extracted, abundant line features are available in such a scene.
Lee et al. (Lee J H, Zhang G, Lim J, et al. Place recognition using straight lines for vision-based SLAM [C]. 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2013, pp. 3799-3806) propose a bag-of-words model based on the MSLD (Mean-Standard deviation Line Descriptor) line feature descriptor and obtain good results in experiments. However, the MSLD line feature descriptor is not scale invariant, and its high computational complexity is unfavorable for real-time operation.
Lin et al. (Lin Rimong, Wang Mei. Binocular vision SLAM algorithm with improved point-line features [J]. Computer Measurement and Control, 2019(9): 156-162) build point-line features into a visual bag-of-words model: point and line features are extracted from the image, both are converted into a bag-of-words vector using the model, and image similarity is calculated from the bag-of-words vectors.
Patent 201811250049.7 (a tightly coupled binocular visual-inertial SLAM method with point-line feature fusion) constructs a point feature bag-of-words model and a line feature bag-of-words model separately, calculates point feature and line feature similarity scores between two frames, and takes their weighted sum as the final similarity score of the two frames. Both methods use line features to construct a bag-of-words model, but both still use K-means clustering and a K-d tree dictionary structure in the construction process; there is no essential difference from the construction processes above, and a better visual word clustering result cannot be obtained. Moreover, the word weight calculation in the bag-of-words model uses the TF-IDF (Term Frequency-Inverse Document Frequency) method, which considers the frequency of a visual word in the current image and its importance on the training data set, but not its importance on the loop detection query data set.
In summary, line features are local features that can replace point features in a structured environment. Building a real-time visual SLAM loop detection algorithm on a line feature bag-of-words model can effectively solve the problem that point-feature-based loop detection fails to detect loops in structured low-texture environments. The proposed visual SLAM loop detection algorithm based on line features improves both the bag-of-words model construction process and the visual word weights.
Disclosure of Invention
The invention provides a loop detection method of an improved bag-of-words model based on line features, aimed at the difficulty of extracting sufficient point features for visual SLAM loop detection in structured low-texture environments. The algorithm uses the abundant line features of a structured environment as visual local features to realize vision-based loop detection, and improves the precision and recall rate of loop detection through the improved bag-of-words construction method and visual word weight calculation method. The technical scheme is as follows:
a loop detection method of an improved bag-of-words model based on line features comprises the following steps:
Step 1: LSD (Line Segment Detector) features are extracted from the offline image data set and the corresponding LBD (Line Band Descriptor) descriptors are calculated; these descriptors serve as the raw data for clustering to generate the dictionary.
Step 2: an LSD feature bag-of-words model is constructed with the improved bag-of-words construction method: before each clustering of the dictionary tree is performed, the optimal clustering k value k' for the current data is determined, and the current data are then clustered into k' classes. The process is repeated until the visual dictionary tree with adaptive branches is finally constructed.
Step 3: bag-of-words model vector conversion: LSD-LBD line features are extracted from the image, and each line feature in the image is quantized into a corresponding visual word according to the constructed LBD-descriptor-based bag-of-words model and the Hamming distance between the line feature descriptor and the visual words, thereby converting the whole image into a corresponding numerical vector.
Step 4: visual word weight optimization: a weight optimization parameter is introduced in loop detection. The visual word weights in the bag-of-words model vector are optimized according to the distribution of the visual words over the historical key-frame data set: the weight optimization parameter of each visual word is calculated and combined with the word weight computed by the TF-IDF method to obtain the weight-optimized visual bag-of-words vector.
Step 5: similarity calculation: the similarity between the visual bag-of-words vectors of the current frame and the historical key frames is calculated with the L1 norm, yielding an appearance similarity score between the images.
Step 6: loop candidate frames are acquired and grouped: the historical key frames meeting the similarity threshold are set as loop candidate frames, the candidates close in time sequence are grouped together, and isolated, merely appearance-similar candidates are then rejected according to the similarity score of the whole group and a given threshold.
Step 7: continuity verification: at this stage, it is checked whether a loop is detected continuously over a period of time among the loop candidate frames. Only a continuously detected loop is considered a reliable loop candidate and retained.
Step 8: geometric consistency verification: to ensure loop accuracy, the visual word distributions of the current frame and the loop candidate frame are verified; the two frames are considered to form a loop only if the line feature distributions corresponding to the visual words agree.
Current visual SLAM still mainly adopts point features as visual features; compared with point feature loop detection, the method adopts line features, which are richer in a structured environment, as the local visual features for loop detection. The key points of the invention are: 1) a visual dictionary tree with an adaptive branch number is constructed from line features, improving the discrimination of visual words and reducing the quantization error of converting local features into visual words; 2) according to the distribution of visual words in the loop detection query data set, a weight optimization parameter is calculated for each word and used to optimize the visual bag-of-words vector, so that the bag-of-words similarity results are more discriminative. Compared with the unoptimized visual bag-of-words model, the method obtains a higher recall rate at 100% precision, showing that it can detect loops more accurately and effectively.
Drawings
FIG. 1 is a flow chart of improved bag-of-words model construction
FIG. 2 is a diagram of the LSD line feature extraction result in a structured low-texture environment
FIG. 3 is a diagram of the ORB point feature extraction result in a structured low-texture environment
FIG. 4 is a schematic diagram of LBD descriptor construction of visual dictionary according to an embodiment of the present invention
Detailed Description
The invention is further illustrated below with reference to specific examples. It should be noted that the described embodiments are only intended to facilitate the understanding of the present invention, and do not impose any limitation thereon.
Step 1: for a large amount of image data acquired offline, line segment features are extracted by using an LSD algorithm, and line feature descriptors are calculated by using an LBD algorithm. The LSD algorithm can quickly realize the detection of line characteristics, the LBD descriptor can generate a binary descriptor, and can realize quick matching, and the real-time requirement of loopback detection can be met by adopting the line characteristic extraction and description method.
Step 2: The line feature visual bag-of-words model with adaptive branch numbers is constructed. The traditional bag-of-words construction method uses the K-means clustering algorithm with a K-d tree data structure, and thus inherits the K-means drawback of a manually specified k value. Before each clustering step of the dictionary tree construction, a step for determining the optimal k value of the current data clustering is added, using the clustering evaluation index known as the contour (silhouette) coefficient. First, the contour coefficient of the current data is calculated for different k values (k from 5 to 15), and the most reasonable k under the current data, namely the k value k' corresponding to the maximum contour coefficient, is selected; k' is then used as the cluster count at the current node of the finally constructed visual dictionary tree. The above steps are repeated until the 5th layer of the bag-of-words model is built.
The clustering evaluation index profile coefficient combines two factors of intra-class cohesion a (i) and inter-class separation b (i). Assuming that the current data has been grouped into k classes, the current contour coefficient S can be obtained as follows:
firstly, calculating the contour coefficient of each element after clustering
s(i)=(b(i)-a(i))/(max{a(i),b(i)}) (1)
where the intra-class cohesion a(i) denotes the average distance from the current element m_i to the other elements m_j in its own class, and the inter-class separation b(i) denotes the minimum, over the other clusters, of the average distance from m_i to that cluster. Clearly s(i) ∈ [-1, 1]: s(i) close to 1 indicates that the sample element m_i is reasonably clustered; s(i) close to -1 indicates that m_i should be assigned to another cluster; s(i) close to 0 indicates that m_i lies on the boundary between two clusters.
After the contour coefficient of each element is calculated, the average of the contour coefficients of all elements is used as the contour coefficient of the current clustering result, i.e., S = (1/k) Σ_{0<i≤k} s(i).
Finally, from the contour coefficients S calculated under different k values, the k corresponding to the maximum contour coefficient is selected as the final clustering branch number at the current node of the visual dictionary tree. By analogy, the optimal clustering branch number of every intermediate node in the dictionary construction process is calculated, yielding a better clustering result and more distinctive visual words.
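The adaptive branch selection described above can be sketched as follows. This is an illustrative sketch using scikit-learn's KMeans and silhouette_score; the k range tried, the random seed, and the toy data below are assumptions for the demo, not values from the patent (which uses k from 5 to 15 on LBD descriptors).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_branch_count(descriptors, k_range=range(2, 8), seed=0):
    """Return the branch count k' that maximises the mean silhouette coefficient."""
    best_k, best_score = None, -1.0
    for k in k_range:
        if k >= len(descriptors):              # silhouette needs k < n_samples
            break
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(descriptors)
        score = silhouette_score(descriptors, labels)   # mean s(i) over all elements
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Three well-separated blobs: the silhouette criterion should pick k' = 3.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in (0.0, 5.0, 10.0)])
k_prime = best_branch_count(data)
```

In the patent the same selection would run at every intermediate node of the dictionary tree, down to the 5th layer.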
Step 3: Bag-of-words model vector conversion. According to the constructed LBD-descriptor-based bag-of-words model and the Hamming distance between line feature descriptors and visual words, each line feature in the image is quantized into its corresponding visual word, converting the whole image into a numerical vector:

v_a = {(w_1, η_1), (w_2, η_2), ..., (w_N, η_N)}   (2)

where w_i represents the i-th visual word and η_i its corresponding weight, computed with the TF-IDF method, i.e., η_i = TF_i · IDF_i. In practice, each image contains only a small number of the visual words in the dictionary, so most η_i = 0 in the numerical vector; that is, v_a is a sparse vector.
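The Hamming-distance quantization just described can be sketched as follows, assuming binary descriptors stored as Python integers. The toy 8-bit vocabulary is purely illustrative (a real LBD descriptor is 256-bit), and only the TF part of the weight is shown; the IDF factor would be precomputed from the training set.

```python
from collections import Counter

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def quantize(descriptor, vocabulary):
    """Index of the visual word nearest to the descriptor (Hamming distance)."""
    return min(range(len(vocabulary)), key=lambda i: hamming(descriptor, vocabulary[i]))

def bow_vector(descriptors, vocabulary):
    """Sparse TF vector: word index -> relative frequency in the image."""
    counts = Counter(quantize(d, vocabulary) for d in descriptors)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Toy 8-bit vocabulary of 3 "words"; each image feature maps to its nearest word.
vocab = [0b00000000, 0b00001111, 0b11111111]
features = [0b00000001, 0b00000111, 0b11111011, 0b00001110]
v = bow_vector(features, vocab)   # sparse bag-of-words vector of the image
```

A real dictionary tree would descend layer by layer, comparing the descriptor only against the children of the current node instead of the whole vocabulary.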
Step 4: Visual word weight optimization. The TF-IDF weight calculation takes into account the frequency of a visual word in the current image and its importance on the training data set, but not its importance on the loop detection historical key-frame data set. The TF-IDF method holds that the fewer texts a word occurs in, the greater its ability to distinguish between different classes of text. By the same reasoning, the fewer historical key frames a visual word occurs in, the greater its ability to distinguish between different images in the historical key-frame data set.
Therefore, a repetition factor, written here as φ_i (the original renders the symbol and its formula as images), is introduced into the weight calculation. The number of key frames in the historical key-frame data set in which each visual word appears is counted, and φ_i is made to decrease as this key-frame count increases, thereby reducing the weight of visual words that recur in the process. The specific steps are as follows:
1) In loop detection, while the mutual index between visual words and key frames is established, the number of key frames in which each visual word appears is counted, and the repetition factor φ_i of each visual word is calculated from its key-frame count, where n is the number of key frames in the historical key-frame data set and n_{w_i} is the number of key frames in which visual word w_i appears.
2) Combining the repetition factor φ_i with TF-IDF, a new weight η′_i is generated for visual word w_i, and from it a new bag-of-words model vector v′_a:

v′_a = {(w_1, η′_1), (w_2, η′_2), ..., (w_N, η′_N)}   (3)

where η′_i is the optimized weight of visual word i, φ_i is the weight optimization parameter of word i, TF_i is the word frequency of word i in the current image, and IDF_i is the inverse document frequency of word i on the training data set.
Step 5: Image similarity calculation. The image similarity is calculated between the new bag-of-words model vector of the current image and those of the historical key frames. For the bag-of-words model vectors of any two images, the similarity is evaluated with the L1 norm, as follows:
s(v_1, v_2) = 1 - (1/2) · | v_1/|v_1| - v_2/|v_2| |_1   (4), where all norms are L1 norms.
the similarity calculation result is between 0 and 1, and when the two images are completely unrelated, the similarity score is 0; when the two images are identical, the similarity score is 1.
Step 6: Loop candidate frames are obtained and grouped. Among the historical key frames, those whose similarity to the current key frame satisfies a certain threshold α may be set as loop candidate frames. After all loop candidate frames are obtained, they are grouped: candidates close in time sequence form one group, and a group similarity score is calculated. For each candidate group, let I_1, I_2, I_3, ..., I_n denote its key frames and s_1, s_2, s_3, ..., s_n their similarities to the current key frame; the group similarity score can be represented by the sum of these similarities, i.e.

S_group = Σ_{k=1}^{n} s_k,  with  s_k = s(v_k, v_c)   (5)

where v_k is the bag-of-words model vector corresponding to the k-th key frame in the group, and v_c is the bag-of-words model vector corresponding to the current key frame.
With the loop candidate frames grouped and the corresponding group similarity scores obtained, loop candidate key frames in groups with low scores are eliminated according to a given group score threshold β. Because a correct loop key frame and its temporally adjacent key frames all have high similarity to the current key frame and also qualify as loop candidates, this step excludes some incorrect loop candidate frames.
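The grouping and group-score filtering above can be sketched as follows. The temporal gap used to decide whether two candidates belong to the same group and the threshold β value are illustrative parameters, not values specified by the patent.

```python
def group_candidates(candidates, gap=2):
    """Split (frame_id, score) pairs into temporally close groups.

    Frames whose ids differ by at most `gap` join the same group;
    each group accumulates the sum of its members' similarity scores."""
    groups = []
    for fid, score in sorted(candidates):
        if groups and fid - groups[-1]["frames"][-1] <= gap:
            groups[-1]["frames"].append(fid)
            groups[-1]["score"] += score
        else:
            groups.append({"frames": [fid], "score": score})
    return groups

def filter_by_group_score(candidates, beta, gap=2):
    """Keep only candidate frames whose group score reaches the threshold beta."""
    kept = []
    for g in group_candidates(candidates, gap):
        if g["score"] >= beta:
            kept.extend(g["frames"])
    return kept

# Frames 10-12 form a temporally consistent group; frame 40 is an isolated
# lookalike whose group score stays below the threshold and is rejected.
cands = [(10, 0.4), (11, 0.5), (12, 0.45), (40, 0.6)]
kept = filter_by_group_score(cands, beta=1.0)
```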
Step 7: Continuity check. At this stage, a loop is considered reliable, and the corresponding loop candidate is retained, only when the loop is detected simultaneously over multiple consecutive frames.
And 8: and (5) verifying geometric consistency. Because the visual bag-of-words model ignores the spatial information of the visual features, in the final stage, the geometric consistency verification needs to be performed on the loopback candidate frame and the current key frame to ensure the accuracy of loopback detection.
Using the line features matched between the current frame and the loop candidate frame, the line feature reprojection error is calculated, and the pose transformation between the two frames is solved by local BA (bundle adjustment) optimization. Whether the pose transformation is reasonable is judged by counting the number of line feature inliers under that transformation, which in turn decides whether the loop candidate frame passes the geometric consistency verification.
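The inlier-counting decision can be sketched as follows. The pixel error threshold and minimum inlier count are illustrative assumptions; the patent does not specify concrete values, and the reprojection errors themselves would come from the BA-optimized pose.

```python
def passes_geometric_check(reproj_errors, err_thresh=2.0, min_inliers=15):
    """Accept the candidate pose if enough matched line features reproject well.

    err_thresh (pixels) and min_inliers are illustrative parameters only."""
    inliers = [e for e in reproj_errors if e < err_thresh]
    return len(inliers) >= min_inliers

# 20 well-reprojected matches pass; a frame with only gross errors fails.
ok = passes_geometric_check([0.5] * 20 + [10.0] * 5)
bad = passes_geometric_check([10.0] * 20)
```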
Loop candidate frames are extracted when the image appearance similarity reaches the threshold α; once a candidate passes the series of verification steps that ensure loop accuracy, a loop is judged to have occurred, and the global map is corrected and updated according to the detected loop.

Claims (1)

1. A loop detection method of an improved bag-of-words model based on line features comprises the following steps:
step 1: extracting LSD (Line Segment Detector) features from an offline image data set, calculating corresponding LBD (Line Band Descriptor) line feature descriptors, and taking the LBD line feature descriptors as the raw data for clustering to generate a dictionary;
step 2: constructing a bag-of-words model based on LBD descriptors: before each clustering of the dictionary tree is performed, determining the optimal clustering k value k' for the current data, and then clustering the current data into k' classes; the steps are repeated until a visual dictionary tree with adaptive branches is finally constructed;
step 3: bag-of-words model vector conversion: extracting LSD line features from the image, quantizing each line feature in the image into a corresponding visual word according to the constructed LBD-descriptor-based bag-of-words model and the Hamming distance between line feature descriptors and visual words, and converting the whole image into a corresponding numerical vector;
step 4: visual word weight optimization: in loop detection, establishing the mutual index between visual words and key frames, counting the number of key frames in which each visual word appears, and calculating the repetition factor (written here as φ_i; the original renders it as an image) of each visual word from its key-frame count, where n is the number of key frames in the historical key-frame data set and n_{w_i} is the number of key frames in which the visual word w_i appears;
optimizing the visual word weight in the bag-of-words model vector according to the distribution condition of the visual words on the historical key frame data set, calculating the weight optimization parameters of the visual words, and combining the word weight calculated by the TF-IDF method to obtain the weight-optimized visual bag-of-words vector;
combining the repetition factor φ_i of the visual words with the TF-IDF algorithm to generate a new weight η′_i for visual word w_i, and generating therefrom a new bag-of-words model vector v′_a:

v′_a = {(w_1, η′_1), (w_2, η′_2), ..., (w_N, η′_N)}

where TF_i is the word frequency of visual word w_i in the current image and IDF_i is the inverse document frequency of visual word w_i on the training data set;
step 5: similarity calculation: calculating the similarity with the L1 norm between the visual bag-of-words vectors of the current frame and the historical key frames, and obtaining an appearance similarity score between the images;
step 6: loop candidate frames are acquired and grouped: setting the historical key frames meeting the requirement of the similarity threshold as loop candidate frames, grouping the loop candidate frames, dividing the loop candidate frames with similar time sequence into a group, and then rejecting the loop candidate frames with low group scores according to the similarity scores of the whole group and the given threshold;
step 7: continuity verification: at this stage, when a loop is continuously detected among the loop candidate frames, retaining the corresponding loop candidate frames;
step 8: geometric consistency verification: performing geometric consistency verification on the loop candidate frame and the current key frame to ensure the accuracy of loop detection.
CN202011111454.8A 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics Active CN112507778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011111454.8A CN112507778B (en) 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011111454.8A CN112507778B (en) 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics

Publications (2)

Publication Number Publication Date
CN112507778A CN112507778A (en) 2021-03-16
CN112507778B true CN112507778B (en) 2022-10-04

Family

ID=74953814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011111454.8A Active CN112507778B (en) 2020-10-16 2020-10-16 Loop detection method of improved bag-of-words model based on line characteristics

Country Status (1)

Country Link
CN (1) CN112507778B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991448B (en) * 2021-03-22 2023-09-26 华南理工大学 Loop detection method, device and storage medium based on color histogram
CN115240115B (en) * 2022-07-27 2023-04-07 河南工业大学 Visual SLAM loop detection method combining semantic features and bag-of-words model
CN117409388A (en) * 2023-12-11 2024-01-16 天津中德应用技术大学 Intelligent automobile vision SLAM closed-loop detection method for improving word bag model

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously
CN109409418A (en) * 2018-09-29 2019-03-01 中山大学 A kind of winding detection method based on bag of words

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN108682027A (en) * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM realization method and systems based on point, line Fusion Features
CN109886065A (en) * 2018-12-07 2019-06-14 武汉理工大学 A kind of online increment type winding detection method
CN109656545B (en) * 2019-01-17 2022-03-25 云南师范大学 Event log-based software development activity clustering analysis method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously
CN109409418A (en) * 2018-09-29 2019-03-01 中山大学 A kind of winding detection method based on bag of words

Also Published As

Publication number Publication date
CN112507778A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
Song et al. Region-based quality estimation network for large-scale person re-identification
CN112507778B (en) Loop detection method of improved bag-of-words model based on line characteristics
CN110163258B (en) Zero sample learning method and system based on semantic attribute attention redistribution mechanism
CN108960140B (en) Pedestrian re-identification method based on multi-region feature extraction and fusion
CN109670528B (en) Data expansion method facing pedestrian re-identification task and based on paired sample random occlusion strategy
CN112069940B (en) Cross-domain pedestrian re-identification method based on staged feature learning
CN111161315B (en) Multi-target tracking method and system based on graph neural network
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
CN104679818B (en) A kind of video key frame extracting method and system
Yue et al. Robust loop closure detection based on bag of superpoints and graph verification
CN110110694B (en) Visual SLAM closed-loop detection method based on target detection
CN110827265B (en) Image anomaly detection method based on deep learning
CN111832484A (en) Loop detection method based on convolution perception hash algorithm
CN113705597A (en) Image processing method and device, computer equipment and readable storage medium
Yang et al. Multi-scale bidirectional fcn for object skeleton extraction
CN110880010A (en) Visual SLAM closed loop detection algorithm based on convolutional neural network
CN112364791A (en) Pedestrian re-identification method and system based on generation of confrontation network
CN114926742A (en) Loop detection and optimization method based on second-order attention mechanism
Du et al. Convolutional neural network-based data anomaly detection considering class imbalance with limited data
CN105678349B (en) A kind of sub- generation method of the context-descriptive of visual vocabulary
CN114821299A (en) Remote sensing image change detection method
CN111291785A (en) Target detection method, device, equipment and storage medium
Tu et al. Toward automatic plant phenotyping: starting from leaf counting
CN112613474B (en) Pedestrian re-identification method and device
CN110968735A (en) Unsupervised pedestrian re-identification method based on spherical similarity hierarchical clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant