CN109902619B - Image closed loop detection method and system - Google Patents

Image closed loop detection method and system

Info

Publication number
CN109902619B
CN109902619B · CN201910141189.9A · CN109902619A
Authority
CN
China
Prior art keywords
image
closed loop
closed
operator
loop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910141189.9A
Other languages
Chinese (zh)
Other versions
CN109902619A (en)
Inventor
安平
余佳东
王国平
陈亦雷
尤志翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201910141189.9A
Publication of CN109902619A
Application granted
Publication of CN109902619B
Legal status: Active (current)
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image closed loop detection method and system, wherein the method comprises the following steps: extracting a FAST corner point for each frame image, and calculating a BRIEF operator; substituting the BRIEF operator into a pre-established bag-of-words model to obtain a visual word corresponding to the operator; the visual words are used for establishing vector description of the image; judging whether a current image is likely to generate a closed loop or not based on a tracking prediction algorithm, and predicting the likely position of the closed loop to obtain a closed loop candidate set; evaluating the similarity degree of the current image and each image in the closed-loop candidate set through the visual word vector, and taking the image with the highest similarity in the closed-loop candidate set as a candidate image; carrying out normalization processing on the candidate image to obtain a normalized image; and calculating an ORB global operator of the normalized image to complete the structure check of the candidate image. The invention can effectively accelerate the detection algorithm and provide more accurate closed-loop detection performance.

Description

Image closed loop detection method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to an image closed-loop detection method and system.
Background
Simultaneous Localization and Mapping (SLAM) technology aims at reconstructing a three-dimensional model of an unknown environment in real time while simultaneously localizing the robot within it. Closed-loop detection is a fundamental problem of visual SLAM. If visual SLAM estimates the trajectory only by means of a visual odometer, cumulative drift will inevitably occur. Closed-loop detection can provide constraints between frames that are far apart in time, in addition to those between adjacent frames. When the trajectory reaches the same position a second time, a closed loop is formed. If closed-loop detection effectively recognizes that the camera has reached this position a second time, the loop-closure edge can "pull" the trajectory carrying the accumulated error back to the correct position. Fig. 1(a) is an uncorrected trajectory diagram, fig. 1(b) is a trajectory diagram after closed-loop detection optimization, and the black circles in the diagrams indicate positions where closed loops occur.
The existing closed-loop detection methods still have some disadvantages. For closed-loop detection in a large scene, the number of images increases sharply, and the similarity between the current image and every previous frame must be compared, which wastes a great deal of computing resources. In addition, the accuracy of current closed-loop detection methods is insufficient, and false closed loops occur from time to time. As shown in fig. 2, the solid lines indicate detected closed loops, but except at the three circles these closed loops are erroneous. In summary, how to improve the efficiency and accuracy of closed-loop detection has become a problem that urgently needs to be solved.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide an image closed-loop detection method and system, which improve the efficiency and accuracy of closed-loop detection.
According to a first aspect of the present invention, there is provided an image closed-loop detection method, including:
extracting a FAST corner point for each frame image, and calculating a BRIEF operator;
substituting the BRIEF operator into a pre-established bag-of-words model to obtain a visual word corresponding to the operator; the visual words are used for establishing vector description of the image;
judging whether a current image is likely to generate a closed loop or not based on a tracking prediction algorithm, and predicting the position of the closed loop which is likely to generate in the current image to obtain a closed loop candidate set; the closed loop candidate set is used for storing images which are likely to generate closed loops;
evaluating the similarity degree of the current image and each image in the closed-loop candidate set through the visual word vector, and taking the image with the highest similarity in the closed-loop candidate set as a candidate image;
carrying out normalization processing on the candidate image to obtain a normalized image;
and calculating an ORB global operator of the normalized image to complete the structure check of the candidate image.
Optionally, extracting a FAST corner for each frame image, and calculating a BRIEF operator, including:
extracting a FAST corner of the image by adopting a preset FAST corner extraction algorithm; the preset FAST corner extraction algorithm is as follows:
S_{p→x} = d (darker),  if I_{p→x} ≤ I_p − t
S_{p→x} = s (similar), if I_p − t < I_{p→x} < I_p + t
S_{p→x} = b (brighter), if I_p + t ≤ I_{p→x}
wherein: t is a preset threshold; i ispThe pixel value of the central pixel of the image; i isp→xAre pixels in a circular template; d is darker and represents Ip→xThe pixel is darker; s is similar and represents Ip→xAnd IpThe pixels are similar; b is bright and represents Ip→xThe pixel is brighter;
counting the number of times d or b occurs in the circular area, and taking points where this count is greater than n as FAST corner points;
and for each FAST corner, taking the corner as the center, an S × S neighborhood window is taken, and point pairs are randomly selected in the window for binary value assignment to obtain the BRIEF operator.
Optionally, substituting the BRIEF operator into a pre-established bag-of-words model to obtain a visual word corresponding to the operator, where the method includes:
selecting a bag-of-words model according to the search complexity; the bag of words model includes: a tree-structured bag-of-words model; each node of the tree-structure bag-of-words model is used for storing visual words;
substituting the BRIEF operator into the tree-structure bag-of-words model, and traversing layer by layer from the root node until finding the visual word corresponding to the BRIEF operator.
Optionally, judging whether the current image is likely to have a closed loop based on a tracking prediction algorithm, and predicting a position of the closed loop in the current image, to obtain a closed loop candidate set, including:
the current image is denoted as X_{i+n+1}, and the previous frame image is denoted as X_{i+n};
if a closed loop exists for the previous frame image X_{i+n}, the current image is put into the closed-loop candidate set, and the position where a closed loop may exist in the current image X_{i+n+1} is predicted;
if no closed loop exists for the previous frame image X_{i+n}, it is judged whether a low-score image exists among the four frames preceding the current image; a low-score image is an image whose number of visual words of the same type, compared with the previous image, is less than a preset threshold;
if a low-score image exists, it is determined that the current image has no closed loop;
if no low-score image exists, the current image is put into the closed-loop candidate set, and the position where a closed loop may exist in the current image X_{i+n+1} is predicted.
Optionally, the evaluating the similarity degree of the current image and each image in the closed-loop candidate set through the visual word vector comprises:
traversing all visual words of the current image based on the direct index, and then querying all images containing the current visual words in the closed-loop candidate set based on the reverse index;
and calculating the similarity between the current image and the inquired image by adopting a similarity calculation formula, wherein the similarity calculation formula is as follows:
s(v_1, v_2) = 1 − (1/2) · ‖ v_1/‖v_1‖ − v_2/‖v_2‖ ‖_1
wherein: s (v)1,v2) Score the similarity of image 1 and image 2, v1Is a visual word vector, v, of image 12Is the visual word vector for image 2.
Optionally, calculating an ORB global operator of the normalized image, and completing structure verification on the candidate image, including:
taking the 4 corner points and the central point of the normalized image as feature points; for each feature point, extracting an ORB operator in a window of 31 × 31 pixels;
taking the 5 ORB operators corresponding to the 4 corner points and the central point of the image as the ORB global operator of the image;
calculating the Hamming distances between the global operators of the current image and the candidate image;
and if the Hamming distance of more than two of the 5 ORB operators is greater than 100, determining that the image is a false closed loop, thereby completing the structure verification.
According to a second aspect of the present invention, there is provided an image closed-loop detection system, comprising: a processor and a memory; the memory stores a program which, when called by the processor, executes the image closed-loop detection method.
Compared with the prior art, the invention has the following beneficial effects:
the image closed-loop detection method and the image closed-loop detection system provided by the invention judge whether the current image is likely to generate closed loop or not based on the tracking prediction model, and if the current image is not likely to generate closed loop, the current image does not need to be compared with each frame of image in the past for similarity, so that the efficiency is improved. Accuracy is improved by predicting closed-loop candidate sets if closed-loop is likely to occur, and quasi-efficiency may also be improved by reducing the number of images participating in comparing similarities.
According to the image closed loop detection method and system provided by the invention, the image is normalized, the ORB global operator is calculated for the normalized image, and the global similarity between the candidate closed loop and the current image is compared to complete the structure verification, so that the closed loop detection accuracy can be effectively improved.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1(a) is an uncorrected trajectory diagram in the prior art;
FIG. 1(b) is a trajectory diagram after closed-loop detection optimization in the prior art;
FIG. 2 is a diagram illustrating the result of a conventional closed-loop detection;
FIG. 3 is a schematic diagram illustrating an image closed-loop detection method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a feature extraction operator according to an embodiment of the present invention;
FIG. 5 is a diagram of a bag-of-words tree model according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a predictive tracking model according to an embodiment of the invention;
FIG. 7 is a diagram illustrating a global operator for normalized image extraction according to an embodiment of the present invention;
FIG. 8(a) is a schematic diagram illustrating the effect of a conventional closed loop detection method;
fig. 8(b) is a schematic diagram illustrating an effect of the closed-loop detection method according to an embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Fig. 3 is a schematic diagram illustrating the principle of the image closed-loop detection method according to the embodiment of the present invention. As shown in fig. 3, a new image is obtained first, and then FAST corner extraction is performed on each frame of image and the BRIEF operator is calculated; a description of the image is then established by obtaining visual words through an offline Bag of Words (BoW) model. Whether the current image is likely to produce a closed loop is judged based on a tracking prediction algorithm, and the position where a closed loop may occur in the current image is predicted to obtain a closed-loop candidate set. The degree of similarity between images is evaluated through the matching degree of visual words, and the most similar candidate image in the candidate set is found. Finally, an ORB global operator is calculated directly from the normalized image to realize the structure verification of the candidate image.
The image closed-loop detection method provided by the embodiment of the invention comprises the following specific steps:
step 1: and extracting a FAST corner point for each frame image, and calculating a BRIEF operator.
In this embodiment, an algorithm for extracting the FAST corner is as follows:
S_{p→x} = d (darker),  if I_{p→x} ≤ I_p − t
S_{p→x} = s (similar), if I_p − t < I_{p→x} < I_p + t
S_{p→x} = b (brighter), if I_p + t ≤ I_{p→x}
wherein: t is a preset threshold (the default value is 10; the value differs for different scenes); I_p is the pixel value of the central pixel of the image; I_{p→x} is a pixel in the circular template. If the pixel value I_{p→x} is smaller than I_p − t, the pixel belongs to d (darker); the other two cases, S_{p→x} = b and S_{p→x} = s, represent brighter and similar pixels, respectively. The pixels of such a (circular) block area can thus be divided into the three types d, s and b. It then suffices to count the number of times d or b occurs in the circular area; as long as this count is greater than n, the point is considered a FAST corner point.
Further, fig. 4 is a schematic diagram of the feature extraction operator. Referring to fig. 4, for each FAST corner, taking the corner as the center, a large S × S neighborhood window is taken, point pairs (generally 256 pairs) are randomly selected in the window and assigned binary values, and the BRIEF operator is calculated.
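By way of illustration only, a minimal sketch of this step using OpenCV is given below; the function names (cv2.FastFeatureDetector_create and cv2.xfeatures2d.BriefDescriptorExtractor_create, the latter requiring opencv-contrib) are assumptions about the reader's environment and are not prescribed by the patent.

```python
import cv2

def extract_fast_brief(gray, threshold=10):
    """Detect FAST corners and compute a 256-bit BRIEF descriptor per corner.
    `gray` is assumed to be a single-channel uint8 image."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold)
    keypoints = fast.detect(gray, None)
    # BRIEF lives in opencv-contrib (xfeatures2d); 32 bytes = 256 binary tests,
    # i.e. 256 randomly selected point pairs in the neighborhood window.
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create(bytes=32)
    keypoints, descriptors = brief.compute(gray, keypoints)
    return keypoints, descriptors
```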
Step 2: substituting the BRIEF operator into a pre-established bag-of-words model to obtain the visual word corresponding to the operator; the visual words are used to build a vector description of the image.
In this embodiment, a bag-of-words model (bag-of-words tree) is first constructed and trained, where the bag-of-words model is the result of clustering the feature operators. Specifically, hundreds of thousands or even millions of training pictures are used, and the K-means algorithm clusters the image features into visual words, forming, for example, a k-branch, d-deep visual word tree (bag-of-words tree). In the bag-of-words tree, all feature operators are distributed among the leaf nodes, and every k child nodes are clustered into a parent node, up to the root node.
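As a rough illustration of how such a vocabulary tree could be built from binary descriptors, the following sketch uses recursive k-means with majority-vote cluster centers. The names binary_kmeans and build_tree are hypothetical; in practice the offline training over hundreds of thousands of images would be done with a dedicated library such as DBoW2.

```python
import numpy as np

def binary_kmeans(descriptors, k, iters=10, seed=0):
    """Cluster binary descriptors (uint8 rows) into k groups; each center is
    the bitwise majority vote of its members."""
    rng = np.random.default_rng(seed)
    bits = np.unpackbits(descriptors, axis=1)
    centers = bits[rng.choice(len(bits), k, replace=False)]
    for _ in range(iters):
        dists = (bits[:, None, :] != centers[None, :, :]).sum(axis=2)  # Hamming
        labels = dists.argmin(axis=1)
        centers = np.array([
            (bits[labels == j].mean(axis=0) >= 0.5).astype(np.uint8)
            if np.any(labels == j) else centers[j]
            for j in range(k)])
    return labels, np.packbits(centers, axis=1)

def build_tree(descriptors, k=10, depth=5):
    """Recursively split descriptors into k branches up to the given depth;
    the leaves play the role of visual words."""
    if depth == 0 or len(descriptors) <= k:
        return {"words": descriptors}
    labels, centers = binary_kmeans(descriptors, k)
    return {"centers": centers,
            "children": [build_tree(descriptors[labels == j], k, depth - 1)
                         for j in range(k)]}
```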
Specifically, fig. 5 is a diagram of the bag-of-words tree model. Referring to fig. 5, each feature operator in the image traverses the bag-of-words tree level by level, that is, starting from the root node, the child node most similar to the feature operator is found at each level, and the visual word corresponding to the feature operator is obtained through this step-by-step traversal. Finally, a visual word description model of the whole image is obtained:
I_u → {d_1, …, d_n}
In the above formula, I_u is an image and d_n is the nth visual word.
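Continuing the sketch above, looking up the visual word for a BRIEF operator amounts to descending the tree level by level, always following the branch whose cluster center is closest in Hamming distance; this mirrors the hypothetical build_tree structure and is not code from the patent.

```python
import numpy as np

def lookup_word(tree, descriptor):
    """Descend the vocabulary tree: at each level pick the child whose center
    is closest to the descriptor in Hamming distance, until a leaf is reached."""
    bits = np.unpackbits(descriptor)
    while "children" in tree:
        center_bits = np.unpackbits(tree["centers"], axis=1)
        j = int((center_bits != bits).sum(axis=1).argmin())
        tree = tree["children"][j]
    return tree  # the leaf stands in for the visual word
```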
Step 3: judging whether the current image is likely to generate a closed loop based on a tracking prediction algorithm, and predicting the position where a closed loop may occur in the current image to obtain a closed-loop candidate set; the closed-loop candidate set stores images in which a closed loop may occur.
In this embodiment, whether a closed loop may exist in the current frame is predicted from the closed-loop detection result of the previous frame. During the movement of the robot, adjacent images are generally highly similar; therefore, if the previous frame is not a closed loop and a low-score image exists among the four frames preceding the current frame (a low-score image contains fewer visual words of the same type than all the preceding images), there is most likely no closed loop in the current frame. In that case the comparison with every previous frame is not needed, which improves the efficiency of the system.
Specifically, fig. 6 is a schematic diagram of the predictive tracking model. Referring to fig. 6, x_n and x_{n+1} are the sequence numbers of the previous images, and whether a closed loop exists for the current image X_{i+n+1} is estimated according to the tracking prediction model. If no closed loop exists for the previous frame image X_{i+n}, it is judged whether a low-score image exists among the four frames preceding the current image; the low-score image is an image whose number of visual words of the same type, compared with the previous image, is less than a preset threshold. If a low-score image exists, it is determined that the current frame image has no closed loop; if no low-score image exists, the current image is put into the closed-loop candidate set, and the position where a closed loop may exist in the current image X_{i+n+1} is predicted.
If a closed loop exists for the previous frame, the position where a closed loop may exist in the current image is predicted to obtain the closed-loop candidate set. Assuming i-1, i-2, …, i-n are the sequence numbers of the previous images, the corresponding closed loops are marked as X_{i-1}, X_{i-2}, …, X_{i-n}. The camera moves approximately uniformly over a short time, and, taking noise interference into account, the differences between consecutive closed-loop positions approximately follow a Gaussian distribution:
D_n = X_{i−n} − X_{i−n−1} ~ N(μ, σ²)
The interval [X_{i−1} + μ − 10σ, X_{i−1} + μ + 10σ] is obtained as the closed-loop candidate set, where D_n is the difference X_{i−n} − X_{i−n−1}, N(μ, σ²) is a Gaussian distribution, μ is the mathematical expectation and σ is the standard deviation.
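A minimal sketch of this prediction step is given below, assuming the positions of previous closed loops are available as a list with at least two entries; the function name and the bookkeeping of the candidate set are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def predict_candidate_interval(prev_loop_positions, prev_frame_had_loop,
                               low_score_in_last_four):
    """Return the interval of frame indices in which a closed loop may exist
    for the current image, or None if the current image is judged loop-free.
    prev_loop_positions: positions X_{i-n}, ..., X_{i-1} of previous closed loops."""
    if not prev_frame_had_loop and low_score_in_last_four:
        # No loop in the previous frame and a low-score image among the last
        # four frames: the current frame almost certainly closes no loop.
        return None
    # Differences D_n = X_{i-n} - X_{i-n-1} are modelled as Gaussian N(mu, sigma^2).
    diffs = np.diff(prev_loop_positions)
    mu, sigma = diffs.mean(), diffs.std()
    last = prev_loop_positions[-1]
    return (last + mu - 10 * sigma, last + mu + 10 * sigma)
```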
Step 4: evaluating the degree of similarity between the current image and each image in the closed-loop candidate set through visual words, and taking the image with the highest similarity in the closed-loop candidate set as the candidate image.
In this embodiment, all visual words of the current image are traversed based on the direct index, and then, based on the reverse index, all images in the candidate set [X_{i−1} + μ − 10σ, X_{i−1} + μ + 10σ] that contain the current visual word are quickly queried.
The similarity score between images is calculated according to the following formula:
s(v_1, v_2) = 1 − (1/2) · ‖ v_1/‖v_1‖ − v_2/‖v_2‖ ‖_1
wherein: s (v)1,v2) Score the similarity of image 1 and image 2, v1Is a visual word vector, v, of image 12Is the visual word vector for image 2. And taking the image with the highest similarity score as a candidate image.
Step 5: carrying out normalization processing on the candidate image to obtain a normalized image.
Step 6: and calculating an ORB global operator of the normalized image to complete the structure check of the candidate image.
In this embodiment, referring to fig. 7, the original 370 × 1226 image is normalized to an image of 64 × 64 pixels, and the 4 corner points and the center point of the normalized image are directly used as feature points. For each feature point, an ORB operator is extracted in a window of 31 × 31 pixels: the generated random point pairs (for example, 256 pairs) are rotated, then compared, and binary-coded.
S = ( x_1 x_2 … x_n ; y_1 y_2 … y_n ),   S_θ = R_θ · S
Wherein: s denotes the random point position (2 x n matrix); (x)n,yn) Is the nth point pair.
Specifically, the above 5 ORB descriptors (the ORB operators corresponding to the 4 corner points and the center point of the normalized image) are taken as the global operator of the image, and the Hamming distances between the global operators of the current image and the candidate image are calculated. If the Hamming distance of more than two of the 5 operators is greater than 100, the candidate is considered a false closed loop, and the structure check is completed.
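A sketch of the whole structure check under the assumptions above (64 × 64 normalization, 31 × 31 windows, rejection when more than two of the five Hamming distances exceed 100) is given below; the reflective padding and the OpenCV ORB calls are implementation choices of this sketch rather than details specified by the patent.

```python
import cv2
import numpy as np

def structure_check(current_img, candidate_img, size=64, win=31, max_dist=100):
    """Return True if the candidate passes the check, False for a false closed loop."""
    orb = cv2.ORB_create()

    def global_descriptors(img):
        gray = img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        norm = cv2.resize(gray, (size, size))
        # Pad so that 31x31 patches around the image corners stay inside the image.
        padded = cv2.copyMakeBorder(norm, win, win, win, win, cv2.BORDER_REFLECT_101)
        # The 4 corner points and the center point of the normalized image.
        pts = [(0, 0), (0, size - 1), (size - 1, 0), (size - 1, size - 1),
               (size // 2, size // 2)]
        kps = [cv2.KeyPoint(float(x + win), float(y + win), float(win)) for x, y in pts]
        _, desc = orb.compute(padded, kps)
        return desc

    d1 = global_descriptors(current_img)
    d2 = global_descriptors(candidate_img)
    dists = [int(np.unpackbits(np.bitwise_xor(a, b)).sum()) for a, b in zip(d1, d2)]
    # More than two descriptor pairs with a Hamming distance above max_dist
    # indicate a false closed loop.
    return sum(d > max_dist for d in dists) <= 2
```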
The effect of closed-loop detection after the correction of the present embodiment is shown in fig. 8(b). In the paths of fig. 8(a) and 8(b), the two places marked 1 and 2 are accurate closed loops. The discontinuities present in the paths of both figures represent false closed loops. It can be seen that there are significantly fewer breaks in fig. 8(b) than in fig. 8(a), i.e., the false closed loops are significantly reduced. The embodiment thus considerably improves the accuracy of closed-loop detection.
The improvement in time performance is shown in Table 1. As the number of pictures increases, the embodiment, based on the tracking prediction model, performs increasingly better in time efficiency.
TABLE 1
Number of images in data set    Percentage of time reduction
123                             -1.67%
490                             -0.86%
830                             -3.7%
1730                            -5.02%
In conclusion, the method has the following prominent substantive features and notable advantages: whether the current image is likely to produce a closed loop is judged based on the tracking prediction model, which improves efficiency; accuracy is improved by predicting a closed-loop candidate set, and efficiency is further improved by reducing the number of images participating in the similarity comparison; and a structure checking algorithm is provided, in which the image is normalized, an ORB global operator is calculated for the normalized image, and the global similarity between the candidate closed loop and the current image is compared to complete the structure check, which improves the accuracy of closed-loop detection.
It should be noted that, the steps in the image closed-loop detection method provided by the present invention may be implemented by using corresponding modules, devices, units, and the like in the image closed-loop detection system, and those skilled in the art may refer to the technical solution of the system to implement the step flow of the method, that is, the embodiment in the system may be understood as a preferred example for implementing the method, and is not described herein again.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices provided by the present invention purely as computer-readable program code, the same functions can be implemented entirely by logic programming of the method steps, so that the system and its various devices take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its various devices provided by the present invention can be regarded as a hardware component, and the devices included therein for realizing various functions can also be regarded as structures within the hardware component; the devices for realizing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (4)

1. An image closed-loop detection method is characterized by comprising the following steps:
extracting a FAST corner point for each frame image, and calculating a BRIEF operator;
substituting the BRIEF operator into a pre-established bag-of-words model to obtain a visual word corresponding to the operator; the visual words are used for establishing vector description of the image;
judging whether a current image is likely to generate a closed loop or not based on a tracking prediction algorithm, and predicting the position of the closed loop which is likely to generate in the current image to obtain a closed loop candidate set; the closed loop candidate set is used for storing images which are likely to generate closed loops;
evaluating the similarity degree of the current image and each image in the closed-loop candidate set through the visual word vector, and taking the image with the highest similarity in the closed-loop candidate set as a candidate image;
carrying out normalization processing on the candidate image to obtain a normalized image;
calculating an ORB global operator of the normalized image to complete structural verification of the candidate image;
judging whether the current image is likely to generate a closed loop or not based on a tracking prediction algorithm, predicting the position of the closed loop which is likely to generate in the current image, and obtaining a closed loop candidate set, wherein the closed loop candidate set comprises:
denoting the current image as X_{i+n+1}, the previous frame image is denoted as X_{i+n};
if a closed loop exists for the previous frame image X_{i+n}, the current image is put into the closed-loop candidate set, and the position where a closed loop may exist in the current image X_{i+n+1} is predicted;
if no closed loop exists for the previous frame image X_{i+n}, it is judged whether a low-score image exists among the four frames preceding the current image; the low-score image is an image whose number of visual words of the same type, compared with the previous image, is less than a preset threshold;
if the low-score image exists, it is determined that the current image has no closed loop;
if the low-score image does not exist, the current image is put into the closed-loop candidate set, and the position where a closed loop may exist in the current image X_{i+n+1} is predicted;
evaluating how similar the current image is to each image in the closed-loop candidate set by the visual word vector, comprising:
traversing all visual words of the current image based on the direct index, and then querying all images containing the current visual words in the closed-loop candidate set based on the reverse index;
and calculating the similarity between the current image and the inquired image by adopting a similarity calculation formula, wherein the similarity calculation formula is as follows:
s(v_1, v_2) = 1 − (1/2) · ‖ v_1/‖v_1‖ − v_2/‖v_2‖ ‖_1
wherein: s (v)1,v2) Score the similarity of image 1 and image 2, v1Is a visual word vector, v, of image 12A visual word vector for image 2;
calculating an ORB global operator of the normalized image to complete structure verification of the candidate image, wherein the ORB global operator comprises the following steps:
taking the 4 corner points and the central point of the normalized image as feature points; for each feature point, extracting an ORB operator in a window of 31 × 31 pixels;
taking the 5 ORB operators corresponding to the 4 corner points and the central point of the image as the ORB global operator of the image;
calculating the Hamming distances between the global operators of the current image and the candidate image;
and if the Hamming distance of more than two of the 5 ORB operators is greater than 100, determining that the image is a false closed loop, thereby completing the structure verification.
2. The method according to claim 1, wherein extracting FAST corner points for each frame of image, and calculating BRIEF operator, comprises:
extracting a FAST corner of the image by adopting a preset FAST corner extraction algorithm; the preset FAST corner extraction algorithm is as follows:
S_{p→x} = d (darker),  if I_{p→x} ≤ I_p − t
S_{p→x} = s (similar), if I_p − t < I_{p→x} < I_p + t
S_{p→x} = b (brighter), if I_p + t ≤ I_{p→x}
wherein: t is a preset threshold; i ispThe pixel value of the central pixel of the image; i isp→xAre pixels in a circular template; d is darker and represents Ip→xThe pixel is darker; s is similar and represents Ip→xAnd IpThe pixels are similar; b is bright and represents Ip→xThe pixel is brighter;
counting the number of times d or b occurs in the circular area, and taking points where this count is greater than m as FAST corner points;
and for each FAST corner, taking the corner as the center, an S × S neighborhood window is taken, and a point pair is randomly selected in the window for binary value assignment to obtain a BRIEF operator.
3. The image closed-loop detection method of claim 1, wherein the step of substituting a BRIEF operator into a pre-established bag-of-words model to obtain a visual word corresponding to the operator comprises:
selecting a bag-of-words model according to the search complexity; the bag of words model includes: a tree-structured bag-of-words model; each node of the tree-structure bag-of-words model is used for storing visual words;
substituting the BRIEF operator into the tree-structure bag-of-words model, and traversing layer by layer from the root node until finding the visual word corresponding to the BRIEF operator.
4. An image closed loop detection system, comprising: a processor and a memory; the memory stores a program which, when called by the processor, executes the image closed-loop detection method according to any one of claims 1 to 3.
CN201910141189.9A 2019-02-26 2019-02-26 Image closed loop detection method and system Active CN109902619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910141189.9A CN109902619B (en) 2019-02-26 2019-02-26 Image closed loop detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910141189.9A CN109902619B (en) 2019-02-26 2019-02-26 Image closed loop detection method and system

Publications (2)

Publication Number Publication Date
CN109902619A CN109902619A (en) 2019-06-18
CN109902619B true CN109902619B (en) 2021-08-31

Family

ID=66945371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910141189.9A Active CN109902619B (en) 2019-02-26 2019-02-26 Image closed loop detection method and system

Country Status (1)

Country Link
CN (1) CN109902619B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852327A (en) * 2019-11-07 2020-02-28 首都师范大学 Image processing method, image processing device, electronic equipment and storage medium
CN111652306A (en) * 2020-05-28 2020-09-11 武汉理工大学 Closed loop detection method integrating multiple visual features
CN111862162B (en) * 2020-07-31 2021-06-11 湖北亿咖通科技有限公司 Loop detection method and system, readable storage medium and electronic device
CN111986313B (en) * 2020-08-21 2024-09-17 浙江商汤科技开发有限公司 Loop detection method and device, electronic equipment and storage medium
CN112396593B (en) * 2020-11-27 2023-01-24 广东电网有限责任公司肇庆供电局 Closed loop detection method based on key frame selection and local features
CN112699954B (en) * 2021-01-08 2024-04-16 北京工业大学 Closed loop detection method based on deep learning and bag-of-word model

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831446A (en) * 2012-08-20 2012-12-19 南京邮电大学 Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping)
CN105856230B (en) * 2016-05-06 2017-11-24 简燕梅 A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
US20180161986A1 (en) * 2016-12-12 2018-06-14 The Charles Stark Draper Laboratory, Inc. System and method for semantic simultaneous localization and mapping of static and dynamic objects
CN106897666B (en) * 2017-01-17 2020-09-08 上海交通大学 Closed loop detection method for indoor scene recognition
CN107563308B (en) * 2017-08-11 2020-01-31 西安电子科技大学 SLAM closed loop detection method based on particle swarm optimization algorithm
CN107680133A (en) * 2017-09-15 2018-02-09 重庆邮电大学 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bags of Binary Words for Fast Place Recognition in Image Sequences; Dorian et al.; IEEE Transactions on Robotics; 2012-10-31; pp. 1188-1197 *
ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras; Mur-Artal, R., et al.; IEEE Transactions on Robotics; 2017-06-12; pp. 1255-1262 *

Also Published As

Publication number Publication date
CN109902619A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109902619B (en) Image closed loop detection method and system
Kristan et al. The seventh visual object tracking VOT2019 challenge results
CN108470354B (en) Video target tracking method and device and implementation device
CN109035304B (en) Target tracking method, medium, computing device and apparatus
US11640714B2 (en) Video panoptic segmentation
CN107633226B (en) Human body motion tracking feature processing method
CN109544592B (en) Moving object detection algorithm for camera movement
US11676018B2 (en) Feature extraction with keypoint resampling and fusion (KRF)
Xing et al. DE‐SLAM: SLAM for highly dynamic environment
CN110349188B (en) Multi-target tracking method, device and storage medium based on TSK fuzzy model
Li et al. Robust object tracking with discrete graph-based multiple experts
CN110222565A (en) A kind of method for detecting human face, device, electronic equipment and storage medium
Meus et al. Embedded vision system for pedestrian detection based on HOG+ SVM and use of motion information implemented in Zynq heterogeneous device
CN108288020A (en) Video shelter detecting system based on contextual information and method
Iraei et al. Object tracking with occlusion handling using mean shift, Kalman filter and edge histogram
You et al. MISD‐SLAM: multimodal semantic SLAM for dynamic environments
Shao et al. Faster R-CNN learning-based semantic filter for geometry estimation and its application in vSLAM systems
CN116630367B (en) Target tracking method, device, electronic equipment and storage medium
Yang et al. Probabilistic projective association and semantic guided relocalization for dense reconstruction
CN112597997A (en) Region-of-interest determining method, image content identifying method and device
US20230281867A1 (en) Methods performed by electronic devices, electronic devices, and storage media
CN112084855A (en) Outlier elimination method for video stream based on improved RANSAC method
Ruan et al. Object tracking via online trajectory optimization with multi-feature fusion
Aing et al. Instancepose: Fast 6dof pose estimation for multiple objects from a single rgb image
Qi et al. Multiple object tracking with segmentation and interactive multiple model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant