CN112380371A - Closed loop detection method based on local and convolutional neural network characteristics - Google Patents


Info

Publication number
CN112380371A
CN112380371A (application CN202011360886.2A)
Authority
CN
China
Prior art keywords: image, input image, images, closed, neural network
Prior art date
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN202011360886.2A
Other languages
Chinese (zh)
Inventors: 游林辉, 胡峰, 孙仝, 陈政, 张谨立, 宋海龙, 黄达文, 王伟光, 梁铭聪, 黄志就, 何彧, 陈景尚, 谭子毅, 尤德柱, 区嘉亮, 罗鲜林
Current Assignee
Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202011360886.2A
Publication of CN112380371A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 — Information retrieval of still image data
    • G06F 16/53 — Querying
    • G06F 16/532 — Query formulation, e.g. graphical querying
    • G06F 16/58 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 — Retrieval using metadata automatically derived from the content
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a closed-loop detection method based on local and convolutional neural network features, comprising the following steps: extracting global image features from the acquired input image with a convolutional neural network and inserting the extracted global features into a hierarchical navigable small world (HNSW) graph; within the search range of the current input image, retrieving through HNSW the image most similar to the current input image as its closed-loop candidate image; introducing a geometric consistency check to match the feature points of the two images; feeding the matched feature points of the two images into a random sample consensus algorithm, where if the number of inliers between the two images exceeds a threshold the two images may form a closed loop; and introducing a temporal consistency check, where if the 2 consecutive frames following the current input image also satisfy the threshold condition, the input image and the closed-loop candidate image are considered to form a closed loop. During closed-loop detection, the method can handle scenes whose image appearance changes and can obtain geometric topological information between images.

Description

Closed loop detection method based on local and convolutional neural network characteristics
Technical Field
The invention relates to the field of positioning and navigation based on vision in autonomous inspection of unmanned aerial vehicles, in particular to a closed loop detection method based on local and convolutional neural network characteristics.
Background
During autonomous inspection, an unmanned aerial vehicle must decide on its own which operations to perform based on environmental information. Autonomous positioning and the sensing and construction of an environment map are therefore key links in autonomous UAV inspection. In recent years, with the improvement of computer hardware and the development of vision processing technology, visual SLAM (simultaneous localization and mapping) has been widely applied to mobile robot localization and navigation tasks. Closed-loop detection is an important link in a visual SLAM system: it is mainly used to judge whether the mobile robot has passed through a previously visited place, and it plays an extremely important role in reducing the accumulated front-end positioning error of SLAM and in constructing a globally consistent environment map. Closed-loop detection is essentially a place recognition problem, the key to which is the representation of the image.
Chinese patent application CN110533661A, published on December 3, 2019, discloses an adaptive real-time closed-loop detection method based on cascaded image features. The cascaded representation is richer, reduces the closed-loop misjudgment rate, and improves real-time performance to a certain extent. Compared with other closed-loop detection systems using a convolutional neural network, that method cascades features from different levels, combining low-level and high-level image information into a richer representation, and performs feature dimensionality reduction before cascading to preserve real-time performance. It further proposes an adaptive candidate-range matching algorithm against misjudgment of adjacent frames and of similar scenes, which reduces misjudgment and improves robustness, and an image-sequence calibration algorithm that lowers the misjudgment rate further. However, while that method is robust to viewpoint changes and can obtain geometric topological information between images, it copes poorly with strong appearance changes.
Disclosure of Invention
To overcome the difficulty that prior-art detection has in coping with strong appearance changes, the invention provides a closed-loop detection method based on local and convolutional neural network features.
In order to solve the technical problems, the invention adopts the technical scheme that: a closed loop detection method based on local and convolutional neural network characteristics comprises the following steps:
step one: extracting global image features from an input image acquired by the mobile robot with a convolutional neural network, and progressively inserting the extracted global features into a hierarchical navigable small world (HNSW) graph, an approximate-nearest-neighbor search structure;
step two: searching an image which is most similar to the current input image as a closed loop candidate image of the current input image through HNSW in the searching range of the current input image;
step three: introducing a geometric consistency check, extracting ORB feature points and the corresponding local difference binary (LDB) descriptors from the input image and the retrieved closed-loop candidate image respectively, and matching the feature points of the two images;
step four: inputting the matched feature points of the two images into a random sample consensus algorithm to further eliminate mismatches and solve for the fundamental matrix; if the number of inliers between the two images is less than a threshold, the two images do not form a closed loop; if the number of inliers between the two images is greater than the threshold, the two images may form a closed loop;
step five: introducing a temporal consistency check; if the 2 consecutive frames following the current input image all satisfy the threshold condition of step four, the input image and the closed-loop candidate image are considered to form a closed loop.
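The accept/reject logic of steps four and five can be sketched as follows (a minimal illustration in Python; the inlier threshold of 20, the function name, and the list-based interface are assumptions, not taken from the text):

```python
def forms_closed_loop(inlier_counts, threshold=20, window=2):
    """Step four/five sketch: the candidate pair is accepted only if the
    current frame AND the `window` consecutive frames after it all pass
    the RANSAC inlier threshold (temporal consistency check)."""
    # inlier_counts[0]: inliers between the current input image and its
    # closed-loop candidate; inlier_counts[1:]: the following frames.
    if len(inlier_counts) < window + 1:
        return False  # not enough subsequent frames observed yet
    return all(count > threshold for count in inlier_counts[:window + 1])
```

For example, counts of [35, 28, 41] clear the threshold in all three frames and declare a closed loop, while [35, 10, 41] is rejected by the temporal check.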
Preferably, in the first step, extracting global image features from the input image acquired by the mobile robot by using a convolutional neural network specifically includes:
the input image I_i is preprocessed according to the input requirements of the convolutional neural network, and the output of the penultimate fully-connected layer of the network is taken as the extracted global feature f_glo,i of the image.
Preferably, the preprocessing of the input image I_i according to the input requirements of the convolutional neural network comprises scaling and normalizing the input image.
Preferably, in the first step, the step of gradually inserting the extracted global features into the hierarchical navigable small-world map by the approximate nearest neighbor search algorithm specifically includes:
the highest layer number l_max of the feature node in the HNSW structure is set randomly by an exponentially decaying probability distribution function, and the node is inserted into every layer from l_max down to the bottom layer l_0; in each of these layers, the M nodes nearest to the new feature node are searched for, and the new feature node is connected to those M nearest nodes.
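The exponentially decaying layer assignment can be sketched as below, assuming the standard HNSW scheme in which a normalization factor m_l (commonly 1/ln(M)) controls the decay; the function name and default value are illustrative:

```python
import math
import random

def sample_top_layer(m_l=1.0, rng=random):
    """Draw l_max for a new feature node from an exponentially decaying
    distribution: with m_l = 1, layer 0 has probability 1 - 1/e, and each
    higher layer is roughly e times less likely."""
    # 1 - random() lies in (0, 1], so the logarithm is always defined.
    return int(-math.log(1.0 - rng.random()) * m_l)
```

Most nodes thus live only in the bottom layer, while exponentially fewer reach the sparse upper layers used for coarse navigation.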
Preferably, in the second step, the search range of the current input image specifically includes:
U_sa = U_before − U_fr×ct
wherein U_sa denotes the search range of the input image; U_before denotes the set of all images preceding the current input image; fr is the frame rate of the camera; ct is a time constant; U_fr×ct is the set of the fr×ct frames immediately preceding the current input image.
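This search-range formula amounts to excluding the fr×ct most recent frames from the candidate pool; a sketch (the list interface and names are assumptions):

```python
def search_range(previous_images, fr, ct):
    """U_sa = U_before - U_{fr x ct}: drop the fr*ct frames closest in
    time to the current input, since consecutive frames are trivially
    similar and would otherwise always be returned as candidates."""
    n_excluded = int(fr * ct)
    # Slicing with [:-0] would return an empty list, so guard that case.
    if n_excluded == 0:
        return list(previous_images)
    return list(previous_images[:-n_excluded])
```

For 100 previous frames with fr = 10 Hz and ct = 3 s, only frames 0–69 remain searchable.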
Preferably, in the second step, the searching, by the HNSW, an image most similar to the current input image as a closed-loop candidate image of the current input image specifically includes:
search distance f starting from the top layer of HNSW structureglo,iThe nearest node of the global feature node is stored in the nearest dynamic list and is used as the starting point of the next layer of search until the lowest layer is searched; distance f searched at the lowest layer of HNSWglo,iTaking the image corresponding to the characteristic node with the nearest node as the searched closed-loop candidate image In
Preferably, in step three, extracting the ORB feature points and the corresponding local difference binary descriptors from the input image and the retrieved closed-loop candidate image is specifically: ORB feature points are extracted from the input image I_i and the closed-loop candidate image I_n; an image block centered on each feature point k_ij is cut out and divided into c×c grid cells of equal size, and the average intensity I_avg and the gradients d_x, d_y of each grid cell are computed; for any two grid cells m, n in each image block, a binary test is executed, and the resulting binary code is taken as the LDB descriptor of feature point k_ij.
Preferably, for any two grid cells m, n in each image block, the binary test is specifically:

τ(f(m), f(n)) = 1 if f(m) > f(n), and 0 otherwise

wherein f(m), f(n) respectively denote the average intensity I_avg or gradient d_x, d_y value of grid cells m and n.
Preferably, in step three, matching the feature points of the two images is specifically: the LDB descriptors of the input image I_i and the closed-loop candidate image I_n are matched using the Hamming distance; for each LDB descriptor d_i^j of the input image I_i, the two descriptors d_n^1, d_n^2 closest to it are searched for in the candidate image I_n; if d_n^1 and d_n^2 satisfy the following condition, (d_i^j, d_n^1) is considered a good feature match:

D(d_i^j, d_n^1) < ε_d · D(d_i^j, d_n^2)

wherein D(d_i^j, d_n^1) and D(d_i^j, d_n^2) respectively denote the Hamming distances between d_i^j and d_n^1, d_n^2, and ε_d is a distance scaling factor, usually taking a value less than 1.
Preferably, matching the LDB descriptors of the input image I_i and the closed-loop candidate image I_n using the Hamming distance is specifically:

D(d_1, d_2) = Σ_i (d_1^i ⊕ d_2^i)

wherein d_1, d_2 denote two LDB descriptors and d_1^i, d_2^i denote the i-th bit of each descriptor.
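The distance and matching formulas combine into a short routine (a sketch; the ratio threshold of 0.7 and the list-of-bit-lists representation are assumptions):

```python
def hamming(d1, d2):
    """D(d1, d2): number of differing bits, i.e. the sum of XORed bits."""
    return sum(b1 ^ b2 for b1, b2 in zip(d1, d2))

def match_ldb(descriptors_i, descriptors_n, eps_d=0.7):
    """For each descriptor of the input image, find its two nearest
    descriptors in the candidate image and keep the match only when
    D(d, nearest) < eps_d * D(d, second nearest)."""
    matches = []
    for j, d in enumerate(descriptors_i):
        if len(descriptors_n) < 2:
            break  # cannot apply the two-nearest condition
        ranked = sorted(range(len(descriptors_n)),
                        key=lambda k: hamming(d, descriptors_n[k]))
        best, second = ranked[0], ranked[1]
        if hamming(d, descriptors_n[best]) < eps_d * hamming(d, descriptors_n[second]):
            matches.append((j, best))
    return matches
```

A match whose nearest and second-nearest distances are similar is ambiguous and is discarded, which is exactly the effect of requiring ε_d < 1.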
Compared with the prior art, the invention has the beneficial effects that:
1. The invention retrieves the image most similar to the input image online through a hierarchical navigable small world (HNSW) graph; no visual dictionary needs to be built offline, so the method is applicable to all scenes and generalizes well.
2. The invention extracts global features of the input image and retrieves the closed-loop candidate image with a convolutional neural network, giving good robustness to scenes whose image appearance changes.
3. The invention verifies whether two images form a closed loop by matching LDB descriptors between the input image and the closed-loop candidate image. The LDB descriptor is a binary local feature descriptor with a small memory footprint; it not only verifies whether the two images form a closed loop but also yields the geometric topological relation between the images.
Drawings
FIG. 1 is a flow chart of a closed loop detection method based on local and convolutional neural network features of the present invention;
FIG. 2 is a block diagram of a convolutional neural network VGG16 of a closed-loop detection method based on local and convolutional neural network characteristics of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there are terms such as "upper", "lower", "left", "right", "long", "short", etc., indicating orientations or positional relationships based on the orientations or positional relationships shown in the drawings, it is only for convenience of description and simplicity of description, but does not indicate or imply that the device or element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationships in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
The technical scheme of the invention is further described in detail by the following specific embodiments in combination with the attached drawings:
Embodiment
Fig. 1-2 show an embodiment of a closed loop detection method based on local and convolutional neural network features, which includes the following steps:
step one: the current input image I_i acquired by the mobile robot is preprocessed by resizing the image to 224 × 224 pixels. The convolutional neural network VGG16, pre-trained on the Places365-standard dataset, is used to extract the global feature of the current input image: the output of the penultimate fully-connected layer of the network is taken as the global feature f_glo,i of image I_i, whose dimension is 4096. The extracted global features are progressively inserted into a hierarchical navigable small world (HNSW) graph, an approximate-nearest-neighbor search structure;
step two: within the search range of the current input image, the image most similar to the current input image I_i is retrieved through HNSW as the closed-loop candidate image I_n of the current image. Because the images acquired by the mobile robot form a continuous sequence, the current input image I_i has high similarity to its neighboring images. Therefore the search range U_sa of the current input image is:

U_sa = U_before − U_fr×ct

wherein U_before is the set of all images preceding the current input image, fr is the frame rate of the camera, ct is a time constant, and U_fr×ct is the set of the fr×ct frames immediately preceding the current input image.
HNSW adopts a layered structure in which every layer is a network with small-world navigation properties; the nodes of an upper layer are a subset of the nodes of the layer below, and the lowest layer contains all nodes. During retrieval, starting from the top layer, the node nearest to the feature f_glo,i is searched for and stored in a dynamic nearest-neighbor list, which serves as the starting point of the search in the next layer, until the lowest layer is reached. The image corresponding to the feature node nearest to f_glo,i found in the lowest layer of HNSW is taken as the retrieved closed-loop candidate image I_n.
Step three: introducing geometric consistency check to the current input image IiExtracting ORB feature points, and extracting each feature point kijCutting out S multiplied by S image blocks for the center, dividing the image blocks into c multiplied by c grid units with equal size, and respectively calculating the average intensity I of each grid unitavgAnd gradient dx、dy. For any two grid cells in each image block
Figure BDA0002803954330000051
The binary test is performed as follows:
Figure BDA0002803954330000052
wherein f (m), f (n) respectively represent grid cells
Figure BDA0002803954330000053
Average intensity ofavgAnd gradient dx、dyAfter binary test is performed on c × c grid units of the whole image block, a string of binary codes obtained is a feature point kijA binary LDB descriptor. For closed loop candidate image InThe image ORB feature points and the LDB descriptors are extracted by the same method.
The LDB descriptors of the input image I_i and the closed-loop candidate image I_n are matched using the Hamming distance; for each LDB descriptor d_i^j of the current input image I_i, the two best-matching descriptors d_n^1, d_n^2 are searched for in the candidate image I_n; if d_n^1 and d_n^2 satisfy the following condition, (d_i^j, d_n^1) is considered a satisfactory feature match:

D(d_i^j, d_n^1) < ε_d · D(d_i^j, d_n^2)

wherein D(d_i^j, d_n^1) and D(d_i^j, d_n^2) respectively denote the Hamming distances between d_i^j and d_n^1, d_n^2, and ε_d is a distance scaling factor, usually taking a value less than 1.
Step four: inputting the matched characteristic points of the two images into a random sampling consistency algorithm to further eliminate mismatching and solve a basic matrix, wherein if the number of internal points between the two images is less than a threshold value, the two images do not form a closed loop; if the number of inner points between the two images is larger than the threshold value, the two images may form a closed loop;
step five: a temporal consistency check is introduced; if the 2 consecutive frames following the current input image all satisfy the threshold condition of step four, the input image and the closed-loop candidate image are considered to form a closed loop.
The beneficial effects of this embodiment: 1. The image most similar to the input image is retrieved online through a hierarchical navigable small world (HNSW) graph; no visual dictionary needs to be built offline, so the method is applicable to all scenes and generalizes well. 2. Global features of the input image are extracted and the closed-loop candidate image is retrieved with a convolutional neural network, giving good robustness to scenes whose image appearance changes. 3. Whether two images form a closed loop is verified by matching LDB descriptors between the input image and the closed-loop candidate image; the LDB descriptor is a binary local feature descriptor with a small memory footprint, and it not only verifies whether the two images form a closed loop but also yields the geometric topological relation between the images.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A closed loop detection method based on local and convolutional neural network characteristics is characterized by comprising the following steps:
step one: extracting global image features from an input image acquired by the mobile robot with a convolutional neural network, and progressively inserting the extracted global features into a hierarchical navigable small world (HNSW) graph, an approximate-nearest-neighbor search structure;
step two: searching an image which is most similar to the current input image as a closed loop candidate image of the current input image through HNSW in the searching range of the current input image;
step three: introducing a geometric consistency check, extracting ORB feature points and the corresponding local difference binary (LDB) descriptors from the input image and the retrieved closed-loop candidate image respectively, and matching the feature points of the two images;
step four: inputting the matched feature points of the two images into a random sample consensus algorithm to further eliminate mismatches and solve for the fundamental matrix; if the number of inliers between the two images is less than a threshold, the two images do not form a closed loop; if the number of inliers between the two images is greater than the threshold, the two images may form a closed loop;
step five: introducing a temporal consistency check; if the 2 consecutive frames following the current input image all satisfy the threshold condition of step four, the input image and the closed-loop candidate image are considered to form a closed loop.
2. The closed-loop detection method based on the local and convolutional neural network features as claimed in claim 1, wherein in the first step, the extracting global image features from the input image acquired by the mobile robot by using the convolutional neural network specifically comprises:
the input image I_i is preprocessed according to the input requirements of the convolutional neural network, and the output of the penultimate fully-connected layer of the network is taken as the extracted global feature f_glo,i of the image.
3. The closed-loop detection method based on local and convolutional neural network features according to claim 2, wherein the preprocessing of the input image I_i according to the input requirements of the convolutional neural network comprises scaling and normalizing the input image.
4. The closed-loop detection method based on the local and convolutional neural network features as claimed in claim 1, wherein in the first step, the extracted global features are gradually inserted into the approximate nearest neighbor search algorithm hierarchical navigable small-world map specifically:
the highest layer number l_max of the feature node in the HNSW structure is set randomly by an exponentially decaying probability distribution function, and the node is inserted into every layer from l_max down to the bottom layer l_0; in each of these layers, the M nodes nearest to the new feature node are searched for, and the new feature node is connected to those M nearest nodes.
5. The closed-loop detection method based on local and convolutional neural network features as claimed in claim 1, wherein in the second step, within the search range of the current input image, specifically:
U_sa = U_before − U_fr×ct
wherein U_sa denotes the search range of the input image; U_before denotes the set of all images preceding the current input image; fr is the frame rate of the camera; ct is a time constant; U_fr×ct is the set of the fr×ct frames immediately preceding the current input image.
6. The method according to claim 1, wherein in the second step, retrieving the image most similar to the current input image as the closed-loop candidate image of the current input image by HNSW includes:
search distance f starting from the top layer of HNSW structureglo,iThe nearest node of the global feature node is stored in the nearest dynamic list and is used as the starting point of the next layer of search until the lowest layer is searched; distance f searched at the lowest layer of HNSWglo,iTaking the image corresponding to the characteristic node with the nearest node as the searched closed-loop candidate image In
7. The method according to claim 1, wherein in the third step, the extraction of ORB feature points and corresponding local difference binary descriptors for the input image and the retrieved closed-loop candidate image is specifically as follows:
ORB feature points are extracted from the input image I_i and the closed-loop candidate image I_n; an S × S image block centered on each feature point k_ij is cut out and divided into c × c grid cells of equal size, and the average intensity I_avg and the gradients d_x, d_y of each grid cell are computed respectively; for any two grid cells m, n in each image block, a binary test is executed, and the resulting binary code is taken as the LDB descriptor of feature point k_ij.
8. The closed-loop detection method based on local and convolutional neural network features according to claim 7, wherein for any two grid cells m, n in each image block, the binary test is specifically:

τ(f(m), f(n)) = 1 if f(m) > f(n), and 0 otherwise

wherein f(m), f(n) respectively denote the average intensity I_avg or gradient d_x, d_y value of grid cells m and n.
9. The closed-loop detection method based on the local and convolutional neural network features as claimed in claim 8, wherein in the third step, matching the feature points of the two images specifically comprises:
the LDB descriptors of the input image I_i and the closed-loop candidate image I_n are matched using the Hamming distance; for each LDB descriptor d_i^j of the input image I_i, the two descriptors d_n^1, d_n^2 closest to it are searched for in the candidate image I_n; if d_n^1 and d_n^2 satisfy the following condition, (d_i^j, d_n^1) is considered a good feature match:

D(d_i^j, d_n^1) < ε_d · D(d_i^j, d_n^2)

wherein D(d_i^j, d_n^1) and D(d_i^j, d_n^2) respectively denote the Hamming distances between d_i^j and d_n^1, d_n^2, and ε_d is a distance scaling factor, usually taking a value less than 1.
10. The closed-loop detection method based on local and convolutional neural network features according to claim 9, wherein matching the LDB descriptors of the input image I_i and the closed-loop candidate image I_n using the Hamming distance is specifically:

D(d_1, d_2) = Σ_i (d_1^i ⊕ d_2^i)

wherein d_1, d_2 denote two LDB descriptors and d_1^i, d_2^i denote the i-th bit of each descriptor.
CN202011360886.2A 2020-11-27 2020-11-27 Closed loop detection method based on local and convolutional neural network characteristics Pending CN112380371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011360886.2A CN112380371A (en) 2020-11-27 2020-11-27 Closed loop detection method based on local and convolutional neural network characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011360886.2A CN112380371A (en) 2020-11-27 2020-11-27 Closed loop detection method based on local and convolutional neural network characteristics

Publications (1)

Publication Number Publication Date
CN112380371A true CN112380371A (en) 2021-02-19

Family

ID=74588983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011360886.2A Pending CN112380371A (en) 2020-11-27 2020-11-27 Closed loop detection method based on local and convolutional neural network characteristics

Country Status (1)

Country Link
CN (1) CN112380371A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781790A (en) * 2019-10-19 2020-02-11 北京工业大学 Visual SLAM closed loop detection method based on convolutional neural network and VLAD
WO2020233724A1 (en) * 2019-05-23 2020-11-26 全球能源互联网研究院有限公司 Visual slam-based grid operating environment map construction method and system

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
WO2020233724A1 (en) * 2019-05-23 2020-11-26 全球能源互联网研究院有限公司 Visual slam-based grid operating environment map construction method and system
CN110781790A (en) * 2019-10-19 2020-02-11 北京工业大学 Visual SLAM closed loop detection method based on convolutional neural network and VLAD

Non-Patent Citations (2)

Title
SHAN AN ET AL.: "Fast and Incremental Loop Closure Detection Using Proximity Graphs", arXiv:1911.10752v1 [cs.RO]
张东波 et al.: "Fast scene recognition based on LDB descriptors and local spatial structure matching", Journal of Shandong University (Engineering Science) (《山东大学学报(工学版)》), vol. 48, no. 5, pages 16-23

Similar Documents

Publication Publication Date Title
CN107679250B (en) Multi-task layered image retrieval method based on deep self-coding convolutional neural network
Garcia-Fidalgo et al. Vision-based topological mapping and localization methods: A survey
WO2018121018A1 (en) Picture identification method and device, server and storage medium
CN108734210B (en) Object detection method based on cross-modal multi-scale feature fusion
CN108108657A (en) A kind of amendment local sensitivity Hash vehicle retrieval method based on multitask deep learning
CN104794219A (en) Scene retrieval method based on geographical position information
CN110942471A (en) Long-term target tracking method based on space-time constraint
CN113033454A (en) Method for detecting building change in urban video camera
CN114037640A (en) Image generation method and device
Han et al. A novel loop closure detection method with the combination of points and lines based on information entropy
CN116091946A (en) Yolov 5-based unmanned aerial vehicle aerial image target detection method
Zhao et al. YOLO‐Highway: An Improved Highway Center Marking Detection Model for Unmanned Aerial Vehicle Autonomous Flight
CN114693966A (en) Target detection method based on deep learning
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN113177956A (en) Semantic segmentation method for unmanned aerial vehicle remote sensing image
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
KR102556765B1 (en) Apparatus and method for visual localization
CN116630610A (en) ROI region extraction method based on semantic segmentation model and conditional random field
CN112380371A (en) Closed loop detection method based on local and convolutional neural network characteristics
Zou et al. An intelligent image feature recognition algorithm with hierarchical attribute constraints based on weak supervision and label correlation
CN115187614A (en) Real-time simultaneous positioning and mapping method based on STDC semantic segmentation network
CN112396596A (en) Closed loop detection method based on semantic segmentation and image feature description
CN114168780A (en) Multimodal data processing method, electronic device, and storage medium
CN112396593B (en) Closed loop detection method based on key frame selection and local features
Zhang et al. Appearance-based loop closure detection via bidirectional manifold representation consensus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination