CN114494736A - Outdoor location re-identification method based on saliency region detection

Info

Publication number
CN114494736A
CN114494736A
Authority
CN
China
Prior art keywords
region
feature
image
visual
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210104480.0A
Other languages
Chinese (zh)
Inventor
张晓峰
欧垚君
陈哲
王梅
丁红
施正阳
陶秦
魏东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University
Priority to CN202210104480.0A
Publication of CN114494736A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention provides an outdoor location re-identification method based on saliency region detection, belonging to the technical field of computer vision and deep learning. The technical scheme comprises the following steps: step one, extracting an SE-ResNet feature map; step two, detecting salient regions; step three, training a visual bag-of-words model; and step four, matching the similarity between images. The beneficial effect of the invention is that local features of salient regions are fused into the global features through a visual bag-of-words model constructed from deep learning features, improving the matching accuracy.

Description

Outdoor location re-identification method based on saliency region detection
Technical Field
The invention relates to the technical field of computer vision and deep learning, in particular to an outdoor location re-identification method based on salient region detection.
Background
For an autonomous navigation robot, positioning and mapping are the primary objectives. For robots that rely on vision sensors, the positioning problem is solved by visual location recognition. Given a scene image describing a designated location, the robot needs to determine whether it has reached that location; this determination requires similarity matching against the keyframes of the path trajectory stored in a database. Because scene images generally contain interference factors such as illumination changes, changes in viewing angle and orientation, and pedestrian occlusion, traditional methods that rely on manually designed image feature points perform well in stable indoor environments but poorly under the interference of outdoor scenes.
How to solve the above technical problems is the subject of the present invention.
Disclosure of Invention
The invention aims to provide an outdoor location re-identification method based on saliency region detection, which can better extract global image features under the various interferences of outdoor scenes and can fuse local features of salient regions into the global features through a visual bag-of-words model constructed from deep learning features, improving the matching accuracy.
The idea of the invention is as follows: the whole process is divided into two parts. The first part detects the salient regions; the second part converts the region features into a more robust bag-of-words vector to obtain global features, on which similarity matching between images is performed.
The invention is realized by the following measures: an outdoor location re-identification method based on saliency region detection comprises the following steps:
Step one, extracting the SE-ResNet feature map
In a convolutional neural network, a large part of the convolution operations serve to enlarge the receptive field, fuse features spatially, or extract multi-scale spatial information through multiple channels. A conventional convolution operation fuses all channels of the input feature map by default, whereas the channel-attention mechanism of SE-Net enables the model to automatically learn the importance of the features of different channels. The network architecture of SE-Net is shown in FIG. 1; since most current mainstream networks are built by repeating similar units, SE modules can be embedded in almost all of today's network architectures. Comparative experiments show that embedding SE-Net into ResNet gives better results, so in the feature-map extraction the SE-ResNet model is adopted to perform the convolution operations on the image: an input image I ∈ R^(W′×H′×3) yields a feature map F ∈ R^(W×H×C) after the convolution operations.
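As a minimal sketch of this step, assuming PyTorch: the SEBlock below shows the channel-attention mechanism the description relies on, while the single Conv2d is an illustrative stand-in for the full SE-ResNet backbone, which the invention pre-trains on Places365.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn a per-channel importance weight."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global spatial average per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by learned importance

# Input image I in R^(W'xH'x3) -> feature map F in R^(WxHxC)
image = torch.randn(1, 3, 224, 224)
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.ReLU(inplace=True),
    SEBlock(64),
)
feature_map = backbone(image)  # stand-in for the last conv layer's output
```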
Step two, detection of salient regions
By analyzing the characteristics of outdoor scene images, it can be found that whether two images belong to the same place can often be distinguished through objects such as landmark buildings or road signs. In the feature map F obtained by convolution, regions with high activation values are typically the particularly salient regions of the image; however, the sizes of objects in an image are not uniform. To adapt to the differences in the sizes of the salient regions, the invention uses a detection method based on non-zero-valued connected regions to determine the positions of the salient regions. The following operations are therefore performed on the feature map extracted in step one:
(1) Binarizing the feature map
After the image is processed by the convolutional layers and activation functions of a convolutional neural network, its spatial texture characteristics are retained, and the activation values of the feature map reflect the texture intensity of the corresponding image regions. To screen out the salient regions, the regions to be detected are first divided in the feature map of each channel using a binarized feature map: regions with larger activation values are set to 1, marking them as regions worth attention, and regions with smaller activation values are set to 0, marking them as low-texture regions not worth attention. In binarizing the feature map, the invention uses a threshold δ to decide whether each position is set to 0 or 1.
The binarized feature map F_B is obtained by the following formula:
F_B(i, j) = 1 if F(i, j) > δ, and F_B(i, j) = 0 otherwise.
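A minimal sketch of the binarization, assuming the feature map is available as a NumPy array of shape (W, H, C); the threshold delta is a design parameter whose value the description does not fix.

```python
import numpy as np

def binarize(feature_map: np.ndarray, delta: float) -> np.ndarray:
    """Per-channel binarized feature map F_B: 1 where F > delta, else 0."""
    return (feature_map > delta).astype(np.uint8)
```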
(2) Dividing the regions of interest (ROI)
It is assumed that the salient regions are independent, or at least non-overlapping, and each individual image region is represented by a connected region of non-zero values.
In the binarized feature map F_B, the values of the 8 neighboring positions of every position with value 1 are searched; adjacent elements with value 1 form the same region, and the neighbors of the other elements in the region are searched in turn until all elements of the same region have been found. This finally yields a number of regions of interest (ROIs); each channel may contain a different number of ROIs, and a total of N relevant regions is obtained.
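The 8-neighbourhood search can be sketched with SciPy's connected-component labelling; the helper name `extract_rois` is illustrative, and the per-channel ROI masks feed the steps below.

```python
import numpy as np
from scipy import ndimage

EIGHT_CONNECTED = np.ones((3, 3), dtype=int)  # 8-neighbourhood structuring element

def extract_rois(binary_map: np.ndarray):
    """binary_map: (W, H, C). Returns a list of (channel, boolean mask) ROIs."""
    rois = []
    for c in range(binary_map.shape[2]):
        labels, n = ndimage.label(binary_map[:, :, c], structure=EIGHT_CONNECTED)
        for k in range(1, n + 1):
            rois.append((c, labels == k))  # one connected non-zero region
    return rois  # N relevant regions in total
```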
(3) Determining the position of the salient region
For the feature-map region corresponding to each of the N ROIs, the mean a_r of its activation values is calculated; the formula is as follows:
a_r = (1 / (i_r · j_r)) · Σ_{i=1..i_r} Σ_{j=1..j_r} F_r(i, j),   i = 1, ..., i_r; j = 1, ..., j_r,
where i_r × j_r is the extent of region r on the feature map and F_r(i, j) are its activation values.
The regions are sorted by the value of a_r from high to low, and the m regions with the highest means are selected as the final salient regions S = {s_i | i ∈ {1, ..., m}}.
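A sketch of the ranking step, reusing the (channel, mask) ROI representation from the previous sketch; m is the number of salient regions to keep.

```python
import numpy as np

def top_m_regions(feature_map: np.ndarray, rois, m: int):
    means = [feature_map[:, :, c][mask].mean() for c, mask in rois]  # a_r per ROI
    order = np.argsort(means)[::-1]  # sort from high to low
    return [rois[i] for i in order[:m]]  # the final salient set S
```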
(4) Extracting local features
For a selected salient region s_i, whose extent is W_s × H_s with 0 < W_s, H_s ≤ min(W, H), the region s_i is located on the feature map F, and across all channels of the region a local feature D of dimension W_s × H_s × C is obtained. Finally, sum pooling is adopted to obtain the pooled local feature D_L ∈ R^(1×1×C); the formula is as follows:
D_L^c = Σ_{i=1..W_s} Σ_{j=1..H_s} D^c(i, j),   i = 1, ..., W_s; j = 1, ..., H_s,
where D_L^c is the value of the c-th channel of the local feature D_L.
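A sketch of the sum pooling, assuming the W_s × H_s extent of a salient region is the bounding box of its ROI mask (one reading of the description, which does not spell the extent out).

```python
import numpy as np

def pool_region(feature_map: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Sum-pool the region's W_s x H_s x C patch into a C-dimensional D_L."""
    ys, xs = np.where(mask)
    patch = feature_map[ys.min():ys.max() + 1, xs.min():xs.max() + 1, :]
    return patch.sum(axis=(0, 1))  # D_L in R^(1x1xC), flattened to length C
```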
Step three, training the visual bag-of-words model
A general visual bag-of-words model is trained on SIFT features extracted from images. The invention instead uses a network layer of SE-ResNet to generate the feature descriptors while retaining convolution information and local features; the performance of these descriptors is superior to that of SIFT-like detectors, especially when SIFT produces many outliers or cannot match a sufficient number of feature points.
In step three of the invention, training the visual bag-of-words model consists of three parts: image feature extraction, visual vocabulary tree generation, and visual vocabulary feature construction. The image features of the first part are obtained in steps one and two; step three mainly explains the generation of the visual vocabulary tree and the construction of the visual vocabulary features. The main flow is as follows:
(1) Collecting features for constructing the vocabulary tree
For the generation of the vocabulary tree, the invention uses the k-means method. As the most common clustering method, the k-means algorithm is intuitive and easy to understand and is widely used for clustering local image features.
(2) Constructing the vocabulary tree T using k-means
First, a root node is constructed, and k-means clusters all features a first time into k classes and their class centers, so that intra-class similarity is high and inter-class similarity is low. The class centers become the child nodes of the root node, completing the first layer of the vocabulary tree. The class at each node of the first layer is again clustered by k-means into k classes, whose centers become the child nodes of that node; this process repeats until all features have been assigned to leaf nodes, completing the construction of the vocabulary tree T (a sketch follows).
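A sketch of the hierarchical k-means construction; the branching factor k and maximum depth are free design choices, the class and helper names are illustrative, and scikit-learn's KMeans is assumed available.

```python
from itertools import count
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self, center):
        self.center = center   # class center; a visual word at the leaves
        self.children = []
        self.word_id = None    # assigned to leaf nodes only

def build_tree(features: np.ndarray, k: int, depth: int, center=None) -> Node:
    node = Node(features.mean(axis=0) if center is None else center)
    if depth == 0 or len(features) <= k:  # too few features: make a leaf
        return node
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    for c in range(k):
        node.children.append(
            build_tree(features[km.labels_ == c], k, depth - 1,
                       center=km.cluster_centers_[c]))
    return node

def assign_word_ids(node: Node, counter=None):
    """Number the leaves so each one names a visual word."""
    counter = counter if counter is not None else count()
    if not node.children:
        node.word_id = next(counter)
    for child in node.children:
        assign_word_ids(child, counter)
    return node
```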
(3) Visual vocabulary feature vector V_bow
Each leaf node of the vocabulary tree represents a visual word. Assuming the vocabulary tree contains V visual words, the number of times s that each word of the vocabulary appears in the image is counted, representing the image as a vector V_bow of dimension V.
(4) Weighted feature vector V_W
When a visual word appears in many images, or in every image of the image database, the counts of such words without practical meaning become large, so merely counting the number of times each word of the vocabulary appears in the image is not enough: the importance of each word differs and must be computed, i.e., the weight of each visual word in the vocabulary. To solve this problem, the invention uses the TF-IDF (term frequency-inverse document frequency) re-weighting method, where TF is the frequency with which a given visual word occurs and IDF is the inverse document frequency: the fewer the pictures containing a given visual word, the larger its IDF value and the stronger the word's discriminating ability, and the larger the TF-IDF value, the greater the importance of the feature word to the text. The calculation formulas are as follows:
TF_w = s / v
IDF_w = log(P / P_w)
TFIDF_w = TF_w × IDF_w
where s is the number of times the visual word appears, v is the total number of visual words, TF_w represents the frequency of occurrence of the word w among all words, P is the total number of pictures, and P_w is the number of pictures in which the word w appears.
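A sketch of steps (3) and (4) together; here TF divides the word count s by the total number of words quantized from the image, one common reading of the TF formula above, and `doc_freq[w]` holds P_w, the number of pictures containing word w.

```python
import numpy as np

def bow_vector(word_ids, vocab_size: int) -> np.ndarray:
    v_bow = np.zeros(vocab_size)
    for w in word_ids:  # one visual word per local feature of the image
        v_bow[w] += 1
    return v_bow

def tfidf_weight(v_bow: np.ndarray, doc_freq: np.ndarray, num_pictures: int):
    tf = v_bow / max(v_bow.sum(), 1.0)
    idf = np.log(num_pictures / np.maximum(doc_freq, 1))
    return tf * idf  # the weighted feature vector V_W
```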
Step four, similarity matching between images
For two images I_a and I_b, the above steps yield the global features V_a^W and V_b^W. The invention measures the distance between the two global feature vectors by the cosine similarity formula:
sim(I_a, I_b) = (V_a^W · V_b^W) / (‖V_a^W‖ · ‖V_b^W‖)
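A minimal implementation of the similarity measure between two weighted bag-of-words vectors:

```python
import numpy as np

def cosine_similarity(v_a: np.ndarray, v_b: np.ndarray) -> float:
    denom = np.linalg.norm(v_a) * np.linalg.norm(v_b)
    return float(v_a @ v_b / denom) if denom else 0.0
```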
Compared with the prior art, the invention has the following beneficial effects:
1. The invention provides a more robust feature extraction method that detects the salient regions in a picture, effectively resists interference from changes in scene viewing angle, extracts global image features with stronger robustness in outdoor scenes, and reduces mismatching.
2. The method provided by the invention does not require large amounts of data for parameter training of the convolutional neural network, saving computing resources and time.
3. Deep learning features replace the traditional features and are combined into the bag-of-words model, improving the accuracy of place re-identification while keeping the feature dimensionality unchanged.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
FIG. 1 is a flow chart of the SE-Net network of the present invention.
FIG. 2 is a schematic view of the overall process of the present invention.
FIG. 3 is a graph of experimental results in an embodiment provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. Of course, the specific embodiments described herein are merely illustrative of the invention and are not intended to be limiting.
Example 1
Referring to FIGS. 1 to 3, the invention provides an outdoor location re-identification method based on saliency region detection. The visual place recognition problem is similar to the image retrieval problem: for an input image, the picture with the highest similarity to it is retrieved from an image database. In the experiments, the model pre-training of SE-ResNet and the generation of the visual bag-of-words model are performed on the Places365 public data set, and the accuracy of the model of the invention is compared with that of the NetVLAD method and a SIFT feature matching method on the Tokyo 24/7 data set.
1. Model pre-training
Places365 is a data set for training scene recognition models and contains many different scenes. The SE-ResNet network model used for feature extraction is pre-trained on this data set, which makes the resulting model more sensitive to important information in outdoor scenes, such as street signposts and buildings, so the extracted features are more reliable.
2. Generation of the visual bag-of-words model
In the invention, a certain number of pictures must be selected for feature extraction to form the dictionary tree of the visual bag-of-words model. In this embodiment, 2000 representative pictures of outdoor scenes are selected, all from the Places365 data set. After the pictures are selected, feature extraction is performed on all of them with the pre-trained SE-ResNet network, taking the output of the last convolutional layer of SE-ResNet. After the convolutional features are obtained, their salient regions are extracted to obtain a fixed number of local features; in this embodiment each picture keeps the top 10 regions with the highest activation values as its local features, giving 20000 feature vectors in total. After all feature vectors are regularized, the k-means clustering algorithm is executed in a loop to construct the dictionary tree, and weights are assigned to all leaf nodes according to the TF-IDF formula.
3. Comparison test:
This embodiment uses the Tokyo 24/7 data set for model accuracy verification. The Tokyo 24/7 data set contains 75,000 database images for retrieval and 315 query images taken with a mobile phone camera. The query images were taken during the day, in the evening and at night, while the database images were taken only during the day, so the illumination between the query images and the images in the search database varies greatly and the comparison is correspondingly difficult.
For the evaluation criterion of query correctness, this embodiment considers a query successful if, among the top n images with the highest similarity in the retrieval result, at least one result image lies within 5 meters of the position of the query image. The position distance can be determined from the GPS information of each picture given by the data set. The percentage of correctly identified queries (recall) is then plotted for different values of n.
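A sketch of this evaluation protocol; `rankings` and `distances_m` are hypothetical inputs holding, respectively, each query's best-first retrieval list and the precomputed GPS distances in metres to every database image.

```python
def recall_at_n(rankings, distances_m, n: int, radius_m: float = 5.0) -> float:
    hits = 0
    for q, ranked_ids in enumerate(rankings):
        if any(distances_m[q][r] <= radius_m for r in ranked_ids[:n]):
            hits += 1  # at least one top-n result within 5 m: query succeeds
    return hits / len(rankings)
```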
In this embodiment, the NetVLAD method is first used in a comparison experiment against the model of the invention: the same query pictures are input, the same n values are set, the success of each query is recorded, the recall at the different n values is obtained, and the recall curves are plotted, as shown in FIG. 3. It can be seen that the recall of the invention is higher than that of the NetVLAD method at the different values of n.
In addition, this embodiment also compares the retrieval precision of the traditional feature point extraction method (SIFT) with that of the feature extraction method of the invention. First, the same training picture set is used, and SIFT features are collected from all pictures. A visual bag-of-words model is then constructed from all SIFT features to obtain a SIFT feature dictionary tree T_SIFT. SIFT features are extracted from each query picture and converted through the dictionary tree into a visual bag-of-words vector V_bow^SIFT, which is compared against all images in the image database to retrieve the top n images with the highest similarity. The percentage of correctly identified queries (recall) is recorded for the different values of n. As can be seen from FIG. 3, convolutional neural network features re-identify locations in outdoor scenes better than the SIFT features extracted by the traditional method.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. An outdoor location re-identification method based on saliency region detection is characterized by comprising the following steps:
Step one, extracting the SE-ResNet feature map
In the convolutional neural network, the convolution operations fuse features spatially or extract multi-scale spatial information through multiple channels; the channel-attention mechanism of SE-Net enables the model to automatically learn the importance of the features of different channels. SE-Net is embedded into ResNet, and in the feature-map extraction the SE-ResNet model is adopted to perform the convolution operations on the image: an input image I ∈ R^(W′×H′×3) yields a feature map F ∈ R^(W×H×C) after the convolution operations;
Step two, detection of salient regions
By analyzing the characteristics of outdoor scene images, whether two images belong to the same outdoor place is distinguished through landmark buildings or road signs; in the feature map F obtained by convolution, regions with high activation values are the particularly salient regions of the image, and to adapt to the different sizes of the salient regions, the positions of the salient regions are determined by a detection method based on non-zero-valued connected regions;
Step three, training the visual bag-of-words model
A general visual bag-of-words model is obtained by training on SIFT features extracted from images; here, a network layer of SE-ResNet generates the feature descriptors while retaining convolution information and local features, and the performance of these descriptors is superior to that of SIFT-like detectors, especially when SIFT produces many outliers or cannot match a sufficient number of feature points;
Step four, similarity matching between images
For two images I_a and I_b, the above steps yield the global features V_a^W and V_b^W, and the distance between the two global feature vectors is measured by the cosine similarity formula:
sim(I_a, I_b) = (V_a^W · V_b^W) / (‖V_a^W‖ · ‖V_b^W‖).
2. The outdoor location re-identification method based on saliency region detection as claimed in claim 1, characterized in that the feature map extracted in step one is subjected to the following steps:
(1) Binarizing the feature map
After the image is processed by the convolutional layers and activation functions of a convolutional neural network, its spatial texture features are retained, and the activation values of the feature map reflect the texture intensity of the corresponding image regions. The regions to be detected are first divided in the feature map of each channel using a binarized feature map: regions with larger activation values are set to 1, marking them as regions worth attention, and regions with smaller activation values are set to 0, marking them as low-texture regions not worth attention; in binarizing the feature map, a threshold δ is used to decide whether each position is set to 0 or 1;
the binarized feature map F_B is obtained by the following formula:
F_B(i, j) = 1 if F(i, j) > δ, and F_B(i, j) = 0 otherwise;
(2) Dividing the region of interest ROI
Assuming independence, or at least no overlap, between the salient regions, each individual image region is represented by a connected region of non-zero values;
in the binarized feature map F_B, the values of the 8 neighboring positions of every position with value 1 are searched; adjacent elements with value 1 form the same region, and the neighbors of the other elements in the region are searched in turn until all elements of the same region have been found, finally obtaining several regions of interest (ROIs); each channel contains a different number of ROIs, and a total of N relevant regions is finally generated;
(3) determining the position of the salient region
For the feature-map region corresponding to each of the N ROIs, the mean a_r of its activation values is calculated by the following formula:
a_r = (1 / (i_r · j_r)) · Σ_{i=1..i_r} Σ_{j=1..j_r} F_r(i, j);
the regions are sorted by the value of a_r from high to low, and the highest m regions are selected as the final salient regions S = {s_i | i ∈ {1, ..., m}};
(4) Extracting local features
For a selected salient region s_i, whose extent is W_s × H_s with 0 < W_s, H_s ≤ min(W, H), the region s_i is located on the feature map F, and across all channels of the region a local feature D of dimension W_s × H_s × C is obtained; the pooled local feature D_L ∈ R^(1×1×C) is then obtained by sum pooling, with the formula:
D_L^c = Σ_{i=1..W_s} Σ_{j=1..H_s} D^c(i, j),
where D_L^c is the value of the c-th channel of the local feature D_L.
3. The outdoor location re-identification method based on saliency region detection according to claim 1, wherein in step three the training of the visual bag-of-words model consists of three parts: image feature extraction, visual vocabulary tree generation and visual vocabulary feature construction. The image features are obtained in steps one and two; the specific contents of the visual vocabulary tree generation and the visual vocabulary feature construction in step three comprise the following steps:
(1) collecting features for constructing a lexical tree
For the generation of the visual vocabulary tree, the k-means algorithm is used as the clustering method; before clustering, a certain number of representative outdoor scene images are collected, feature extraction is performed on each image according to steps one and two, m salient regions are selected from each image, and the local features of all salient regions are obtained;
(2) construction of lexical tree T using k-means
First, a root node is constructed, and k-means clusters all features a first time into k classes and their class centers; the class centers become the child nodes of the root node, completing the first layer of the vocabulary tree. The class at each node of the first layer is again clustered by k-means into k classes, whose centers become the child nodes of that node; this process repeats until all features have been assigned to leaf nodes, completing the construction of the vocabulary tree T;
(3) visual vocabulary feature vector V_bow
Each leaf node of the vocabulary tree represents a visual word; assuming the vocabulary tree contains V visual words, the number of times s that each word of the vocabulary appears in the image is counted, so that the image is represented as a vector V_bow of dimension V;
(4) Weighted feature vector VW
The TF-IDF re-weighting method is used, where TF is the frequency of occurrence of a visual word and IDF is the inverse document frequency; the fewer the pictures containing a given visual word, the larger its IDF value and the stronger the word's discriminating ability, and the larger the TF-IDF value, the greater the importance of the feature word to the text. The calculation formulas are as follows:
TF_w = s / v
IDF_w = log(P / P_w)
TFIDF_w = TF_w × IDF_w
where s is the number of times the visual word appears, v is the total number of visual words, TF_w represents the frequency of occurrence of the word w among all words, P is the total number of pictures, and P_w is the number of pictures in which the word w appears.
CN202210104480.0A 2022-01-28 2022-01-28 Outdoor location re-identification method based on saliency region detection Pending CN114494736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210104480.0A CN114494736A (en) 2022-01-28 2022-01-28 Outdoor location re-identification method based on saliency region detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210104480.0A CN114494736A (en) 2022-01-28 2022-01-28 Outdoor location re-identification method based on saliency region detection

Publications (1)

Publication Number Publication Date
CN114494736A 2022-05-13

Family

ID=81476827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210104480.0A Pending CN114494736A (en) 2022-01-28 2022-01-28 Outdoor location re-identification method based on saliency region detection

Country Status (1)

Country Link
CN (1) CN114494736A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination