CN117292268A - Vector retrieval positioning method, system and medium for panoramic image of outdoor scene - Google Patents

Vector retrieval positioning method, system and medium for panoramic image of outdoor scene

Info

Publication number
CN117292268A
Authority
CN
China
Prior art keywords
remote sensing
image
panoramic image
outdoor scene
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211696927.4A
Other languages
Chinese (zh)
Inventor
高跃
肖罡
徐阳
刘小兰
黄晋
赵斯杰
张蔚
万可谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Kejun Industrial Co ltd
Original Assignee
Jiangxi Kejun Industrial Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Kejun Industrial Co ltd filed Critical Jiangxi Kejun Industrial Co ltd
Priority to CN202211696927.4A priority Critical patent/CN117292268A/en
Publication of CN117292268A publication Critical patent/CN117292268A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vector retrieval positioning method, system and medium for an outdoor scene panoramic image. The method comprises: performing view projection transformation on the remote sensing images in a target remote sensing database corresponding to a target area and extracting their image features to obtain remote sensing image features; acquiring an outdoor scene panoramic image collected in a sub-area to be positioned within the target area, and extracting image features from the outdoor scene panoramic image to obtain panoramic image features; and determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database. The invention performs slicing and projection transformation on the remote sensing images and achieves high-speed image positioning in a real-time environment, with the advantages of low computing resource consumption and high positioning efficiency, and can effectively solve the problem that existing registration-based positioning methods for large outdoor environments place high demands on data quality and computing resources and are therefore difficult to apply in practical engineering.

Description

Vector retrieval positioning method, system and medium for panoramic image of outdoor scene
Technical Field
The invention relates to the technical field of computer vision, and in particular to a vector retrieval positioning method, system and medium for an outdoor scene panoramic image.
Background
Image-based positioning algorithms, as a cross-disciplinary technology that fuses several research fields such as computer vision, machine learning, multi-view geometry and image retrieval, have broad application prospects and great research value in fields such as robot navigation and positioning, augmented reality, three-dimensional reconstruction and landmark recognition. Image-based localization techniques are also an integral part of high-level image processing tasks. At present, positioning mainly adopts the following two approaches: (1) point cloud registration: point cloud registration (Point Cloud Registration), i.e. stitching and registering point clouds; for two frames of point clouds with overlapping information, the overlapping parts are transformed into the same unified coordinate system by solving a transformation matrix (a rotation matrix R and a translation matrix T). However, because a complete three-dimensional point cloud model of the scene and of the target to be registered is difficult to obtain in an actual scene, and the huge data volume of point clouds imposes a large computational burden on registration, positioning methods based on point cloud registration are difficult to apply in practical engineering. (2) image registration: image registration (Image Registration) is the process of matching and overlaying two or more images acquired at different times, with different sensors (imaging devices) or under different conditions (weather, illumination, imaging position and angle, etc.), and has been widely used in remote sensing data analysis, computer vision, image processing and other fields. The image registration process is as follows: first, features are extracted from the two images to obtain feature points; matched feature point pairs are then found by similarity measurement; image space coordinate transformation parameters are obtained from the matched feature point pairs; and finally, the images are registered using the coordinate transformation parameters. In practical applications, however, this consumes large computing resources and is difficult to use in practice. In short, the computational resource requirements of both point cloud registration and image registration make them difficult to apply in practical engineering, and their requirements on data accuracy further limit their routine use.
Disclosure of Invention
The technical problem to be solved by the invention is that existing registration-based positioning methods for large outdoor environments place high demands on data quality and computing resources, making them difficult to apply in practical engineering.
In order to solve the technical problems, the invention adopts the following technical scheme:
a vector retrieval positioning method of an outdoor scene panoramic image comprises the following steps:
s101, respectively carrying out view projection transformation on remote sensing images in a target remote sensing database corresponding to a target area, and extracting image features from the remote sensing images subjected to the view projection transformation to obtain remote sensing image features; acquiring an outdoor scene panoramic image acquired in a to-be-positioned sub-area in a target area, and extracting image features from the outdoor scene panoramic image to obtain panoramic image features;
s102, determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database.
Optionally, the outdoor scene panoramic image collected in the sub-area to be positioned within the target area in step S101 refers to an outdoor scene panoramic image collected at an intermediate position of the sub-area to be positioned within the target area.
Optionally, the outdoor scene panoramic image in step S101 comprises n panoramic images P_1 ~ P_n, each of size H×W.
Optionally, extracting image features from the outdoor scene panoramic image in step S101 to obtain panoramic image features comprises: extracting features from the n panoramic images P_1 ~ P_n of size H×W to obtain n panoramic image feature vectors each of size 1×C; and fusing the n feature vectors of size 1×C into one feature vector of size 1×C as the panoramic image feature.
Optionally, fusing the n feature vectors of size 1×C refers to averaging the n feature vectors of size 1×C.
Optionally, the functional expression for performing the view projection transformation in step S101 is:
In the above formula, (x_i^s, y_i^s) denotes the coordinates of the i-th pixel in the image after view projection transformation, A_a denotes the side length of the transformed image of size A_a × A_a, (x_i^t, y_i^t) denotes the coordinates of the i-th pixel in the remote sensing image, H_g is the height of the remote sensing image, and W_g is the width of the remote sensing image, which has the same size as the panoramic image.
Optionally, in step S101, the target remote sensing database contains m remote sensing images I_1^t ~ I_m^t in total; after view projection transformation, m transformed images I_1^s ~ I_m^s are obtained, and image features are extracted from them to obtain m feature vectors of size 1×C, which serve respectively as the remote sensing image features of the corresponding remote sensing images.
Optionally, determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database in step S102 comprises: calculating the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database, finding the remote sensing image corresponding to the minimum distance, and taking the longitude, latitude and altitude coordinates recorded for that remote sensing image in the target remote sensing database as the positioning position of the sub-area to be positioned.
In addition, the invention also provides a retrieval and positioning system based on the outdoor scene panoramic image, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the above vector retrieval positioning method for an outdoor scene panoramic image.
Furthermore, the invention provides a computer readable storage medium having stored therein a computer program to be programmed or configured by a microprocessor to perform the above vector retrieval positioning method for an outdoor scene panoramic image.
Compared with the prior art, the invention has the following advantages: the invention performs slicing and projection transformation on the remote sensing images and achieves high-speed image positioning in a real-time environment, with the advantages of low computing resource consumption, high positioning efficiency and a low data quality requirement, and can effectively solve the problem that existing registration-based positioning methods for large outdoor environments place high demands on data quality and computing resources and are therefore difficult to apply in practical engineering.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
Detailed Description
As shown in FIG. 1, the vector retrieval positioning method of the outdoor scene panoramic image of this embodiment comprises:
S101, performing view projection transformation on each remote sensing image in the target remote sensing database corresponding to the target area, and extracting image features from the transformed remote sensing images to obtain remote sensing image features; acquiring an outdoor scene panoramic image collected in the sub-area to be positioned within the target area, and extracting image features from the outdoor scene panoramic image to obtain panoramic image features;
S102, determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database.
In this embodiment, the outdoor scene panoramic image collected in the sub-area to be positioned within the target area in step S101 refers to an outdoor scene panoramic image collected at the middle position of the sub-area to be positioned within the target area. It should be noted that this embodiment only involves the use of an already acquired panoramic image and does not depend on any specific panoramic image acquisition method.
In order to reduce the data quality requirement that the method of this embodiment places on the outdoor scene panoramic image, the outdoor scene panoramic image in step S101 of this embodiment comprises n panoramic images P_1 ~ P_n of size H×W. Because the n panoramic images P_1 ~ P_n of size H×W together contain more image information, the problem of an excessively high data quality requirement on the outdoor scene panoramic image caused by a single image containing too little information can be avoided.
Undoubtedly, the n panoramic images P_1 ~ P_n of size H×W can be processed independently, i.e., panoramic image features are extracted separately for each image, a positioning position of the sub-area to be positioned is determined for each based on the distances between its features and the remote sensing image features of each remote sensing image in the target remote sensing database, and the final positioning position is obtained by voting over the n resulting positioning positions; alternatively, a fusion approach can be adopted to simplify the subsequent processing and the voting procedure. For example, as an optional implementation, extracting image features from the outdoor scene panoramic image in step S101 of this embodiment to obtain panoramic image features comprises: extracting features from the n panoramic images P_1 ~ P_n of size H×W to obtain n panoramic image feature vectors each of size 1×C; and fusing the n feature vectors of size 1×C into one feature vector of size 1×C as the panoramic image feature. Through feature vector fusion, only one panoramic image feature needs to be processed subsequently, which both reduces the data quality requirement on the outdoor scene panoramic image and reduces the data processing resource consumption. Any feature extraction network may be used as needed for the n panoramic images P_1 ~ P_n of size H×W; for example, as an optional implementation, this embodiment uses a ResNet network to extract features from the n panoramic images P_1 ~ P_n of size H×W. For a panoramic image of size H×W, the ResNet network first passes through 3 convolution layers to obtain an A_a/8 × A_a/8 × 256 feature map, and this A_a/8 × A_a/8 × 256 feature map is then fed through 2 convolution layers and a channel averaging layer to obtain an A_a/16 × A_a/16 × 512 feature map as the panoramic image feature. Since ResNet is an existing deep neural network and this embodiment involves only its basic application, its details are not described here.
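As one way to realize the feature extraction described above, the following sketch uses a truncated ResNet from torchvision with its built-in global average pooling to produce a 1×C vector per image. The choice of backbone (resnet18), the pretrained weights and the pooling are assumptions for illustration only; the patent's own variant (3 convolution stages followed by 2 convolution stages and a channel averaging layer) is not reproduced exactly.

```python
import torch
import torchvision.models as models

# Generic 1 x C feature extractor standing in for the ResNet described above.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep pooled features
backbone.eval()

@torch.no_grad()
def extract_feature(image: torch.Tensor) -> torch.Tensor:
    """image: (3, H, W) float tensor, already resized and normalized.
    Returns a (C,) feature vector (C = 512 for resnet18)."""
    return backbone(image.unsqueeze(0)).squeeze(0)
```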
It should be noted that fusing the n feature vectors of size 1×C into one feature vector of size 1×C can be performed in any desired manner, for example by machine learning: a machine learning model (e.g., a convolutional neural network, CNN) can be trained on data samples consisting of n feature vectors of size 1×C and the corresponding fused feature vector of size 1×C, and the trained model can then be used to fuse the n feature vectors of size 1×C. However, in view of implementation and computational efficiency in engineering, in this embodiment fusing the n feature vectors of size 1×C means averaging the n feature vectors of size 1×C, so that the fusion is simple and fast to implement. Besides averaging, other data-statistics operations may also be adopted as needed to fuse the n feature vectors of size 1×C, which are not enumerated here.
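A minimal sketch of the averaging fusion described above; the variable names are illustrative only.

```python
import numpy as np

def fuse_panorama_features(features: np.ndarray) -> np.ndarray:
    # features: (n, C) array, one row per panoramic sub-image feature vector.
    # Averaging over the n rows yields the single 1 x C panoramic image feature.
    return features.mean(axis=0)
```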
In this embodiment, the target remote sensing database in step S101 contains m remote sensing images I_1^t ~ I_m^t in total, which are collected over the target positioning area and denoted I_1^t ~ I_m^t. The target area and the sub-area to be positioned are in an inclusion relationship, i.e., the target area is larger than and contains the sub-area to be positioned.
In this embodiment, the functional expression for performing the view projection transformation in step S101 is as follows:
In the above formula, (x_i^s, y_i^s) denotes the coordinates of the i-th pixel in the image after view projection transformation, A_a denotes the side length of the transformed image of size A_a × A_a, (x_i^t, y_i^t) denotes the coordinates of the i-th pixel in the remote sensing image, H_g is the height of the remote sensing image, and W_g is the width of the remote sensing image, which has the same size as the panoramic image.
In this embodiment, the target remote sensing database in step S101 contains m remote sensing images I_1^t ~ I_m^t in total; after view projection transformation, m transformed images I_1^s ~ I_m^s are obtained, and image features are extracted from them to obtain m feature vectors of size 1×C, which serve respectively as the remote sensing image features of the corresponding remote sensing images. In this embodiment, image feature extraction for the m transformed images I_1^s ~ I_m^s is also realized with a ResNet network: for a transformed image I_1^s ~ I_m^s of size A_a × A_a, the ResNet network first passes through 3 convolution layers to obtain an A_a/8 × A_a/8 × 256 feature map, and this A_a/8 × A_a/8 × 256 feature map is then fed through 2 convolution layers and a channel averaging layer to obtain an A_a/16 × A_a/16 × 512 feature map as the remote sensing image feature. Since ResNet is an existing deep neural network and this embodiment involves only its basic application, its details are not described here.
In step S102 of this embodiment, determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image feature and the remote sensing image features of each remote sensing image in the target remote sensing database comprises: calculating the distances between the panoramic image feature and the remote sensing image features of each remote sensing image in the target remote sensing database, finding the remote sensing image corresponding to the minimum distance, and taking the longitude, latitude and altitude coordinates recorded for that remote sensing image in the target remote sensing database as the positioning position of the sub-area to be positioned. Specifically, in this embodiment the distances between the panoramic image feature and the remote sensing image features of each remote sensing image in the target remote sensing database are calculated to obtain the distance set {d_1, d_2, ..., d_m}, where d_1 ~ d_m are the distances between the panoramic image feature and the remote sensing image features of the m remote sensing images in the target remote sensing database; the distance set {d_1, d_2, ..., d_m} is sorted to obtain the minimum distance d_p, the remote sensing image corresponding to the minimum distance d_p is found, and the longitude, latitude and altitude coordinates (Lat, Lng, E) stored for that remote sensing image in the database are extracted as the positioning position of the sub-area to be positioned. Any feasible geometric distance may be used for the above distance calculation as needed; for example, the Euclidean distance is used in this embodiment, with the functional expression:
In the above formula, d_i is the distance between the panoramic image feature F_p and the remote sensing image feature of the i-th remote sensing image in the target remote sensing database.
In summary, the vector retrieval positioning method for the outdoor scene panoramic image of this embodiment performs slicing and projection transformation on the remote sensing images and achieves high-speed image positioning in a real-time environment, with the advantages of low computing resource consumption, high positioning efficiency and a low data quality requirement, and can effectively solve the problem that existing registration-based positioning methods for large outdoor environments place high demands on data quality and computing resources and are therefore difficult to apply in practical engineering.
In addition, the invention also provides a retrieval and positioning system based on the outdoor scene panoramic image, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the above vector retrieval positioning method for an outdoor scene panoramic image. Furthermore, the invention provides a computer readable storage medium having stored therein a computer program to be programmed or configured by a microprocessor to perform the above vector retrieval positioning method for an outdoor scene panoramic image.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the present invention may occur to one skilled in the art without departing from the principles of the present invention and are intended to be within the scope of the present invention.

Claims (10)

1. A vector retrieval positioning method for an outdoor scene panoramic image, characterized by comprising the following steps:
s101, respectively carrying out view projection transformation on remote sensing images in a target remote sensing database corresponding to a target area, and extracting image features from the remote sensing images subjected to the view projection transformation to obtain remote sensing image features; acquiring an outdoor scene panoramic image acquired in a to-be-positioned sub-area in a target area, and extracting image features from the outdoor scene panoramic image to obtain panoramic image features;
s102, determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database.
2. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 1, wherein the outdoor scene panoramic image collected in the sub-area to be positioned within the target area in step S101 is an outdoor scene panoramic image collected at an intermediate position of the sub-area to be positioned within the target area.
3. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 1, wherein the outdoor scene panoramic image in step S101 comprises n panoramic images P_1 ~ P_n of size H×W.
4. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 3, wherein extracting image features from the outdoor scene panoramic image in step S101 to obtain panoramic image features comprises: extracting features from the n panoramic images P_1 ~ P_n of size H×W to obtain n panoramic image feature vectors each of size 1×C; and fusing the n feature vectors of size 1×C into one feature vector of size 1×C as the panoramic image feature.
5. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 4, wherein fusing the n feature vectors of size 1×C means averaging the n feature vectors of size 1×C.
6. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 1, wherein the functional expression for performing the view projection transformation in step S101 is:
In the above formula, (x_i^s, y_i^s) denotes the coordinates of the i-th pixel in the image after view projection transformation, A_a denotes the side length of the transformed image of size A_a × A_a, (x_i^t, y_i^t) denotes the coordinates of the i-th pixel in the remote sensing image, H_g is the height of the remote sensing image, and W_g is the width of the remote sensing image, which has the same size as the panoramic image.
7. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 1, wherein the target remote sensing database in step S101 contains m remote sensing images I_1^t ~ I_m^t in total; after view projection transformation, m transformed images I_1^s ~ I_m^s are obtained, and image features are extracted from them to obtain m feature vectors of size 1×C, which serve respectively as the remote sensing image features of the corresponding remote sensing images.
8. The vector retrieval positioning method for an outdoor scene panoramic image according to claim 7, wherein determining the positioning position of the sub-area to be positioned based on the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database in step S102 comprises: calculating the distances between the panoramic image features and the remote sensing image features of each remote sensing image in the target remote sensing database, finding the remote sensing image corresponding to the minimum distance, and taking the longitude, latitude and altitude coordinates recorded for that remote sensing image in the target remote sensing database as the positioning position of the sub-area to be positioned.
9. A retrieval and positioning system for an outdoor scene panoramic image, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the vector retrieval positioning method for an outdoor scene panoramic image according to any one of claims 1-8.
10. A computer readable storage medium having a computer program stored therein, wherein the computer program is to be programmed or configured by a microprocessor to perform the vector retrieval positioning method for an outdoor scene panoramic image according to any one of claims 1-8.
CN202211696927.4A 2022-12-28 2022-12-28 Vector retrieval positioning method, system and medium for panoramic image of outdoor scene Pending CN117292268A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211696927.4A CN117292268A (en) 2022-12-28 2022-12-28 Vector retrieval positioning method, system and medium for panoramic image of outdoor scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211696927.4A CN117292268A (en) 2022-12-28 2022-12-28 Vector retrieval positioning method, system and medium for panoramic image of outdoor scene

Publications (1)

Publication Number Publication Date
CN117292268A true CN117292268A (en) 2023-12-26

Family

ID=89252379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211696927.4A Pending CN117292268A (en) 2022-12-28 2022-12-28 Vector retrieval positioning method, system and medium for panoramic image of outdoor scene

Country Status (1)

Country Link
CN (1) CN117292268A (en)

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
CN109544677B (en) Indoor scene main structure reconstruction method and system based on depth image key frame
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN111178236B (en) Parking space detection method based on deep learning
Toft et al. Long-term 3d localization and pose from semantic labellings
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
CN113168717B (en) Point cloud matching method and device, navigation method and equipment, positioning method and laser radar
Lim et al. Real-time image-based 6-dof localization in large-scale environments
US9420265B2 (en) Tracking poses of 3D camera using points and planes
Irschara et al. From structure-from-motion point clouds to fast location recognition
WO2012155121A2 (en) Systems and methods for estimating the geographic location at which image data was captured
EP2166375B1 (en) System and method of extracting plane features
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
Gao et al. Ground and aerial meta-data integration for localization and reconstruction: A review
Toft et al. Single-image depth prediction makes feature matching easier
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
Armagan et al. Accurate Camera Registration in Urban Environments Using High-Level Feature Matching.
CN113537047A (en) Obstacle detection method, obstacle detection device, vehicle and storage medium
WO2020015501A1 (en) Map construction method, apparatus, storage medium and electronic device
CA2787856A1 (en) Systems and methods for estimating the geographic location at which image data was captured
US20200191577A1 (en) Method and system for road image reconstruction and vehicle positioning
CN115953471A (en) Indoor scene multi-scale vector image retrieval and positioning method, system and medium
Gao et al. Complete and accurate indoor scene capturing and reconstruction using a drone and a robot
CN117292268A (en) Vector retrieval positioning method, system and medium for panoramic image of outdoor scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination