WO2021057797A1 - Positioning method and apparatus, terminal, and storage medium - Google Patents

Positioning method and apparatus, terminal, and storage medium

Info

Publication number
WO2021057797A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
key frame
map
information
Prior art date
Application number
PCT/CN2020/117156
Other languages
English (en)
Chinese (zh)
Inventor
金珂
马标
李姬俊男
刘耀勇
蒋燚
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2021057797A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • This application relates to indoor positioning technology, and in particular, but not exclusively, to a positioning method and apparatus, a terminal, and a storage medium.
  • PDR: Pedestrian Dead Reckoning.
  • the embodiments of the present application provide a positioning method and device, a terminal, and a storage medium in order to solve at least one problem existing in the related art.
  • The embodiment of the present application provides a positioning method, which includes: determining current network feature information of the current location of the network where an image acquisition device is located; searching, from a preset first map, for an area identifier corresponding to the current network feature information; determining, according to the area identifier, the target area where the image acquisition device is located; using the image acquisition device to collect an image to be processed and extracting a first image feature of the image to be processed; matching, from the image features of the key frame images stored in a preset second map corresponding to the target area, an image feature corresponding to the first image feature to obtain a second image feature; and determining the pose information of the image acquisition device according to the second image feature.
  • an embodiment of the present application provides a positioning device, which includes: a first determination module, a first search module, a second determination module, a first extraction module, a first matching module, and a third determination module, wherein:
  • the first determining module is configured to determine current network feature information of the current location of the network where the image acquisition device is located;
  • the first search module is configured to search for an area identifier corresponding to the current network feature information from a preset first map
  • the second determining module is configured to determine the target area where the image acquisition device is located according to the area identifier
  • the first extraction module is configured to use the image acquisition device to collect an image to be processed, and extract the first image feature of the image to be processed;
  • the first matching module is configured to match the image features corresponding to the first image features from the image features of the key frame images stored in the preset second map corresponding to the target area to obtain a second image feature;
  • the third determining module is configured to determine the pose information of the image acquisition device according to the second image feature.
  • An embodiment of the present application provides a terminal, including a memory and a processor, the memory stores a computer program that can run on the processor, and the processor implements the steps in the positioning method when the program is executed.
  • the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned positioning method are realized.
  • The embodiments of the present application provide a positioning method, device, terminal, and storage medium: the current network feature information of the current location of the network where the image acquisition device is located is determined; the area identifier corresponding to the current network feature information is searched for from a preset first map; according to the area identifier, the target area where the image acquisition device is located is determined; the image acquisition device is used to collect an image to be processed, and the first image feature of the image to be processed is extracted; a second image feature matching the first image feature is obtained from the key frame images stored in a preset second map corresponding to the target area; and the pose information of the image acquisition device is determined according to the second image feature. In this way, the preset first map is first used to coarsely locate the image acquisition device, and then the preset second map corresponding to the coarsely located target area is used to accurately position the image acquisition device based on its key frame images, obtaining the pose information of the image acquisition device and improving the positioning accuracy.
  • FIG. 1 is a schematic diagram of the implementation process of a positioning method according to an embodiment of this application.
  • FIG. 2A is a schematic diagram of another implementation process of the positioning method according to an embodiment of this application.
  • FIG. 2B is a schematic diagram of another implementation process of the positioning method according to an embodiment of this application.
  • FIG. 3A is a schematic diagram of another implementation process of the positioning method according to an embodiment of this application.
  • FIG. 3B is a schematic diagram of a scene of the positioning method according to an embodiment of this application.
  • FIG. 3C is a schematic diagram of another scene of the positioning method according to an embodiment of this application.
  • FIG. 4 is a schematic diagram of the structure of a ratio vector according to an embodiment of this application.
  • FIG. 5A is a diagram of an application scenario for determining a matching frame image according to an embodiment of this application.
  • FIG. 5B is a schematic structural diagram of determining the location information of a collection device according to an embodiment of this application.
  • FIG. 6 is a schematic diagram of the composition structure of a positioning device according to an embodiment of this application.
  • FIG. 1 is a schematic diagram of the implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step S101 Determine current network feature information of the current location of the network where the image acquisition device is located.
  • the network characteristic information may be the signal strength of the network where the image acquisition device is located, or the distribution of the signal strength of the network where the image acquisition device is located.
  • Step S102 searching for an area identifier corresponding to the current network feature information from the preset first map.
  • The preset first map may be understood as a Wireless Fidelity (WiFi) fingerprint map; that is, the identification information of each area and the signal strength of the network corresponding to that area (or the distribution of the network signal strength corresponding to that area) are stored in the preset first map, so that the identification information of each area corresponds one-to-one to the signal strength of the network.
  • Step S103 Determine the target area where the image acquisition device is located according to the area identifier.
  • the target area where the image acquisition device is located can be uniquely determined based on the area identification.
  • Step S104 Use the image acquisition device to collect an image to be processed, and extract a first image feature of the image to be processed.
  • the first image feature includes: description information and two-dimensional (2 Dimensions, 2D) position information of the feature points of the image to be processed.
  • Here, the first image feature is extracted as follows: first, the feature points of the image to be processed are extracted; then the description information of each feature point and the 2D coordinate information of each feature point in the image to be processed are determined, where the description information of a feature point can be understood as descriptor information that uniquely identifies that feature point.
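  • A minimal sketch of this extraction step is given below, using OpenCV's ORB detector to obtain feature points, their 2D coordinates, and their descriptors; ORB and the specific API calls are illustrative assumptions, since the disclosure does not name a particular feature extractor.

```python
# Hedged sketch: extracting a "first image feature" (2D keypoints + descriptors)
# from an image to be processed. ORB is an assumption; the method only requires
# feature points with descriptor information and 2D coordinates.
import cv2
import numpy as np

def extract_first_image_feature(image_path: str, num_features: int = 150):
    """Return the 2D coordinates and descriptors of feature points in the image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=num_features)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    coords_2d = np.array([kp.pt for kp in keypoints], dtype=np.float32)  # (N, 2)
    return coords_2d, descriptors  # descriptors uniquely describe each feature point
```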
  • Step S105 Match the image features corresponding to the first image feature from the image features of the key frame images stored in the preset second map corresponding to the target area to obtain a second image feature.
  • the second image feature includes: 2D coordinate information, three-dimensional (3 Dimensions, 3D) position information, and description information of the feature point of the key frame image containing the identification information of the target area.
  • The preset second map corresponding to the target area can be understood as the part of the global map corresponding to the key frame images that are labeled with the identification information of the target area. For example, if every key frame image is labeled with the identification information of its corresponding area, then once the target area is determined, the key frame images labeled with the identification information of the target area can be determined according to that identification information.
  • That is, the set of key frame images in the preset second map is the set of key frame images labeled with the identification information of the target area, together with the ratio vector set in which each sample feature point corresponds to its ratio in those key frame images.
  • This matching step can be understood as selecting, from the image features of the key frame images stored in the preset second map, a second image feature that has a high degree of matching with the first image feature.
  • Step S106 Determine the pose information of the image acquisition device according to the second image feature.
  • the pose information includes the collection orientation of the image collection device and the position of the image collection device.
  • the location information of the image acquisition device is determined based on the 3D coordinate information of the feature point of the key frame image corresponding to the second image feature and the 2D coordinate information of the feature point of the image to be processed corresponding to the first image feature. For example, first, in the three-dimensional coordinate space where the image acquisition device is located, the 2D coordinate information of the feature points of the image to be processed is converted into 3D coordinate information, and then the 3D coordinate information is combined with the three-dimensional coordinate system of the preset second map The 3D coordinate information of the feature points of the key frame image is compared to determine the position information of the image acquisition device. In this way, considering both the 2D coordinate information and the 3D coordinate information of the feature point, when the image acquisition device is positioned, both the position of the image acquisition device and the acquisition orientation of the image acquisition device can be obtained, which improves the positioning accuracy.
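  • A minimal sketch of how pose information could be recovered from such 2D-3D correspondences is shown below, using OpenCV's solvePnP; the camera intrinsic matrix K and the assumption of enough correspondences are illustrative, not part of the disclosure.

```python
# Hedged sketch: solving the pose of the image acquisition device from matched
# 3D map points (from the key frame image) and 2D points (from the image to be
# processed). Assumes at least six well-distributed correspondences.
import cv2
import numpy as np

def solve_pose(points_3d_map, points_2d_current, K):
    """points_3d_map: (N, 3) map-frame coordinates; points_2d_current: (N, 2) pixels."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d_map, dtype=np.float32),
        np.asarray(points_2d_current, dtype=np.float32),
        K, None)
    if not ok:
        raise RuntimeError("PnP failed")
    # rvec, tvec transform map-frame coordinates into the current camera frame
    return rvec, tvec
```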
  • In the embodiment of the present application, the image acquisition device is first coarsely positioned based on the preset first map to obtain the target area; then, the second image feature that matches the first image feature is selected from the image features of the key frame images in the preset second map, so as to achieve precise positioning of the image acquisition device.
  • In this way, the preset first map is used for coarse positioning, and then the preset second map is used to accurately position the image acquisition device based on the key frame images, determining the location and collection orientation of the image acquisition device and thereby improving the positioning accuracy.
  • FIG. 2A is a schematic diagram of another implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 2A, the method includes the following steps:
  • Step S201 Divide the coverage area of the current network into multiple areas.
  • the coverage of the current network can be divided into multiple grids. As shown in Figure 3B, the coverage of the current network is divided into grids with 4 rows and 7 columns.
  • An area has identification information that can uniquely identify the area, for example, the identity document (ID) of the area.
  • Step S202: Determine the network feature information of the multiple wireless access points (APs) in the current network in each area.
  • For example, step S202 can be understood as determining, for each area, the network feature information (such as the signal strength) of AP31 and AP32 respectively.
  • Step S203 Store the identification information of each area and the network feature information corresponding to each area to obtain the preset first map.
  • the network feature information corresponding to each area can be understood as the signal strength of all APs that can be detected in the area.
  • the identification information of each area is different.
  • the identification information of each area and the network feature information corresponding to the area are stored in a preset first map in a one-to-one correspondence.
  • the above steps S201 to S203 give a way to create a preset first map.
  • the identification information of each area corresponds to the network feature information that can be detected in that area.
  • In this way, once the network feature information of the network where the image acquisition device is located is determined, the area where the image acquisition device is located can be roughly determined in the preset first map.
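  • The sketch below illustrates one way steps S201 to S203 could be realized: averaging RSSI samples per AP for each grid area and storing the result keyed by area ID. The data layout and function names are illustrative assumptions.

```python
# Hedged sketch: building the "preset first map" (WiFi fingerprint map).
# Each area ID maps to a fingerprint of mean signal strengths, one entry per AP.
from collections import defaultdict

def build_first_map(samples):
    """samples: iterable of (area_id, {ap_id: rssi_dbm}) measurements."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for area_id, reading in samples:
        for ap_id, rssi in reading.items():
            sums[area_id][ap_id] += rssi
            counts[area_id][ap_id] += 1
    # area ID -> fingerprint: mean RSSI per detectable AP
    return {area: {ap: sums[area][ap] / counts[area][ap] for ap in sums[area]}
            for area in sums}
```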
  • Step S204 Determine target feature information that matches the current network feature information from the network feature information stored in the preset first map.
  • the network feature information corresponding to each area is stored in the preset first map. Based on the current network feature information, the target feature information with higher similarity to the current network feature information can be found in the preset first map .
  • Step S205 According to the corresponding relationship between the network feature information and the area identification information stored in the preset first map, search for the area identification corresponding to the current network feature information.
  • The network feature information in the preset first map corresponds one-to-one to the identification information of the areas, so after the target feature information is determined, the target area where the image acquisition device is located can be located according to the correspondence, stored in the preset first map, between the network feature information and the area identification information, thereby achieving coarse positioning of the image acquisition device, for example, determining the room where the image acquisition device is located.
  • The above steps S204 and S205 give a way to implement "searching, from the preset first map, for the area identifier corresponding to the current network feature information": based on the network feature information of the network where the image acquisition device is located, the target area where the image acquisition device is located is found, thereby achieving coarse positioning of the image acquisition device.
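  • One possible realization of steps S204 and S205 is a nearest-neighbour search over the stored fingerprints, sketched below; the Euclidean distance metric and default value for undetected APs are assumptions, since the disclosure only requires finding the stored network feature information most similar to the current one.

```python
# Hedged sketch: coarse positioning. Compare the current RSSI reading with
# every stored fingerprint and return the area ID of the closest one.
import math

def find_area_id(first_map, current_reading, missing_rssi=-100.0):
    """first_map: {area_id: {ap_id: mean_rssi}}; current_reading: {ap_id: rssi}."""
    best_area, best_dist = None, float("inf")
    for area_id, fingerprint in first_map.items():
        aps = set(fingerprint) | set(current_reading)
        dist = math.sqrt(sum(
            (fingerprint.get(ap, missing_rssi) - current_reading.get(ap, missing_rssi)) ** 2
            for ap in aps))
        if dist < best_dist:
            best_area, best_dist = area_id, dist
    return best_area  # identifies the target area used to select the second map
```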
  • Step S206 selecting a plurality of key frame images meeting preset conditions from the sample image library to obtain a key frame image set.
  • In the first step, a preset number of corner points are selected from each sample image; a corner point is a pixel in the sample image that differs significantly from a preset number of surrounding pixels; for example, 150 corner points are selected.
  • In the second step, if the number of identical corner points contained in two sample images with adjacent acquisition times is greater than or equal to a certain threshold, it is determined that the scene corresponding to the sample images is a continuous scene. Two sample images with adjacent acquisition times can be understood as two consecutively collected sample images. The number of identical corner points contained in the two sample images is determined; the larger this number, the higher the correlation between the two sample images and the more likely it is that they come from a continuous scene. A continuous scene is, for example, a single indoor environment such as a bedroom, a living room, or a single meeting room.
  • In the third step, if the number of identical corner points contained in two sample images with adjacent acquisition times is less than the threshold, it is determined that the scene corresponding to the sample images is a discrete scene.
  • the scene corresponding to the sample image is a discrete scene
  • the scene corresponding to the sample image is a continuous scene
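  • The sketch below illustrates the idea of counting identical corner points between two consecutively collected sample images to decide whether they belong to a continuous scene; the ORB detector and the Hamming-distance criterion for calling two corners "the same" are illustrative assumptions.

```python
# Hedged sketch: classifying two consecutive sample images as a continuous or
# discrete scene by counting shared corner points.
import cv2

def shared_corner_count(img_a, img_b, num_corners=150, max_hamming=40):
    orb = cv2.ORB_create(nfeatures=num_corners)
    _, desc_a = orb.detectAndCompute(img_a, None)
    _, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    # corners whose descriptors are close enough are treated as "the same"
    return sum(1 for m in matches if m.distance <= max_hamming)

def is_continuous_scene(img_a, img_b, threshold=60):
    return shared_corner_count(img_a, img_b) >= threshold
```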
  • Step S207 Using the identification information of the region corresponding to each key frame image, identify each of the key frame images in a one-to-one correspondence to obtain a set of identified key frame images.
  • Here, the area corresponding to the image acquisition device that collected the key frame image is determined, as well as the identification information of that area, and the identification information is used to label the key frame image, so that each key frame image is marked with the identification information of its corresponding area.
  • Step S208 Extract the image features of each identified key frame image to obtain a key image feature set.
  • the image feature of the identified key frame includes: 2D coordinate information, 3D coordinate information of the feature point of the key frame image, and description information that can uniquely identify the feature point.
  • Step S209 Determine the ratio of each sample feature point in the sample feature point set in the identified key frame image to obtain a ratio vector set.
  • the different sample feature points and the ratio vector set are stored in the preset bag-of-words model, so that the preset bag-of-words model can be used to retrieve the matching of the image to be processed from the key frame image Frame image.
  • the step S209 can be implemented through the following process:
  • In the first step, the first average number is determined according to the first number of sample images contained in the sample image library and the first number of times that the i-th sample feature point appears in the sample image library.
  • The first average number is used to indicate the average frequency with which the i-th sample feature point appears in the sample images; for example, if the first number of sample images is N and the first number of times the i-th sample feature point appears in the sample image library is n_i, the first average number idf(i) can be obtained (typically idf(i) = log(N / n_i)).
  • In the second step, the second average number is used to indicate the proportion of the sample feature points contained in the j-th key frame image that is occupied by the i-th sample feature point; for example, if the second number of times the i-th sample feature point appears in the j-th key frame image I_t is n_{iI_t} and the second number of sample feature points contained in that key frame image is n_{I_t}, the second average number tf(i, I_t) = n_{iI_t} / n_{I_t} can be obtained.
  • In the third step, the ratio of each sample feature point in the key frame image is obtained from the first average number and the second average number, giving the ratio vector set; for example, the first average number and the second average number are multiplied to obtain the ratio vector entry v_t^i = tf(i, I_t) · idf(i).
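  • A small numeric sketch of this ratio computation is given below, assuming the standard TF-IDF weighting tf(i, I_t) = n_{iI_t} / n_{I_t} and idf(i) = log(N / n_i); the logarithm in idf is an assumption consistent with common bag-of-words practice rather than an explicit formula from the disclosure.

```python
# Hedged sketch: computing the ratio (TF-IDF weight) of each sample feature
# point ("word") in one key frame image.
import math

def ratio_vector(word_counts_in_frame, word_counts_in_library, num_sample_images):
    """word_counts_in_frame: {word_id: occurrences in this key frame};
    word_counts_in_library: {word_id: occurrences over the sample image library}."""
    total_in_frame = sum(word_counts_in_frame.values())  # n_{I_t}
    if total_in_frame == 0:
        return {w: 0.0 for w in word_counts_in_library}
    vector = {}
    for word_id, n_i in word_counts_in_library.items():
        tf = word_counts_in_frame.get(word_id, 0) / total_in_frame  # tf(i, I_t)
        idf = math.log(num_sample_images / n_i)                     # idf(i), assumed log form
        vector[word_id] = tf * idf                                   # ratio v_t^i
    return vector
```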
  • Step S210 Store the ratio vector set and the key image feature set to obtain the global map to which the preset second map belongs.
  • the preset second map is a part of the global map
  • The ratio vector set corresponding to the identified key frame images and the key image feature set are stored in the preset second map, so that, when the image acquisition device is positioned, this ratio vector set can be compared with the ratio vector set of the image to be processed (determined using the preset bag-of-words model) to determine, from the key image feature set, a matching frame image that is highly similar to the image to be processed.
  • the above steps S206 to S210 give a way to create a global map.
  • Among them, each obtained key frame image is labeled with the identification information of its area, so that every key frame image in the resulting global map carries the identification information of its corresponding area.
  • Step S211 Determine the key frame image that identifies the identification information of the target area from the identified key frame images stored in the global map.
  • Once the target area is determined, the part of the global map corresponding to the key frame images labeled with the identification information of the target area can be found in the global map based on that identification information; this part is the preset second map.
  • Step S212 Use a partial global map corresponding to the key frame image that identifies the identification information of the target area as the preset second map.
  • the above steps S211 and S212 give a way to determine the preset second map.
  • That is, the key frame images labeled with the identification information of the target area are searched for in the global map, and the part of the global map corresponding to these key frame images is used as the preset second map.
  • Step S213 According to the first image feature of the image to be processed, the second image feature is matched from the image feature of the key frame image stored in the preset second map corresponding to the target area.
  • step S213 can be implemented through the following steps:
  • In the first step, the ratios of the different sample feature points among the feature point set of the image to be processed are respectively determined to obtain the first ratio vector.
  • the preset bag-of-words model includes multiple different sample feature points and the ratio of multiple sample feature points among the feature points contained in the key frame image.
  • the first ratio vector may be determined based on the number of sample images, the number of sample feature points appearing in the sample image, the number of sample feature points appearing in the image to be processed, and the total number of sample feature points appearing in the image to be processed.
  • the second step is to obtain the second ratio vector.
  • the second ratio vector is the ratio of the multiple sample feature points among the feature points contained in the key frame image; the second ratio vector is pre-stored in a preset bag-of-words model, Therefore, when the image features of the image to be processed need to be matched, the second ratio vector is obtained from the preset bag-of-words model.
  • the determination process of the second ratio vector is similar to the determination process of the first ratio vector; and the dimensions of the first ratio vector and the second ratio vector are the same.
  • the third step is to match a second image feature from the image features of the key frame image according to the first image feature, the first ratio vector and the second ratio vector.
  • the third step can be achieved through the following process:
  • Here, the first ratio vector v1 of the image to be processed is compared one by one with the second ratio vector v2 of each key frame image, and the two ratio vectors are used to calculate the similarity, so as to determine the similarity between each key frame image and the image to be processed.
  • similar key frame images with similarity greater than or equal to the second threshold are screened out, and a set of similar key frame images is obtained.
  • the similar key frame images to which the similar image features belong are determined to obtain a set of similar key frame images.
  • Then, from the image features of the similar key frame images, the second image feature with the highest similarity to the first image feature is selected. For example, first, the time differences between the acquisition times of at least two similar key frame images are determined, as well as the similarity differences between the image features of those similar key frame images and the first image feature; then, similar key frame images whose time difference is smaller than the third threshold and whose similarity difference is smaller than the fourth threshold are combined to obtain a joint frame image. That is, multiple similar key frame images that are close in acquisition time and similarly close to the image to be processed are selected, indicating that these key frame images are likely to be consecutive pictures; such similar key frame images are combined to form a joint frame image (which can also be called an island), so that multiple joint frame images are obtained. Finally, from the image features of the joint frame images, the second image feature whose similarity with the first image feature meets the preset similarity threshold is selected. For example, first, the sum of the similarities between the image features of the key frame images contained in each joint frame image and the first image feature is determined; in this way, the similarity sums of the multiple joint frame images are determined one by one.
  • Then, the joint frame image with the largest sum of similarities is determined as the target joint frame image with the highest similarity to the image to be processed. Finally, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, the second image feature whose similarity with the first image feature meets a preset similarity threshold is selected from the image features of the target joint frame image.
  • Since the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed uniquely identify those feature points, the second image feature with the highest similarity to the first image feature can be selected very accurately from the image features of the target joint frame image based on these two pieces of description information. This ensures the accuracy of matching the first image feature of the image to be processed with the second image feature, and ensures that the selected second image feature is highly similar to the first image feature.
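  • The sketch below illustrates grouping similar key frames into joint frame images ("islands") by timestamp proximity and then picking the island with the largest summed similarity; the time-gap threshold and data layout are illustrative assumptions.

```python
# Hedged sketch: grouping similar key frames into "islands" (joint frame images)
# and selecting the island with the highest summed similarity.
def group_into_islands(candidates, max_time_gap=1.0):
    """candidates: list of (timestamp, similarity, frame_id) for similar key frames."""
    islands, current = [], []
    for cand in sorted(candidates):          # sorted by timestamp
        if current and cand[0] - current[-1][0] > max_time_gap:
            islands.append(current)          # time gap too large: start a new island
            current = []
        current.append(cand)
    if current:
        islands.append(current)
    return islands

def best_island(islands):
    # the target joint frame image is the island with the largest similarity sum
    return max(islands, key=lambda island: sum(sim for _, sim, _ in island))
```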
  • Step S214 Determine the pose information of the image acquisition device according to the second image feature.
  • step S214 can be implemented through the following process:
  • the image containing the second image feature is determined as a matching frame image of the image to be processed.
  • the key frame image containing the second image feature indicates that the key frame image is very similar to the image to be processed, so the key frame image is used as the matching frame image of the image to be processed.
  • In the second step, the target Euclidean distances that are less than the first threshold are determined for the feature points associated with the matching frame image, to obtain the target Euclidean distance set.
  • Here, first, the Euclidean distances between a feature point of the image to be processed and the feature points contained in the matching frame image are determined; then, the Euclidean distance less than the first threshold is selected as the target Euclidean distance to obtain the target Euclidean distance set. Processing one feature point of the image to be processed in this way yields one target Euclidean distance set, so processing multiple feature points of the image to be processed yields multiple target Euclidean distance sets.
  • Selecting the target Euclidean distance less than the first threshold can also be understood as first determining the smallest Euclidean distance among the multiple Euclidean distances and then judging whether this smallest Euclidean distance is less than the first threshold; if it is, the smallest Euclidean distance is determined to be the target Euclidean distance, so the target Euclidean distance set is the set of the smallest Euclidean distances among the multiple Euclidean distance sets.
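  • A sketch of this selection is given below: for each feature point of the image to be processed, the closest feature point of the matching frame image is kept only if its distance is below the first threshold. Treating the descriptors as float vectors and the specific threshold value are assumptions for illustration.

```python
# Hedged sketch: building the target Euclidean distance set from descriptor
# distances between the image to be processed and the matching frame image.
import numpy as np

def target_distance_matches(desc_current, desc_matching_frame, first_threshold=0.7):
    """desc_*: (N, D) float descriptor arrays. Returns a list of (i, j, distance)."""
    matches = []
    for i, d in enumerate(desc_current):
        dists = np.linalg.norm(desc_matching_frame - d, axis=1)
        j = int(np.argmin(dists))          # smallest Euclidean distance
        if dists[j] < first_threshold:     # keep it only if below the first threshold
            matches.append((i, j, float(dists[j])))
    return matches  # if len(matches) exceeds the fifth threshold, proceed to PnP
```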
  • the third step if the number of target Euclidean distances included in the target Euclidean distance set is greater than the fifth threshold, based on the 3D coordinate information of the feature points of the key frame image corresponding to the second image feature and the first image feature The corresponding 2D coordinate information of the feature point of the image to be processed determines the position information of the image acquisition device.
  • the number of target Euclidean distances included in the target Euclidean distance set is greater than the fifth threshold, it indicates that the number of target Euclidean distances is large enough, and it also indicates that there are enough feature points that match the features of the first image. It shows that the similarity between this key frame image and the image to be processed is sufficiently high.
  • That is, the 3D coordinate information of the feature points of the key frame image and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature are used as the input of the front-end pose tracking (Perspective-n-Point, PnP) algorithm: first, the 3D coordinate information of the feature points in the current frame of the image to be processed in the current coordinate system is obtained from their 2D coordinate information; then, according to the 3D coordinate information of the feature points of the key frame image in the map coordinate system and the 3D coordinate information of the feature points in the current frame of the image to be processed in the current coordinate system, the position information of the image acquisition device can be solved.
  • the position and posture of the image acquisition device can be provided in the positioning result at the same time, so the positioning accuracy of the image acquisition device is improved.
  • In the embodiment of the present application, the image acquisition device is first coarsely positioned through the preset first map to determine the target area where the image acquisition device is located; then the constructed preset second map is loaded based on the identification information of the target area, and the preset bag-of-words model is used to retrieve the matching frame image corresponding to the image to be processed.
  • The 2D coordinate information of the feature points of the image to be processed and the 3D coordinate information of the feature points of the key frame image are combined and, through the PnP algorithm, the precise position and collection orientation of the current image acquisition device in the map are obtained to achieve the positioning purpose. In this way, positioning is achieved through the key frame images, and the position and the acquisition orientation of the image acquisition device in the map coordinate system are obtained, which improves the accuracy of the positioning results and has strong robustness.
  • FIG. 2B is a schematic diagram of another implementation process of the positioning method according to the embodiment of this application. As shown in FIG. 2B, the method includes the following steps:
  • Step S221 Determine current network feature information of the current location of the network where the image acquisition device is located;
  • Step S222 searching for an area identifier corresponding to the current network feature information from the preset first map
  • Step S223 Determine the target area where the image acquisition device is located according to the area identifier.
  • Step S224 According to the first image feature of the image to be processed, the second image feature is matched from the image feature of the key frame image stored in the preset second map corresponding to the target area.
  • Step S225: Determine the map coordinates, in the map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature.
  • That is, the feature points corresponding to the second image feature are obtained in the preset second map, together with their 3D coordinates in the map coordinate system corresponding to the preset second map.
  • Step S226: Determine the current coordinates, in the current coordinate system where the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature.
  • The map coordinates are used as the input of the PnP algorithm, and the current coordinates of the feature points in the current coordinate system of the image acquisition device are obtained.
  • Step S227 Determine a conversion relationship between the current coordinate system and the map coordinate system according to the map coordinates and the current coordinates.
  • map coordinates and the current coordinates are compared, and the rotation vector and the translation vector of the image acquisition device relative to the map coordinate system in the current coordinate system are determined.
  • Step S228: According to the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, determine the position of the image acquisition device in the map coordinate system and the collection orientation of the image acquisition device relative to the map coordinate system.
  • That is, the rotation vector is used to rotate the current coordinates of the image acquisition device to determine the acquisition orientation of the image acquisition device relative to the map coordinate system, and the translation vector is used to translate the current coordinates of the image acquisition device to determine the position of the image acquisition device in the map coordinate system.
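  • One common way such a conversion can be carried out is sketched below: the rotation vector and translation vector returned by PnP (mapping map-frame coordinates into the camera frame) are inverted to obtain the camera position and orientation in the map coordinate system. The use of cv2.Rodrigues and this particular inversion are assumptions about the concrete implementation, not statements from the disclosure.

```python
# Hedged sketch: turning the PnP output (rvec, tvec: map frame -> camera frame)
# into the device position and orientation expressed in the map coordinate system.
import cv2
import numpy as np

def device_pose_in_map(rvec, tvec):
    R, _ = cv2.Rodrigues(rvec)                 # rotation matrix, map -> camera
    R_map = R.T                                # camera -> map: device orientation
    position_in_map = -R.T @ tvec.reshape(3)   # camera centre in map coordinates
    return position_in_map, R_map
```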
  • FIG. 3A is a schematic diagram of another implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 3A, the method includes the following steps:
  • Step S301 Load the created preset first map and save it locally.
  • the preset first map may be understood as a WiFi fingerprint map
  • the process of creating the preset first map may be implemented in an offline stage when the current network is not connected.
  • In the offline stage, in order to collect fingerprints at various locations, a database is first built; that is, multiple measurements are performed in multiple areas to obtain the database.
  • The multiple areas may lie within the network coverage area or beyond it, and include the area where the image acquisition device that collects the image to be processed is located, for example, an area arbitrarily designated by the developer building the database.
  • the establishment of the corresponding relationship between the location and the fingerprint in the database is usually carried out in the offline stage. As shown in FIG. 3B, the geographic area is covered by a rectangular grid.
  • the geographic area is divided into a grid of 4 rows and 7 columns.
  • AP31 and AP32 are wireless access points in the network.
  • AP31 and AP32 are deployed in this area for communication.
  • the signal strength sent by the AP is used to construct fingerprint information.
  • the average signal strength from each AP is obtained.
  • the collection time is about 5 to 15 minutes, about once every second, and the mobile device may have different orientations and angles during the collection.
  • The distribution of the average signal strength samples can also be used as the fingerprint. Each grid point then corresponds to a two-dimensional vector (i.e., the fingerprint, one entry per AP in this two-AP example), thereby constructing a WiFi fingerprint map (i.e., the preset first map). In general, with N APs the fingerprint is an N-dimensional vector.
  • the grid granularity of the preset first map is allowed to be very large, and can reach the room level, because the preset first map is only used for coarse positioning.
  • Step S302 Select a key frame image that meets a preset condition from the sample image library.
  • step S303 the image features in the key frame image are extracted in real time during the acquisition process.
  • image feature extraction is a process of interpretation and annotation of key frame images.
  • In step S303, it is necessary to extract the 2D coordinate information, the 3D coordinate information, and the description information of the feature points of the key frame image (that is, the descriptor information of the feature points); among them, the 3D coordinate information of the feature points of the key frame image is obtained by mapping the 2D coordinate information of the feature points of the key frame image into the three-dimensional coordinate system where the preset second map is located.
  • For example, the number of extracted feature points is 150 (150 is an empirical value: if the number of feature points is too small, the tracking failure rate is high; if it is too large, the efficiency of the algorithm is affected), and these feature points are used for image tracking; the descriptors of the feature points are extracted for feature point matching.
  • In addition, the 3D coordinate information (i.e., depth information) of the feature points is calculated by triangulation and is used to determine the location of the image acquisition device.
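  • A brief sketch of how the depth (3D coordinates) of matched feature points could be computed by triangulation from two camera poses is given below, using OpenCV; the projection-matrix construction and API usage are assumptions for illustration.

```python
# Hedged sketch: triangulating the 3D coordinates of feature points observed in
# two key frames with known poses. K is the camera intrinsic matrix; (R1, t1)
# and (R2, t2) map world coordinates into each camera frame.
import cv2
import numpy as np

def triangulate(K, R1, t1, R2, t2, pts1, pts2):
    """pts1, pts2: (N, 2) pixel coordinates of the same points in both frames."""
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    pts_h = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    return (pts_h[:3] / pts_h[3]).T  # (N, 3) coordinates in the world/map frame
```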
  • Step S304 Determine the ratio of each sample feature point in the key frame image in real time during the acquisition process to obtain a ratio vector.
  • step S304 can be understood as, during the acquisition of the key frame image, for the current frame image, the ratio vector of the key frame image is extracted in real time.
  • The bag-of-words model is described in the form of a vocabulary tree.
  • The bag-of-words model includes the sample image database 41, which is the root node of the vocabulary tree; sample images 42, 43, and 44, which are leaf nodes; sample feature points 1 to 3 are different sample feature points in sample image 42, sample feature points 4 to 6 are different sample feature points in sample image 43, and sample feature points 7 to 9 are different sample feature points in sample image 44.
  • In the process of determining the ratio vector, multiple parameters need to be obtained, for example: the number of sample images N (i.e., the first number); the number of times n_i that the sample feature point w_i appears in the sample image library (i.e., the first number of times); the key frame image I_t collected at time t; the number of times the sample feature point w_i appears in the key frame image I_t (i.e., the second number of times); and the total number of sample feature points appearing in I_t (i.e., the second number). From these, the ratio of each sample feature point is obtained, giving a w-dimensional floating-point vector for each key frame image I_t, i.e., the ratio vector; the ratio vector can also serve as the feature information of the preset bag-of-words model.
  • In this way, an offline preset second map that depends on the key frame images is constructed.
  • The preset second map stores the image features of the key frame images locally in a binary format (including 2D coordinate information, 3D coordinate information, and description information, i.e., 2D coordinates, 3D coordinates, and descriptor information).
  • Step S305 Use the identification information of the region corresponding to the key frame image to label the key frame image, so that the identified key frame image is associated with the preset first map to obtain a global map.
  • The key frame images are annotated during the collection process; the annotation content is the area ID, that is, the annotation content associating a key frame image with the WiFi fingerprint map is the area ID.
  • The area ID corresponds to the grid point used when the preset first map is created; in this mode, one area of the preset first map corresponds to one area ID, and one area ID corresponds to multiple key frame images. As shown in FIG. 3C, the identification information labeled on key frame image 331 and key frame image 332 is ID341, which is the identification information of area 33; the identification information of key frame image 333 is ID342, which is the identification information of area 34; the identification information of key frame image 334 and key frame image 335 is ID343, which is the identification information of area 35; and the identification information of key frame image 336 is ID344, which is the identification information of area 36.
  • The above steps S301 to S304 construct the WiFi fingerprint map (that is, the preset first map) and the global map; the preset second map stores the feature point information of the visual key frames (including 2D coordinates, 3D coordinates, and descriptor information) together with the label information locally in a binary format.
  • the two maps will be loaded and used separately.
  • step S306 the image acquisition device is roughly positioned by the preset first map to obtain the target area where the image acquisition device is located.
  • Step S307 Determine the key frame image that identifies the identification information of the target area from the identified key frame images stored in the global map, and obtain a preset second map.
  • the preset second map can be understood as a local map of the global map.
  • step S308 image acquisition is performed by using the image acquisition device to obtain an image to be processed.
  • Step S309 in the process of acquiring the image to be processed, extract the first image feature in the current frame of the image to be processed in real time.
  • Extracting the first image feature of the current frame of the image to be processed in real time is similar to the process of step S303, except that there is no need to determine the 3D coordinate information of the image to be processed, because the subsequent PnP algorithm does not require the 3D coordinate information of the image to be processed.
  • step S310 the matching frame image of the current frame of the image to be processed in the preset second map is retrieved through the bag-of-words model.
  • Searching, through the bag-of-words model, for the matching frame image of the current frame of the image to be processed in the preset second map can be understood as using the feature information of the bag-of-words model, that is, the ratio vector set, to retrieve the matching frame image of the current frame of the image to be processed in the preset second map.
  • the step S310 can be implemented through the following process:
  • the first step is to find the similarity between the current frame of the image to be processed and each key frame image.
  • The similarity s(v1, v2) is calculated as follows: first, v1 and v2 are determined, where v1 and v2 respectively represent the first ratio vector of the sample feature points contained in the bag-of-words model for the current frame of the image to be processed, and the second ratio vector of those sample feature points for a key frame image. Based on v1 and v2, the similarity between the current frame of the image to be processed and each key frame image can be determined. If the bag-of-words model contains w sample feature points, then the first ratio vector and the second ratio vector are both w-dimensional vectors. The similar key frame images whose similarity reaches the second threshold are filtered out from the key frame images to form a set of similar key frame images.
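  • The disclosure does not reproduce the similarity formula itself; the sketch below assumes the widely used L1-normalised bag-of-words score s(v1, v2) = 1 - 0.5 * || v1/|v1| - v2/|v2| ||_1, which is one common choice and should be treated as an assumption rather than the exact definition used here.

```python
# Hedged sketch: scoring the similarity between the ratio vector of the current
# frame (v1) and that of a key frame (v2), using an assumed L1-normalised score.
import numpy as np

def bow_similarity(v1, v2):
    v1 = np.asarray(v1, dtype=np.float64)
    v2 = np.asarray(v2, dtype=np.float64)
    n1 = np.abs(v1).sum() or 1.0   # avoid division by zero for empty vectors
    n2 = np.abs(v2).sum() or 1.0
    return 1.0 - 0.5 * np.abs(v1 / n1 - v2 / n2).sum()
```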
  • similar key frame images whose time stamp difference is less than the third threshold and similarity difference less than the fourth threshold are selected from the set of similar key frame images to join together to obtain a joint frame image (or called an island).
  • The second step can be understood as selecting, from the set of similar key frame images, similar key frame images with close timestamps and close similarity scores and combining them into an island; in this way, the set of similar key frame images is divided into multiple joint frame images (i.e., multiple islands).
  • the ratio of the similarity between the first key frame image and the last key frame image in the joint frame image is very small, and the similarity ratio
  • the third step is to respectively determine the sum of similarity between the image feature of each key frame image contained in the multiple joint frame images and the first image feature.
  • Then, the joint frame image with the largest sum of similarities is determined as the target joint frame image with the highest similarity to the image to be processed, and the matching frame image with the highest similarity to the current frame of the image to be processed is found within the target joint frame image.
  • step S311 the PnP algorithm is used to determine the current position and the acquisition orientation of the image acquisition device in the map coordinate system.
  • step S311 can be implemented through the following steps:
  • In the first step, the N-th feature point F_CN of the current frame X_C of the image to be processed is compared against all feature points of the matching frame image X_3, and the Euclidean distance between F_CN and each of those feature points is determined.
  • As shown in FIG. 5A, the current frame X_C (51) of the image to be processed is compared with the matching frame image X_3 (52) that matches the current frame X_C (51).
  • In the second step, the smallest of these Euclidean distances is selected for threshold judgment: if it is less than the first threshold, it is determined to be a target Euclidean distance and added to the target Euclidean distance set; otherwise it is not added. The procedure returns to the first step until all feature points of X_C have been traversed, and then proceeds to the third step. For example, as shown in FIG. 5A, by comparing multiple Euclidean distances, the set of minimum Euclidean distances {F_1, F_2, F_3} is obtained.
  • In the third step, the target Euclidean distance set is formed, which can be expressed as {F_1, F_2, F_3}. If the number of elements in the target Euclidean distance set is greater than the fifth threshold, the procedure proceeds to the fourth step; otherwise the algorithm ends and the position information of the matching frame X_3 is output.
  • the input of the PnP algorithm is the 3D coordinates of the feature points in the key frame image and the 2D coordinates of the feature points in the current frame of the image to be processed
  • the output of the algorithm is the position of the current frame of the image to be processed in the map coordinate system.
  • The PnP algorithm does not directly obtain the pose matrix of the image acquisition device from the sequence of matching pairs; instead, it first obtains the 3D coordinates, in the current coordinate system, of the feature points in the key frame image labeled with the identification information of the target area, and then, from the 3D coordinates of these feature points in the map coordinate system and their 3D coordinates in the current coordinate system, solves the rotation vector and translation vector of the current coordinate system relative to the map coordinate system; the acquisition orientation of the image acquisition device is then solved from the rotation vector, and the position of the image acquisition device from the translation vector.
  • the solution of the PnP algorithm starts from the law of cosines.
  • the location of the collection device is determined through the transformation from the map coordinate system to the current coordinate system.
  • the fusion positioning part mainly includes coarse positioning using the preset first map and fine positioning based on the visual key frame image.
  • The coarse positioning process determines the user's approximate location and also determines the local visual map to be loaded; fine positioning uses a monocular camera to collect the current image to be processed and loads the preset second map selected according to the target area obtained from coarse positioning.
  • the bag-of-words model is used to retrieve and match the corresponding matching frame images, and finally the PnP algorithm is used to solve the current accurate pose of the image acquisition device in the map coordinate system to achieve the positioning purpose.
  • the indoor positioning method combining wireless indoor positioning and visual key frame images helps users locate their own position in real time and with high accuracy.
  • The preset first map (for example, a WiFi fingerprint map) is used in combination with the preset second map corresponding to the visual key frame images.
  • the embodiments of the present application can combine WiFi fingerprint maps and visual key frame maps for large-scale indoor scenes, with high positioning accuracy and strong robustness.
  • The embodiment of the present application provides a positioning device; the modules included in the device and the units included in each module can be implemented by a processor in a computer device, and of course can also be implemented by specific logic circuits.
  • the processor may be a central processing unit, a microprocessor, a digital signal processor, or a field programmable gate array.
  • the device 600 includes: a first determining module 601, a first searching module 602, a second determining module 603, a first extracting module 604, The first matching module 605 and the third determining module 606, wherein:
  • the first determining module 601 is configured to determine current network feature information of the current location of the network where the image acquisition device is located;
  • the first search module 602 is configured to search for an area identifier corresponding to the current network feature information from a preset first map;
  • the second determining module 603 is configured to determine the target area where the image acquisition device is located according to the area identifier
  • the first extraction module 604 is configured to use the image acquisition device to collect an image to be processed, and extract the first image feature of the image to be processed;
  • the first matching module 605 is configured to match the image features corresponding to the first image feature from the image features of the key frame images stored in the preset second map corresponding to the target area, and obtain the second Image feature
  • the third determining module 606 is configured to determine the pose information of the image acquisition device according to the second image feature.
  • the device further includes:
  • the first dividing module is configured to divide the coverage area of the current network into multiple regions
  • a fourth determining module configured to determine the network characteristic information of the multiple wireless access points in the current network in each area
  • the first storage module is configured to store the identification information of each area and the network feature information corresponding to each area as the preset first map; wherein the identification information of each area is different.
  • the first determining module 601 includes:
  • the first determining submodule is configured to determine target feature information that matches the current network feature information from the network feature information stored in the preset first map;
  • the second determining sub-module is configured to search for the area identifier corresponding to the current network characteristic information according to the correspondence between the network characteristic information and the area identification information stored in the preset first map.
  • the device further includes:
  • the second extraction module is configured to extract the feature point set of the image to be processed
  • a fifth determining module configured to determine the description information of each feature point in the feature point set and the two-dimensional coordinate information of each feature point in the image to be processed;
  • the sixth determining module is configured to determine the description information and the two-dimensional coordinate information as the first image feature.
  • the device further includes:
  • the first selection module is configured to select multiple key frame images meeting preset conditions from the sample image library to obtain a set of key frame images
  • the first identification module is configured to use identification information of the region corresponding to each key frame image to identify each of the key frame images in a one-to-one correspondence to obtain a set of identified key frame images;
  • the third extraction module is configured to extract the image features of each identified key frame image to obtain a key image feature set
  • the fourth extraction module is configured to extract feature points of the sample image from the sample image library to obtain a sample feature point set containing different feature points;
  • the seventh determining module is configured to determine the ratio of each sample feature point in the sample feature point set in the identified key frame image to obtain a ratio vector set;
  • the second storage module is configured to store the ratio vector set and the key image feature set to obtain the global map to which the preset second map belongs.
  • the seventh determining module includes:
  • the third determining submodule is configured to determine the first average number of times according to the first number of sample images contained in the sample image library and the first number of times the i-th sample feature point appears in the sample image library; wherein, i is an integer greater than or equal to 1; the first average number is configured to indicate the average number of times the i-th sample feature point appears in each sample image;
  • the fourth determining submodule is configured to, based on the second number of occurrences of the i-th sample feature point in the j-th key frame image and the second number of sample feature points contained in the j-th key frame image, Determine the second average number; where j is an integer greater than or equal to 1; the second average number is used to indicate the ratio of the i-th sample feature point to the sample feature points contained in the j-th key frame image;
  • the fifth determining submodule is configured to obtain the ratio of the sample feature points in the key frame image according to the first average number and the second average number, and obtain the ratio vector set.
  • the device further includes:
  • An eighth determining module configured to determine the key frame image that identifies the identification information of the target area from the identified key frame images stored in the global map;
  • the ninth determining module is configured to use a partial global map corresponding to the key frame image that identifies the identification information of the target area as the preset second map.
  • the first matching module 605 includes:
  • the sixth determining sub-module is configured to determine the respective ratios of different sample feature points within the feature point set, to obtain a first ratio vector;
  • the first obtaining submodule is configured to obtain a second ratio vector, where the second ratio vector is the ratio of the multiple sample feature points among the feature points contained in the key frame image;
  • the first matching submodule is configured to match, according to the first ratio vector and the second ratio vector, the second image feature corresponding to the first image feature from among the image features of the key frame images marked with the identification information of the target area.
  • the first matching submodule includes:
  • the first determining unit is configured to determine, according to the first ratio vector and the second ratio vector, from among the image features of the key frame images marked with the identification information of the target area, similar image features whose similarity to the first image feature is greater than a first threshold;
  • the second determining unit is configured to determine the similar key frame images to which the similar image features belong, to obtain a set of similar key frame images;
  • the first selection unit is configured to select, from the image features of the similar key frame images, a second image feature whose similarity with the first image feature meets a preset similarity threshold.
  • the first selection unit includes:
  • the first determining subunit is configured to determine the time difference between the acquisition times of at least two similar key frame images, and the respective differences in similarity between the image features of the at least two similar key frame images and the first image feature;
  • the first joint subunit is configured to combine similar key frame images whose time difference is less than a second threshold and whose similarity difference is less than a third threshold to obtain a joint frame image;
  • the first selection subunit is configured to select, from the image features of the joint frame image, a second image feature whose similarity with the first image feature meets a preset similarity threshold.
  • the first selection subunit is configured to: determine, for the multiple joint frame images, the sum of the similarities between the image features of the key frame images contained in each joint frame image and the first image feature; determine the joint frame image with the largest sum as the target joint frame image having the highest similarity to the image to be processed; and select, from the image features of the target joint frame image, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, a second image feature whose similarity with the first image feature meets a preset similarity threshold (see the key-frame matching sketch after this list).
  • the device further includes:
  • a tenth determining module configured to determine the image containing the second image feature as a matching frame image of the image to be processed
  • An eleventh determining module configured to determine the target Euclidean distances that are between any two feature points associated with the matching frame image and that are less than a fourth threshold, to obtain a target Euclidean distance set;
  • the seventh determining submodule is configured to determine the pose information of the image acquisition device according to the second image feature if the number of target Euclidean distances included in the target Euclidean distance set is greater than a fifth threshold (see the distance-check sketch after this list).
  • the seventh determining submodule includes:
  • the third determining unit is configured to determine the map coordinates, in the map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature;
  • the fourth determining unit is configured to determine the current coordinates, in the current coordinate system where the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature;
  • a fifth determining unit configured to determine a conversion relationship between the current coordinate system and the map coordinate system according to the map coordinates and the current coordinates;
  • the sixth determining unit is configured to determine, based on the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system (see the pose sketch after this list).
  • if the above positioning method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence or in the part that contributes to the related art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a device containing the storage medium to execute all or part of the method described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, and other media that can store program codes.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the positioning method provided in the foregoing embodiment are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
  • the functional units in the embodiments of the present application can be all integrated into one processing unit, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
  • the unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program codes, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • if the above-mentioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence or in the part that contributes to the related art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a device to execute all or part of the method described in each embodiment of the present application.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks or optical discs and other media that can store program codes.
  • the target area where the image acquisition device is located is determined according to the preset first map and the network feature information of the current network in which the image acquisition device configured to collect the image to be processed is located;
  • according to the first image feature of the image to be processed, a second image feature is matched from among the image features of the key frame images stored in the preset second map corresponding to the target area; and the pose information of the image acquisition device is determined according to the second image feature.
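By way of illustration only, the following Python sketch shows one possible computation of the ratio vector handled by the fifth determining submodule. The TF-IDF-style combination of the first and second average numbers, the function name ratio_vector and all numerical details are assumptions; the embodiment only specifies the two averages themselves.

```python
import numpy as np

def ratio_vector(keyframe_word_ids, library_word_counts, num_sample_images):
    """Ratio vector of one key frame, built from its visual-word ids.

    keyframe_word_ids   : list of sample-feature-point (visual word) ids
                          observed in the key frame.
    library_word_counts : 1-D array; entry i is the total number of times
                          word i occurs in the whole sample image library.
    num_sample_images   : number of sample images in the library.
    """
    vocab_size = len(library_word_counts)
    counts = np.bincount(np.asarray(keyframe_word_ids), minlength=vocab_size)
    n_points = len(keyframe_word_ids)              # feature points in this key frame
    vec = np.zeros(vocab_size)
    for i in np.flatnonzero(counts):
        second_avg = counts[i] / n_points          # share of word i in this key frame
        first_avg = library_word_counts[i] / num_sample_images  # avg occurrences per image
        # Assumed combination of the two averages: a TF-IDF-style weight that
        # down-weights words that are common across the whole library.
        vec[i] = second_avg * np.log(1.0 + 1.0 / max(first_avg, 1e-9))
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

Computing such a vector once per key frame while the map is built, and once for the query image at positioning time, keeps the online cost to a vector comparison per key frame.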
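The key-frame matching sketch below is likewise only an assumed realization of the first matching submodule and its subunits: similar key frames are selected by ratio-vector similarity, key frames close in acquisition time and in similarity are joined, and the joint frame with the largest summed similarity becomes the target. The dot-product similarity, the dictionary layout of keyframes and every threshold value are placeholders for the first, second and third thresholds of the embodiment.

```python
import numpy as np

def best_joint_frame(query_vec, keyframes, sim_thresh=0.3,
                     max_time_gap=2.0, max_sim_gap=0.1):
    """Select the target joint frame image for a query ratio vector.

    keyframes : list of dicts with keys 'vec' (ratio vector), 'timestamp'
                (acquisition time in seconds) and 'features' (descriptors).
    """
    # Similar key frames: ratio-vector similarity above the first threshold.
    scored = [(kf, float(np.dot(query_vec, kf["vec"]))) for kf in keyframes]
    similar = sorted([x for x in scored if x[1] > sim_thresh],
                     key=lambda x: x[0]["timestamp"])

    # Join key frames that are close both in acquisition time and in similarity.
    groups, current = [], []
    for kf, s in similar:
        if current and (kf["timestamp"] - current[-1][0]["timestamp"] < max_time_gap
                        and abs(s - current[-1][1]) < max_sim_gap):
            current.append((kf, s))
        else:
            if current:
                groups.append(current)
            current = [(kf, s)]
    if current:
        groups.append(current)

    # The joint frame with the largest summed similarity is the target joint frame.
    return max(groups, key=lambda g: sum(s for _, s in g)) if groups else None
```

Descriptor-level matching against the first image feature is then restricted to the key frames inside the returned group.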
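The distance check of the eleventh determining module and the seventh determining submodule can be read as counting matched feature-point pairs whose Euclidean distance (here taken in descriptor space) falls below the fourth threshold, and accepting the matching frame only when that count exceeds the fifth threshold. That reading, and both threshold values, are assumptions of this sketch.

```python
import numpy as np

def enough_good_matches(query_desc, match_desc, dist_thresh=0.7, count_thresh=30):
    """Return True when the matching frame image passes the distance check.

    query_desc, match_desc : (N, D) arrays holding the descriptors of matched
    feature-point pairs from the image to be processed and the matching frame
    image, row-aligned so that row k of each array belongs to the same pair.
    """
    dists = np.linalg.norm(query_desc - match_desc, axis=1)
    target_set = dists[dists < dist_thresh]   # the 'target Euclidean distance set'
    return target_set.size > count_thresh     # compare against the fifth threshold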
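Finally, the pose sketch: given the map coordinates and the current coordinates of the matched feature points, a rigid conversion between the current coordinate system and the map coordinate system is fitted, and the device position and acquisition orientation in the map frame follow. The Kabsch/Umeyama fit is an assumed choice of this sketch; the embodiment only requires that some conversion relationship be determined from the two sets of coordinates.

```python
import numpy as np

def pose_in_map(pts_map, pts_cur):
    """Fit the conversion (R, t) from the current frame to the map frame.

    pts_map, pts_cur : (N, 3) arrays of the same feature points expressed in
    the map coordinate system and in the current (device) coordinate system;
    N >= 3 non-degenerate points are assumed.
    """
    mu_m, mu_c = pts_map.mean(axis=0), pts_cur.mean(axis=0)
    H = (pts_cur - mu_c).T @ (pts_map - mu_m)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T                               # rotation: current -> map
    t = mu_m - R @ mu_c                              # translation: current -> map
    # The device sits at the origin of its current frame, so its position in
    # the map frame is t and its acquisition orientation is given by R.
    return R, t
```

With R and t in hand, the position of the image acquisition device in the map coordinate system and its orientation relative to that system are read off directly, which is the output of the sixth determining unit.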

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a positioning method, a positioning apparatus, a terminal, and a storage medium, the method comprising: determining, according to network feature information of the current network in which an image acquisition device used to collect an image to be processed is located and a preset first map, a target area in which the image acquisition device is located; matching, according to a first image feature of the image to be processed, a second image feature from among image features of a key frame image stored in a preset second map corresponding to the target area; and determining pose information of the image acquisition device according to the second image feature.
PCT/CN2020/117156 2019-09-27 2020-09-23 Procédé et appareil de positionnement, terminal et support de mémorisation WO2021057797A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910922471.0 2019-09-27
CN201910922471.0A CN110645986B (zh) 2019-09-27 2019-09-27 定位方法及装置、终端、存储介质

Publications (1)

Publication Number Publication Date
WO2021057797A1 true WO2021057797A1 (fr) 2021-04-01

Family

ID=69011607

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117156 WO2021057797A1 (fr) 2019-09-27 2020-09-23 Procédé et appareil de positionnement, terminal et support de mémorisation

Country Status (2)

Country Link
CN (1) CN110645986B (fr)
WO (1) WO2021057797A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140674A (zh) * 2021-10-20 2022-03-04 郑州信大先进技术研究院 结合图像处理及数据挖掘技术的电子证据可用性鉴别方法

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110645986B (zh) * 2019-09-27 2023-07-14 Oppo广东移动通信有限公司 定位方法及装置、终端、存储介质
CN111447553B (zh) * 2020-03-26 2021-10-15 云南电网有限责任公司电力科学研究院 一种基于wifi的增强视觉slam方法及装置
CN111506687B (zh) * 2020-04-09 2023-08-08 北京华捷艾米科技有限公司 一种地图点数据提取方法、装置、存储介质及设备
CN111511017B (zh) * 2020-04-09 2022-08-16 Oppo广东移动通信有限公司 定位方法及装置、设备、存储介质
CN111680596B (zh) * 2020-05-29 2023-10-13 北京百度网讯科技有限公司 基于深度学习的定位真值校验方法、装置、设备及介质
CN111623783A (zh) * 2020-06-30 2020-09-04 杭州海康机器人技术有限公司 一种初始定位方法、视觉导航设备、仓储系统
CN112362047A (zh) * 2020-11-26 2021-02-12 浙江商汤科技开发有限公司 定位方法及装置、电子设备和存储介质
CN112529887B (zh) * 2020-12-18 2024-02-23 广东赛诺科技股份有限公司 一种基于gis地图数据懒加载方法及系统
CN112509053B (zh) * 2021-02-07 2021-06-04 深圳市智绘科技有限公司 机器人位姿的获取方法、装置及电子设备
CN113063424B (zh) * 2021-03-29 2023-03-24 湖南国科微电子股份有限公司 一种商场内导航方法、装置、设备及存储介质
CN113259883B (zh) * 2021-05-18 2023-01-31 南京邮电大学 一种面向手机用户的多源信息融合的室内定位方法
CN113657164A (zh) * 2021-07-15 2021-11-16 美智纵横科技有限责任公司 标定目标对象的方法、装置、清扫设备和存储介质
CN114427863A (zh) * 2022-04-01 2022-05-03 天津天瞳威势电子科技有限公司 车辆定位方法及系统、自动泊车方法及系统、存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104661300A (zh) * 2013-11-22 2015-05-27 高德软件有限公司 定位方法、装置、系统及移动终端
CN104936283A (zh) * 2014-03-21 2015-09-23 中国电信股份有限公司 室内定位方法、服务器和系统
US20150371102A1 (en) * 2014-06-18 2015-12-24 Delta Electronics, Inc. Method for recognizing and locating object
CN105372628A (zh) * 2015-11-19 2016-03-02 上海雅丰信息科技有限公司 一种基于Wi-Fi的室内定位导航方法
CN105828296A (zh) * 2016-05-25 2016-08-03 武汉域讯科技有限公司 一种利用图像匹配与wi-fi融合的室内定位方法
CN105974357A (zh) * 2016-04-29 2016-09-28 北京小米移动软件有限公司 终端的定位方法及装置
CN108495259A (zh) * 2018-03-26 2018-09-04 上海工程技术大学 一种渐进式室内定位服务器及定位方法
CN110645986A (zh) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 定位方法及装置、终端、存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311733B2 (en) * 2005-02-15 2012-11-13 The Invention Science Fund I, Llc Interactive key frame image mapping system and method
JP4564564B2 (ja) * 2008-12-22 2010-10-20 株式会社東芝 動画像再生装置、動画像再生方法および動画像再生プログラム
US9297881B2 (en) * 2011-11-14 2016-03-29 Microsoft Technology Licensing, Llc Device positioning via device-sensed data evaluation
US20150092048A1 (en) * 2013-09-27 2015-04-02 Qualcomm Incorporated Off-Target Tracking Using Feature Aiding in the Context of Inertial Navigation
CN106934339B (zh) * 2017-01-19 2021-06-11 上海博康智能信息技术有限公司 一种目标跟踪、跟踪目标识别特征的提取方法和装置
CN108764297B (zh) * 2018-04-28 2020-10-30 北京猎户星空科技有限公司 一种可移动设备位置的确定方法、装置及电子设备
CN109086350B (zh) * 2018-07-13 2021-07-30 哈尔滨工业大学 一种基于WiFi的混合图像检索方法
CN109579856A (zh) * 2018-10-31 2019-04-05 百度在线网络技术(北京)有限公司 高精度地图生成方法、装置、设备及计算机可读存储介质
CN109658445A (zh) * 2018-12-14 2019-04-19 北京旷视科技有限公司 网络训练方法、增量建图方法、定位方法、装置及设备
CN109948525A (zh) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 拍照处理方法、装置、移动终端以及存储介质
CN109993113B (zh) * 2019-03-29 2023-05-02 东北大学 一种基于rgb-d和imu信息融合的位姿估计方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104661300A (zh) * 2013-11-22 2015-05-27 高德软件有限公司 定位方法、装置、系统及移动终端
CN104936283A (zh) * 2014-03-21 2015-09-23 中国电信股份有限公司 室内定位方法、服务器和系统
US20150371102A1 (en) * 2014-06-18 2015-12-24 Delta Electronics, Inc. Method for recognizing and locating object
CN105372628A (zh) * 2015-11-19 2016-03-02 上海雅丰信息科技有限公司 一种基于Wi-Fi的室内定位导航方法
CN105974357A (zh) * 2016-04-29 2016-09-28 北京小米移动软件有限公司 终端的定位方法及装置
CN105828296A (zh) * 2016-05-25 2016-08-03 武汉域讯科技有限公司 一种利用图像匹配与wi-fi融合的室内定位方法
CN108495259A (zh) * 2018-03-26 2018-09-04 上海工程技术大学 一种渐进式室内定位服务器及定位方法
CN110645986A (zh) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 定位方法及装置、终端、存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140674A (zh) * 2021-10-20 2022-03-04 郑州信大先进技术研究院 结合图像处理及数据挖掘技术的电子证据可用性鉴别方法
CN114140674B (zh) * 2021-10-20 2024-04-16 郑州信大先进技术研究院 结合图像处理及数据挖掘技术的电子证据可用性鉴别方法

Also Published As

Publication number Publication date
CN110645986A (zh) 2020-01-03
CN110645986B (zh) 2023-07-14

Similar Documents

Publication Publication Date Title
WO2021057797A1 (fr) Procédé et appareil de positionnement, terminal et support de mémorisation
CN109947975B (zh) 图像检索装置、图像检索方法及其中使用的设定画面
RU2608261C2 (ru) Автоматическое генерирование тега на основании содержания изображения
CN107133325B (zh) 一种基于街景地图的互联网照片地理空间定位方法
US9489402B2 (en) Method and system for generating a pictorial reference database using geographical information
Liu et al. Finding perfect rendezvous on the go: accurate mobile visual localization and its applications to routing
WO2020259360A1 (fr) Procédé et dispositif de localisation, terminal et support d'enregistrement
WO2020259361A1 (fr) Procédé et appareil de mise à jour de carte et terminal et support de stockage
CN101300588A (zh) 确定收集中的特定人的方法
CN111323024B (zh) 定位方法及装置、设备、存储介质
US20230351794A1 (en) Pedestrian tracking method and device, and computer-readable storage medium
US20070070217A1 (en) Image analysis apparatus and image analysis program storage medium
EP2711890A1 (fr) Dispositif de fourniture d'informations, procédé de fourniture d'informations, programme de traitement de fourniture d'informations, support d'enregistrement enregistrant un programme de traitement de fourniture d'informations, et système de fourniture d'informations
US9288636B2 (en) Feature selection for image based location determination
CN104486585A (zh) 一种基于gis的城市海量监控视频管理方法及系统
JPWO2011136341A1 (ja) 情報提供装置、情報提供方法、情報提供処理プログラム、及び情報提供処理プログラムを記録した記録媒体
Revaud et al. Did it change? learning to detect point-of-interest changes for proactive map updates
CN105740777B (zh) 信息处理方法及装置
Park et al. Estimating the camera direction of a geotagged image using reference images
EP3580690B1 (fr) Méthodologie bayésienne pour détection d'objet/de caractéristique géospatiale
US20150379040A1 (en) Generating automated tours of geographic-location related features
Liu et al. Robust and accurate mobile visual localization and its applications
US20150134689A1 (en) Image based location determination
CN116664812B (zh) 一种视觉定位方法、视觉定位系统及电子设备
Das et al. Event-based location matching for consumer image collections

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20868466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20868466

Country of ref document: EP

Kind code of ref document: A1