WO2019242392A1 - Database construction method, positioning method, and related device - Google Patents
- Publication number
- WO2019242392A1 (PCT/CN2019/082981, CN2019082981W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature point
- scene feature
- database
- information
- scene
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3859—Differential updating map data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3602—Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3848—Data obtained from both position sensors and additional sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/52—Network services specially adapted for the location of the user terminal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W64/003—Locating users or terminals or network equipment for network management purposes, e.g. mobility management locating network equipment
Definitions
- the present application relates to the field of communications, and in particular, to a database construction method, a positioning method, and related equipment.
- a visual positioning method is proposed.
- The principle is to establish a database in advance and then perform positioning by identifying feature points in the real-time scene that match feature points of the same scene stored in the database.
- The database stores scene key frames and scene feature points; key frames represent real-world images.
- Scene feature points are visual feature points extracted from the scene key frames, and each scene feature point belongs to a key frame.
- Each scene feature point also has a descriptor that describes it, and the same scene feature point has different descriptor information under different natural conditions.
- The embodiments of the present application disclose a database construction method, a positioning method, and related equipment, which construct a database according to second scene feature point information corresponding to target natural condition information, so that positioning performed with the database is more accurate.
- a first aspect of the present application provides a database construction method, including:
- A target image set that satisfies a preset coincidence degree requirement is determined from an image set.
- In one manner, the image set is acquired at a preset distance interval, and it is then checked whether the coincidence degree of the images acquired at that distance interval meets the requirement.
- In another manner, the image set is acquired at a preset angular interval, and it is then checked whether the coincidence degree of the images acquired at that angular interval meets the requirement.
- An image here refers to an image of the mobile device and its surrounding environment.
- The image may be acquired by a camera installed on the mobile device, or the mobile device may itself have an image acquisition function; this is not limited here.
- The target image set includes at least one image, each image is captured under a single natural condition, and therefore each image corresponds to one type of natural condition information.
- The natural condition information is determined as follows: the mobile device obtains its position information through the global positioning system (GPS), lidar, millimeter-wave radar, and/or an inertial measurement unit (IMU), and then sends the position information to a climate server to obtain the natural condition information of the current position.
- the network device analyzes and processes the target image set to obtain scene feature point information.
- each image in the target image set corresponds to a unique type of natural condition information.
- A correspondence is established between each scene feature point in the scene feature point set and the natural condition information, so as to obtain the scene feature point information set.
- The scene feature point information set includes at least one piece of scene feature point information. Each piece includes the 3D coordinates, pixel coordinates, key frame ID, and descriptor information of a scene feature point, and each descriptor corresponds to one type of natural condition information. The 3D coordinates, pixel coordinates, and key frame ID of a scene feature point are static indicators, while the descriptor information is a dynamic indicator that changes with the natural conditions.
- The process of visual positioning is the process of determining identical scene feature points by comparing scene feature points, and stationary scene feature points are generally used for the comparison so that the positioning is more accurate.
- Therefore, a life value needs to be calculated.
- The life value represents the probability that a scene feature point is a static scene feature point: the larger the life value, the greater the probability that the scene feature point is static, and vice versa.
- the same scene feature point may be captured by a single mobile device or may be captured by multiple mobile devices.
- First, the scene feature point information set is determined. For a single mobile device, that is, when a scene feature point is observed by one mobile device, scene feature points whose life value is greater than a first preset life value threshold are selected, and the first scene feature point information is determined from them. For multiple mobile devices, that is, when a scene feature point is observed by two or more mobile devices, if its life value is greater than a second preset life value threshold, the scene feature point is a second scene feature point.
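The single-device / multi-device threshold rule described above can be sketched as follows. This is a minimal illustration: the field names and threshold values are assumptions, not values taken from the patent.

```python
# Sketch of the life-value filtering rule: a feature point observed by one
# mobile device is kept if its life value exceeds the first preset threshold;
# a point observed by two or more devices must exceed the second threshold.
# Threshold values here are illustrative placeholders.
def select_second_scene_points(points, first_threshold=0.5, second_threshold=0.7):
    selected = []
    for p in points:
        threshold = first_threshold if p["num_devices"] == 1 else second_threshold
        if p["life_value"] > threshold:
            selected.append(p)
    return selected
```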
- This embodiment has the following advantages. After a target image set meeting the preset image coincidence degree requirement is determined, the scene feature point information set is determined according to the target image set and the natural condition information corresponding to each image in the set. Second scene feature point information is then obtained, corresponding to the scene feature points in the set whose life value on a single mobile device is greater than the first preset life value threshold and whose life value across multiple mobile devices is greater than the second preset life value threshold. When the second scene feature point information does not match the scene feature point information preset in the database, the database is constructed according to the second scene feature point information.
- Because the second scene feature point information is obtained by filtering in the foregoing manner, when second scene feature point information related to certain natural condition information does not exist in the database, the database is constructed using that second scene feature point information.
- As a result, positioning performed with the constructed database is more accurate.
- In one case, the mismatch between the second scene feature point information and the scene feature point information preset in the database specifically means that the second scene feature point information does not exist in the database.
- In that case, the second scene feature point information is stored in the database, and the stored information includes the 3D coordinates, pixel coordinates, key frame ID, and descriptor information of the second scene feature point.
- In another case, the mismatch specifically means that the second scene feature point information exists in the database, but does not include target descriptor information about the target natural condition information.
- For example, the second scene feature point information in the database consists of the 3D coordinates, pixel coordinates, key frame ID, and descriptor 1 of the second scene feature point.
- The second scene feature point information determined from the image consists of the 3D coordinates, pixel coordinates, key frame ID, and descriptor 2 of the second scene feature point.
- In this case, the target descriptor information related to the target natural condition information, that is, the information of descriptor 2, needs to be added to the second scene feature point in the database.
- the method further includes:
- If the difference between the preset 3D coordinates of any scene feature point in the database and the 3D coordinates of the second scene feature point is less than a first preset threshold, it is determined that the second scene feature point exists in the database, and the second scene feature point information also exists.
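A minimal sketch of this existence check, assuming the coordinate difference is measured as Euclidean distance over the 3D coordinates (the patent says only that the difference must be below the first preset threshold; the threshold value here is illustrative):

```python
import math

# Return the matching database entry if any preset scene feature point's 3D
# coordinates lie within the first preset threshold of the candidate's,
# meaning the second scene feature point already exists in the database.
def find_existing_point(db_points, candidate_xyz, first_threshold=0.1):
    for p in db_points:
        if math.dist(p["xyz"], candidate_xyz) < first_threshold:
            return p
    return None
```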
- the method further includes:
- If the second scene feature point information exists in the database, at least one descriptor preset for that information in the database is determined, and it is checked whether any of these descriptors is within a preset distance threshold of the descriptor corresponding to the target descriptor information in the image.
- If the distance between every descriptor preset in the database and the descriptor corresponding to the target descriptor information in the second scene feature point information is greater than a second preset distance threshold, it is determined that the second scene feature point information preset in the database does not include the target descriptor information.
- A scene feature point has a life value f on a single mobile device.
- The calculation formula for f depends on the following quantities.
- n represents the number of times the scene feature point is observed by a single mobile device.
- A single mobile device can perform multiple acquisition runs to obtain images, so a scene feature point may be observed multiple times by a single mobile device.
- n0 is the average number of times that any scene feature point, obtained in advance by model training, was observed by at least one mobile device.
- σ is the variance of the number of times that any scene feature point, obtained in advance by model training, was observed by at least one mobile device.
- This describes the life value calculation formula of a scene feature point on a single mobile device, which increases the implementability of the solution.
- A scene feature point may be captured by multiple mobile devices, in which case it has a life value on each of those devices.
- The combined life value F is calculated as the sum over the devices of B·f, where f is the life value of the scene feature point on a single mobile device and B is the weight coefficient corresponding to each of the multiple mobile devices; each mobile device corresponds to one weight coefficient.
- For example, if a scene feature point is captured by three mobile devices, each of the three corresponds to a weight coefficient. The life value of the scene feature point on each single device is multiplied by that device's weight coefficient, and the three products are summed to obtain the life value of the scene feature point when observed by multiple mobile devices.
- Here, multiple mobile devices means at least two mobile devices.
- This describes the life value calculation of scene feature points across multiple mobile devices, which increases the implementability of the solution.
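The weighted combination described above can be written out directly (a sketch; the weight and life-value numbers in the example are illustrative):

```python
# F = sum over devices of (weight coefficient B * single-device life value f),
# with one weight coefficient per mobile device.
def multi_device_life_value(life_values, weights):
    if len(life_values) != len(weights):
        raise ValueError("one weight coefficient per mobile device")
    return sum(b * f for f, b in zip(life_values, weights))
```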
- Δt is related to the time interval between observations of the same scene feature point by different mobile devices.
- The geometric continuity index Δg is related to the Euclidean distance between observations of the same scene feature point by different mobile devices.
- The description consistency index Δc is related to the descriptor distance between feature points of the same scene observed by different mobile devices.
- Determining a target image set that meets the preset image coincidence degree requirement includes the following.
- Images are first selected according to a preset distance interval d_k+1.
- If the difference between the coincidence degree of the selected images and the preset coincidence degree threshold is within the preset accuracy range, the selected images are determined as target images.
- If the difference is not within the preset accuracy range, the amount by which the distance interval must be increased or decreased from d_k+1 for the selected images to meet the preset coincidence degree requirement is calculated from that difference, giving the distance interval d_k+2 for the next selection. The above steps are then repeated with d_k+2 as the selection interval; if images selected at d_k+2 satisfy the coincidence degree requirement, the images are selected at d_k+2 to obtain the target image set.
- The preset distance interval d_k+1 is calculated from d_k, the distance interval at which images were previously selected, and λ, the coincidence degree between images calculated when images are selected at distance interval d_k.
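The adaptive interval adjustment can be sketched as an iterative loop. The proportional update rule below is an illustrative stand-in for the patent's update formula, which relates d_k+1 to d_k and the measured coincidence degree λ; the gain and accuracy values are assumptions.

```python
# Iteratively adjust the selection interval d_k until the coincidence degree
# of the selected images falls within the preset accuracy range of the
# preset coincidence threshold.
def tune_distance_interval(d_k, measure_coincidence, target, accuracy=0.01,
                           gain=1.0, max_iters=50):
    for _ in range(max_iters):
        coincidence = measure_coincidence(d_k)
        if abs(coincidence - target) <= accuracy:
            return d_k  # selected images meet the coincidence requirement
        # too much overlap -> widen the interval; too little -> narrow it
        d_k *= 1.0 + gain * (coincidence - target)
    return d_k
```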
- Determining a target image set that meets the preset image coincidence degree requirement may instead include the following.
- Images are first selected according to a preset angular interval θ_k+1.
- If the difference between the coincidence degree of the selected images and the preset coincidence degree threshold is within the preset accuracy range, the selected images are determined as target images, and images are then repeatedly selected at the preset angular interval to obtain the target image set.
- Otherwise, the amount by which the angular interval must be increased or decreased from θ_k+1 for the selected images to meet the preset coincidence degree requirement is calculated from the difference between the coincidence degree of the selected images and the preset coincidence degree threshold, giving the angular interval θ_k+2 for the next selection. The above steps are then repeated with θ_k+2 as the selection interval; if images selected at θ_k+2 satisfy the coincidence degree requirement, the images are selected at θ_k+2 to obtain the target image set.
- The preset angular interval θ_k+1 is calculated from the previous angular interval θ_k and the coincidence degree of the images selected at θ_k.
- The scene feature point information includes descriptor information corresponding to the natural condition information, and obtaining the scene feature point information set according to the target image set and the natural condition information includes the following.
- The network device processes a target image to obtain the scene feature points of the target image.
- The scene feature point information also includes information such as the 3D coordinates and pixel coordinates of the scene feature points.
- The target images to which a scene feature point belongs may be multiple, that is, the scene feature point is included in multiple target images. Since the natural condition information corresponding to each target image is generally different, one scene feature point may have multiple pieces of descriptor information.
- This does not exclude the case where the natural condition information corresponding to two or more target images is the same.
- The foregoing processing is repeatedly performed on each image until the scene feature point information set is obtained.
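The record layout implied by the description — static fields plus a descriptor per natural condition — might look like this (field names and condition labels are illustrative, not from the patent):

```python
# One scene feature point's information: static indicators (3D coordinates,
# pixel coordinates, key frame ID) plus dynamic descriptor information keyed
# by natural condition.
def make_scene_feature_point(point_id, xyz, pixel, keyframe_id):
    return {
        "id": point_id,
        "xyz": xyz,
        "pixel": pixel,
        "keyframe_id": keyframe_id,
        "descriptors": {},  # natural condition -> descriptor vector
    }

def add_descriptor(point, natural_condition, descriptor):
    # The same point gains a new descriptor entry for each natural
    # condition under which it is observed.
    point["descriptors"][natural_condition] = descriptor
```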
- The method for determining the scene feature point information set is described above, which increases the completeness of the solution.
- the method further includes:
- If the feature quantity control score (FNCS) of third scene feature point information is lower than a preset FNCS, the third scene feature point information is deleted from the database. The FNCS represents the probability that the scene feature point is used in positioning and how many descriptors the scene feature point information includes.
- The preset FNCS can be determined in advance through multiple experiments.
- Scene feature points with low FNCS values can be deleted to facilitate database management.
- In the calculation formula of the feature quantity control score FNCS, one term is the probability that the scene feature point is used in positioning, where M is the total number of positioning operations involving the scene feature point and m_i is the number of times the scene feature point is actually used in positioning; the other term is the ratio of the number of descriptors of the scene feature point to the total number of descriptors in the image to which the scene feature point belongs.
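A sketch of the score. The patent names the two terms (the usage probability m_i/M and the descriptor-count ratio) but the exact formula is not reproduced in this text, so combining them as a product is an assumption for illustration.

```python
# FNCS combines (a) the probability that the scene feature point is used in
# positioning, m_i / M, and (b) the ratio of the point's descriptor count to
# the total descriptor count of its image. The product form is assumed.
def fncs(m_i, M, num_descriptors, total_descriptors):
    usage_probability = m_i / M if M else 0.0
    descriptor_ratio = num_descriptors / total_descriptors
    return usage_probability * descriptor_ratio
```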
- a second aspect of the embodiments of the present application provides a positioning method, which is characterized in that it includes:
- the network device acquires the real-time image of the mobile device.
- the real-time image refers to the image of the mobile device and its surrounding environment.
- The real-time image can be acquired by a camera installed on the mobile device, or the mobile device may itself have an image acquisition function; this is not limited here.
- The real-time image is analyzed and processed to obtain at least one first descriptor.
- The first descriptor information includes the target natural condition information of the location where the mobile device, or the external camera of the mobile device, is when the real-time image is captured.
- The real-time image contains at least one piece of feature point information.
- Because the natural condition information at the moment the real-time image is taken is fixed, each piece of feature point information includes only one piece of descriptor information, and at least one descriptor is obtained from the real-time image.
- All of the obtained descriptor information therefore contains the same target natural condition information.
- the target natural condition information may be determined by a network device, or may be determined by a mobile device and sent to the network device.
- The target natural condition information is determined as follows: first, the network device or the mobile device determines the position information of the mobile device when the real-time image is captured, which can be determined by the global positioning system (GPS), lidar, millimeter-wave radar, and/or an inertial measurement unit (IMU); the network device or mobile device then determines the target natural condition information based on the position information.
- the comparison method is specifically:
- The descriptor information preset in the database is obtained after the database is constructed.
- Specifically, the network device determines a target image set that meets the preset image coincidence degree requirement, and obtains a scene feature point information set according to the target image set and the natural condition information corresponding to each image. From the scene feature point information set, first scene feature point information corresponding to first scene feature points that meet the preset life value requirement is selected, and the second descriptor information corresponding to the target natural condition information in the first scene feature point information is determined. When the second descriptor information does not match the descriptor information preset in the database, the database is constructed according to the second descriptor information.
- The specific process of constructing the database is similar to the database construction process of the first aspect of this application, and details are not described here again.
- the real-time image is visually located according to the same descriptor information.
- Locating the real-time image using the same descriptor information includes the following.
- After the same descriptor information is determined in both the real-time image and the database, the first scene feature point information to which the same descriptor information belongs in the database is determined, the database is searched to obtain the 3D coordinates, pixel coordinates, and other information of the first scene feature point, and the position of the target mobile device when the real-time image was captured is then obtained by combining the first scene feature point information with the positioning calculation formula.
- The positioning calculation formula is a reprojection error model over the position of the target mobile device when the real-time image is captured: the pose of the image relative to the world coordinate system is the one minimizing the sum, over i from 1 to n, of the squared difference between u_i, the pixel coordinates of the i-th first scene feature point in the real-time image, and the projection π_C(T·P_i) of the corresponding first scene feature point P_i from the database, where π_C is the internal parameter matrix of the camera used to convert 3D coordinates into pixel coordinates and T is the pose being solved for; n is a positive integer.
- In total, n scene feature points in the real-time image match scene feature points in the database.
- Ideally the pixel coordinates obtained from the transformation coincide with the pixel coordinates of the observed scene feature points; subtracting the two yields the reprojection error model.
- By solving this model, the real-time pose of the vehicle can be obtained.
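The reprojection model can be sketched as follows for a pinhole camera with the pose given as a rotation R and translation t. This is a plain-Python sketch of the error term only; the optimisation over the pose (e.g. via a PnP solver) is left out.

```python
# Project a database 3D point into the image with pose (R, t) and intrinsics
# K, then measure the squared pixel error against the point observed in the
# real-time image. The pose minimising the summed error over all n matches
# is the device pose when the image was captured.
def project(K, R, t, P):
    # camera-frame coordinates: X_c = R * P + t
    xc = [sum(R[r][c] * P[c] for c in range(3)) + t[r] for r in range(3)]
    # pinhole projection with K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    u = K[0][0] * xc[0] / xc[2] + K[0][2]
    v = K[1][1] * xc[1] / xc[2] + K[1][2]
    return (u, v)

def reprojection_error(K, R, t, matches):
    # matches: list of (observed_pixel, database_3d_point) pairs
    err = 0.0
    for (u_obs, v_obs), P in matches:
        u, v = project(K, R, t, P)
        err += (u_obs - u) ** 2 + (v_obs - v) ** 2
    return err
```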
- the method further includes:
- a manner of comparing the descriptor information is similar to the manner of comparing the descriptor information described in the second aspect, and details are not described herein again.
- The information of the differing descriptors is added to the database, so that positioning with the database is more accurate.
- The database can thus be updated according to differing descriptor information, making the database more complete and the positioning more accurate.
- Constructing the database according to the different descriptor information includes the following.
- When the second scene feature point does not exist in the database, the second scene feature point information also does not exist in the database.
- The second scene feature point is the scene feature point to which the different descriptor belongs, so in this case second scene feature point information containing the different descriptor information needs to be added to the database.
- In this embodiment, there may be multiple different descriptors.
- Since one piece of second scene feature point information of a real-time image can contain only one different descriptor, multiple pieces of second scene feature point information may need to be added to the database.
- Constructing the database according to the different descriptor information may also include the following.
- When the second scene feature point exists in the database, the second scene feature point information also exists, but the second scene feature point information in the database differs from that in the real-time image; that is, the second scene feature point information in the database does not include the different descriptor information.
- In this case, the different descriptor information needs to be added to the second scene feature point information in the database.
- For example, the second scene feature point information in the database consists of the 3D coordinates, pixel coordinates, key frame ID, and descriptor 1 of the second scene feature point.
- The second scene feature point information determined from the real-time image consists of the 3D coordinates, pixel coordinates, key frame ID, and descriptor 2 of the second scene feature point.
- The information of descriptor 2 then needs to be added to the second scene feature point information in the database.
- the method further includes:
- If the difference between the preset 3D coordinates of any scene feature point in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold, it is determined that the second scene feature point exists in the database, and the second scene feature point information also exists.
- A third aspect of the present application provides a database, which is deployed on a server.
- The database is formed from second scene feature point information that does not match the scene feature point information preset in the database. The second scene feature point information is the scene feature point information, within the first scene feature point information, whose life value across multiple mobile devices is greater than the second preset life value threshold; the first scene feature point information is the scene feature point information, within the scene feature point information set, whose life value on a single mobile device is greater than the first preset life value threshold.
- The scene feature point information set is obtained according to a target image set and the natural condition information corresponding to each image, and includes at least one piece of scene feature point information.
- The target image set includes at least one image that satisfies the preset image coincidence degree requirement, and each image corresponds to one type of natural condition information.
- The process of forming the database in this embodiment is similar to the process of constructing the database in the first aspect, and details are not repeated here.
- The completed database can be used for visual positioning, making the positioning more accurate.
- the server further includes a processor
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes: the second scene feature point information does not exist in the database;
- the forming of the database from the second scene feature point information includes:
- when the second scene feature point information does not exist in the database, the second scene feature point information is added to the database; that is, the database is formed by the server adding the second scene feature point information to the database.
- the second scene feature point information includes target descriptor information about target natural condition information.
- the server further includes a processor
- the mismatch between the second scene feature point information and the preset scene feature point information in the database includes: the second scene feature point information exists in the database, and the second scene feature point information does not include information about the target Target descriptor information for natural condition information;
- the forming of the database from the second scene feature point information includes:
- when the scene feature point information preset in the database is the same as the second scene feature point information but the descriptor information of the two differs, the different descriptor information is added to the database; that is, the database is formed by the server adding the target descriptor information to the second scene feature point information preset in the database.
- a fourth aspect of the present application provides a network device, including:
- a determining unit configured to determine a target image set that meets a requirement of a preset image coincidence degree, the target image set includes at least one image, and each image corresponds to a type of natural condition information;
- a processing unit configured to obtain a scene feature point information set according to the target image set and natural condition information corresponding to each image, where the scene feature point set includes at least one scene feature point information;
- the determining unit is further configured to determine, in the scene feature point information set, first scene feature point information corresponding to scene feature points whose life value on a single mobile device is greater than a first preset life value threshold, where the magnitude of the life value represents the probability that the scene feature point is a static scene feature point;
- the determining unit is further configured to determine, from the first scene feature point information, second scene feature point information corresponding to a scene feature point whose life value of multiple mobile devices is greater than a second preset life value threshold;
- a database construction unit is configured to construct the database according to the second scene feature point information when the second scene feature point information does not match the scene feature point information preset in the database.
- because the second scene feature point information is obtained by filtering in the foregoing manner, when second scene feature point information related to certain natural condition information does not exist in the database, the second scene feature point information is used to construct the database, which makes positioning more accurate when the constructed database is used for positioning.
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
- the second scene feature point information does not exist in the database
- the database construction unit is specifically configured to add the second scene feature point information to the database, and the second scene feature point information includes target descriptor information about target natural condition information.
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
- the second scene feature point information exists in the database, and the second scene feature point information does not include target descriptor information about target natural condition information;
- the database construction unit is specifically configured to add the target descriptor information to the second scene feature point information preset in the database.
- the determining unit is further configured to determine at least one piece of descriptor information of the second scene feature point preset in the database;
- the network device further includes:
- a judging unit configured to judge whether, in the at least one piece of descriptor information, there is descriptor information whose corresponding descriptor's distance to the descriptor corresponding to the target descriptor information is smaller than a preset distance threshold;
- the determining unit is further configured to determine that, if in the at least one piece of descriptor information there is no descriptor whose distance to the descriptor corresponding to the target descriptor information is smaller than the preset distance threshold, the preset second scene feature point information does not include the target descriptor information.
- the scene feature point has a life value f on a single mobile device;
- in the calculation formula of f, n represents the number of times the scene feature point is observed on a single mobile device, n₀ is the preset average of the number of times a scene feature point is observed, and σ is the preset variance of the number of times a scene feature point is observed.
- the life value calculation formula of the scene feature point on a single mobile device is described, which increases the implementability of the solution.
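The exact closed form of f is not reproduced in the text above; only its inputs n, n₀, and σ are named. As a hedged illustration, the sketch below assumes a Gaussian-shaped score that peaks when the observation count matches the preset average — an assumption for demonstration (including treating σ as a spread parameter), not the patent's formula.

```python
import math

def single_device_life_value(n, n0=10.0, sigma=4.0):
    # Assumed Gaussian-shaped score: peaks when the observation count n
    # equals the preset average n0; sigma controls how fast it decays.
    return math.exp(-((n - n0) ** 2) / (2.0 * sigma ** 2))
```

Under this assumption, feature points observed much more or much less often than typical static features score low and can be filtered out by the first preset life value threshold.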
- the life value of the scene feature point on multiple mobile devices is F
- the calculation formula of F is:
- the f is the life value of the scene feature point on a single mobile device
- the B is a weight coefficient corresponding to each mobile device
- one mobile device of the plurality of mobile devices corresponds to one weight coefficient.
- life value calculation formulas of the scene feature points on multiple mobile devices are described, which increases the implementability of the solution.
- the calculation formula of the weight coefficient is B = γt + γg + γc, where γt is the time continuity index of the scene feature point observed on multiple mobile devices, γg is the geometric continuity index of the scene feature point observed on multiple mobile devices, and γc is the description consistency index of the scene feature point observed on multiple mobile devices.
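Putting the two definitions together, F can be sketched as a weighted combination of the single-device life values, with each device's weight B built from the three indices. The weighted-sum form below is an assumption for illustration; only the decomposition B = γt + γg + γc is taken from the text.

```python
def device_weight(gamma_t, gamma_g, gamma_c):
    # B = gamma_t + gamma_g + gamma_c: time continuity, geometric
    # continuity, and description consistency indices (from the text).
    return gamma_t + gamma_g + gamma_c

def multi_device_life_value(single_values, weights):
    # Assumed form: F as the weighted sum of per-device life values f.
    return sum(b * f for b, f in zip(weights, single_values))
```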
- the determining unit is specifically configured to select images according to a preset distance interval;
- when the difference between the coincidence degree of the selected images and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected images belong to the target image set.
- the preset distance interval d k+1 is calculated from the preset distance interval d k, where d k is the distance interval at which the images were selected the k-th time, and β is the degree of overlap between the images when the images are selected according to the distance interval d k.
- the determining unit is specifically configured to select images according to a preset angle interval;
- when the difference between the coincidence degree of the selected images and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected images belong to the target image set.
- the preset angle interval θ k+1 is calculated from the preset angle interval θ k, where θ k is the angle interval at which the images were selected the k-th time, and β is the degree of coincidence between the images when the images are selected according to the angle interval θ k.
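The two selection rules above share one pattern: sample at an interval, measure the coincidence degree β of the selected images, and adjust the interval until β is within the preset accuracy range of the threshold. The proportional update rule below (scaling by β / β_target) is an illustrative assumption; the patent's exact formulas for d k+1 and θ k+1 are not reproduced here.

```python
def next_interval(current, beta, beta_target):
    # Proportional update: overlap above target -> widen the interval,
    # overlap below target -> shrink it (assumed rule, for illustration).
    return current * beta / beta_target

def select_interval(start, beta_of, beta_target, tol=0.05, max_iter=20):
    """Iteratively adjust the sampling interval until the measured
    coincidence degree is within tol of the target threshold."""
    d = start
    for _ in range(max_iter):
        beta = beta_of(d)
        if abs(beta - beta_target) <= tol:
            break
        d = next_interval(d, beta, beta_target)
    return d
```

The same helper applies to angle intervals on curved roads by passing an angle instead of a distance.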
- the scene feature point information includes descriptor information corresponding to the natural condition information, and the processing unit is specifically configured to: 1) process a target image to obtain scene feature points;
- the method for determining the scene feature point information set is described, and the integrity of the solution is increased.
- the determining unit is further configured to determine third scene feature point information in the database;
- the database construction unit is further configured to delete the third scene feature point information from the database when the feature quantity control score FNCS of the third scene feature point corresponding to the third scene feature point information is less than a preset FNCS threshold.
- Scene feature points with low FNCS values can be deleted to facilitate database management.
- in the calculation formula of the feature quantity control score FNCS, one term is the probability that the scene feature point is used in positioning, and the other term is the ratio of the number of descriptors of the scene feature point to the total number of descriptors in the image to which the scene feature point belongs.
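The exact FNCS formula is not reproduced above; only its two ingredients are named. The sketch below assumes, purely for illustration, a weighted sum of the two quantities and shows the pruning step that deletes low-scoring feature points.

```python
def fncs(p_used, descriptor_ratio, w=0.5):
    # Assumed combination: weighted sum of the usage probability and the
    # descriptor-count ratio named in the text (weight w is illustrative).
    return w * p_used + (1.0 - w) * descriptor_ratio

def prune_low_fncs(points, threshold):
    # Delete scene feature points whose FNCS falls below the threshold,
    # which keeps the database compact and easier to manage.
    return [p for p in points if fncs(p["p_used"], p["ratio"]) >= threshold]
```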
- a fifth aspect of the present application provides a network device, where the network device is applied to a visual positioning system, and the network device includes:
- a determining unit configured to determine at least one piece of first descriptor information according to a real-time image, where the first descriptor information includes target natural condition information at the time the real-time image is captured;
- the determining unit is further configured to compare the descriptor information preset in the database with the at least one first descriptor information to determine identical descriptor information, where the descriptor information preset in the database is obtained as follows: after the network device determines a target image set that satisfies a preset image coincidence degree requirement, a scene feature point information set is obtained according to the target image set and the natural condition information corresponding to each image in the target image set; first scene feature point information corresponding to first scene feature points that satisfy a preset life value requirement is selected from the scene feature point information set; and the database is constructed according to second descriptor information, corresponding to the target natural condition information, in the first scene feature point information, where the second descriptor information does not match the descriptor information preset in the database, and the scene feature point information includes descriptor information corresponding to the natural condition information;
- a positioning unit configured to use the same descriptor information to locate the real-time image.
- the positioning unit is specifically configured to determine first scene feature point information corresponding to the same descriptor information in a database
- the position of the target mobile device when the real-time image is captured is calculated according to the first scene feature point information and a positioning calculation formula;
- the result of the positioning calculation formula is the position of the target mobile device when the real-time image is captured.
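The positioning formula itself is not reproduced above; it takes the matched first scene feature point information and returns the device position. Purely as an illustration of the idea — recovering a position from the 3D coordinates of matched scene feature points — the sketch below fits the position to range measurements by gradient descent. The range-based formulation and all names are assumptions, not the patent's formula; a real visual system would typically solve a PnP problem from pixel and 3D coordinates.

```python
def locate(points_3d, ranges, iters=500, lr=0.1):
    """Fit a 3D position so its distances to the matched scene feature
    points' 3D coordinates agree with the measured ranges (illustrative)."""
    # Start from the centroid of the matched 3D points.
    x = [sum(p[i] for p in points_3d) / len(points_3d) for i in range(3)]
    for _ in range(iters):
        grad = [0.0, 0.0, 0.0]
        for p, d in zip(points_3d, ranges):
            diff = [x[i] - p[i] for i in range(3)]
            r = max(1e-9, sum(c * c for c in diff) ** 0.5)
            for i in range(3):
                grad[i] += 2.0 * (r - d) * diff[i] / r
        x = [x[i] - lr * grad[i] for i in range(3)]
    return x
```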
- the determining unit is further configured to compare the descriptor information preset in the database with the at least one first descriptor information to determine different descriptor information;
- the network device further includes a database construction unit
- the database construction unit is specifically configured to construct the database according to the different descriptor information.
- the database may be constructed according to the different descriptor information, so that the database is more complete and the positioning is more accurate.
- the database construction unit is specifically configured to add the second scene feature point information including the different descriptor information to the database.
- the database construction unit is specifically configured to add the different descriptor information to the second scene feature point information of the database.
- the determining unit is further configured to determine the 3D coordinates of the second scene feature point corresponding to the second scene feature point information;
- a sixth aspect of the present application provides a network device, characterized in that the network device includes: a memory, a transceiver, a processor, and a bus system;
- the memory is used for storing a program
- the processor is configured to execute a program in the memory, and includes the following steps:
- the target image set includes at least one image, and each image corresponds to a type of natural condition information
- the bus system is configured to connect the memory and the processor to enable the memory and the processor to communicate.
- because the second scene feature point information is obtained by filtering in the foregoing manner, when second scene feature point information related to certain natural condition information does not exist in the database, the second scene feature point information is used to construct the database, which makes positioning more accurate when the constructed database is used for positioning.
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
- the second scene feature point information does not exist in the database
- the processor is specifically configured to:
- the second scene feature point information is added to a database, and the second scene feature point information includes target descriptor information about target natural condition information.
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
- the second scene feature point information exists in the database, and the second scene feature point information does not include target descriptor information about target natural condition information;
- the processor is specifically configured to:
- the target descriptor information is added to the second scene feature point information preset in the database.
- the processor is further configured to:
- the scene feature point has a life value f on a single mobile device;
- in the calculation formula of f, n represents the number of times the scene feature point is observed on a single mobile device, n₀ is the preset average of the number of times a scene feature point is observed, and σ is the preset variance of the number of times a scene feature point is observed.
- the life value calculation formula of the scene feature point on a single mobile device is described, which increases the implementability of the solution.
- the life value of the scene feature point on multiple mobile devices is F
- the calculation formula of F is:
- the f is the life value of the scene feature point on a single mobile device
- the B is a weight coefficient corresponding to each mobile device
- one mobile device of the plurality of mobile devices corresponds to one weight coefficient.
- life value calculation formulas of the scene feature points on multiple mobile devices are described, which increases the implementability of the solution.
- the processor is specifically configured to:
- the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
- the preset distance interval d k+1 is calculated from the preset distance interval d k, where d k is the distance interval at which the images were selected the k-th time, and β is the degree of overlap between the images when the images are selected according to the distance interval d k.
- the processor is specifically configured to:
- the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
- the preset angle interval θ k+1 is calculated from the preset angle interval θ k, where θ k is the angle interval at which the images were selected the k-th time, and β is the degree of coincidence between the images when the images are selected according to the angle interval θ k.
- the scene feature point information includes descriptor information corresponding to the natural condition information, and the processor is specifically configured to:
- the method for determining the scene feature point information set is described, and the integrity of the solution is increased.
- the processor is further configured to:
- the third scene feature point information is deleted from the database.
- Scene feature points with low FNCS values can be deleted to facilitate database management.
- in the calculation formula of the feature quantity control score FNCS, one term is the probability that the scene feature point is used in positioning, and the other term is the ratio of the number of descriptors of the scene feature point to the total number of descriptors in the image to which the scene feature point belongs.
- a seventh aspect of the present application provides a network device, which belongs to a vision positioning system, and the network device includes: a memory, a transceiver, a processor, and a bus system;
- the transceiver is configured to acquire a real-time image
- the memory is used for storing a program
- the processor is configured to execute a program in the memory, and includes the following steps:
- the descriptor information preset in the database is obtained as follows: after the network device determines a target image set that satisfies the preset image coincidence degree requirement, a scene feature point information set is obtained according to the target image set and the natural condition information corresponding to each image in the target image set; first scene feature point information corresponding to first scene feature points that satisfy a preset life value requirement is selected from the scene feature point information set; and the database is constructed according to second descriptor information, corresponding to the target natural condition information, in the first scene feature point information, where the second descriptor information does not match the descriptor information preset in the database, and the scene feature point information includes descriptor information corresponding to the natural condition information;
- the bus system is configured to connect the memory and the processor to enable the memory and the processor to communicate.
- the processor is specifically configured to:
- the position of the target mobile device when the real-time image is captured is calculated according to the first scene feature point information and a positioning calculation formula;
- the result of the positioning calculation formula is the position of the target mobile device when the real-time image is captured.
- the processor is further configured to:
- the database may be constructed according to the different descriptor information, so that the database is more complete and the positioning is more accurate.
- the processor is specifically configured to:
- the second scene feature point information including the different descriptor information is added to the database.
- the processor is specifically configured to:
- the different descriptor information is added to the second scene feature point information of the database.
- the processor is further configured to:
- An eighth aspect of the present application provides a computer-readable storage medium.
- the computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the methods described in the above aspects.
- a ninth aspect of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the methods described in the above aspects.
- FIG. 1 is a schematic diagram showing a relationship between a data structure and a data type in an image database of the present application
- FIG. 2 is a schematic diagram when an embodiment of the present application is applied to a car end
- FIG. 3 is a schematic structural diagram of a visual positioning system of the present application.
- FIG. 4 is a schematic diagram of another structure of a visual positioning system of the present application.
- FIG. 5 is a schematic diagram of an embodiment of a database construction method of the present application.
- FIG. 6 is a schematic diagram of an embodiment of selecting a target image set according to the present application.
- FIG. 7 is a schematic diagram of an embodiment in which feature point information of a second scene is selected in this application.
- FIG. 8 is a schematic diagram showing the relationship between the life value of scene feature points and the number of times the scene feature points are observed;
- FIG. 9 is a schematic diagram of another embodiment of a database construction method of the present application.
- FIG. 10 is a schematic diagram of an embodiment for determining whether a scene feature point exists in a database
- FIG. 11 is a schematic diagram of an embodiment of a positioning method according to the present application.
- FIG. 12 (a) is a schematic diagram of a case where the scene feature point information in an image of the present application does not match the preset scene feature point information in the database;
- FIG. 12 (b) is a schematic diagram of another case where the scene feature point information in an image of the present application does not match the preset scene feature point information in the database;
- FIG. 13 is a schematic diagram of another embodiment of a positioning method of the present application.
- FIG. 14 is a schematic structural diagram of a network device according to the present application.
- FIG. 15 is another schematic structural diagram of a network device of the present application.
- FIG. 16 is another schematic structural diagram of a network device of the present application.
- the database stores scene key frame information, scene feature point information, and descriptor information, and the three have an association relationship.
- the scene key frame information includes an image, a position, and an attitude.
- a scene key frame has at least one scene feature point.
- the scene feature point information includes ID information of the key frame of the scene to which the scene feature point belongs, pixel coordinates, 3D coordinates, and descriptor information.
- a scene feature point has at least one descriptor. Part of the descriptor information is the scene feature point descriptor in the traditional visual sense, and the other part is the natural condition attribute E of the scene when the scene feature point is collected. When the natural condition attribute E changes, the descriptor also changes.
- the pixel coordinates, 3D coordinates, and ID information of the key frame of the scene to which the scene feature points belong are static attributes of the scene feature points and will not change due to changes in the external environment.
- the descriptor information is different under different natural conditions.
- Different natural conditions refer to different viewing directions, different weather, and/or different lighting conditions. Different natural conditions may also be other situations, which are not limited here.
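The relationships above (scene key frame → scene feature points → descriptors, with the static attributes separated from the condition-dependent descriptors) can be sketched as plain data structures. The names, field types, and condition encoding are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Descriptor:
    condition: str          # hypothetical encoding of the natural condition E
    vector: List[float]     # the visual descriptor itself

@dataclass
class SceneFeaturePoint:
    keyframe_id: int                      # static attribute
    pixel_xy: Tuple[float, float]         # static pixel coordinates
    xyz: Tuple[float, float, float]       # static 3D coordinates
    descriptors: List[Descriptor] = field(default_factory=list)

@dataclass
class SceneKeyFrame:
    image_id: int
    position: Tuple[float, float, float]
    attitude: Tuple[float, float, float]
    points: List[SceneFeaturePoint] = field(default_factory=list)
```

One feature point can carry several descriptors, one per natural condition under which it was observed, while its 3D coordinates, pixel coordinates, and key frame ID stay fixed.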
- the embodiment of the present application is mainly applied to a visual positioning system.
- the principle of visual positioning is to compare the scene feature points of a captured image with the scene feature points in the database. If a scene feature point of the captured image is consistent with the corresponding scene feature point in the database, the two are considered to be the same scene feature point, and then the 3D coordinates of that consistent scene feature point in the database are used for positioning.
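The comparison step can be sketched as nearest-neighbor matching under a distance threshold. The Euclidean metric, the threshold value, and the function name are illustrative assumptions; real systems often use binary descriptors with Hamming distance instead.

```python
def match_descriptors(image_descs, db_descs, max_dist=0.4):
    """For each image descriptor, find the closest database descriptor;
    a pair closer than max_dist is treated as the same scene feature
    point (metric and threshold are illustrative assumptions)."""
    matches = []
    for i, q in enumerate(image_descs):
        best, best_d = None, max_dist
        for j, r in enumerate(db_descs):
            d = sum((a - b) ** 2 for a, b in zip(q, r)) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
    return matches
```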
- This application can be applied to positioning during the movement of mobile devices such as drones, V2X car terminals, and mobile phones.
- taking the real-time positioning of a vehicle as an example: first, vehicle A determines its real-time image information during driving and the positioning information obtained through non-visual positioning such as GPS. Vehicle A then sends the real-time image to the server. At the same time, vehicle A can send the positioning information to the server, which, after receiving it, determines the natural condition information for that location; alternatively, vehicle A determines the natural condition information based on the positioning information and sends the natural condition information to the server. The server then finds multiple pieces of descriptor information about the natural condition information in the real-time image and compares the determined descriptors with the preset descriptors stored in the database. The database belongs to the server and is used to store scene key frames and scene feature point information.
- if the descriptors in the database are the same as the descriptors in the real-time image, the descriptors that compare successfully are found in the database; a successful comparison proves that the scene feature points to which the descriptors belong are the same scene feature point.
- the car end can then use the 3D coordinates of the same scene feature points for positioning.
- the descriptors in the real-time image may be exactly the same as the descriptors in the database; in this case, the 3D coordinates of the scene feature points to which the identical descriptors belong are used directly for positioning. If only some of the descriptors in the real-time image exist in the database, the corresponding descriptors can be found in the database, and positioning is first performed using the 3D coordinates of the scene feature points to which the same descriptors belong. After the positioning is completed, the different descriptors are obtained and then updated into the database in order to optimize it, so that positioning with the optimized database is more accurate in subsequent positioning.
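The update step described above — positioning first with the matched descriptors, then folding the unmatched ones back into the database — can be sketched as follows. The dictionary shape and names are illustrative assumptions.

```python
def update_descriptors(db_point, image_descriptors, matches):
    """Append descriptors from the real-time image that found no match
    in the database to the scene feature point's descriptor list, so
    the database covers more natural conditions over time."""
    matched_image_idx = {i for i, _ in matches}
    for i, desc in enumerate(image_descriptors):
        if i not in matched_image_idx:
            db_point["descriptors"].append(desc)
    return db_point
```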
- the database construction process is based on a large number of images: scene feature points in the images are selected according to the life value algorithm to obtain a large amount of scene feature point information, which is then compared with the scene feature point information already existing in the database so as to update and optimize the database, and scene feature point information that does not exist in the database is added to the database.
- the life value algorithm can accurately select representative scene feature points, making the database more accurate for visual positioning.
- Figure 3 shows a possible structure of a visual positioning system, where the locator is used to obtain the positioning information of the mobile device and, optionally, its posture information; the image acquirer is used to capture images for the mobile device; the mobile device is used to receive the image sent by the image acquirer and the positioning information sent by the locator, and then send them to the network device.
- the network device can also directly obtain the image and positioning information without the mobile device, that is, the network device directly communicates with the image acquirer and the locator.
- the connection is not limited here.
- the network device is used to compare scene feature points after receiving the image to achieve positioning, and it can also update and manage its own database.
- one possible situation is that the mobile device sends positioning information to the network device as described above; another possible situation is that the mobile device, after determining the natural condition information according to the positioning information, sends the natural condition information to the network device without sending the positioning information, which is not limited here.
- the locator can be: a global positioning system, a camera, a lidar, a millimeter wave radar, or an inertial measurement unit (IMU).
- the IMU can obtain the positioning information and the attitude of the mobile device.
- the locator may be a component of a mobile device, or may be an external device connected to the mobile device, which is not specifically limited herein.
- Mobile devices can be: vehicles, mobile phones and drones.
- the image acquirer may specifically be a camera.
- the image acquirer may be a component of a mobile device or an external device connected to the mobile device, which is not limited herein.
- the network device may be a cloud server or a mobile device with data processing capabilities, which is not limited here.
- a data model for visual positioning shown in FIG. 1 is preset in a database of a network device, and the data model introduces scene key frames, scene feature points, and relationships between descriptors.
- the embodiment of the present application proposes a database construction method and a positioning method.
- the application includes two parts: one part is the network-device-side database construction process, which makes the database better suited for visual positioning; the other part is the process of visual positioning after the database is constructed.
- the two parts are introduced below, and the database construction process is shown in Figure 5:
- the network device obtains data information.
- the data information may be image information, location information, posture, or natural condition information, which is not specifically limited herein.
- the method for the network device to obtain the image information of the mobile device during driving is as follows: a camera can be installed on the mobile device and the network device acquires the images captured by the camera, or the mobile device itself has an image acquisition function and the network device acquires the images captured by the mobile device. During the operation of the mobile device, an image is captured at a fixed time interval, and the acquired images are mainly image information of the surrounding environment during the mobile device's movement. The time interval is set manually and can be 0.01 s or 0.001 s, which is not limited here.
- the image information includes at least one image.
- the posture and real-time location information of the mobile device are different when each image is taken.
- the posture indicates the driving angle and direction of the mobile device.
- the real-time location information of the mobile device can be obtained through the global positioning system (GPS), lidar, millimeter-wave radar, and/or an inertial measurement unit (IMU).
- a target image set that satisfies a preset coincidence degree requirement is selected according to the data information.
- the process of determining the target image set may be as follows: the mobile device may first filter the acquired images according to the preset coincidence degree requirement and then send the filtering result to the network device; alternatively, the filtering may be performed by the network device, that is, the network device obtains the target image set after acquiring the images. This is not specifically limited here.
- the basis for determining the target image set differs between straight driving and turning. When the vehicle is going straight, the target images that meet the requirement are selected at a certain distance interval; when the vehicle is turning, the target images that meet the requirement are selected at a certain angular interval. The specific steps are shown in Figure 6 below:
- a distance interval or an angular interval is defined in advance, and the images expected to be selected are determined according to that interval. For example, on a straight road an image is acquired every 1 m of driving; on a curved road an image is acquired every 5 degrees of change in driving angle.
- the coincidence degree ε between the current image and a neighboring image is defined using n_old, the number of scene feature points the two images share, and n_new, the number of scene feature points newly appearing in the current image.
- the number of scene feature points in the current image is n_total.
- n_total = n_old + n_new.
- the calculation formula is: ε = n_old / n_total.
- ε* is the preset coincidence degree threshold.
- ε* is generally taken as 1.
- ε_Δ is the preset accuracy value.
- ε_Δ ranges from 0.1 to 0.2.
- ε* and ε_Δ may also take other values, which are not specifically limited here.
- the preset accuracy range is 0 to ε_Δ.
- if the requirement is not met, the distance interval for selecting images is redefined: it is first determined by how much, Δd_k, the distance interval (or angular interval) needs to be increased.
- d_k is the distance interval at which images were last selected.
- d_k, ε*, and ε have been obtained in the above steps.
- a new distance interval d_{k+1} = d_k + Δd_k for selecting scene key frames is obtained.
- d_{k+1} is then used as the distance interval for acquiring scene key-frame images, and the process returns to step A and repeats until a distance interval d_{k+n} whose coincidence degree meets the preset condition is obtained.
- the images selected according to the resulting distance interval d_k are the target images, and the multiple target images selected in this way form the target image set.
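The image-selection test above can be sketched as follows. The shared-over-total form of ε follows the definitions of n_old, n_new, and n_total given earlier; the function names and default thresholds are illustrative assumptions, and the exact interval-update rule is left to Figure 6.

```python
# Hypothetical sketch of the coincidence-degree acceptance test used when
# selecting target images. eps = n_old / n_total is the assumed form.

def coincidence(n_old, n_new):
    """Coincidence degree: shared feature points over total feature points."""
    n_total = n_old + n_new
    return n_old / n_total if n_total else 0.0

def interval_ok(eps, eps_star=1.0, eps_delta=0.15):
    """Accept the current interval when |eps - eps*| lies in the accuracy range."""
    return abs(eps - eps_star) <= eps_delta
```

With 90 shared points and 10 new points, ε = 0.9 and the interval is accepted; with ε = 0.5 the interval must be adjusted and the selection repeated.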
- the network device processes the target image set to obtain the scene feature points.
- the scene feature points can be regarded as the pixel points in the target image whose gray values differ greatly from those of the other pixel points.
- the location of the mobile device when each target image was taken is then used to determine the natural condition information at that location, and the correspondence between the scene feature points and the natural condition information is established to obtain the scene feature point information. It can be understood that, in addition to the natural condition information, the scene feature point information also includes the 3D coordinates, pixel coordinates, and descriptor information of the scene feature points.
- multiple images may include the same scene feature point, so the correspondence between scene feature points and natural condition information can be one-to-one, or one scene feature point can correspond to multiple types of natural condition information.
- the descriptor information changes as the natural condition information changes, so one piece of scene feature point information may include multiple pieces of descriptor information.
- for example, suppose there are target image 1 and target image 2 in the target image set.
- Target image 1 is shot on a sunny day and the light intensity is 400 lx.
- Target image 2 is shot on a cloudy day and the light intensity is 300 lx.
- target image 1 contains scene feature point 1 and scene feature point 2, and target image 2 contains scene feature point 2 and scene feature point 3; parsing the target image set yields scene feature point 1, scene feature point 2, and scene feature point 3.
- scene feature point 1 has one descriptor, corresponding to the natural condition information of target image 1; scene feature point 2 has two descriptors, corresponding respectively to the natural condition information of target image 1 and of target image 2; and scene feature point 3 has one descriptor, corresponding to the natural condition information of target image 2.
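The example above maps naturally onto a small data model. The dictionary layout and field names below are illustrative assumptions, not the patent's actual schema: each scene feature point holds one descriptor per natural-condition record (weather, light intensity in lx).

```python
# Illustrative sketch of per-condition descriptors for the example above.
scene_points = {
    1: {("sunny", 400): "descriptor_1a"},
    2: {("sunny", 400): "descriptor_2a", ("cloudy", 300): "descriptor_2b"},
    3: {("cloudy", 300): "descriptor_3a"},
}

def descriptor_count(point_id):
    """Number of descriptors stored for one scene feature point."""
    return len(scene_points[point_id])
```

Scene feature point 2, observed under both conditions, carries two descriptors, matching the example in the text.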
- a representative scene feature point is selected.
- the representative scene feature points can be scene feature points related to signs, road markings, buildings, and other such objects.
- the way to select scene feature points is to select based on the life value of the scene feature points.
- the magnitude of the life value represents the probability that a scene feature point is a static scene feature point: the larger the life value, the greater the probability that the scene feature point is static.
- the process of selecting scene feature points includes:
- the mean n_0 and the variance σ of the number of times scene feature points are observed are determined according to FIG. 8; the first life value of each scene feature point in the scene feature point set is then calculated.
- the calculation formula is:
- n represents the number of times a scene feature point in the scene feature point set is observed at a single car end.
- a scene feature point whose first life value meets the requirement is a first scene feature point; it is then determined, from the perspective of multiple mobile devices, whether the screened first scene feature points meet the life value requirement.
- if the first life value of a scene feature point is less than or equal to the first preset threshold, the first life value of the scene feature point is too low, and the scene feature point is discarded.
- whether the first scene feature points obtained by multiple mobile devices are the same scene feature point can be determined according to the 3D coordinates or pixel coordinates of the scene feature points, or in other ways, which is not limited here. For example, among the scene feature points obtained by multiple mobile devices, scene feature points whose 3D coordinates are identical, or whose 3D coordinate differences fall within a preset difference range, belong to the same scene feature point.
- f is the life value of the scene feature point on a single mobile device
- B is the weight coefficient corresponding to each mobile device.
- a scene feature point generally has a different weight coefficient corresponding to each mobile device.
- λ1 and λ2 are preset values; as can be seen, the weight is negatively correlated with the time interval Δt between observations of the same scene feature point by different mobile devices.
- λ_g and λ_c are similar to λ_t and are not repeated here. It should be noted that when calculating the geometric consistency index λ_g, the distance is defined as the Euclidean distance between observations of the same scene feature point by different mobile devices; when calculating the descriptor consistency index λ_c, the distance is defined as the descriptor distance between observations of the same scene feature point by different mobile devices.
- a first scene feature point whose second life value is greater than or equal to the second preset threshold is a second scene feature point; the second scene feature point is a representative, mature scene feature point, and its information is added to the database.
- otherwise, the first scene feature point is discarded.
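The two-stage life-value screening can be sketched as follows. The exact formulas for f and F are given only in the patent's figures, so the forms below are loudly hedged assumptions consistent with the symbols defined above: a Gaussian-style single-device value f peaking at the observation-count mean n_0, and F as a weight-coefficient (B) combination of per-device values.

```python
import math

# Assumed form of the single-device life value f: largest when the observation
# count n matches the preset mean n0, decaying with the preset variance sigma.
def single_device_life(n, n0, sigma):
    return math.exp(-((n - n0) ** 2) / (2 * sigma ** 2))

# Assumed form of the multi-device life value F: weighted sum of the per-device
# values f, with one weight coefficient B per mobile device.
def multi_device_life(fs, weights):
    return sum(b * f for b, f in zip(weights, fs))
```

A point observed exactly n_0 times on one device gets f = 1; two devices with weights 0.6 and 0.4 and values 1.0 and 0.5 yield F = 0.8, which would then be compared against the second preset threshold.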
- the second scene feature point information is compared with the scene feature point information preset in the database; if the scene feature point information preset in the database does not match the second scene feature point information, the database is constructed based on the second scene feature point information.
- since the second scene feature point information is obtained through the screening described above, when second scene feature point information related to certain natural condition information does not exist in the database, the database is constructed using that second scene feature point information, which makes positioning with the constructed database more accurate.
- the scene feature point information includes the 3D coordinates of the scene feature point, its pixel coordinates, descriptor information related to natural conditions, and the ID of the key frame to which the scene feature point belongs. The 3D coordinates, pixel coordinates, and key frame ID are static indexes of the scene feature point and are generally fixed, while the descriptor information is a dynamic index that changes as the natural conditions change.
- a possible case in which the scene feature point information preset in the database does not match the second scene feature point information is that the second scene feature point information does not exist in the database, or that it exists in the database but the descriptor information it contains differs from that of the second scene feature point information determined from the image. Please refer to FIG. 9, which is described below.
- the second scene feature point information corresponding to scene feature points whose life values across multiple mobile devices are greater than the second preset life value threshold is determined.
- determining whether a feature point of the second scene exists in the database specifically includes the following steps:
- the 3D coordinates of the second scene feature point corresponding to the second scene feature point information are determined. Since a scene feature point may be observed by multiple mobile devices, the 3D coordinates of the scene feature point observed by each mobile device are first obtained, the mean and standard deviation σ of these 3D coordinates are calculated, and the 3D coordinates measured at each car end are then compared with the mean; when the Euclidean distance between the two is greater than 3σ, the 3D coordinates measured at that car end have a large error and are deleted.
- the number of car ends is N
- the 3D coordinates of the same scene feature point observed by the N car ends are 3D1, 3D2, 3D3, ..., 3DN, respectively.
- if the Euclidean distance between at least one of the 3D coordinates (for example, 3D1) and the mean is greater than 3σ, 3D1 is deleted, the mean is recomputed using 3D2, 3D3, ..., 3DN, and the above steps are repeated.
- 3σ is a fixed value preset by the system; the specific value of 3σ is not limited here.
- after the 3D coordinates of the scene feature point are calculated, they are compared with the 3D coordinates of each scene feature point in the image database. When the Euclidean distance between the two is less than ε_d, the point is judged to belong to the same scene feature point as that database entry; when the Euclidean distance between the 3D coordinates of the scene feature point and those of every scene feature point preset in the database is greater than the first preset threshold, the scene feature point is judged to be a new scene feature point, and new scene feature point information (that is, second scene feature point information) is added to the database.
- the specific value of ε_d is not limited here; likewise, this embodiment does not limit the number of new scene feature points.
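The 3σ rejection described above can be sketched as follows. For brevity a single scalar axis stands in for the full Euclidean check on 3D coordinates; the function name is illustrative.

```python
import math

# Sketch of the 3-sigma rejection: compute the mean and standard deviation of
# the coordinates reported by the N car ends, then drop any coordinate whose
# distance from the mean exceeds 3*sigma before re-averaging.
def reject_outliers(coords):
    mean = sum(coords) / len(coords)
    sigma = math.sqrt(sum((c - mean) ** 2 for c in coords) / len(coords))
    if sigma == 0:
        return coords  # all observations agree; nothing to reject
    return [c for c in coords if abs(c - mean) <= 3 * sigma]
```

For ten car ends reporting 1.0 and one reporting 100.0, the outlier lies beyond 3σ of the mean and is deleted; the remaining coordinates are then averaged again as the text describes.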
- the new scene feature point information added in the database includes pixel coordinates, 3D coordinates, key frame IDs, and target descriptor information of the new scene feature points.
- besides the target descriptor information, the new scene feature point information may also include other descriptor information, which is not limited here.
- for example, scene feature point 4 does not exist in the database, so descriptor 10 of scene feature point 4 does not exist in the database either; descriptor 10 is a differing descriptor in this embodiment, and the second scene feature point information containing the differing descriptor, that is, the information of scene feature point 4, is added to the database.
- the target descriptor information is specifically as follows:
- the preset second scene feature point information in the database includes at least one descriptor information.
- the target descriptor information is descriptor information about target natural condition information.
- the descriptor corresponding to the target descriptor information may be the descriptor, among all descriptors of the scene feature point, whose total distance to the other descriptors is smallest.
- alternatively, the descriptor corresponding to the target descriptor information may be any one of all the descriptors of the scene feature point.
- when the distance between the target descriptor and every descriptor in the database is greater than the preset distance threshold, the target descriptor is judged to be a new descriptor; when the distance between the target descriptor and some descriptor in the database is less than or equal to the preset distance threshold, the two are judged to be the same descriptor.
- if the target descriptor is a new descriptor, the target descriptor information is stored in the database; if it is the same descriptor, no update is performed. This embodiment does not limit the number of new descriptors.
- for example, the database includes the same scene feature point 1, scene feature point 2, and scene feature point 3 as the target image, but the information of scene feature point 3 in the database does not include descriptor 9 of scene feature point 3 in the image; the descriptor information corresponding to the differing descriptor 9 is therefore added to the database.
- M is the total number of times the scene feature point is located, and m_i represents the number of times the scene feature point is used in positioning.
- the calculation formula of the feature quantity control score FNCS is:
- a camera is installed on the target mobile device, or cameras are installed at intervals along the road, to obtain real-time image information while the target mobile device is driving. It can be understood that the acquired real-time images are pictures of the surrounding road and environment during driving.
- after the camera captures a real-time image, it can be sent directly to the network device, or sent to the network device via the target mobile device, which is not limited here.
- the target mobile device may also have an image acquisition function.
- the network device processes the real-time image to obtain at least one first descriptor.
- the first descriptor includes target natural condition information when the real-time image is captured.
- the target natural condition information may be determined by the network device, or determined by the mobile device and then sent to the network device, which is not limited here.
- specifically, the target natural condition information at the time a real-time image is captured is determined based on the real-time positioning information of the mobile device, and the real-time positioning information can be obtained through GPS, lidar, millimeter-wave radar, and/or an inertial measurement unit (IMU), which is not specifically limited here. After the real-time positioning information is obtained, the natural condition information at that position is determined to be the target natural condition information.
- the imaging of the same scene under different viewing directions, different weather and different lighting conditions is different.
- for example, the pixels around a corner of a road sign in clear weather differ significantly from the pixels around the same corner in dark weather; likewise, the pixels around corners on the front of the sign differ significantly from those on its back. Thus the descriptors of the corners of the same sign at the same location differ greatly under different weather, lighting, and viewing angles.
- a real-time image captured under the natural condition corresponding to the target natural condition information reflects a single natural condition, so each piece of scene feature point information in the real-time image contains only one kind of descriptor information; however, a real-time image has multiple scene feature points, so at least one descriptor in the real-time image includes the target natural condition information.
- one possible case is that there are N first descriptors, of which M include the target natural condition information, where N and M are positive integers and M is less than or equal to N.
- Each first descriptor information in at least one first descriptor information is compared with the descriptor information preset in the database to determine the same descriptor information.
- the real-time image includes descriptor 1, descriptor 2, ..., descriptor N, which are compared with the descriptors in the database; descriptors identical to descriptor 1, descriptor 5, ..., descriptor N-1, and descriptor N are found in the database.
- the method for judging whether the descriptor is the same as the descriptor in the database is similar to the method for judging whether the descriptor is the same when the database is constructed, that is, according to the distance between the descriptors, the details are not described here again.
- the descriptor information preset in the database is obtained after the database is constructed according to steps 501 to 505 of the embodiment, and the details are not repeated here.
- for example, comparing the real-time image with the database, it is found that descriptor 1 and descriptor 4 have corresponding descriptors in the database. It is then determined that scene feature point 1, to which descriptor 1 belongs, and scene feature point 1 in the database are the same scene feature point, and that scene feature point 4, to which descriptor 4 belongs, and scene feature point 4 in the database are the same scene feature point; the 3D coordinates of these same scene feature points, 3D coordinate 1 and 3D coordinate 2, are found, and 3D coordinate 1 and 3D coordinate 2 are then used for the positioning calculation.
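The lookup in the example above can be sketched as follows. Exact-key matching stands in for the distance-based descriptor comparison, and the database layout is an illustrative assumption: each known descriptor maps to the scene feature point it belongs to and that point's 3D coordinates.

```python
# Illustrative sketch: collect the 3D coordinates of scene feature points whose
# descriptors match descriptors found in the real-time image.
database = {
    "desc_1": {"point": 1, "xyz": (1.0, 2.0, 3.0)},
    "desc_4": {"point": 4, "xyz": (4.0, 5.0, 6.0)},
}

def match_for_positioning(realtime_descriptors):
    return [database[d]["xyz"] for d in realtime_descriptors if d in database]
```

Descriptors without a database match (here "desc_9") contribute nothing to positioning; they are the differing descriptors used later to update the database.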
- the network device uses the same scene feature point information for the positioning calculation: after determining the same scene feature points in the database, the pose of the mobile device is obtained according to a preset algorithm.
- the calculation formula for the positioning is:
- C is the internal parameter (intrinsic) matrix of the camera, which converts 3D coordinates into pixel coordinates.
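The conversion performed by the intrinsic matrix C is the standard pinhole projection; the sketch below shows it for a point already expressed in the camera frame (that is, with the pose already applied). The intrinsic values fx, fy, cx, cy are illustrative, not values from the patent.

```python
# Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
# This is the 3D-to-pixel conversion attributed to the intrinsic matrix C above.
def project(point_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)
```

The positioning calculation inverts this relation: given matched pixel coordinates and database 3D coordinates, it solves for the pose that makes the projections agree (a perspective-n-point problem).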
- the mobile device performs positioning based on the calculation result: after the network device computes the positioning result, it returns the result to the mobile device so that the mobile device can perform the positioning operation.
- the positioning calculation can also be performed by a mobile device.
- after the network device determines the same scene feature points in the database, it sends the information of those scene feature points to the mobile device, and the mobile device obtains the pose information according to a preset algorithm and performs the positioning operation.
- the same scene feature point information sent by the network device to the mobile device specifically includes the pixel coordinates of the scene feature points, the key frame poses of the scene feature points, and the 3D coordinates of the scene feature points, which is not limited here.
- the above explains the specific process of positioning after the database is constructed. Since the database constructed as shown in FIG. 5 contains descriptor information for more different natural conditions, when the database is used for positioning, the real-time image can be matched to more identical descriptor information in the database, making the positioning more accurate.
- the database can also be updated according to different descriptor information, so that the database can store more complete information.
- the method further includes comparing the descriptor information preset in the database with the at least one piece of first descriptor information, determining the differing descriptor information, and constructing the database according to the differing descriptor information. Constructing the database based on differing descriptor information specifically includes two cases:
- the network device determines whether the second scene feature point exists in its own database; the judgment method is similar to that used to determine whether scene feature points are the same during database construction, that is, according to the 3D coordinates, and is not repeated here. If the second scene feature point does not exist in the database, then clearly the second scene feature point information does not exist in the database either.
- please refer to FIG. 12(a): descriptor 10 is a differing descriptor in this embodiment, and the second scene feature point information containing the differing descriptor, that is, the information of scene feature point 4, is added to the database.
- in the second case, the second scene feature point information to which the differing descriptor belongs exists in the database, but the second scene feature point information in the database does not include the target descriptor information, that is, it does not include the differing descriptor determined above; please refer to FIG. 12(b).
- the database includes the same scene feature point 1, scene feature point 2, and scene feature point 3 as the real-time image, but the information of scene feature point 3 in the database does not include the target descriptor of scene feature point 3 in the real-time image, that is, descriptor 9; the descriptor information corresponding to the differing descriptor 9 is added to the database.
- in the first case, the 3D coordinates of the second scene feature point also need to be updated to the database, because the real-time image information contains only descriptor information and pixel coordinate information, not 3D coordinates; the 3D coordinates of the differing scene feature points are therefore added to the database.
- one way to determine the 3D coordinates of the differing scene feature points is: after the positioning result of the real-time image is obtained using the same descriptors, the 3D coordinates of the differing scene feature points are determined with a binocular camera, or jointly with a monocular camera and an IMU; the method for determining these 3D coordinates is not specifically limited here.
- in the real-time positioning process, the database is continuously improved and updated according to the differing descriptor information, so that the database can be better used for positioning.
- the process of real-time positioning is a process of data interaction between a mobile device and a network device. Please refer to FIG. 13, which will be described below:
- a mobile device sends a real-time image to a network device.
- the mobile device can also send the network device the location information of the mobile device at the time the real-time image was captured, or the natural conditions at the mobile device's location.
- the network device determines at least one first descriptor information according to the real-time image.
- the network device compares the descriptor information preset in the database with at least one first descriptor information, and determines the same descriptor information and different descriptor information.
- the network device uses the same descriptor information to locate the real-time image.
- the mobile device performs a positioning operation according to a positioning calculation result determined by the network device.
- the positioning calculation may also be performed by the mobile device, which is not specifically limited here.
- the network device constructs a database according to different descriptor information.
- steps 1301 to 1306 of the embodiment are similar to the steps of the embodiment shown in FIG. 11 described above, and details are not described herein again.
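Steps 1301 to 1306 can be sketched end-to-end as follows. All names are invented for illustration, exact-key matching stands in for the distance-based descriptor comparison, and the database layout is assumed: descriptors map to the 3D coordinates of their scene feature points.

```python
# Minimal sketch of the mobile-device / network-device exchange in FIG. 13.
DB = {"desc_1": (1.0, 2.0, 3.0)}

def network_side(realtime_descriptors):
    """Steps 1302-1304 and 1306: compare descriptors, keep the same ones for
    the positioning calculation, and record differing ones for DB construction."""
    same = {d: DB[d] for d in realtime_descriptors if d in DB}
    different = [d for d in realtime_descriptors if d not in DB]
    for d in different:
        DB[d] = None  # 3D coordinates filled in later (the two cases above)
    return same, different
```

The `same` mapping feeds the positioning calculation whose result is returned to the mobile device (step 1305); the `different` list drives the database update (step 1306).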
- a possible structure of the network device is shown in FIG. 14, including:
- a determining unit 1401 is configured to determine a target image set that meets a requirement of a preset image coincidence degree, the target image set includes at least one image, and each image corresponds to a type of natural condition information;
- a processing unit 1402 configured to obtain a scene feature point information set according to the target image set and the natural condition information corresponding to each image, where the scene feature point information set includes at least one piece of scene feature point information;
- the determining unit 1401 is further configured to determine, in the scene feature point information set, first scene feature point information corresponding to scene feature points whose life value on a single mobile device is greater than a first preset life value threshold, where the magnitude of the life value is used to indicate the probability that the scene feature point is a static scene feature point;
- the determining unit 1401 is further configured to determine, from the first scene feature point information, second scene feature point information corresponding to a scene feature point whose life value of multiple mobile devices is greater than a second preset life value threshold;
- a database construction unit 1403 is configured to construct the database according to the second scene feature point information when the second scene feature point information does not match the scene feature point information preset in the database.
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
- the second scene feature point information does not exist in the database
- the database construction unit 1403 is specifically configured to add the second scene feature point information to the database, where the second scene feature point information includes target descriptor information about target natural condition information.
- the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
- the second scene feature point information exists in the database, and the second scene feature point information does not include target descriptor information about target natural condition information;
- the database construction unit 1403 is specifically configured to add the target descriptor information to the second scene feature point information preset in the database.
- the determining unit 1401 is further configured to determine the 3D coordinates of the second scene feature point corresponding to the second scene feature point information;
- the determining unit 1401 is further configured to determine at least one descriptor information of the second scene feature point preset in a database
- the network device further includes:
- a judging unit 1404 configured to judge whether there is a distance between a descriptor corresponding to the descriptor information and a descriptor corresponding to the target descriptor information in the at least one descriptor information being smaller than a preset distance threshold;
- the determining unit 1401 is further configured to determine that the second scene feature point information preset in the database does not include the target descriptor information if, in the at least one piece of descriptor information, there is no descriptor whose distance to the descriptor corresponding to the target descriptor information is smaller than the preset distance threshold.
- the life value of the scene feature point on a single mobile device is f
- the calculation formula of f is:
- n represents the number of times that the scene feature point is observed in a single mobile device
- n_0 is the preset mean of the number of times a scene feature point is observed, and σ is the preset variance of the number of times a scene feature point is observed.
- the life value of the scene feature point on multiple mobile devices is F
- the calculation formula of F is:
- the f is the life value of the scene feature point on a single mobile device
- the B is a weight coefficient corresponding to each mobile device
- one mobile device of the plurality of mobile devices corresponds to one weight coefficient.
- the determining unit 1401 is specifically configured to select an image according to a preset distance interval
- the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
- the preset distance interval is d_{k+1}, where d_k is the distance interval at which images were selected the previous time, and ε is the coincidence degree between the images when images are selected according to the distance interval d_k.
- the determining unit 1401 is specifically configured to select an image according to a preset angular interval
- the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
- the preset angular interval is θ_{k+1}, where θ_k is the angular interval at which images were selected the previous time, and ε is the coincidence degree between the images when images are selected according to the angular interval θ_k.
- the scene feature point information includes descriptor information corresponding to the natural condition information, and the processing unit 1402 is specifically configured to: 1) process the target images to obtain scene feature points;
- the determining unit 1401 is further configured to determine third scene feature point information in the database after construction is completed;
- the database construction unit 1403 is further configured to delete the third scene feature point information from the database when the feature quantity control score FNCS of the third scene feature point corresponding to the third scene feature point information is less than a preset FNCS threshold.
- the calculation formula of the feature quantity control score FNCS involves two quantities: the probability that the scene feature point is used in positioning, and the ratio of the number of descriptors of the scene feature point to the total number of descriptors in the image to which the scene feature point belongs.
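The FNCS-based pruning can be sketched as follows. The patent gives the FNCS formula only in a figure, so the product form below is an assumption combining the two factors named above: the usage probability m/M and the descriptor-count ratio.

```python
# Hedged sketch of FNCS pruning: score = (m_used / m_total) * (n_desc / total_desc).
# The product form is an assumed combination of the two quantities in the text.
def fncs(m_used, m_total, n_desc, total_desc):
    return (m_used / m_total) * (n_desc / total_desc)

def should_delete(score, threshold=0.05):
    """Delete the scene feature point information when FNCS falls below the threshold."""
    return score < threshold
```

A point used in 2 of 10 positionings, contributing 1 of its image's 4 descriptors, scores 0.05 under this assumed form and sits exactly at the illustrative threshold.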
- Another possible structure of the network device is shown in FIG. 15:
- An obtaining unit 1501 configured to obtain a real-time image
- a determining unit 1502 configured to determine at least one first descriptor information according to the real-time image, where the first descriptor information includes target natural condition information when the real-time image is captured;
- the determining unit 1502 is further configured to compare the descriptor information preset in the database with the at least one piece of first descriptor information to determine identical descriptor information, where the descriptor information preset in the database is obtained after the network device determines a target image set that satisfies a preset image coincidence degree requirement, obtains a scene feature point information set according to the target image set and the natural condition information corresponding to each image in the target image set, selects from the scene feature point information set the first scene feature point information corresponding to first scene feature points that satisfy a preset life value requirement, and then constructs the database according to second descriptor information corresponding to the target natural condition information in the first scene feature point information; the second descriptor information does not match the descriptor information preset in the database, and the scene feature point information includes descriptor information corresponding to the natural condition information;
- a positioning unit 1503 is configured to use the same descriptor information to locate the real-time image.
- the positioning unit 1503 is specifically configured to determine the first scene feature point information corresponding to the identical descriptor information in the database;
- the position of the target mobile device at the time the real-time image was captured is calculated according to the first scene feature point information and a positioning calculation formula.
- the positioning calculation formula solves for the position of the target mobile device when the real-time image was captured by minimizing the reprojection error over the pose T, i.e. T = argmin_T Σ_{i=1}^{n} ‖u_i − π_C(T · X_i)‖², where u_i is the pixel coordinate of the i-th first scene feature point in the real-time image, π_C is the internal parameter matrix of the camera and is used to convert 3D coordinates into pixel coordinates, X_i is the coordinate of the corresponding feature point in the database, T is the pose of the image relative to the world coordinate system, i takes values from 1 to n, n is a positive integer, and the first scene feature points correspond to the first scene feature point information.
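A minimal numeric sketch of the reprojection term, assuming the standard pinhole camera model (the surviving symbol descriptions only name the pixel coordinates, the intrinsic matrix π_C, and the pose, so this form is inferred); the pose `T` would then be found by minimizing this error over candidate poses:

```python
import numpy as np

def reprojection_error(T: np.ndarray, X_world: np.ndarray,
                       u_pixels: np.ndarray, K: np.ndarray) -> float:
    """Sum of squared pixel errors between observed feature locations and
    database points projected through pose T (4x4) and intrinsics K (3x3)."""
    n = X_world.shape[0]
    X_h = np.hstack([X_world, np.ones((n, 1))])   # homogeneous 3D points
    X_cam = (T @ X_h.T).T[:, :3]                  # world -> camera frame
    uv_h = (K @ X_cam.T).T                        # apply intrinsic matrix
    uv = uv_h[:, :2] / uv_h[:, 2:3]               # perspective division
    return float(np.sum((uv - u_pixels) ** 2))
```

At the identity pose, a point on the optical axis projects to the principal point, so an observation at the principal point yields zero error.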
- the determining unit 1502 is further configured to compare the descriptor information preset in the database with the at least one first descriptor information to determine different descriptor information;
- the network device further includes a database construction unit 1504;
- the database construction unit 1504 is specifically configured to construct the database according to the different descriptor information.
- the database construction unit 1504 is specifically configured to, when the second scene feature point information to which the different descriptor information belongs does not exist in the database, add to the database the second scene feature point information containing the different descriptor information.
- the database construction unit 1504 is specifically configured to, when the second scene feature point information to which the different descriptor information belongs exists in the database, add the different descriptor information to the second scene feature point information in the database.
- the determining unit 1502 is further configured to determine the 3D coordinates of the second scene feature point corresponding to the second scene feature point information;
- An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a program, and the program, when executed, performs some or all of the steps described in the foregoing method embodiments.
- the network device 1600 includes:
- the receiver 1601, the transmitter 1602, the processor 1603, and the memory 1604 (the number of processors 1603 in the network device 1600 may be one or more, and one processor is taken as an example in FIG. 16).
- the receiver 1601, the transmitter 1602, the processor 1603, and the memory 1604 may be connected through a bus or other manners. In FIG. 16, a connection through a bus is taken as an example.
- the memory 1604 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1603. A part of the memory 1604 may further include a non-volatile random access memory (NVRAM).
- the memory 1604 stores an operating system and operation instructions, executable modules or data structures, or a subset thereof, or an extended set thereof.
- the operation instructions may include various operation instructions for implementing various operations.
- the operating system may include various system programs for implementing various basic services and processing hardware-based tasks.
- the processor 1603 controls the operation of the network device.
- the processor 1603 may also be referred to as a central processing unit (CPU).
- the various components of the network equipment are coupled together through a bus system.
- the bus system may include a power bus, a control bus, and a status signal bus in addition to the data bus.
- various buses are called bus systems in the figure.
- the method disclosed in the foregoing embodiment of the present application may be applied to the processor 1603, or implemented by the processor 1603.
- the processor 1603 may be an integrated circuit chip and has a signal processing capability.
- each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1603 or an instruction in the form of software.
- the above processor 1603 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- a software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory 1604, and the processor 1603 reads the information in the memory 1604 and completes the steps of the foregoing method in combination with its hardware.
- the receiver 1601 can be used to receive input digital or character information, and generate signal inputs related to network device related settings and function control.
- the transmitter 1602 may include a display device such as a display screen, and the transmitter 1602 may be used to output digital or character information through an external interface.
- the processor 1603 is configured to execute the foregoing database construction method and positioning method.
- the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, which may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of this embodiment.
- the connection relationship between the modules indicates that there is a communication connection between them, which can be specifically implemented as one or more communication buses or signal lines.
- this application can be implemented by software plus the necessary general-purpose hardware, and of course it can also be implemented by dedicated hardware, including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and so on.
- all functions performed by computer programs can easily be implemented with corresponding hardware, and the specific hardware structure used to implement the same function can also take diverse forms, such as analog circuits, digital circuits, or special-purpose circuits.
- a software program implementation is a better implementation for this application in most cases.
- the part of the technical solution of the present application that is essential, or that contributes to the existing technology, can be embodied in the form of a software product, which is stored in a readable storage medium, such as a computer floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments of the present application.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, a computer, a server, or a data center to another website, computer, server, or data center via wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
- the computer-readable storage medium may be any available medium that a computer can store, or a data storage device, such as a server or a data center, that integrates one or more available media.
- the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Automation & Control Theory (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
- Mobile Radio Communication Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (46)
- A database construction method, comprising: determining a target image set that satisfies a preset image coincidence degree requirement, wherein the target image set includes at least one image, and each image corresponds to one piece of natural condition information; obtaining a scene feature point information set according to the target image set and the natural condition information corresponding to each image, wherein the scene feature point information set includes at least one piece of scene feature point information; determining, in the scene feature point information set, first scene feature point information corresponding to scene feature points whose life value on a single mobile device is greater than a first preset life value threshold, wherein the magnitude of the life value represents the probability that the scene feature point is a static scene feature point; determining, in the first scene feature point information, second scene feature point information corresponding to scene feature points whose life value across multiple mobile devices is greater than a second preset life value threshold; and when the second scene feature point information does not match scene feature point information preset in a database, constructing the database according to the second scene feature point information.
- The method according to claim 1, wherein the mismatch between the second scene feature point information and the scene feature point information preset in the database comprises: the second scene feature point information does not exist in the database; and constructing the database according to the second scene feature point information comprises: adding the second scene feature point information to the database, wherein the second scene feature point information includes target descriptor information about target natural condition information.
- The method according to claim 1, wherein the mismatch between the second scene feature point information and the scene feature point information preset in the database comprises: the second scene feature point information exists in the database, but the second scene feature point information does not include target descriptor information about target natural condition information; and constructing the database according to the second scene feature point information comprises: adding the target descriptor information to the second scene feature point information preset in the database.
- The method according to claim 2 or 3, wherein before adding the second scene feature point information to the database, the method further comprises: determining the 3D coordinates of the second scene feature point corresponding to the second scene feature point information; when the difference between the 3D coordinates of every scene feature point preset in the database and the 3D coordinates of the second scene feature point is greater than a first preset threshold, determining that the second scene feature point information does not exist in the database; and when the difference between the 3D coordinates of any scene feature point preset in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold, determining that the second scene feature point information exists in the database.
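The existence test in the claim above, comparing the candidate point's 3D coordinates against every preset point under the first preset threshold, can be sketched as:

```python
import numpy as np

def point_exists_in_db(db_points: np.ndarray, candidate: np.ndarray,
                       threshold: float) -> bool:
    """True if any scene feature point preset in the database lies within
    `threshold` of the candidate's 3D coordinates; False if every preset
    point is farther away than `threshold`."""
    if db_points.size == 0:
        return False
    distances = np.linalg.norm(db_points - candidate, axis=1)
    return bool((distances < threshold).any())
```

If the point does not exist, the new second scene feature point information is added to the database; if it does, only the missing descriptor information is added.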
- The method according to claim 3, wherein before adding the target descriptor information to the second scene feature point information preset in the database, the method further comprises: determining at least one piece of descriptor information of the second scene feature point preset in the database; determining whether, among the at least one piece of descriptor information, there is a descriptor whose distance from the descriptor corresponding to the target descriptor information is less than a preset distance threshold; and if not, determining that the second scene feature point information preset in the database does not include the target descriptor information.
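The descriptor-inclusion test of this claim reduces to a nearest-neighbour check under a preset distance threshold; a sketch, with Euclidean distance assumed as the descriptor metric:

```python
import numpy as np

def target_descriptor_included(preset_descriptors: list, target,
                               threshold: float) -> bool:
    """True iff some preset descriptor of the second scene feature point is
    within `threshold` of the target descriptor; otherwise the target
    descriptor information is treated as not yet included."""
    target = np.asarray(target, dtype=float)
    return any(np.linalg.norm(np.asarray(d, dtype=float) - target) < threshold
               for d in preset_descriptors)
```

When the check returns False, the target descriptor information is appended to the feature point's preset information.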
- The method according to claim 7, wherein the calculation formula of β i is: β i = γ t + γ g + γ c, where γ t is a temporal continuity indicator of the scene feature point as observed by multiple mobile devices, γ g is a geometric continuity indicator of the scene feature point as observed by multiple mobile devices, and γ c is a description consistency indicator of the scene feature point as observed by multiple mobile devices.
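The life-value composition β i = γ t + γ g + γ c, and the threshold selection it feeds, can be sketched as follows (the dictionary field names are illustrative):

```python
def life_value(gamma_t: float, gamma_g: float, gamma_c: float) -> float:
    """beta_i = gamma_t + gamma_g + gamma_c: temporal continuity, geometric
    continuity, and description consistency observed across devices."""
    return gamma_t + gamma_g + gamma_c


def select_second_feature_points(points: list, beta_threshold: float) -> list:
    """Keep only feature points whose multi-device life value exceeds the
    second preset life value threshold."""
    return [p for p in points
            if life_value(p["gamma_t"], p["gamma_g"], p["gamma_c"]) > beta_threshold]
```

A point observed consistently in time, geometry, and description accumulates a high β and survives selection; a transient point does not.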
- The method according to any one of claims 1 to 3, wherein determining the target image set that satisfies the preset image coincidence degree requirement comprises: selecting images at a preset distance interval; and when the difference between the coincidence degree of a selected image and a preset coincidence degree threshold is within a preset accuracy range, determining that the selected image belongs to the target image set.
- The method according to claim 9, wherein the preset distance interval is d k+1, and the calculation formula of the preset distance interval is: d k+1 = d k + d k (α* − α), where α* is the preset coincidence degree threshold, d k is the distance interval at which images were selected at the previous moment, and α is the coincidence degree between images when images are selected at the distance interval d k.
- The method according to any one of claims 1 to 3, wherein determining the target image set that satisfies the preset image coincidence degree requirement comprises: selecting images at a preset angle interval; and when the difference between the coincidence degree of a selected image and a preset coincidence degree threshold is within a preset accuracy range, determining that the selected image belongs to the target image set.
- The method according to claim 11, wherein the preset angle interval is θ k+1, and the calculation formula of the preset angle interval is: θ k+1 = θ k + θ k (α* − α), where α* is the preset coincidence degree threshold, θ k is the angle interval at which images were selected at the previous moment, and α is the coincidence degree between images when images are selected at the angle interval θ k.
- The method according to any one of claims 1 to 3, wherein the scene feature point information includes descriptor information corresponding to the natural condition information, and obtaining the scene feature point information set according to the target image set and the natural condition information comprises: 1> processing one of the target images to obtain scene feature points; 2> forming the scene feature point information from the scene feature points, the target image to which the scene feature points belong, and the natural condition information corresponding to the target image; and repeating steps 1> and 2> until the scene feature point information set is formed.
- The method according to any one of claims 1 to 3, wherein after constructing the database according to the second scene feature point information, the method further comprises: determining third scene feature point information in the database after construction is completed; and when the feature number control score FNCS of the third scene feature point corresponding to the third scene feature point information is less than a preset FNCS threshold, deleting the third scene feature point information from the database.
- A positioning method, applied to a visual positioning system, the method comprising: obtaining a real-time image; determining at least one piece of first descriptor information according to the real-time image, wherein the first descriptor information includes target natural condition information at the time the real-time image was captured; comparing descriptor information preset in a database with the at least one piece of first descriptor information to determine identical descriptor information, wherein the descriptor information preset in the database is obtained after the network device determines a target image set that satisfies a preset image coincidence degree requirement, obtains a scene feature point information set according to the target image set and the natural condition information corresponding to each image in the target image set, selects from the scene feature point information set the first scene feature point information corresponding to first scene feature points that satisfy a preset life value requirement, and then constructs the database according to second descriptor information corresponding to the target natural condition information in the first scene feature point information, the second descriptor information not matching the descriptor information preset in the database, and the scene feature point information including descriptor information corresponding to the natural condition information; and positioning the real-time image by using the identical descriptor information.
- The method according to claim 16, wherein positioning the real-time image by using the identical descriptor information comprises: determining the first scene feature point information corresponding to the identical descriptor information in the database; and calculating the position of the target mobile device at the time the real-time image was captured according to the first scene feature point information and a positioning calculation formula.
- The method according to any one of claims 16 to 18, wherein after determining the at least one piece of first descriptor information according to the real-time image, the method further comprises: comparing the descriptor information preset in the database with the at least one piece of first descriptor information to determine different descriptor information; and constructing the database according to the different descriptor information.
- The method according to claim 19, wherein when the second scene feature point information to which the different descriptor information belongs does not exist in the database, constructing the database according to the different descriptor information comprises: adding, to the database, the second scene feature point information containing the different descriptor information.
- The method according to claim 19, wherein when the second scene feature point information to which the different descriptor information belongs exists in the database, constructing the database according to the different descriptor information comprises: adding the different descriptor information to the second scene feature point information in the database.
- The method according to claim 19, wherein before constructing the database according to the different descriptor information, the method further comprises: determining the 3D coordinates of the second scene feature point corresponding to the second scene feature point information; when the difference between the 3D coordinates of every scene feature point preset in the database and the 3D coordinates of the second scene feature point is greater than a first preset threshold, determining that the second scene feature point information does not exist in the database; and when the difference between the 3D coordinates of any scene feature point preset in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold, determining that the second scene feature point information exists in the database.
- A network device, comprising: a memory, a transceiver, a processor, and a bus system; wherein the memory is configured to store a program; the processor is configured to execute the program in the memory, including the following steps: determining a target image set that satisfies a preset image coincidence degree requirement, wherein the target image set includes at least one image and each image corresponds to one piece of natural condition information; obtaining a scene feature point information set according to the target image set and the natural condition information corresponding to each image, wherein the scene feature point information set includes at least one piece of scene feature point information; determining, in the scene feature point information set, first scene feature point information corresponding to scene feature points whose life value on a single mobile device is greater than a first preset life value threshold, wherein the magnitude of the life value represents the probability that the scene feature point is a static scene feature point; determining, in the first scene feature point information, second scene feature point information corresponding to scene feature points whose life value across multiple mobile devices is greater than a second preset life value threshold; and when the second scene feature point information does not match scene feature point information preset in a database, constructing the database according to the second scene feature point information; and the bus system is configured to connect the memory and the processor so that the memory and the processor communicate with each other.
- The network device according to claim 23, wherein the mismatch between the second scene feature point information and the scene feature point information preset in the database comprises: the second scene feature point information does not exist in the database; and the processor is specifically configured to: add the second scene feature point information to the database, wherein the second scene feature point information includes target descriptor information about target natural condition information.
- The network device according to claim 23, wherein the mismatch between the second scene feature point information and the scene feature point information preset in the database comprises: the second scene feature point information exists in the database, but the second scene feature point information does not include target descriptor information about target natural condition information; and the processor is specifically configured to: add the target descriptor information to the second scene feature point information preset in the database.
- The network device according to claim 24 or 25, wherein the processor is further configured to: determine the 3D coordinates of the second scene feature point corresponding to the second scene feature point information; when the difference between the 3D coordinates of every scene feature point preset in the database and the 3D coordinates of the second scene feature point is greater than a first preset threshold, determine that the second scene feature point information does not exist in the database; and when the difference between the 3D coordinates of any scene feature point preset in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold, determine that the second scene feature point information exists in the database.
- The network device according to claim 25, wherein the processor is further configured to: determine at least one piece of descriptor information of the second scene feature point preset in the database; determine whether, among the at least one piece of descriptor information, there is a descriptor whose distance from the descriptor corresponding to the target descriptor information is less than a preset distance threshold; and if not, determine that the second scene feature point information preset in the database does not include the target descriptor information.
- The network device according to claim 29, wherein the calculation formula of β i is: β i = γ t + γ g + γ c, where γ t is a temporal continuity indicator of the scene feature point as observed by multiple mobile devices, γ g is a geometric continuity indicator of the scene feature point as observed by multiple mobile devices, and γ c is a description consistency indicator of the scene feature point as observed by multiple mobile devices.
- The network device according to any one of claims 23 to 25, wherein the processor is specifically configured to: select images at a preset distance interval; and when the difference between the coincidence degree of a selected image and a preset coincidence degree threshold is within a preset accuracy range, determine that the selected image belongs to the target image set.
- The network device according to claim 31, wherein the preset distance interval is d k+1, and the calculation formula of the preset distance interval is: d k+1 = d k + d k (α* − α), where α* is the preset coincidence degree threshold, d k is the distance interval at which images were selected at the previous moment, and α is the coincidence degree between images when images are selected at the distance interval d k.
- The network device according to any one of claims 23 to 25, wherein the processor is specifically configured to: select images at a preset angle interval; and when the difference between the coincidence degree of a selected image and a preset coincidence degree threshold is within a preset accuracy range, determine that the selected image belongs to the target image set.
- The network device according to claim 33, wherein the preset angle interval is θ k+1, and the calculation formula of the preset angle interval is: θ k+1 = θ k + θ k (α* − α), where α* is the preset coincidence degree threshold, θ k is the angle interval at which images were selected at the previous moment, and α is the coincidence degree between images when images are selected at the angle interval θ k.
- The network device according to any one of claims 23 to 25, wherein the scene feature point information includes descriptor information corresponding to the natural condition information, and the processor is specifically configured to: 1> process one of the target images to obtain scene feature points; 2> form the scene feature point information from the scene feature points, the target image to which the scene feature points belong, and the natural condition information corresponding to the target image; and repeat steps 1> and 2> until the scene feature point information set is formed.
- The network device according to any one of claims 23 to 25, wherein the processor is further configured to: determine third scene feature point information in the database after construction is completed; and when the feature number control score FNCS of the third scene feature point corresponding to the third scene feature point information is less than a preset FNCS threshold, delete the third scene feature point information from the database.
- A network device, belonging to a visual positioning system, the network device comprising: a memory, a transceiver, a processor, and a bus system; the transceiver is configured to obtain a real-time image; wherein the memory is configured to store a program; the processor is configured to execute the program in the memory, including the following steps: determining at least one piece of first descriptor information according to the real-time image, wherein the first descriptor information includes target natural condition information at the time the real-time image was captured; comparing descriptor information preset in a database with the at least one piece of first descriptor information to determine identical descriptor information, wherein the descriptor information preset in the database is obtained after the network device determines a target image set that satisfies a preset image coincidence degree requirement, obtains a scene feature point information set according to the target image set and the natural condition information corresponding to each image in the target image set, selects from the scene feature point information set the first scene feature point information corresponding to first scene feature points that satisfy a preset life value requirement, and then constructs the database according to second descriptor information corresponding to the target natural condition information in the first scene feature point information, the second descriptor information not matching the descriptor information preset in the database, and the scene feature point information including descriptor information corresponding to the natural condition information; and positioning the real-time image by using the identical descriptor information; and the bus system is configured to connect the memory and the processor so that the memory and the processor communicate with each other.
- The network device according to claim 38, wherein the processor is specifically configured to: determine the first scene feature point information corresponding to the identical descriptor information in the database; and calculate the position of the target mobile device at the time the real-time image was captured according to the first scene feature point information and a positioning calculation formula.
- The network device according to any one of claims 38 to 40, wherein the processor is further configured to: compare the descriptor information preset in the database with the at least one piece of first descriptor information to determine different descriptor information; and construct the database according to the different descriptor information.
- The network device according to claim 41, wherein when the second scene feature point information to which the different descriptor information belongs does not exist in the database, the processor is specifically configured to: add, to the database, the second scene feature point information containing the different descriptor information.
- The network device according to claim 41, wherein when the second scene feature point information to which the different descriptor information belongs exists in the database, the processor is specifically configured to: add the different descriptor information to the second scene feature point information in the database.
- The network device according to claim 41, wherein the processor is further configured to: determine the 3D coordinates of the second scene feature point corresponding to the second scene feature point information; when the difference between the 3D coordinates of every scene feature point preset in the database and the 3D coordinates of the second scene feature point is greater than a first preset threshold, determine that the second scene feature point information does not exist in the database; and when the difference between the 3D coordinates of any scene feature point preset in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold, determine that the second scene feature point information exists in the database.
- A computer-readable storage medium, comprising instructions that, when run on a computer, cause the computer to execute the method according to any one of claims 1 to 15.
- A computer-readable storage medium, comprising instructions that, when run on a computer, cause the computer to execute the method according to any one of claims 16 to 22.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19822178.0A EP3800443B1 (en) | 2018-06-20 | 2019-04-17 | Database construction method, positioning method and relevant device therefor |
BR112020025901-2A BR112020025901B1 (pt) | 2018-06-20 | 2019-04-17 | Método de construção de banco de dados, método de posicionamento, dispositivo de rede e meio de armazenamento legível por computador |
US17/126,908 US11644339B2 (en) | 2018-06-20 | 2020-12-18 | Database construction method, positioning method, and related device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810642562.4A CN110688500B (zh) | 2018-06-20 | 2018-06-20 | 一种数据库构建方法、一种定位方法及其相关设备 |
CN201810642562.4 | 2018-06-20 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/126,908 Continuation US11644339B2 (en) | 2018-06-20 | 2020-12-18 | Database construction method, positioning method, and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019242392A1 true WO2019242392A1 (zh) | 2019-12-26 |
Family
ID=68983245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/082981 WO2019242392A1 (zh) | 2018-06-20 | 2019-04-17 | 一种数据库构建方法、一种定位方法及其相关设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11644339B2 (zh) |
EP (1) | EP3800443B1 (zh) |
CN (2) | CN113987228A (zh) |
BR (1) | BR112020025901B1 (zh) |
WO (1) | WO2019242392A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113987228A (zh) * | 2018-06-20 | 2022-01-28 | 华为技术有限公司 | 一种数据库构建方法、一种定位方法及其相关设备 |
CN111238497B (zh) | 2018-11-29 | 2022-05-06 | 华为技术有限公司 | 一种高精度地图的构建方法及装置 |
US11128539B1 (en) * | 2020-05-05 | 2021-09-21 | Ciena Corporation | Utilizing images to evaluate the status of a network system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102707724A (zh) * | 2012-06-05 | 2012-10-03 | 清华大学 | 一种无人机的视觉定位与避障方法及系统 |
EP3018448A1 (en) * | 2014-11-04 | 2016-05-11 | Volvo Car Corporation | Methods and systems for enabling improved positioning of a vehicle |
CN106447585A (zh) * | 2016-09-21 | 2017-02-22 | 武汉大学 | 城市地区和室内高精度视觉定位系统及方法 |
CN106931963A (zh) * | 2017-04-13 | 2017-07-07 | 高域(北京)智能科技研究院有限公司 | 环境数据共享平台、无人飞行器、定位方法和定位系统 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101998136B (zh) * | 2009-08-18 | 2013-01-16 | 华为技术有限公司 | 单应矩阵的获取方法、摄像设备的标定方法及装置 |
JP5062498B2 (ja) * | 2010-03-31 | 2012-10-31 | アイシン・エィ・ダブリュ株式会社 | 風景マッチング用参照データ生成システム及び位置測位システム |
CN104715479A (zh) * | 2015-03-06 | 2015-06-17 | 上海交通大学 | 基于增强虚拟的场景复现检测方法 |
US20160363647A1 (en) | 2015-06-15 | 2016-12-15 | GM Global Technology Operations LLC | Vehicle positioning in intersection using visual cues, stationary objects, and gps |
WO2017114581A1 (en) * | 2015-12-30 | 2017-07-06 | Telecom Italia S.P.A. | System for generating 3d images for image recognition based positioning |
US10838601B2 (en) * | 2016-06-08 | 2020-11-17 | Huawei Technologies Co., Ltd. | Processing method and terminal |
CN106295512B (zh) * | 2016-07-27 | 2019-08-23 | 哈尔滨工业大学 | 基于标识的多纠正线室内视觉数据库构建方法以及室内定位方法 |
US10339708B2 (en) * | 2016-11-01 | 2019-07-02 | Google Inc. | Map summarization and localization |
US9940729B1 (en) * | 2016-11-18 | 2018-04-10 | Here Global B.V. | Detection of invariant features for localization |
CN108121764B (zh) * | 2016-11-26 | 2022-03-11 | 星克跃尔株式会社 | 图像处理装置、图像处理方法、电脑程序及电脑可读取记录介质 |
CN106851231B (zh) * | 2017-04-06 | 2019-09-06 | 南京三宝弘正视觉科技有限公司 | 一种视频监控方法及系统 |
CN109325978B (zh) * | 2017-07-31 | 2022-04-05 | 深圳市腾讯计算机系统有限公司 | 增强现实显示的方法、姿态信息的确定方法及装置 |
CN108615247B (zh) * | 2018-04-27 | 2021-09-14 | 深圳市腾讯计算机系统有限公司 | 相机姿态追踪过程的重定位方法、装置、设备及存储介质 |
CN113987228A (zh) * | 2018-06-20 | 2022-01-28 | 华为技术有限公司 | 一种数据库构建方法、一种定位方法及其相关设备 |
CN110660254B (zh) * | 2018-06-29 | 2022-04-08 | 北京市商汤科技开发有限公司 | 交通信号灯检测及智能驾驶方法和装置、车辆、电子设备 |
CN111488773B (zh) * | 2019-01-29 | 2021-06-11 | 广州市百果园信息技术有限公司 | 一种动作识别方法、装置、设备及存储介质 |
CN112348885A (zh) * | 2019-08-09 | 2021-02-09 | 华为技术有限公司 | 视觉特征库的构建方法、视觉定位方法、装置和存储介质 |
CN112348886B (zh) * | 2019-08-09 | 2024-05-14 | 华为技术有限公司 | 视觉定位方法、终端和服务器 |
WO2021121306A1 (zh) * | 2019-12-18 | 2021-06-24 | 北京嘀嘀无限科技发展有限公司 | 视觉定位方法和系统 |
-
2018
- 2018-06-20 CN CN202111113556.8A patent/CN113987228A/zh active Pending
- 2018-06-20 CN CN201810642562.4A patent/CN110688500B/zh active Active
-
2019
- 2019-04-17 EP EP19822178.0A patent/EP3800443B1/en active Active
- 2019-04-17 BR BR112020025901-2A patent/BR112020025901B1/pt active IP Right Grant
- 2019-04-17 WO PCT/CN2019/082981 patent/WO2019242392A1/zh unknown
-
2020
- 2020-12-18 US US17/126,908 patent/US11644339B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102707724A (zh) * | 2012-06-05 | 2012-10-03 | 清华大学 | 一种无人机的视觉定位与避障方法及系统 |
EP3018448A1 (en) * | 2014-11-04 | 2016-05-11 | Volvo Car Corporation | Methods and systems for enabling improved positioning of a vehicle |
CN106447585A (zh) * | 2016-09-21 | 2017-02-22 | 武汉大学 | 城市地区和室内高精度视觉定位系统及方法 |
CN106931963A (zh) * | 2017-04-13 | 2017-07-07 | 高域(北京)智能科技研究院有限公司 | 环境数据共享平台、无人飞行器、定位方法和定位系统 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3800443A4 |
Also Published As
Publication number | Publication date |
---|---|
BR112020025901B1 (pt) | 2022-11-16 |
US20210103759A1 (en) | 2021-04-08 |
EP3800443A4 (en) | 2021-10-27 |
CN110688500B (zh) | 2021-09-14 |
BR112020025901A2 (pt) | 2021-03-16 |
CN110688500A (zh) | 2020-01-14 |
CN113987228A (zh) | 2022-01-28 |
US11644339B2 (en) | 2023-05-09 |
EP3800443B1 (en) | 2023-01-18 |
EP3800443A1 (en) | 2021-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11776280B2 (en) | Systems and methods for mapping based on multi-journey data | |
CN110567469B (zh) | 视觉定位方法、装置、电子设备及系统 | |
CN113989450B (zh) | 图像处理方法、装置、电子设备和介质 | |
WO2019242392A1 (zh) | 一种数据库构建方法、一种定位方法及其相关设备 | |
WO2021139176A1 (zh) | 基于双目摄像机标定的行人轨迹跟踪方法、装置、计算机设备及存储介质 | |
CN111260779B (zh) | 地图构建方法、装置及系统、存储介质 | |
WO2021027692A1 (zh) | 视觉特征库的构建方法、视觉定位方法、装置和存储介质 | |
WO2023065657A1 (zh) | 地图构建方法、装置、设备、存储介质及程序 | |
CN114419165B (zh) | 相机外参校正方法、装置、电子设备和存储介质 | |
CN112946679B (zh) | 一种基于人工智能的无人机测绘果冻效应检测方法及系统 | |
CN114565863A (zh) | 无人机图像的正射影像实时生成方法、装置、介质及设备 | |
CN112270709B (zh) | 地图构建方法及装置、计算机可读存储介质和电子设备 | |
CN113029128A (zh) | 视觉导航方法及相关装置、移动终端、存储介质 | |
CN113379748B (zh) | 一种点云全景分割方法和装置 | |
CN115880555B (zh) | 目标检测方法、模型训练方法、装置、设备及介质 | |
US20230053952A1 (en) | Method and apparatus for evaluating motion state of traffic tool, device, and medium | |
CN115049792B (zh) | 一种高精度地图构建处理方法及系统 | |
WO2023116327A1 (zh) | 基于多类型地图的融合定位方法及电子设备 | |
CN114998629A (zh) | 卫星地图与航拍图像模板匹配方法、无人机定位方法 | |
WO2024083010A1 (zh) | 一种视觉定位方法及相关装置 | |
CN112884834A (zh) | 视觉定位方法及系统 | |
CN118310500A (zh) | 地图数据采集方法、高精地图的更新方法、车辆及服务器 | |
CN116612184A (zh) | 一种基于无人机视觉的相机位姿确定方法 | |
CN115995026A (zh) | 地图生成方法、装置、电子设备和存储介质 | |
CN114858156A (zh) | 即时定位与地图构建方法及无人移动设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19822178 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112020025901 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2019822178 Country of ref document: EP Effective date: 20201230 |
|
ENP | Entry into the national phase |
Ref document number: 112020025901 Country of ref document: BR Kind code of ref document: A2 Effective date: 20201217 |