WO2019242392A1 - 一种数据库构建方法、一种定位方法及其相关设备 - Google Patents

一种数据库构建方法、一种定位方法及其相关设备 Download PDF

Info

Publication number
WO2019242392A1
WO2019242392A1 PCT/CN2019/082981 CN2019082981W WO2019242392A1 WO 2019242392 A1 WO2019242392 A1 WO 2019242392A1 CN 2019082981 W CN2019082981 W CN 2019082981W WO 2019242392 A1 WO2019242392 A1 WO 2019242392A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
scene feature
database
information
scene
Prior art date
Application number
PCT/CN2019/082981
Other languages
English (en)
French (fr)
Inventor
柴勋
王军
周经纬
邓炯
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to EP19822178.0A priority Critical patent/EP3800443B1/en
Priority to BR112020025901-2A priority patent/BR112020025901B1/pt
Publication of WO2019242392A1 publication Critical patent/WO2019242392A1/zh
Priority to US17/126,908 priority patent/US11644339B2/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3859Differential updating map data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3602Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3848Data obtained from both position sensors and additional sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/52Network services specially adapted for the location of the user terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/003Locating users or terminals or network equipment for network management purposes, e.g. mobility management locating network equipment

Definitions

  • the present application relates to the field of communications, and in particular, to a database construction method, a positioning method, and related equipment.
  • a visual positioning method is proposed.
  • the principle is to establish a database by identifying a feature point that matches the real-time scene and the same scene in the database.
  • the database stores scene key frames and scene feature points. Key frames are used to represent real-world images.
  • Scene feature points are visual scene feature points extracted from scene key frames, and scene feature points belong to key frames.
  • the scene feature point also has a descriptor to describe the scene feature point, and the scene feature point has different descriptor information under different natural conditions.
  • the embodiment of the present application discloses a database construction method, a positioning method, and related equipment, which are used to construct a database according to the second scene feature point information corresponding to the target natural condition information, so that the database can be used for positioning with more accurate positioning.
  • a first aspect of the present application provides a database construction method, including:
  • a target image set that satisfies a preset coincidence degree requirement is determined from the image set.
  • the image set is acquired according to the preset distance interval, and whether the coincidence degree of the image acquired according to the preset distance interval meets the requirements.
  • the image set is acquired according to the preset angular interval, and then calculated according to the preset Whether the coincidence degree of the images obtained by setting the angle interval meets the requirements.
  • the image refers to the image of the mobile device and its surrounding environment.
  • the image acquisition method can be obtained by installing a camera on the mobile device, and the mobile device can also have an image acquisition function, which is not limited here.
  • the target image set includes at least one image, and each image is captured under a unique natural condition, so each image corresponds to a type of natural condition information.
  • the way of determining the natural condition information is: the mobile device obtains the position information of the mobile device through the global positioning system GPS, lidar, millimeter wave radar, and / or inertial measurement unit IMU, and then sends the position information to the climate server to obtain the natural conditions of the current position. information.
  • the network device analyzes and processes the target image set to obtain scene feature point information.
  • each image in the target image set corresponds to a unique type of natural condition information.
  • Each scene feature point in the scene feature point set is separately related to the natural condition information.
  • Corresponding relationships are established to obtain scene feature point information sets.
  • the scene feature point information set includes at least one scene feature point information, and each scene feature point information includes a 3D coordinate, a pixel coordinate, a key frame ID, and a descriptor information of the scene feature point, where each descriptor includes a This kind of natural condition information, the 3D coordinates, pixel coordinates and key frame ID of the scene feature points are static indicators, and the descriptor information is a dynamic indicator, which will change with the natural conditions.
  • the process of visual positioning is the process of determining the same scene feature points through the comparison of scene feature points, and the stationary scene feature points are generally used as the scene feature points for comparison to make the positioning more accurate.
  • the life value calculation is needed.
  • the size of the life value is used to represent the probability that the scene feature point is a static scene feature point. The larger the life value, the greater the probability that the scene feature point is a static feature point, and vice versa.
  • the same scene feature point may be captured by a single mobile device or may be captured by multiple mobile devices.
  • First determine the scene feature point information set. For a single mobile device, that is, when the scene feature point is observed by a mobile device, Scene feature points whose health value is greater than the first preset health value threshold, and then determine the first scene feature point information. For multiple mobile devices, that is, when the scene feature points are observed by two or more mobile devices, the health value If the scene feature point is greater than the second preset health threshold, the scene feature point is the second scene feature point.
  • This embodiment has the following advantages: After determining a target image set that meets the requirements of the preset image coincidence degree, the scene feature point information set is determined according to the target image set and the natural condition information corresponding to each image in the graphic set, and then the scene is determined to obtain the scene Second scene feature point information corresponding to a scene feature point in a feature information set where a single mobile device has a life value greater than a first preset life value threshold and a plurality of mobile devices have a life value greater than a second preset life value threshold, When the second scene feature point information does not match the scene feature point information preset in the database, a database is constructed according to the second scene feature point information.
  • the second scene feature point information is obtained by filtering in the foregoing manner, when the second scene feature point information related to certain natural condition information does not exist in the database, the second scene feature point information is used to construct the second scene feature point information.
  • the database makes the positioning more accurate when the constructed database is used for positioning.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database specifically includes: the second scene feature does not exist in the database Point information.
  • the second scene feature point information is stored in the database and the stored second scene feature point information is stored. It includes the 3D coordinates, pixel coordinates, and key frame ID and descriptor information of the second scene feature point.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database specifically includes: the second scene feature point information exists in the database, However, the second scene feature point information does not include target descriptor information about target natural condition information.
  • the second scene feature point information in the database is the 3D coordinates, pixel coordinates, the key frame ID, and the descriptor 1 of the second scene feature point.
  • the second scene feature point information determined in the image is 3D coordinates, pixel coordinates, key frame IDs, and descriptor 2 information of the second scene feature points.
  • the target descriptor information related to the target natural condition information needs to be added to the second scene feature point of the database, that is, the information of descriptor 2.
  • the method further includes :
  • the difference between the preset 3D coordinates of any scene feature point in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold value, it is determined that the second scene feature point exists in the database, and it also exists Feature point information of the second scene.
  • the method further includes:
  • the second scene feature point information exists in the database, determine at least one descriptor of the second scene feature point information preset in the database, and determine whether there is at least one descriptor in the at least one descriptor corresponding to the target descriptor information in the image
  • the distance of the descriptor is less than the preset distance threshold
  • the distance between the descriptors preset in the database and the descriptors corresponding to the target descriptor information in the second scene feature point information is greater than the second preset distance threshold, determine whether the second scene feature point information preset in the database is not Include target descriptor information.
  • a scene feature point has a life value f of a single mobile device, where f
  • the calculation formula is:
  • n represents the number of times a scene feature point is observed in a single mobile device.
  • a single mobile device can perform multiple experiments to obtain images, so a scene feature point may be observed multiple times by a single mobile device.
  • Said n 0 is the average of the number of times that any scene feature point obtained by model training was observed by at least one mobile device
  • is any scene feature point obtained by model training that was observed by at least one mobile device in advance The degree of variance.
  • the life value calculation formula of the scene feature point on a single mobile device is described, which increases the implementability of the solution.
  • a scene feature point may be captured by multiple mobile devices, and the scene feature points have multiple life values on multiple mobile devices.
  • F the calculation formula of F is: The f is the life value of the feature point of the scene on a single mobile device, the B is a weight coefficient corresponding to each mobile device in the multiple mobile devices, and one mobile device in the multiple mobile devices corresponds to one weight coefficient.
  • a scene feature point is captured by three mobile devices, and each of the three mobile devices corresponds to a weight coefficient. Then the life value of the scene feature point on a single mobile device is multiplied by the weight coefficient of the mobile device. The three mobile devices are then summed to obtain the life value of the scene feature point when it is observed by multiple mobile devices.
  • the plurality of mobile devices represent at least two mobile devices.
  • life value calculation formulas of the scene feature points on multiple mobile devices are described, which increases the implementability of the solution.
  • ⁇ t is different from the mobile device to the time between the observed feature points spaced about the same scene
  • geometric continuity index ⁇ g is observed with different mobile devices Euclidean distance between the feature point related to the same scene, and described consistency ⁇ c
  • the description distance between feature points of the same scene observed by different mobile devices is related.
  • the determining a target image set that meets a preset image coincidence degree requirement includes:
  • the image is first selected according to the preset distance interval d k + 1.
  • the selected image is determined as the target image.
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is not within the preset accuracy range, it is calculated according to the difference between the coincidence degree of the image corresponding to the distance interval d k + 1 and the preset coincidence degree threshold. It is also necessary to increase or decrease the distance interval selection image based on the distance interval d k + 1 in order to make the selected image meet the preset coincidence degree requirement, so as to obtain the distance interval d k + 2 of the next selected image, and then use d k + 2 as the distance of the selected image interval repeat the above steps, if the selected image d k + 2 satisfy the coincidence requirement, then d k + 2 in accordance with the selected image to obtain the target image set.
  • the preset distance interval d k + 1 the preset distance interval d k + 1 ;
  • the distance interval of the images is selected, and ⁇ is the coincidence degree between the images calculated when the images are selected according to the distance interval d k .
  • the determining a target image set that meets a preset image coincidence degree requirement includes:
  • the image is first selected according to the preset angle interval ⁇ k + 1.
  • the difference between the coincidence degree of the selected image and the preset coincidence threshold is within the preset accuracy range, the selected image is determined as the target. Images, and then repeatedly selecting images at preset angular intervals to obtain a target image set.
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is calculated and calculated. How many angular interval selection images need to be increased or decreased based on the angular interval ⁇ k + 1 in order to make the selected image meet the preset coincidence degree requirement, so as to obtain the angular interval ⁇ k + 2 of the next selected image, and then ⁇ k + 2 as the angle of the selected image interval repeat the above steps, if the selected image ⁇ k + 2 satisfy the requirement coincidence, the ⁇ k + 2 in accordance with the selected image to obtain the target image set.
  • the preset angle interval ⁇ k + 1 is a preset angle interval ⁇ k + 1 .
  • the scene feature point information includes a feature corresponding to the natural condition information.
  • Descriptor information, and obtaining the scene feature point information set according to the target image set and the natural condition information includes:
  • a network device processes a target image to obtain scene feature points of the target image
  • the scene feature point information also includes information such as 3D coordinates and pixel coordinates of the scene feature points.
  • the target image to which the scene feature point belongs may be multiple, that is, the scene feature point is included in multiple target images, and the natural condition information corresponding to each target image is generally different, so there may be multiple scene feature point information.
  • the descriptor information does not exclude that the natural condition information corresponding to two or more target images is the same.
  • Steps 1> and 2> are repeatedly performed, and the foregoing processing is performed on each image until the scene feature point information set is obtained.
  • the method for determining the scene feature point information set is described, and the integrity of the solution is increased.
  • the method further includes:
  • the third scene feature point information is deleted from the database, and the size of the FNCS may represent the scene The probability that the feature points are used in positioning and how many descriptors are included in the scene feature point information.
  • the preset FNCS can be determined through multiple experiments in advance.
  • Scene feature points with low FNCS values can be deleted to facilitate database management.
  • the calculation formula of the feature quantity control score FNCS is: Said Is the probability that the scene feature point is used in positioning, M is the total number of times the scene feature point is located, and m i is the number of times that the scene feature point is used in the positioning. The ratio of the number of descriptors of the scene feature points to the total number of descriptors in the image to which the scene feature points belong.
  • a second aspect of the embodiments of the present application provides a positioning method, which is characterized in that it includes:
  • the network device acquires the real-time image of the mobile device.
  • the real-time image refers to the image of the mobile device and its surrounding environment.
  • the real-time image can be acquired by installing a camera on the mobile device.
  • the mobile device can also have The image acquisition function is not limited here.
  • the real-time image is analyzed and processed to obtain at least one first descriptor.
  • the first descriptor includes a mobile device or an external camera of the mobile device.
  • the mobile device is located.
  • Location target natural condition information When the real-time image is captured, the mobile device is located.
  • the real-time image contains at least one feature point information.
  • the natural condition information when the real-time image is taken is certain. Therefore, one feature point information includes only one descriptor information, and the real-time image obtains at least one descriptor.
  • the information contains the same information about the natural conditions of the target.
  • the target natural condition information may be determined by a network device, or may be determined by a mobile device and sent to the network device.
  • the target natural condition information is determined in the following manner: first, the network device or the mobile device determines the location information of the mobile device when the real-time image is captured, It can be determined by the global positioning system GPS, lidar, millimeter wave radar, and / or inertial measurement unit IMU, and the network device or mobile device can then determine the target natural condition information based on the position information.
  • the comparison method is specifically:
  • the descriptor information preset in the database is obtained after the database is constructed.
  • the network device determines a target image set that meets the requirements of the preset image coincidence, the network device determines the target image set according to the target image set and the target image set.
  • a set of scene feature point information is obtained from the natural condition information corresponding to each image. From the scene feature point information set, first scene feature point information corresponding to a first scene feature point that meets a preset life value requirement is selected to determine the first scene.
  • the second descriptor information corresponding to the target natural condition information in the feature point information is used to construct a database according to the second descriptor information when the second descriptor information does not match the descriptor information preset in the database.
  • the specific process of constructing the database is similar to the process of constructing the database of the first aspect described in this application, and details are not described herein again.
  • the real-time image is visually located according to the same descriptor information.
  • the locating the real-time image by using the same descriptor information includes:
  • the first scene feature point information After determining the same descriptor information in the real-time image as in the database, determine the first scene feature point information to which the same descriptor information belongs in the database, search the database to obtain the 3D coordinates, pixel coordinates and other information of the first scene feature point, and then combine The first scene feature point information and the positioning calculation formula obtain the position of the target mobile device when shooting a real-time image.
  • the positioning calculation formula is:
  • the position of the target mobile device when the real-time image is captured Which said Is the pixel coordinates of the feature point of the first scene in the real-time image, ⁇ C is the internal parameter matrix of the camera, and ⁇ C is used to convert 3D coordinates into pixel coordinates.
  • Feature points in the database The pose of the image relative to the world coordinate system, said Is the pixel coordinates of the first scene feature point in the database, the value of i is 1 to n, the n is a positive integer, the first scene feature point and the first scene feature point correspond.
  • a total of n scene feature points in the real-time image match the scene feature points in the database.
  • the pixel coordinates obtained from the transformation match the pixel coordinates of the scene feature points in the database Consistent, subtracting the two results in a reprojection error model.
  • the real-time pose of the car can be obtained.
  • the method further includes:
  • a manner of comparing the descriptor information is similar to the manner of comparing the descriptor information described in the second aspect, and details are not described herein again.
  • the descriptor information of different descriptors is added to the database, so that the database can be used for positioning more accurately.
  • the database can be updated according to different descriptor information, so that the database is more complete and the positioning is more accurate.
  • the according to The constructing the database by the different descriptor information includes:
  • the second scene feature point information When the second scene feature point does not exist in the database, the second scene feature point information also does not exist.
  • the second scene feature point is a scene feature point to which different descriptors belong, so at this time, it is necessary to add a different description to the database.
  • the second scene feature point information of the sub information In this embodiment, the number of different descriptors may be multiple.
  • a second scene feature point information of a real-time image can only contain a different descriptor. Therefore, a second scene feature point needs to be added to the database.
  • the number of information is also plural.
  • the Constructing the database by describing different descriptor information includes:
  • the second scene feature point information When the second scene feature point exists in the database, the second scene feature point information also exists, but the second scene feature point information in the database is different from the second scene feature point information in the real-time image, that is, the second scene feature point The information does not include the different descriptor information.
  • the different descriptor information needs to be added to the second scene feature point information of the database.
  • the second scene feature point information of the database is 3D coordinates, pixel coordinates, and key frame IDs and descriptors 1 of the second scene feature points.
  • the second scene feature point information determined in the real-time image is the 3D coordinates of the second scene feature points, pixel coordinates, and the key frame ID and descriptor 2 information.
  • the information of the descriptor 2 needs to be added to the second scene feature point information of the database.
  • the method further includes:
  • the difference between the preset 3D coordinates of any scene feature point in the database and the 3D coordinates of the second scene feature point is less than the first preset threshold value, it is determined that the second scene feature point exists in the database, and also Feature point information of the second scene.
  • the three aspects of the present application provide a database, which is deployed on a server;
  • the database is formed by the second scene feature point information that does not match the preset scene feature point information in the database, and the second scene feature point information is the life value of the first scene feature point information on multiple mobile devices.
  • Scene feature point information corresponding to a scene feature point that is larger than a second preset health value threshold where the first scene feature point information is a set of scene feature point information whose life value on a single mobile device is greater than the first preset health value threshold
  • Scene feature point information corresponding to a scene feature point the scene feature point information set is obtained according to a target image set and natural condition information corresponding to each image, and the scene feature point set includes at least one scene feature point information
  • the The target image set includes at least one image that satisfies the requirement of the coincidence degree of the preset image, and each image corresponds to a type of natural condition information.
  • the process of forming the database in this embodiment is similar to the process of constructing the database in the first aspect, and details are not repeated here.
  • the completed database can be used for visual positioning, making the positioning more accurate.
  • the server further includes a processor
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes: the second scene feature point information does not exist in the database;
  • the forming of the database from the feature information of the second scene includes:
  • the second scene feature point information When the second scene feature point information does not exist in the database, the second scene feature point information is added to the database, so the database is formed by the server adding the second scene feature point information to the database.
  • the feature information of the second scene includes target descriptor information about target natural condition information.
  • the server further includes a processor
  • the mismatch between the second scene feature point information and the preset scene feature point information in the database includes: the second scene feature point information exists in the database, and the second scene feature point information does not include information about the target Target descriptor information for natural condition information;
  • the forming of the database from the feature information of the second scene includes:
  • the scene feature point information preset in the database is the same as the second scene feature point information, but the descriptor information of the two is different, different descriptor information is added to the database, so the database is stored in the database by the server.
  • the target descriptor information is added to the preset second scene feature point information.
  • a fourth aspect of the present application provides a network device, including:
  • a determining unit configured to determine a target image set that meets a requirement of a preset image coincidence degree, the target image set includes at least one image, and each image corresponds to a type of natural condition information;
  • a processing unit configured to obtain a scene feature point information set according to the target image set and natural condition information corresponding to each image, where the scene feature point set includes at least one scene feature point information;
  • the determining unit is further configured to determine, in the scene feature point information set, first scene feature point information corresponding to a scene feature point whose life value of a single mobile device is greater than a first preset life value threshold, the life value
  • the size is used to represent the probability that the scene feature point is a static scene feature point
  • the determining unit is further configured to determine, from the first scene feature point information, second scene feature point information corresponding to a scene feature point whose life value of multiple mobile devices is greater than a second preset life value threshold;
  • a database construction unit is configured to construct the database according to the second scene feature point information when the second scene feature point information does not match the scene feature point information preset in the database.
  • the second scene feature point information is obtained by filtering in the foregoing manner, when the second scene feature point information related to certain natural condition information does not exist in the database, the second scene feature point information is used to construct the second scene feature point information.
  • the database makes the positioning more accurate when the constructed database is used for positioning.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
  • the second scene feature point information does not exist in the database
  • the database construction unit is specifically configured to add the second scene feature point information to the database, and the second scene feature point information includes target descriptor information about target natural condition information.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
  • the second scene feature point information exists in the database, and the second scene feature point information does not include target descriptor information about target natural condition information;
  • the database construction unit is specifically configured to add the target descriptor information to the second scene feature point information preset in the database.
  • the determining unit is further configured to perform all operations corresponding to the feature point information of the second scene.
  • the determination unit is further configured to determine at least one description of the second scene feature point preset in a database Sub-information
  • the network device further includes:
  • a judging unit configured to judge whether there is a distance between a descriptor corresponding to the descriptor information and a descriptor corresponding to the target descriptor information in the at least one descriptor information being smaller than a preset distance threshold;
  • the determining unit is further configured to determine that if the distance between the descriptor corresponding to the descriptor information and the descriptor corresponding to the target descriptor information is smaller than a preset distance threshold in the at least one descriptor information, The set second scene feature point information does not include target descriptor information.
  • the scene feature point has a life value f of a single mobile device
  • the calculation formula of f is: Wherein, n represents the number of times that the scene feature point is observed in a single mobile device, n 0 is an average value of the number of times that a preset scene feature point is observed, and ⁇ is a preset scene The variance of the number of times a feature point was observed.
  • the life value calculation formula of the scene feature point on a single mobile device is described, which increases the implementability of the solution.
  • the life value of the scene feature point on multiple mobile devices is F
  • the calculation formula of F is:
  • the f is the life value of the scene feature point on a single mobile device
  • the B is a weight coefficient corresponding to each mobile device
  • one mobile device of the plurality of mobile devices corresponds to one weight coefficient.
  • life value calculation formulas of the scene feature points on multiple mobile devices are described, which increases the implementability of the solution.
  • the formula for the ⁇ i ⁇ t + ⁇ g + ⁇ c, the gamma] t is the time continuity index of the scene feature points observed on multiple mobile devices, ⁇ g is the geometric continuity index of the scene feature points observed on multiple mobile devices, and ⁇ c is The scene feature point is a description consistency index observed on multiple mobile devices.
  • the determining unit is specifically configured to select an image according to a preset distance interval .
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
  • the preset distance interval d k + 1 the preset distance interval d k + 1 ;
  • the distance interval of the images is selected at time, and ⁇ is the degree of overlap between the images when the images are selected according to the distance interval d k .
  • the determining unit is specifically configured to select an image according to a preset angle interval ;
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
  • the preset angle interval ⁇ k + 1 the preset angle interval ⁇ k + 1 ;
  • the angular interval of the images is selected at time, and ⁇ is the degree of coincidence between the images when the images are selected according to the angular interval ⁇ k .
  • the scene feature point information includes information related to the natural condition Corresponding descriptor information, the processing unit is specifically used for 1> processing a target image to obtain scene feature points;
  • the method for determining the scene feature point information set is described, and the integrity of the solution is increased.
  • the determining unit is further configured to determine the Feature point information of the third scene in the database;
  • the database construction unit is further configured to delete the third scene feature from the database when the feature quantity control score FNCS of the third scene feature point corresponding to the third scene feature point information is less than a preset FNCS threshold. Point information.
  • Scene feature points with low FNCS values can be deleted to facilitate database management.
  • the calculation formula of the feature quantity control score FNCS is: Said Is the probability that the scene feature point is used in positioning, the The ratio of the number of descriptors of the scene feature points to the total number of descriptors in the image to which the scene feature points belong.
  • a fifth aspect of the present application provides a positioning method, where the method is applied to a visual positioning system, and the method includes:
  • a determining unit configured to determine at least one first descriptor information according to the real-time image, where the first descriptor includes target natural condition information when the real-time image is captured;
  • the determining unit is further configured to compare the descriptor information preset in the database with the at least one first descriptor information to determine the same descriptor information, and the descriptor information preset in the database is determined by the After the network device determines a target image set that satisfies a preset image coincidence degree requirement, the scene feature point information set is obtained according to the target image set and natural condition information corresponding to each image in the target image set, and the scene feature points are obtained from the scene feature points.
  • the first scene feature point information corresponding to the first scene feature point that satisfies the preset life value requirements is selected from the information set, and the second scene descriptor information corresponding to the target natural condition information in the first scene feature point information is used to construct the It is obtained after the database is described that the second descriptor information does not match the descriptor information preset in the database, and the scene feature point information includes descriptor information corresponding to the natural condition information;
  • a positioning unit configured to use the same descriptor information to locate the real-time image.
  • the positioning unit is specifically configured to determine first scene feature point information corresponding to the same descriptor information in a database
  • the position of the target mobile device when the real-time image is captured is calculated and calculated according to the first scene feature point information and a positioning calculation formula.
  • the positioning calculation formula is: a position of the target mobile device when the real-time image is captured
  • the determining unit is further configured to preset a description in a database. Comparing the sub-information with the at least one first descriptor information to determine different descriptor information;
  • the network device further includes a database construction unit
  • the database construction unit is specifically configured to construct the database according to the different descriptor information.
  • a database may be displayed according to different descriptor information, so that the database is more complete, and the positioning is more accurate.
  • the A database construction unit is specifically configured to add the second scene feature point information including the different descriptor information to the database.
  • the constructing A unit specifically configured to add the different descriptor information to the second scene feature point information of the database.
  • the determining unit is further configured to determine the second scene corresponding to the feature information of the second scene 3D coordinates of feature points;
  • a sixth aspect of the present application provides a network device, characterized in that the network device includes: a memory, a transceiver, a processor, and a bus system;
  • the memory is used for storing a program
  • the processor is configured to execute a program in the memory, and includes the following steps:
  • the target image set includes at least one image, and each image corresponds to a type of natural condition information
  • the bus system is configured to connect the memory and the processor to enable the memory and the processor to communicate.
  • the second scene feature point information is obtained by filtering in the foregoing manner, when the second scene feature point information related to certain natural condition information does not exist in the database, the second scene feature point information is used to construct the second scene feature point information.
  • the database makes the positioning more accurate when the constructed database is used for positioning.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
  • the second scene feature point information does not exist in the database
  • the processor is specifically configured to:
  • the second scene feature point information is added to a database, and the second scene feature point information includes target descriptor information about target natural condition information.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
  • the second scene feature point information exists in the database, and the second scene feature point information does not include target descriptor information about target natural condition information;
  • the processor is specifically configured to:
  • the target descriptor information is added to the second scene feature point information preset in the database.
  • the determining unit and the processor are further configured to:
  • the determining unit and the processor are further configured to:
  • the scene feature point has a life value f of a single mobile device
  • the calculation formula of f is: Wherein, n represents the number of times that the scene feature point is observed in a single mobile device, n 0 is an average value of the number of times that a preset scene feature point is observed, and ⁇ is a preset scene The variance of the number of times a feature point was observed.
  • the life value calculation formula of the scene feature point on a single mobile device is described, which increases the implementability of the solution.
  • the life value of the scene feature point on multiple mobile devices is F
  • the calculation formula of F is:
  • the f is the life value of the scene feature point on a single mobile device
  • the B is a weight coefficient corresponding to each mobile device
  • one mobile device of the plurality of mobile devices corresponds to one weight coefficient.
  • life value calculation formulas of the scene feature points on multiple mobile devices are described, which increases the implementability of the solution.
  • the processor is specifically configured to:
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
  • the preset distance interval d k + 1 the preset distance interval d k + 1 ;
  • the distance interval of the images is selected at time, and ⁇ is the degree of overlap between the images when the images are selected according to the distance interval d k .
  • the processor is specifically configured to:
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
  • the preset angle interval ⁇ k + 1 the preset angle interval ⁇ k + 1 ;
  • the angular interval of the images is selected at time, and ⁇ is the degree of coincidence between the images when the images are selected according to the angular interval ⁇ k .
  • the scene feature point information includes information related to the natural condition Corresponding descriptor information, the processor is specifically configured to:
  • the method for determining the scene feature point information set is described, and the integrity of the solution is increased.
  • the processor is further configured to:
  • the third scene feature point information is deleted from the database.
  • Scene feature points with low FNCS values can be deleted to facilitate database management.
  • the calculation formula of the feature quantity control score FNCS is: Said Is the probability that the scene feature point is used in positioning, the The ratio of the number of descriptors of the scene feature points to the total number of descriptors in the image to which the scene feature points belong.
  • a seventh aspect of the present application provides a network device, which belongs to a vision positioning system, and the network device includes: a memory, a transceiver, a processor, and a bus system;
  • the transceiver is configured to acquire a real-time image
  • the memory is used for storing a program
  • the processor is configured to execute a program in the memory, and includes the following steps:
  • the descriptor information preset in the database is determined by the network device to satisfy the preset image overlap
  • the scene feature point information set is obtained according to the target image set and the natural condition information corresponding to each image in the target image set, and a preset life is selected from the scene feature point information set.
  • the first scene feature point information corresponding to the first scene feature point required by the value is obtained by constructing the database according to the second descriptor information corresponding to the target natural condition information in the first scene feature point information.
  • Second descriptor information does not match the descriptor information preset in the database, and the scene feature point information includes descriptor information corresponding to the natural condition information;
  • the bus system is configured to connect the memory and the processor to enable the memory and the processor to communicate.
  • the processor is specifically configured to:
  • the position of the target mobile device when the real-time image is captured is calculated and calculated according to the first scene feature point information and a positioning calculation formula.
  • the positioning calculation formula is: a position of the target mobile device when the real-time image is captured
  • the processor is further configured to:
  • a database may be displayed according to different descriptor information, so that the database is more complete, and the positioning is more accurate.
  • the The processor is specifically used for:
  • the second scene feature point information including the different descriptor information is added to the database.
  • the processing The device is specifically used for:
  • the different descriptor information is added to the second scene feature point information of the database.
  • the processor is further configured to:
  • An eighth aspect of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the methods described in the above aspects.
  • a ninth aspect of the present application provides a computer program product containing instructions that, when run on a computer, causes the computer to perform the methods described in the above aspects.
  • FIG. 1 is a schematic diagram showing a relationship between a data structure and a data type in an image database of the present application
  • FIG. 2 is a schematic diagram when an embodiment of the present application is applied to a car end
  • FIG. 3 is a schematic structural diagram of a visual positioning system of the present application.
  • FIG. 4 is a schematic diagram of another structure of a visual positioning system of the present application.
  • FIG. 5 is a schematic diagram of an embodiment of a database construction method of the present application.
  • FIG. 6 is a schematic diagram of an embodiment of selecting a target image set according to the present application.
  • FIG. 7 is a schematic diagram of an embodiment in which feature point information of a second scene is selected in this application.
  • FIG. 8 is a schematic diagram showing the relationship between the life value of scene feature points and the number of times the scene feature points are observed;
  • FIG. 9 is a schematic diagram of another embodiment of a database construction method of the present application.
  • FIG. 10 is a schematic diagram of an embodiment for determining whether a scene feature point exists in a database
  • FIG. 11 is a schematic diagram of an embodiment of a positioning method according to the present application.
  • FIG. 12 (a) is a case where the scene feature point information in the image of the present application does not match the preset scene feature point information in the database;
  • FIG. 12 (b) is another case where the scene feature point information in the image of the application does not match the preset scene feature point information in the database;
  • FIG. 13 is a schematic diagram of another embodiment of a positioning method of the present application.
  • FIG. 14 is a schematic structural diagram of a network device according to the present application.
  • FIG. 15 is another schematic structural diagram of a network device of the present application.
  • FIG. 16 is another schematic structural diagram of a network device of the present application.
  • the database stores scene key frame information, scene feature point information, and descriptor information, and the three have an association relationship.
  • the scene key frame information includes an image, a position, and an attitude.
  • a scene key frame has at least one scene feature point.
  • the scene feature point information includes ID information of the key frame of the scene to which the scene feature point belongs, pixel coordinates, 3D coordinates, and descriptor information.
  • a scene feature point has at least one descriptor. Part of the descriptor information is the scene feature point descriptor ⁇ in the traditional visual field, and the other part is the natural condition attribute E of the scene when the scene feature point is collected. When the natural condition attribute E changes, the descriptor ⁇ also changes.
  • the pixel coordinates, 3D coordinates, and ID information of the key frame of the scene to which the scene feature points belong are static attributes of the scene feature points and will not change due to changes in the external environment.
  • the descriptor information is different under different natural conditions.
  • Different natural conditions refer to different viewing directions, different weather, and / or different lighting conditions. Different natural conditions may also be other situations. limited.
  • the embodiment of the present application is mainly applied to a visual positioning system.
  • the principle of visual positioning is to compare the scene feature points of the captured image with the scene feature points in the database. If the scene feature points of the captured image are in the database, Corresponding scene feature point comparisons are consistent, they are considered to be the same scene feature points, and then the 3D coordinates of the consistent scene feature points are compared in the database for positioning.
  • This application can be applied to positioning during the movement of mobile devices such as drones, V2X car terminals, and mobile phones.
  • taking the real-time positioning of the vehicle First, determine the real-time image information of the vehicle A during the driving process and the positioning information obtained through non-visual positioning such as GPS. Vehicle A then sends the real-time image to the server. At the same time, vehicle A can send the positioning information to the server. After receiving the server, it determines the natural condition information for the location, or after vehicle A determines the natural condition information based on the positioning information, it sends the natural condition information. To the server, and then the server finds multiple descriptor information about the natural condition information in the real-time image, and then compares the determined multiple descriptors with the preset descriptors stored in the database. The database belongs to the server. Used to store scene key frames and scene feature point information.
  • the descriptors in the database are the same as the descriptors in the real-time image, the descriptors that are successfully compared in the database are found, and the descriptors are successfully compared. It is proved that the scene feature point to which the descriptor belongs is the same scene feature point.
  • the 3D coordinates of feature points in the same scene can be used by the car end to locate using the 3D coordinates of feature points in the same scene.
  • the descriptors in the real-time image may be exactly the same as the descriptors in the database. At this time, the 3D coordinates of the feature points of the scene to which the identical descriptors belong are directly used for positioning. If only part of the descriptors exist in the real-time image, the phase can be found in the database.
  • the corresponding descriptors are first positioned using the 3D coordinates of the feature points of the scene to which the same descriptor belongs. After the positioning is completed, the different descriptors are obtained, and then the different descriptors are updated to the database in order to optimize the database. , So that the optimized database positioning is more accurate in subsequent positioning.
  • the database construction process is based on a large number of images, and the scene feature points in the image are selected according to the life value algorithm to obtain a large amount of scene feature point information, and then compared in the database For existing scene feature point information, update and optimize the database, and update scene feature point information that does not exist in the database into the database.
  • the life value algorithm can accurately select representative scene feature points, making the database more accurate for visual positioning.
  • Figure 3 shows a possible structure of a visual positioning system, where the locator is used to obtain the positioning information of the mobile device, and optionally, it can also obtain the posture information; the image acquirer is used to capture the image of the mobile device; the mobile device It is used to receive the image sent by the image acquirer and the positioning information sent by the locator, and then send it to the network device.
  • the network device can also directly obtain the image and positioning information without the mobile device, that is, the network device directly communicates with the image acquirer and the locator.
  • the connection is not limited here.
  • the network device is used to compare scene feature points after receiving the image to achieve positioning, and it can also update and manage its own database.
  • one possible situation is that the mobile device sends positioning information to a network device as described above, and another possible situation is that the mobile device sends the natural condition information to the network device after determining the natural condition information according to the positioning information, and No positioning information is sent, which is not limited here.
  • the locator can be: a global positioning system, a camera, a lidar, a millimeter wave radar, and an inertial measurement unit.
  • the IMU can obtain the positioning information and the attitude of the mobile device.
  • the locator may be a component of a mobile device, or may be an external device connected to the mobile device, which is not specifically limited herein.
  • Mobile devices can be: vehicles, mobile phones and drones.
  • the image acquirer may specifically be a camera.
  • the image acquirer may be a component of a mobile device or an external device connected to the mobile device, which is not limited herein.
  • the network device may be a cloud server or a mobile device with data processing capabilities, which is not limited here.
  • a data model for visual positioning shown in FIG. 1 is preset in a database of a network device, and the data model introduces scene key frames, scene feature points, and relationships between descriptors.
  • the embodiment of the present application proposes a database construction method and a positioning method.
  • the application includes two parts. One part is the network device side database construction process. Database, making the database better for visual positioning. The other part is the process of visual positioning after the database is constructed.
  • the two parts are introduced below, and the database construction process is shown in Figure 5:
  • the network device obtains data information.
  • the data information may be image information, location information, posture, or natural condition information, which is not specifically limited herein.
  • the method for the network device to obtain the image information of the mobile device during driving is as follows: the camera can be installed on the mobile device to acquire the image captured by the camera, and the mobile device can also have an image acquisition function, and the network device acquires the image captured by the mobile device. Image. During the operation of the mobile device, the image is captured every certain period of time. The acquired image is mainly the image information of the surrounding environment during the mobile device's movement. The selected time period is manually set and can be 0.01S or 0.001S. It is not limited here.
  • the image information includes at least one image.
  • the posture and real-time location information of the mobile device are different when each image is taken.
  • the posture indicates the driving angle and direction of the mobile device.
  • the real-time location information of the mobile device can be obtained through the global positioning system GPS. , Lidar, millimeter wave radar and / or inertial measurement unit IMU.
  • a target image set that satisfies a preset coincidence degree requirement is selected according to the data information.
  • the process of determining the target image set may be: the mobile device may first filter the acquired images according to a preset coincidence degree requirement, and then send the screening result to the network device, and the target image filtering process may also be a network device Execution, that is, the network device obtains the target image set after obtaining the image, and the specific image set is not limited here.
  • the basis for determining the target image set is different when the mobile device is going straight and turning. When the car is going straight, it needs to determine the target image set that meets the requirements at a certain distance. Need to determine the set of target images that meet the requirements at a certain angular interval, the specific steps are shown in Figure 6 below:
  • a distance interval or an angular interval is defined in advance, and the expected selected image is determined according to the interval. For example, on a straight road, an image is obtained every 1 m of a car driving. Or on curved roads, an image is acquired every 5 degrees of car driving angle change.
  • the degree of coincidence ⁇ between images n old of the number of feature points in the same scene between the current image and the neighboring image / number of new feature points in the scene between the current image and the neighboring image, n new .
  • the number of scene feature points in the current image is n total
  • n total n old + n new .
  • the calculation formula is:
  • ⁇ * is the preset coincidence degree threshold
  • ⁇ * is generally taken as 1
  • ⁇ ⁇ is the preset accuracy value
  • ⁇ ⁇ ranges from 0.1 to 0.2.
  • ⁇ * and ⁇ ⁇ can also take other values, specifically this There are no restrictions.
  • the preset accuracy range is 0- to ⁇ ⁇ .
  • the distance interval of the selected image is redefined. It is first determined that the distance interval (or angular interval) ⁇ d k needs to be increased.
  • d k is the distance interval of the last selected image
  • d k and ⁇ * and ⁇ have been obtained in the above steps.
  • a new distance interval d k + k d k + ⁇ d k for selecting key frames of the scene is obtained.
  • d k + 1 is again determined as the distance interval for acquiring the keyframe image of the scene, and returns to step A to re-execute the above process until a distance interval d k + n is obtained .
  • the coincidence degree meets the preset conditions .
  • the image selected according to the distance interval d k is the target image, and multiple target images are selected according to d k to obtain the target image set.
  • the network device processes the target image set to obtain the scene feature points.
  • the scene feature points can be considered as the pixel points in the target image that have a large difference in gray value with other pixel points.
  • the target points are then determined based on the location of the mobile device when each target image is taken.
  • the natural condition information of the location and establish the correspondence between the scene feature points and the natural condition information to obtain scene feature point information. It can be understood that in addition to the natural condition information, the scene feature point information also includes 3D coordinates, pixel coordinates, and descriptor information of the scene feature points.
  • a scene feature point there may be multiple images that include the same scene feature points, so the correspondence between scene feature points and natural condition information can be one-to-one correspondence, or a scene feature point can correspond to multiple types of natural condition information.
  • the sub-information changes with the change of natural condition information, so a scene feature point information may include multiple descriptive sub-information.
  • target image 1 and target image 2 there are target image 1 and target image 2 in the target image set.
  • Target image 1 is shot on a sunny day and the light intensity is 400 lx.
  • Target image 2 is shot on a cloudy day and the light intensity is 300 lx.
  • the target image 1 has scene feature points. 1 and scene feature point 2, the target image 2 has scene feature point 2 and scene feature point 3, and the target image set is parsed to obtain scene feature point 1, scene feature point 2 and scene feature point 3.
  • the scene feature point has 1 Descriptors, corresponding to the natural condition information of the target image 1, there are two descriptors in the scene characteristic point 2, respectively corresponding to the natural condition information of the target image 1 and the natural condition information of the target image 2, and there are One descriptor corresponds to the natural condition information of the target image 2.
  • a representative scene feature point is selected.
  • the representative scene feature points can be signs, road signs, and Scene feature points related to buildings and other objects.
  • the way to select scene feature points is to select based on the life value of the scene feature points.
  • the size of the life value can represent the probability that the scene feature points are static scene feature points. The larger the life value, the scene feature points are static scene feature points. The greater the probability.
  • the process of selecting scene feature points includes:
  • the mean value n 0 and the variance ⁇ of the number of times the scene feature points are observed are determined according to FIG. 8. Then calculate the first life value of each scene feature point in the scene feature point set.
  • the calculation formula is:
  • n represents the number of times a scene feature point in a scene feature point set is observed in a single car end.
  • the scene feature point is the first scene feature point, and the first scene feature point after screening is determined from the perspective of multiple mobile devices. Whether the health requirements are met.
  • first life value of a scene feature point is less than or equal to a first preset threshold, it means that the first life value of the scene feature point is too low, and the scene feature point is discarded.
  • Determining the first scene feature point obtained by multiple mobile devices can be determined according to the 3D coordinates or pixel coordinates of the scene feature point, or it can be determined in other ways whether the scene feature points obtained by multiple mobile devices are the same scene feature
  • the points are not limited here. For example, among the scene feature points obtained by multiple mobile devices, scene feature points with the same 3D coordinates or 3D coordinate differences within a preset difference range belong to the same scene feature point.
  • f is the life value of the scene feature point on a single mobile device
  • B is the weight coefficient corresponding to each mobile device.
  • a scene feature point generally has a different weight coefficient corresponding to each mobile device.
  • ⁇ 1 and ⁇ 2 is a preset value, visible, different mobile devices of the same scene observed time interval feature points and its negative correlation ⁇ t.
  • ⁇ g and ⁇ c are similar to ⁇ t , and details are not repeated here. It should be noted that when calculating the geometric continuity index ⁇ g , ⁇ is defined as the Euclidean distance between different mobile devices to observe feature points of the same scene. When calculating the description consistency ⁇ c , ⁇ is defined as the description distance between feature points observed by different mobile devices in the same scene.
  • the first scene feature point whose second life value is greater than or equal to the second preset threshold is the second scene feature point, and the second scene feature point is a representative mature scene feature point. Information is added to the database.
  • the feature point of the first scene is discarded.
  • the second scene feature point information is compared with the scene feature point information preset in the database. If the scene feature point information preset in the database does not match the second scene feature point information, a database is constructed based on the second scene feature point information .
  • the second scene feature point information is obtained by filtering in the foregoing manner, when the second scene feature point information related to certain natural condition information does not exist in the database, the second scene feature point information is used to construct the second scene feature point information.
  • the database makes the positioning more accurate when the constructed database is used for positioning.
  • the scene feature point information includes 3D coordinates of the scene feature points, pixel coordinates, descriptor information related to natural conditions, and a key frame ID to which the scene feature points belong, where the 3D coordinates, pixel coordinates, and scene feature points belong.
  • the key frame ID represents the static index of the feature points of the scene, which is generally fixed, and the descriptor information is a dynamic index, which will change as the natural conditions change.
  • the preset feature point information in the database and the second scene A possible case where the feature point information does not match is that the second scene feature point information does not exist in the database, or the second scene feature point information exists in the database, but the second scene feature point information in the database and the second scene feature determined in the image
  • the descriptor information contained in the point information is different. Please refer to FIG. 9, which will be described below.
  • the second scene feature point information corresponding to the scene feature points whose health values of the multiple mobile devices are greater than a second preset life value threshold is determined.
  • determining whether a feature point of the second scene exists in the database specifically includes the following steps:
  • the 3D coordinates of the second scene feature point corresponding to the second scene feature point information are determined. Since the scene feature point may be observed by multiple mobile devices, first when obtaining a scene feature point viewed from multiple mobile devices, multiple The 3D coordinates of the feature points of the scene observed by the mobile device, and then the average of multiple 3D coordinates is calculated And standard deviation ⁇ , and then the 3D coordinates and mean of the scene feature points measured at each vehicle end For comparison, when the Euclidean distance between the two is greater than 3 ⁇ , it means that the 3D coordinate error measured by the car end is large, and the 3D coordinate is deleted.
  • the number of car ends is N
  • the 3D coordinates of the feature points of the same scene observed by the N car ends are 3D1, 3D2, 3D3, and 3Dn, respectively.
  • 3D1 is the 3D coordinates of the feature points of the scene. If at least one of the 3D coordinates (for example, 3D1) is The Euclidean distance between them is greater than 3 ⁇ , then delete 3D1, and then use 3D2, 3D3 to 3Dn to average it. Repeat the above steps afterwards.
  • 3 ⁇ is a fixed value and is preset by the system. The value of 3 ⁇ is not specifically limited here.
  • the scene feature point After calculating the 3D coordinates of the scene feature points, compare them with the 3D coordinates of any scene feature point in the image database. When the Euclidean distance between the two is less than ⁇ d , judge it and the scene feature point in the database. Belonging to the same scene feature point, when the 3D coordinates of each scene feature point preset in the database are compared with the 3D coordinates of the scene feature point, and the Euclidean distance is greater than the first preset threshold, the scene feature point is judged Is a new scene feature point, and new scene feature point information (ie, second scene feature point information) is added to the database.
  • the specific value of ⁇ d is not limited here. At the same time, in this embodiment, the number of feature points of the new scene is not limited.
  • the new scene feature point information added in the database includes pixel coordinates, 3D coordinates, key frame IDs, and target descriptor information of the new scene feature points.
  • the new scene feature point information may also include information other than the target descriptor information. Other descriptor information is not limited here.
  • the scene feature point 4 does not exist in the database, and the descriptor 10 of the scene feature point 4 does not exist.
  • the descriptor 10 is a different descriptor of this embodiment.
  • the information of the second scene feature point containing different descriptors that is, the information of the scene feature point 4 is added to the database.
  • Target descriptor information Specifically:
  • the preset second scene feature point information in the database includes at least one descriptor information.
  • the target descriptor information is descriptor information about target natural condition information.
  • the descriptor corresponding to the target descriptor information is the descriptor whose distance to the other descriptors is the smallest from all the descriptors of the feature points of the scene.
  • the target descriptor information corresponds to The descriptor is any one of all the descriptors of the characteristic points of the scene.
  • the target descriptor is judged to be a new descriptor.
  • the distance between the target descriptor and a certain descriptor in the database is less than or equal to the preset When it is far from the threshold, it is judged as the same descriptor.
  • the target descriptor is a new descriptor
  • the information of the target descriptor is stored in a database. If it is the same descriptor, no update processing is performed. In this embodiment, the number of new descriptors is not limited herein.
  • the database includes the same scene feature points 1, scene feature points 2 and scene feature points 3 as the target image, but the scene feature points 3 of the database do not include scene feature points 3 of the image
  • the descriptor information corresponding to different descriptors 9 is added to the database.
  • M is the total number of times the scene feature points are located, and mi represents the number of times the scene feature points are used in the positioning.
  • the calculation formula of the feature quantity control score FNCS is:
  • a camera is installed on the target mobile device, or a camera is installed at a certain distance on the road to obtain real-time image information of the target mobile device during driving. It can be understood that the real-time image obtained is a picture of the surrounding road and environment during the driving of the vehicle.
  • the camera After the camera captures the real-time image, it can be sent directly to the network device, or it can be sent to the network device via the target mobile device, which is not limited here.
  • the target mobile device may also have an image acquisition function.
  • the network device processes the real-time image to obtain at least one first descriptor.
  • the first descriptor includes target natural condition information when the real-time image is captured.
  • the target natural condition information may be determined by the network device or by the mobile device. Then send it to the network device, which is not limited here.
  • the target natural condition information when capturing real-time images is determined based on the real-time positioning information of the mobile device, and the real-time positioning information of the mobile device can be obtained through GPS, Lidar, and / or millimeter-wave radar, or inertial Obtained by the measurement unit IMU, which is not specifically limited here. After obtaining the real-time positioning information, it is determined that the natural condition information of the position is the target natural condition information.
  • the imaging of the same scene under different viewing directions, different weather and different lighting conditions is different.
  • the pixels around a corner of a sign on a road when the weather is clear are significantly different from the pixels around that corner when the weather is dark; for example, the pixels around the corners on the front of the sign and the pixels around the back are also very different. Obvious; in this way, the description of the corners of the sign board at the same location is very different in different weather, different light, and different perspectives.
  • the real-time image captured under the natural condition corresponding to the target natural condition information has unique natural condition, so a scene feature point information in the real-time image contains only one kind of descriptor information, but A real-time image has multiple scene feature points, so at least one descriptor in the real-time image includes target natural condition information.
  • One possible case is that there are N first descriptors and N There are M first descriptors in a descriptor including target natural condition information. N and M are positive integers and M is less than or equal to N.
  • Each first descriptor information in at least one first descriptor information is compared with the descriptor information preset in the database to determine the same descriptor information.
  • the real-time image includes Descriptor 1, Descriptor 2, ... Descriptor N, which are compared with the descriptors in the database.
  • Descriptor 1, Descriptor 5, ... Descriptor N-1 and Descriptor N are found in the database. The same descriptor.
  • the method for judging whether the descriptor is the same as the descriptor in the database is similar to the method for judging whether the descriptor is the same when the database is constructed, that is, according to the distance between the descriptors, the details are not described here again.
  • the descriptor information preset in the database is obtained after the database is constructed according to steps 501 to 505 of the embodiment. The details are not repeated here.
  • the real-time image is compared with the database. It is found that Descriptor 1 and Descriptor 4 can find the corresponding descriptor in the database. Then determine the feature point 1 of the scene to which Descriptor 1 belongs and the scene to which Descriptor 1 belongs to the database. Feature point 1 is the same scene feature point, while scene feature point 4 to which descriptor 4 belongs and scene feature point 4 to which descriptor 4 belongs in the database are the same scene feature points, and 3D coordinates of the same scene feature points are found: 3D coordinates 1 and 3D coordinates 2. Then use 3D coordinate 1 and 3D coordinate 2 for positioning calculation.
  • Network devices use the same scene feature point information for positioning calculations. After determining the feature points of the same scene in the database, the posture of the mobile device is obtained according to a preset algorithm.
  • the calculation formula for the positioning is:
  • ⁇ C is the internal parameter matrix of the camera, which can convert 3D coordinates into pixel coordinates.
  • the mobile device performs positioning based on the calculation result. After the network device calculates the positioning result, it returns the positioning result to the mobile device so that the mobile device can perform the positioning operation.
  • the positioning calculation can also be performed by a mobile device.
  • the network device determines the feature points of the same scene in the database, it sends the information of the feature points of the same scene to the mobile device.
  • the mobile device obtains the pose information and performs positioning according to a preset algorithm. operating.
  • the same scene feature point information sent by the network device to the mobile device specifically includes: the pixel coordinates of the scene feature points, the key frame pose of the scene feature points, and the 3D coordinates of the scene feature points. Not limited.
  • the specific process for positioning after the database is constructed is explained. After the database is obtained by constructing as shown in FIG. 5, since the database contains more descriptor information of different natural conditions, the database is used During positioning, the real-time image can be matched with the database to more of the same descriptor information, so that the positioning is more accurate.
  • the database can also be updated according to different descriptor information, so that the database can store more complete information.
  • the method further includes comparing the descriptor information preset in the database with the at least one first descriptor information, determining different descriptor information, and according to different descriptors.
  • Information to build the database Building a database based on different descriptor information specifically includes two cases:
  • the network device determines whether the second scene feature point exists in its own database, and the judgment method is similar to the way of determining whether the scene feature points are the same when the database is constructed, that is, according to the 3D coordinates, the details are not described herein again. If the second scene feature point does not exist in the database, obviously the second scene feature point information does not exist in the database.
  • FIG. 12 (a) Please refer to FIG. 12 (a).
  • the descriptor 10 is a different description of this embodiment. child.
  • the information of the second scene feature point containing different descriptors that is, the information of the scene feature point 4 is added to the database.
  • the second scene feature point information to which different descriptors belong exists in the database, but the second scene feature point information of the database does not include target descriptor information.
  • the second scene feature point information does not include the different descriptors determined above, please refer to FIG. 12 (b).
  • the database includes real-time images The same scene feature point 1, scene feature point 2, and scene feature point 3, but the information of the scene feature point 3 of the database does not include the target descriptor in the real-time image scene feature point 3, that is, descriptor 9.
  • the descriptor information corresponding to different descriptors 9 is added to the database.
  • the 3D coordinates of the second scene feature point need to be updated to the database synchronously, because the real-time image information only contains descriptor information and pixels The coordinate information does not include 3D coordinates.
  • the 3D coordinates of feature points of different scenes are added to the database.
  • the way to determine the 3D coordinates of feature points in different scenes is: after using the same descriptor to obtain real-time image positioning results, determine the 3D coordinates of feature points in different scenes with a binocular camera, or jointly with a monocular camera and IMU
  • the method for determining the 3D coordinates of feature points in different scenes is not specifically limited herein.
  • the database in the real-time positioning process, is continuously improved and updated according to different descriptor information, and there is a better use of the database for positioning.
  • the process of real-time positioning is a process of data interaction between a mobile device and a network device. Please refer to FIG. 13, which will be described below:
  • a mobile device sends a real-time image to a network device.
  • the mobile device can also send the location information of the mobile device when capturing the real-time image or the natural conditions of the location of the mobile device to the network device.
  • the network device determines at least one first descriptor information according to the real-time image.
  • the network device compares the descriptor information preset in the database with at least one first descriptor information, and determines the same descriptor information and different descriptor information.
  • the network device uses the same descriptor information to locate the real-time image.
  • the mobile device performs a positioning operation according to a positioning calculation result determined by the network device.
  • the positioning calculation operation may also be performed by a network device, which is not specifically limited herein.
  • the network device constructs a database according to different descriptor information.
  • steps 1301 to 1306 of the embodiment are similar to the steps of the embodiment shown in FIG. 11 described above, and details are not described herein again.
  • FIG. 14 a possible structure of a network device is shown in FIG. 14, including:
  • a determining unit 1401 is configured to determine a target image set that meets a requirement of a preset image coincidence degree, the target image set includes at least one image, and each image corresponds to a type of natural condition information;
  • a processing unit 1402 configured to obtain a scene feature point information set according to the target image set and natural condition information corresponding to each image, where the scene feature point set includes at least one scene feature point information;
  • the determining unit 1401 is further configured to determine, in the scene feature point information set, first scene feature point information corresponding to a scene feature point whose life value of a single mobile device is greater than a first preset life value threshold, the life The magnitude of the value is used to indicate the probability that the scene feature point is a static scene feature point;
  • the determining unit 1401 is further configured to determine, from the first scene feature point information, second scene feature point information corresponding to a scene feature point whose life value of multiple mobile devices is greater than a second preset life value threshold;
  • a database construction unit 1403 is configured to construct the database according to the second scene feature point information when the second scene feature point information does not match the scene feature point information preset in the database.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
  • the second scene feature point information does not exist in the database
  • the database construction unit 1403 is specifically configured to add the second scene feature point information to the database, where the second scene feature point information includes target descriptor information about target natural condition information.
  • the mismatch between the second scene feature point information and the scene feature point information preset in the database includes:
  • the second scene feature point information exists in the database, and the second scene feature point information does not include target descriptor information about target natural condition information;
  • the database construction unit 1403 is specifically configured to add the target descriptor information to the second scene feature point information preset in the database.
  • the determining unit 1401 is further configured to 3D coordinate of the second scene feature point corresponding to the second scene feature point information
  • the determining unit 1401 is further configured to determine at least one descriptor information of the second scene feature point preset in a database
  • the network device further includes:
  • a judging unit 1404 configured to judge whether there is a distance between a descriptor corresponding to the descriptor information and a descriptor corresponding to the target descriptor information in the at least one descriptor information being smaller than a preset distance threshold;
  • the determining unit 1401 is further configured to determine the database if a distance between a descriptor corresponding to the descriptor information and the descriptor corresponding to the target descriptor information is smaller than a preset distance threshold in the at least one descriptor information.
  • the preset second scene feature point information does not include target descriptor information.
  • the life value of the scene feature point on a single mobile device is f
  • the calculation formula of f is:
  • n represents the number of times that the scene feature point is observed in a single mobile device
  • n 0 is an average value of the number of times that a preset scene feature point is observed
  • is a preset scene The variance of the number of times a feature point was observed.
  • the life value of the scene feature point on multiple mobile devices is F
  • the calculation formula of F is:
  • the f is the life value of the scene feature point on a single mobile device
  • the B is a weight coefficient corresponding to each mobile device
  • one mobile device of the plurality of mobile devices corresponds to one weight coefficient.
  • the determining unit 1401 is specifically configured to select an image according to a preset distance interval
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
  • the preset distance interval d k + 1 the preset distance interval d k + 1 ;
  • the distance interval of the images is selected at time, and ⁇ is the degree of overlap between the images when the images are selected according to the distance interval d k .
  • the determining unit 1401 is specifically configured to select an image according to a preset angular interval
  • the difference between the coincidence degree of the selected image and the preset coincidence degree threshold is within a preset accuracy range, it is determined that the selected image belongs to a target image set.
  • the preset angle interval ⁇ k + 1 is the preset angle interval ⁇ k + 1 ;
  • the angular interval of the images is selected at time, and ⁇ is the degree of coincidence between the images when the images are selected according to the angular interval ⁇ k .
  • the scene feature point information includes descriptor information corresponding to the natural condition information, and the processing unit 1402 is specifically configured to 1> process a target image to obtain scene feature points;
  • the determining unit 1401 is further configured to determine third scene feature point information in the database after construction is completed;
  • the database construction unit 1403 is further configured to delete the third scene from the database when the feature quantity control score FNCS of the third scene feature point corresponding to the third scene feature point information is less than a preset FNCS threshold. Feature point information.
  • the calculation formula of the feature quantity control score FNCS is: Said Is the probability that the scene feature point is used in positioning, the The ratio of the number of descriptors of the scene feature points to the total number of descriptors in the image to which the scene feature points belong.
  • FIG. 15 Another possible structure of the network device is shown in FIG. 15:
  • An obtaining unit 1501 configured to obtain a real-time image
  • a determining unit 1502 configured to determine at least one first descriptor information according to the real-time image, where the first descriptor information includes target natural condition information when the real-time image is captured;
  • the determining unit 1502 is further configured to compare the descriptor information preset in the database with the at least one first descriptor information to determine the same descriptor information, and the descriptor information preset in the database is determined by the After the network device determines a target image set that satisfies a preset image coincidence degree requirement, the scene feature point information set is obtained according to the target image set and natural condition information corresponding to each image in the target image set, and from the scene characteristics The first scene feature point information corresponding to the first scene feature point that satisfies the preset life value requirements is selected from the point information set, and then is constructed based on the second descriptor information corresponding to the target natural condition information in the first scene feature point information. It is obtained after the database that the second descriptor information does not match the descriptor information preset in the database, and the scene feature point information includes descriptor information corresponding to the natural condition information;
  • a positioning unit 1503 is configured to use the same descriptor information to locate the real-time image.
  • the positioning unit 1503 is specifically configured to determine first scene feature point information corresponding to the same descriptor information in a database
  • the position of the target mobile device when the real-time image is captured is calculated and calculated according to the first scene feature point information and a positioning calculation formula.
  • the positioning calculation formula is: the position of the target mobile device when the real-time image is captured Which said Is the pixel coordinates of the feature point of the first scene in the real-time image, ⁇ C is the internal parameter matrix of the camera, and ⁇ C is used to convert 3D coordinates into pixel coordinates.
  • Feature points in the database The pose of the image relative to the world coordinate system, said Is the pixel coordinates of the first scene feature point in the database, the value of i is 1 to n, the n is a positive integer, the first scene feature point and the first scene feature point information correspond.
  • the determining unit 1502 is further configured to compare the descriptor information preset in the database with the at least one first descriptor information to determine different descriptor information;
  • the network device further includes a database construction unit 1504;
  • the database construction unit 1504 is specifically configured to construct the database according to the different descriptor information.
  • the database construction unit 1504 is specifically configured to add the different descriptor information to the database.
  • the second scene feature point information is specifically configured to add the different descriptor information to the database.
  • the database constructing unit 1504 is specifically configured to add the second scene feature point information to the database. Describe different descriptor information.
  • the determining unit 1502 is further configured to determine a 3D coordinate of the second scene feature point corresponding to the second scene feature point information
  • An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a program, and the program executes including some or all of the steps described in the foregoing method embodiments.
  • the network device 1600 includes:
  • the receiver 1601, the transmitter 1602, the processor 1603, and the memory 1604 (the number of processors 1603 in the network device 1600 may be one or more, and one processor is taken as an example in FIG. 16).
  • the receiver 1601, the transmitter 1602, the processor 1603, and the memory 1604 may be connected through a bus or other manners. In FIG. 16, a connection through a bus is taken as an example.
  • the memory 1604 may include a read-only memory and a random access memory, and provide instructions and data to the processor 1603. A part of the memory 1604 may further include a non-volatile random access memory (English full name: Non-Volatile Random Access Memory, English abbreviation: NVRAM).
  • the memory 1604 stores an operating system and operation instructions, executable modules or data structures, or a subset thereof, or an extended set thereof.
  • the operation instructions may include various operation instructions for implementing various operations.
  • the operating system may include various system programs for implementing various basic services and processing hardware-based tasks.
  • the processor 1603 controls the operation of the network device.
  • the processor 1603 may also be referred to as a central processing unit (full name in English: Central Processing Unit, English for short: CPU).
  • CPU Central Processing Unit
  • the various components of the network equipment are coupled together through a bus system.
  • the bus system may include a power bus, a control bus, and a status signal bus in addition to the data bus.
  • various buses are called bus systems in the figure.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the processor 1603, or implemented by the processor 1603.
  • the processor 1603 may be an integrated circuit chip and has a signal processing capability.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1603 or an instruction in the form of software.
  • the above processor 1603 may be a general-purpose processor, a digital signal processor (full English name: digital processing, English abbreviation: DSP), an application specific integrated circuit (full English name: Application Specific Integrated Circuit, English abbreviation: ASIC), field programmable Gate array (full name in English: Field-Programmable Gate Array, English abbreviation: FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • DSP digital signal processor
  • ASIC Application Specific Integrated Circuit
  • FPGA field programmable Gate array
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • a software module may be located in a mature storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, and the like.
  • the storage medium is located in the memory 1604, and the processor 1603 reads the information in the memory 1604 and completes the steps of the foregoing method in combination with its hardware.
  • the receiver 1601 can be used to receive input digital or character information, and generate signal inputs related to network device related settings and function control.
  • the transmitter 1602 can include display devices such as a display screen, and the transmitter 1602 can be used to output numbers through an external interface Or character information.
  • the processor 1603 is configured to execute the foregoing database construction method and positioning method.
  • the device embodiments described above are only schematic, and the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be A physical unit can be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • the connection relationship between the modules indicates that there is a communication connection between them, which can be specifically implemented as one or more communication buses or signal lines.
  • this application can be implemented by means of software plus necessary general hardware, and of course, it can also be implemented by dedicated hardware including dedicated integrated circuits, dedicated CPUs, dedicated memories, Dedicated components and so on.
  • all functions performed by computer programs can be easily implemented with corresponding hardware, and the specific hardware structure used to implement the same function can also be diverse, such as analog circuits, digital circuits, or special purpose circuits. Circuit, etc.
  • software program implementation is a better implementation for this application.
  • the technical solution of the present application that is essentially or contributes to the existing technology can be embodied in the form of a software product, which is stored in a readable storage medium, such as a computer's floppy disk , U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or CD-ROM, etc., including several instructions to make a computer device (can be A personal computer, a server, or a network device, etc.) execute the methods described in the embodiments of the present application.
  • a computer device can be A personal computer, a server, or a network device, etc.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be from a website site, a computer, a server, or a data center. Transmission via wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.) to another website site, computer, server, or data center.
  • wired such as coaxial cable, optical fiber, digital subscriber line (DSL)
  • wireless such as infrared, wireless, microwave, etc.
  • the computer-readable storage medium may be any available medium that can be stored by a computer or a data storage device such as a server, a data center, and the like that includes one or more available medium integration.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (Solid State Disk (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

一种数据库构建方法,使用了人工智能领域的计算机视觉技术,用于根据与目标自然条件信息对应的第二场景特征点信息构建数据库,以便数据库用于定位时定位更加准确。该方法包括:确定满足预置图像重合度要求的目标图像集合(501);根据目标图像集合和每张图像对应的自然条件信息得到场景特征点信息集合(502);确定场景特征点信息集合中,在单个移动终端的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息(503);确定第一场景特征点信息中,在多个移动终端的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息(504);当第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据第二场景特征点信息构建数据库(505)。

Description

一种数据库构建方法、一种定位方法及其相关设备
本申请要求于2018年6月20日提交中国专利局、申请号为201810642562.4、申请名称为“一种数据库构建方法、一种定位方法及其相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信领域,尤其涉及一种数据库构建方法、一种定位方法及其相关设备。
背景技术
无人驾驶技术中,车辆定位的重要性不言而喻,在车辆的运行过程中,需要实时获取车辆的最新位置,并对后续车辆的路线进行规划。一般可以采用(global positioning system,GPS)、相机、激光雷达、毫米波雷达和/或惯性测量单元(inertial measurement unit,IMU)等进行定位,但是,这些定位方式都不可避免的存在定位精度不准确的问题。
目前,提出了一种视觉定位方法,其原理是通过先建立数据库,识别匹配实时场景与数据库中的同一场景特征点来实现定位。数据库中存储有场景的关键帧和场景特征点,关键帧是用来表示现实世界的图像,场景特征点是指场景关键帧中提取出来的视觉场景特征点,且场景特征点从属于关键帧。场景特征点还具有描述子,用来形容该场景特征点,同时场景特征点在不同自然条件下具有不同的描述子信息。
当外界环境发生变化时,采集到的现实场景关键帧中的描述子信息也会发生变化,从而导致相应的场景特征点发生变化,在视觉定位过程中进行场景特征点匹配时,会出现现实场景关键帧中某些场景特征点与数据库中的场景特征点无法匹配的情况,最终导致定位不准确。
发明内容
本申请实施例公开了一种数据库构建方法、一种定位方法及相关设备,用于根据与目标自然条件信息对应的第二场景特征点信息构建数据库,以便数据库用于定位时定位更加准确。
本申请的第一方面提供了一种数据库构建方法,包括:
网络设备获取图像集合后,从图像集合中确定满足预置重合度要求的目标图像集合。当移动设备直行时,按照预置距离间隔获取图像集合,再计算按照预置距离间隔获取的图像重合度是否满足要求,当汽车弯行时,按照预置角度间隔获取图像集合,再计算按照预置角度间隔获取的图像重合度是否满足要求。
在本实施例中,图像是指移动设备及其周边环境的影像,图像的获取方式可以通过在移动设备上安装摄像头进行获取,移动设备也可以自身具备图像获取功能,具体此处不作限定。
在本实施例中,目标图像集合中包括至少一张图像,且每张图像都是在唯一的自然条件状况下拍摄的,因此每张图像对应一种自然条件信息。
自然条件信息的确定方式为:移动设备通过全球定位系统GPS、激光雷达、毫米波雷达和/或惯性测量单元IMU获取移动设备的位置信息,再将位置信息发送至气候服务器得到当前位置的自然条件信息。
网络设备对目标图像集合进行分析处理得到场景特征点信息,同时目标图像集合中每张图像对应的唯一的一种自然条件信息,将场景特征点集合中的每个场景特征点分别与自然条件信息建立对应关系,从而得到场景特征点信息集合。场景特征点信息集合中包括至少一个场景特征点信息,每个场景特征点信息中包括场景特征点的3D坐标、像素坐标、所属关键帧ID和描述子信息,其中每个描述子信息中包括一种自然条件信息,场景特征点的3D坐标、像素坐标和所属关键帧ID是静态指标,而描述子信息是动态指标,会随着自然条件状况发生变化。
视觉定位的过程是通过场景特征点比对确定相同场景特征点后,根据相同场景特征点进行定位的过程,而静止的场景特征点一般作为用于对比的场景特征点可以使定位更加准确,为了找出场景特征点集合中具有代表性的静止场景特征点,需要进行生命值计算。生命值的大小用于表示场景特征点为静态场景特征点的概率,生命值越大,场景特征点为静态特征点的概率越大,反之亦然。
同一场景特征点可能被单个移动设备拍摄到,也可能被多个移动设备拍摄的,首先确定场景特征点信息集合中,对于单个移动设备而言,即场景特征点被一个移动设备观察到时,生命值大于第一预置生命值门限的场景特征点,再确定第一场景特征点信息中,对于多个移动设备而言,即场景特征点被两个或以上移动设备观察到时,生命值大于第二预置生命值门限的场景特征点,该场景特征点即为第二场景特征点。
将按照上述方式进行筛选得到的第二场景特征点的信息与数据库中预置的场景特征点信息进行比对,当第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,则在数据库中更新第二场景特征点信息,使得数据库中的场景特征点信息更完善。
本实施例具有以下优点:确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及图形集合中每张图像对应的自然条件信息确定场景特征点信息集合,随后确定得到场景特征信息集合中在单个移动设备的生命值大于第一预置生命值门限,且在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息,当第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据该第二场景特征点信息构建数据库。在本实施例中,按照上述方式筛选得到第二场景特征点信息后,当数据库中不存在与某种自然条件信息相关的第二场景特征点信息时,根据该第二场景特征点信息构建该数据库,使得构建完成后的数据库用于定位时,定位更加准确。
基于第一方面,在第一方面的第一种可实现的方式中,第二场景特征点信息与数据库中预置的场景特征点信息不匹配具体包括:数据库中不存在所述第二场景特征点信息。
当数据库中不存在第二场景特征点时,显然数据库中也不可能存在第二场景特征点信息,此时,将第二场景特征点信息存储于数据库中,储存的第二场景特征点信息中包括第二场景特征点的3D坐标、像素坐标和所属关键帧ID和描述子信息。
在本实施例中,对根据第二场景特征点信息构建数据库的一种情况进行了说明,增加 了方案的可实施性。
基于第一方面,在第一方面的第二种可实现的方式中,第二场景特征点信息与数据库中预置的场景特征点信息不匹配具体包括:数据库中存在第二场景特征点信息,但是第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息。
当数据库中存在第二场景特征点信息,但是数据库中储存的第二场景特征点信息与从从图像中确定的第二场景特征点信息不相同,不相同的原因是两者的动态指标即描述子信息发生了变化,例如数据库的第二场景特征点信息为第二场景特征点的3D坐标、像素坐标和所属关键帧ID和描述子1的信息。图像中确定的第二场景特征点信息为第二场景特征点的3D坐标、像素坐标和所属关键帧ID和描述子2的信息。
此时需要在数据库的第二场景特征点中增加与目标自然条件信息相关的目标描述子信息,即描述子2的信息。
在本实施例中,对根据第二场景特征点信息构建数据库的另一种情况进行了说明,增加了方案实施的灵活性。
基于第一方面的第一种或第二种实现的方式,在第一方面的第三种实现的方式中,所述在数据库中增加所述第二场景特征点信息之前,所述方法还包括:
确定与第二场景特征点信息中第二场景特征点的3D坐标,该3D坐标是在确定第二场景特征点所属图像的定位信息时,同步确定得到的;
分别计算数据库中预置的每个场景特征点的3D坐标与第二场景特征点的3D坐标的差值,判断差值是否均大于第一预置门限值;
若是,确定数据库中不存在第二场景特征点信息;
若数据库中存在预置的任意一个场景特征点的3D坐标与第二场景特征点的3D坐标的差值小于第一预置门限值,则确定数据库中存在第二场景特征点,同时也存在第二场景特征点信息。
在本实施例中,对第二场景特征点是否在数据库中存在的判定方式进行了说明,增加了方案可实施性。
基于第一方面的第二种实现的方式,在第一方面的第四种实现的方式中,所述在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息之前,所述方法还包括:
当数据库中存在第二场景特征点信息时,确定数据库中预置的第二场景特征点信息的至少一个描述子,判断至少一个描述子中,是否存在一个描述子与图像中目标描述子信息对应的描述子的距离小于预置距离门限;
若数据库预置的描述子,与第二场景特征点信息中的目标描述子信息对应的描述子的距离均大于第二预置距离门限,则确定数据库预置的第二场景特征点信息中不包括目标描述子信息。
在本实施例中,对判断目标描述子信息是否在数据库中存在的判断方式进行了说明,增加了方案的可实施性。
基于第一方面及其第一方面的第一种至第二种实现的方式,在第一方面的第五种实现的方式中,场景特征点在单个移动设备的生命值为f,所述f的计算公式为:
Figure PCTCN2019082981-appb-000001
Figure PCTCN2019082981-appb-000002
其中,所述n表示某一场景特征点在单个移动设备中被观察到的次数,单个移动设备可以进行多次实验获取图像,因此某一场景特征点可能被单个移动设备观察到多次,所述n 0为事先通过模型训练得到的任一场景特征点被至少一个移动设备观察到的次数的平均值,所述σ为事先通过模型训练得到的任一场景特征点被至少一个移动设备观察到的次数的方差。
在本实施例中,对场景特征点在单个移动设备的生命值计算公式进行了说明,增加了方案的可实施性。
基于第一方面的第五种实现方式,在第一方面的第六种实现的方式中,某一场景特征点可能被多个移动设备拍摄到,场景特征点在多个移动设备的生命值为F,所述F的计算公式为:
Figure PCTCN2019082981-appb-000003
所述f为该场景特征点在单个移动设备的生命值,所述B为多个移动设备中每个移动设备对应的权重系数,且所述多个移动设备中一个移动设备对应一个权重系数。例如,场景特征点被3个移动设备拍摄到,3个移动设备中每个移动设备都对应一个权重系数,则用该场景特征点在单个移动设备的生命值乘以该移动设备的权重系数,再对三个移动设备求和得到场景特征点被多个移动设备观察到时的生命值。
在本实施例中,多个移动设备表示至少两个移动设备。
在本实施例中,对场景特征点在多个移动设备的生命值计算公式进行了说明,增加了方案的可实施性。
基于第一方面的第六种实现方式,在第一方面的第七种实现的方式中,β i的计算公式为:β i=γ tgc,所述γ t为所述场景特征点在多个移动设备被观测到的时间连续性指标,所述γ g为所述场景特征点在多个移动设备被观测到的几何连续性指标,所述γ c为所述场景特征点在多个移动设备被观测到的描述一致性指标。
其中,γ t与不同移动设备观测到同一场景特征点之间的时间间隔有关,几何连续性指标γ g与不同移动设备观测到同一场景特征点之间的欧式距离有关,描述一致性γ c与不同移动设备观测到同一场景特征点之间的描述距离有关。
在本实施例中,对权重系数β i的计算公式进行了说明,增加了方案的可实施性。
基于第一方面及其第一方面的第一种至第二种实现方式,在第一方面的第八种实现的方式中,所述确定满足预置图像重合度要求的目标图像集合包括:
当移动设备直行时,首先按照预置距离间隔d k+1选择图像,当选择的图像的重合度与预置重合度门限的差值在预置精度范围内时,确定选择的图像为目标图像,再按照上述预置距离间隔选择图像得到目标图像集合。
当选择的图像的重合度与预置重合度门限的差值不在预置精度范围内时,则根据距离 间隔d k+1对应的图像的重合度与预置重合度门限的差值,计算得到还需要在距离间隔d k+1基础上增加或减少多少距离间隔选择图像才能使得选择的图像满足预置重合度要求,从而得到下一次选择图像的距离间隔d k+2,再以d k+2作为选择图像的距离间隔重复执行上述步骤,若以d k+2选择图像满足重合度要求,则按照d k+2选择图像得到目标图像集合。
在本实施例中,通过选择满足重合度要求的图像,可以避免盲目选择图像而导致对图像进行处理时数据量过大的问题,同时也可以避免与数据库中已有场景特征点信息进行匹配时数据量过大的缺陷。
基于第一方面的第八种实现方式,在第一方面的第九种实现的方式中,所述预置距离间隔d k+1
所述预置距离间隔的计算公式为:d k+1=d k+d k*-α),其中,所述α *为预置的重合度门限,所述d k为前一时刻选择图像的距离间隔,所述α为按照距离间隔d k选择图像时计算得到的图像之间的重合度。
在本实施例中,对预置距离间隔的计算公式进行了说明,增加了方案的完整性。
基于第一方面及其第一方面的第一种至第二种实现方式,在第一方面的第十种实现的方式中,所述确定满足预置图像重合度要求的目标图像集合包括:
当移动设备弯行时,首先按照预置角度间隔θ k+1选择图像,当选择的图像的重合度与预置重合度门限的差值在预置精度范围内时,确定选择的图像为目标图像,再重复按照预置角度间隔选择图像得到目标图像集合。
当选择的图像的重合度与预置重合度门限的差值不在预置精度范围内时,则根据角度间隔θ k+1对应的图像的重合度与预置重合度门限差值,计算得到还需要在角度间隔θ k+1基础上增加或减少多少角度间隔选择图像才能使得选择的图像满足预置重合度要求,从而得到下一次选择图像的角度间隔θ k+2,再以θ k+2作为选择图像的角度间隔重复执行上述步骤,若以θ k+2选择图像满足重合度要求,则按照θ k+2选择图像得到目标图像集合。
在本实施例中,通过选择满足重合度要求的图像,可以避免盲目选择图像而导致对图像进行处理时数据量过大的问题,同时也可以避免与数据库中已有场景特征点信息进行匹配时数据量过大的缺陷。
基于第一方面的第十种实现方式,在第一方面的第十一种实现的方式中,所述预置角度间隔θ k+1
所述预置角度间隔的计算公式为:θ k+1=θ kk*-α),其中,所述α *为所述预置得重合度门限,所述θ k为前一时刻选择图像的角度间隔,所述α为按照角度间隔θ k选择图像时计算得到的图像之间的重合度。
在本实施例中,对预置角度间隔的计算公式进行了说明,增加了方案的完整性。
基于第一方面及其第一方面的第一种至第二种实现方式,在第一方面的第十二种实现的方式中,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息,所述根据所述目标图像集合和所述自然条件信息得到场景特征点信息集合包括:
1>网络设备对一张所述目标图像进行处理得到该目标图像的场景特征点;
2>建立场景特征点、所述场景特征点所属目标图像(或场景特征点所属关键帧ID)和 所述目标图像对应的所述自然条件信息的对应关系,从而构成所述场景特征点信息,该场景特征点信息中还包括场景特征点的3D坐标和像素坐标等信息。
其中,场景特征点所属目标图像可以为多个,即多个目标图像中均包括该场景特征点,每个目标图像对应的自然条件信息一般是不同的,因此场景特征点信息中可以有多个描述子信息,同时也不排除有两个或以上目标图像对应的自然条件信息相同。
重复执行步骤1>和2>,对每张图像均进行上述处理,直至构成得到所述场景特征点信息集合。
在本实施例中,对场景特征点信息集合的确定方式进行了说明,增加了方案的完整性。
基于第一方面及其第一方面的第一种至第二种实现方式,在第一方面的第十三种实现的方式中,所述根据所述第二场景特征点信息构建所述数据库之后,所述方法还包括:
确定构建完成后所述数据库中的场景特征点信息,该场景特征点信息为第三场景特征点信息;
当所述第三场景特征点信息对应的第三场景特征点的特征数量控制得分FNCS小于预置FNCS门限时,在所述数据库中删除所述第三场景特征点信息,FNCS的大小可以表示场景特征点在定位时被使用的概率以及场景特征点信息中包含描述子的多少。
在本实施例中,预置FNCS可以事先通过多次实验确定。
在本实施例中,对数据库构建完成后,场景特征点的管理进行了说明,可以删除FNCS值低的场景特征点,便于数据库的管理。
基于第一方面的第十三种实现方式,在第一方面的第十四种实现的方式中,所述特征数量控制得分FNCS的计算公式为:
Figure PCTCN2019082981-appb-000004
所述
Figure PCTCN2019082981-appb-000005
为所述场景特征点在定位时被使用的概率,M为在场景特征点所在位置进行定位的总次数,m i表示在进行定位时该场景特征点被使用的次数,所述
Figure PCTCN2019082981-appb-000006
为所述场景特征点的描述子数量占所述场景特征点所属图像中描述子总数的比例。
在本实施例中,对特征数量控制得分FNCS的计算公式进行了说明,增加了方案的可实施性。
本申请实施例的第二方面提供了一种定位方法,其特征在于,包括:
网络设备获取移动设备的实时图像,在本实施例中,实时图像是指移动设备及其周边环境的影像,实时图像的获取方式可以通过在移动设备上安装摄像头进行获取,移动设备也可以自身具备图像获取功能,具体此处不作限定。
网络设备获取实时图像后,对实时图像进行分析处理得到确定至少一个第一描述子信息,所述第一描述子信息中包括移动设备或移动设备的外接摄像头拍摄所述实时图像时,移动设备所在位置的目标自然条件信息,实时图像中包含至少一个特征点信息,实时图像拍摄时的自然条件信息是一定的,因此一个特征点信息中只包括一个描述子信息,进而实时图像得至少一个描述子信息都包含相同的目标自然条件信息。
目标自然条件信息可以由网络设备确定,也可以由移动设备确定后发送至网络设备, 目标自然条件信息的确定方式是:首先网络设备或移动设备确定拍摄所述实时图像时移动设备的位置信息,可以通过全球定位系统GPS、激光雷达、毫米波雷达和/或惯性测量单元IMU确定,网络设备或移动设备再根据位置信息确定得到目标自然条件信息。
将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息,对比的方式具体为:
确定至少一个描述子信息中的一个描述子信息;
判断该描述子信息对应的描述子,是否与数据库中预置的任意一个描述子信息对应的描述子的距离小于预置距离门限;
若是,确定该描述子与数据库中与之距离间隔小于预置距离门限的描述子为同一个描述子,对应的描述子信息也相同,若否,则两者为不同的描述子,对应的描述子信息也不同。
重复执行上述步骤,将至少一个描述子信息中的每个描述子分别于数据库中预置的描述子进行对比,判断得到相同的描述子信息。
在本实施例中,数据库中预置的描述子信息由数据库构建完成后得到,网络设备确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及所述目标图像集合中每张图像对应的自然条件信息得到场景特征点信息集合,从所述场景特征点信息集合中选择满足预置生命值要求的第一场景特征点对应的第一场景特征点信息,确定第一场景特征点信息中与目标自然条件信息对应的第二描述子信息,当所述第二描述子信息与所述数据库中预置的描述子信息不匹配时,再根据第二描述子信息构建数据库。
在本实施例中,构建数据库的具体过程与本申请所述第一方面数据库的构建过程类似,具体此处不再赘述。
确定得到相同的描述子信息后,根据相同的描述子信息,对实时图像进行视觉定位。
在本实施例中,对数据库构建完成后的定位过程进行说明,增加了方案的实用性。
基于第二方面,在第二方面的第一种实现方式中,所述利用所述相同的描述子信息对所述实时图像进行定位包括:
确定实时图像中与数据库中相同的描述子信息后,确定相同描述子信息在数据库中所属的第一场景特征点信息,查找数据库得到第一场景特征点的3D坐标、像素坐标等信息,再结合第一场景特征点信息和定位计算公式得到拍摄实时图像时目标移动设备的位置。
在本实施例中,对实时图像定位的具体方式进行了说明,增加了方案的可实施性。
基于第二方面的第一种实现方式中,在第二方面的第二种实现方式中,所述定位计算公式为:
拍摄所述实时图像时所述目标移动设备的位置
Figure PCTCN2019082981-appb-000007
Figure PCTCN2019082981-appb-000008
其中所述
Figure PCTCN2019082981-appb-000009
为实时图像中所述第一场景特征点的像素坐标,π C为相机的内参矩阵,所述π C用于将3D坐标转换为像素坐标,所述
Figure PCTCN2019082981-appb-000010
为数据库中场景特征点
Figure PCTCN2019082981-appb-000011
所属图像相对于世界坐标系的位姿,所述
Figure PCTCN2019082981-appb-000012
为所述数据库中所述第一场景特征点的像素坐标,所述i的取 值为1至n,所述n为正整数,所述第一场景特征点与所述第一场景特征点信息对应。
实时图像中共有n个场景特征点与数据库中的场景特征点相匹配。
Figure PCTCN2019082981-appb-000013
通过(π C) -1转换得到相对于汽车的3D坐标,然后通过
Figure PCTCN2019082981-appb-000014
转换得到相对于世界坐标系的3D坐标,接着再经过
Figure PCTCN2019082981-appb-000015
转换得到相对于数据库的像素坐标,
Figure PCTCN2019082981-appb-000016
为数据库中场景特征点
Figure PCTCN2019082981-appb-000017
所属关键帧相对于世界坐标系的位姿。转换得到的像素坐标与数据库中相匹配的场景特征点的像素坐标
Figure PCTCN2019082981-appb-000018
一致,把两者相减即得到了重投影误差模型。最后通过最优化方法使重投影误差模型的值最小,就可以得到汽车的实时位姿
Figure PCTCN2019082981-appb-000019
在本实施例中,对定位公式的具体算法进行了说明,增加了方案的完整性。
基于第二方面及其第二方面的第一种至第二种实现方式中,在第二方面的第三种实现方式中,所述根据所述实时图像确定至少一个第一描述子信息之后,所述方法还包括:
将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的描述子信息;
在本实施例中,描述子信息对比的方式与上述第二方面所述的描述子信息对比方式类似,具体此处不再赘述。
若实时图像中存在与数据库中不同的描述子信息,则将不同描述子的描述子信息加入数据库,以便于数据库用于定位时定位更加准确。
在本实施例中,定位过程中可以根据不同描述子信息更新数据库,使得数据库更加完善,从而使得定位更加准确。
基于第二方面的第三种实现方式中,在第二方面的第四种实现方式中,当所述数据库中不存在所述不同描述子信息所属的第二场景特征点信息时,所述根据所述不同的描述子信息构建所述数据库包括:
当数据库中不存在第二场景特征点时,同样的也不存在第二场景特征点信息,第二场景特征点为不同描述子所属的场景特征点,则此时需要在数据库中增加包含不同描述子信息的第二场景特征点信息。在本实施例中,可能存在不同描述子信息的个数可以为多个,实时图像的一个第二场景特征点信息只能包含一个不同的描述子,因此数据库中需要增加的第二场景特征点信息的个数也为多个。
在本实施例中,对根据不同描述子信息构建数据库的一种情况进行了说明,增加了方案的实用性。
基于第二方面的第三种实现方式中,在第二方面的第五种实现方式中,当所述数据库中存在所述不同描述子信息所属的第二场景特征点信息时,所述根据所述不同的描述子信息构建所述数据库包括:
当数据库中存在第二场景特征点时,同样的也存在第二场景特征点信息,但是数据库的第二场景特征点信息与实时图像中的第二场景特征点信息不同,即第二场景特征点信息中不包含所述不同的描述子信息,此时需要在数据库的第二场景特征点信息中增加所述不同描述子信息。例如数据库的第二场景特征点信息为第二场景特征点的3D坐标、像素坐标和所属关键帧ID和描述子1的信息。实时图像中确定的第二场景特征点信息为第二场景特 征点的3D坐标、像素坐标和所属关键帧ID和描述子2的信息。此时需要在数据库的第二场景特征点信息中增加描述子2的信息。
在本实施例中,对根据不同描述子信息构建数据库的另一种情况进行了说明,增加了方案的完整性。
基于第二方面的第三种实现方式,在第二方面的第六种实现方式中,所述根据所述不同的描述子信息构建所述数据库之前,所述方法还包括:
确定与第二场景特征点信息中包含的第二场景特征点的3D坐标,该3D坐标是在确定第二场景特征点所属图像的定位信息时,同步确定得到的;
分别计算数据库中预置的每个场景特征点的3D坐标与第二场景特征点的3D坐标的差值是否均大于第一预置门限值;
若是,确定数据库中不存在第二场景特征点信息;
若数据库中存在预置的任意一个场景特征点的3D坐标与第二场景特征点的3D坐标的差值小于第一预置门限值,则确定数据库中存在第二场景特征点,同时也存在第二场景特征点信息。
在本实施例中,对数据库中是否存在第二场景特征点的判断方式进行了说明,增加了方案的可实施性。
本申请的三方面提供了一种数据库,所述数据库部署于服务器;
所述数据库由与数据库中预置场景特征点信息不匹配的第二场景特征点信息构建形成,且所述第二场景特征点信息为第一场景特征点信息中在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的场景特征点信息,所述第一场景特征点信息为场景特征点信息集合中在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的场景特征点信息,所述场景特征点信息集合根据目标图像集合和每张图像对应的自然条件信息得到,且所述场景特征点集合中包括至少一个场景特征点信息,所述目标图像集合中包括至少一张满足预置图像重合度要求的图像,且每张图像对应一种自然条件信息。
本实施例中的数据库的形成过程,与第一方面进行数据库构建的过程类似,具体此处不再赘述。
在本实施例中,对数据库的构建方式进行了说明,数据库构建完成好可以用于视觉定位,使得定位更加准确。
基于第三方面,在第三方面的第一种实现方式中,所述服务器还包括处理器;
所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:所述数据库中不存在所述第二场景特征点信息;
所述数据库由第二场景特征点信息构建形成包括:
当第二场景特征点信息在数据库中不存在时,在数据库中增加第二场景特征点信息,因此所述数据库由所述服务器在所述数据库中增加所述第二场景特征点信息形成,所述第二场景特征点信息中包括关于目标自然条件信息的目标描述子信息。
在本实施例中,对第二场景特征点构建数据库的一种方式进行了说明,增加了方案的可实施性。
基于第三方面,在第三方面的第二种实现方式中,所述服务器还包括处理器;
所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:所述数据库中存在所述第二场景特征点信息,且所述第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息;
所述数据库由第二场景特征点信息构建形成包括:
当数据库中预置的场景特征点信息与第二场景特征点信息相同,但是两者的描述子信息不同时,将不同的描述子信息加入数据库,因此所述数据库由所述服务器在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息形成。
在本实施例中,对第二场景特征点构建数据库的另一种方式进行了说明,增加了方案实施的多样性。
本申请的第四方面提供了一种网络设备,包括:
确定单元,用于确定满足预置图像重合度要求的目标图像集合,所述目标图像集合中包括至少一张图像,且每张图像对应一种自然条件信息;
处理单元,用于根据所述目标图像集合和所述每张图像对应的自然条件信息得到场景特征点信息集合,所述场景特征点集合中包括至少一个场景特征点信息;
所述确定单元,还用于确定所述场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息,所述生命值的大小用于表示所述场景特征点为静态场景特征点的概率;
所述确定单元,还用于确定所述第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息;
数据库构建单元,用于当所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据所述第二场景特征点信息构建所述数据库。
在本实施例中,按照上述方式筛选得到第二场景特征点信息后,当数据库中不存在与某种自然条件信息相关的第二场景特征点信息时,根据该第二场景特征点信息构建该数据库,使得构建完成后的数据库用于定位时,定位更加准确。
基于第四方面,在第四方面的第一种可实现的方式中,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
所述数据库中不存在所述第二场景特征点信息;
所述数据库构建单元,具体用于在数据库中增加所述第二场景特征点信息,所述第二场景特征点信息中包括关于目标自然条件信息的目标描述子信息。
在本实施例中,对根据第二场景特征点信息构建数据库的一种情况进行了说明,增加了方案的可实施性。
基于第四方面,在第四方面的第二种可实现的方式中,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
所述数据库中存在所述第二场景特征点信息,且所述第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息;
所述数据库构建单元具体用于在所述数据库预置的第二场景特征点信息中增加所述目 标描述子信息。
在本实施例中,对根据第二场景特征点信息构建数据库的另一种情况进行了说明,增加了方案实施的灵活性。
基于第四方面的第一种或第二种可实现方式,在第四方面的第三种可实现的方式中,所述确定单元,还用于与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
在本实施例中,对第二场景特征点是否在数据库中存在的判定方式进行了说明,增加了方案可实施性。
基于第四方面的第二种可实现方式,在第四方面的第四种可实现方式中,所述确定单元,还用于确定数据库中预置的所述第二场景特征点的至少一个描述子信息;
所述网络设备还包括:
判断单元,用于判断所述至少一个描述子信息中,是否存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限;
所述确定单元,还用于若所述至少一个描述子信息中,不存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限,确定所述数据库预置的第二场景特征点信息中不包括目标描述子信息。
在本实施例中,对判断目标描述子信息是否在数据库中存在的判断方式进行了说明,增加了方案的可实施性。
基于第四方面及其第四方面的第一种和第二种可实现方式,在第四方面的第五种可实现的方式中,所述场景特征点在单个移动设备的生命值为f,所述f的计算公式为:
Figure PCTCN2019082981-appb-000020
其中,所述n表示所述场景特征点在单个移动设备中被观察到的次数,所述n 0为预置的场景特征点被观察到的次数的平均值,所述σ为预置的场景特征点被观察到的次数的方差。
在本实施例中,对场景特征点在单个移动设备的生命值计算公式进行了说明,增加了方案的可实施性。
基于第四方面的第五种可实现方式,在第四方面的第六种可实现的方式中,所述场景特征点在多个移动设备的生命值为F,所述F的计算公式为:
Figure PCTCN2019082981-appb-000021
所述f为所述场景特征点在单个移动设备的生命值,所述B为每个移动设备对应的权重系数,且所述多个移动设备中一个移动设备对应一个权重系数。
在本实施例中,对场景特征点在多个移动设备的生命值计算公式进行了说明,增加了方案的可实施性。
基于第四方面的第六种可实现方式,在第四方面的第七种可实现的方式中,所述β i的计算公式为:β i=γ tgc,所述γ t为所述场景特征点在多个移动设备被观测到的时间连续性指标,所述γ g为所述场景特征点在多个移动设备被观测到的几何连续性指标,所述γ c为所述场景特征点在多个移动设备被观测到的描述一致性指标。
在本实施例中,对权重系数β i的计算公式进行了说明,增加了方案的可实施性。
基于第四方面及其第四方面的第一种和第二种可实现方式,在第四方面的第八种可实现的方式中,所述确定单元具体用于,按照预置距离间隔选择图像。
当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
在本实施例中,通过选择满足重合度要求的图像,可以避免盲目选择图像而导致对图像进行处理时数据量过大的问题,以及后续与数据库中已有场景特征点信息进行匹配时数据量过大的缺陷。
基于第四方面的第八种可实现方式,在第四方面的第九种可实现的方式中,所述预置距离间隔d k+1
所述预置距离间隔的计算公式为:d k+1=d k+d k*-α),其中,所述α *为所述预置重合度门限,所述d k为前一时刻选择图像的距离间隔,所述α为按照距离间隔d k选择图像时图像之间的重合度。
在本实施例中,对预置距离间隔的计算公式进行了说明,增加了方案的完整性。
基于第四方面及其第四方面的第一种和第二种可实现方式,在第四方面的第十种可实现的方式中,所述确定单元,具体用于按照预置角度间隔选择图像;
当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
在本实施例中,通过选择满足重合度要求的图像,可以避免盲目选择图像而导致对图像进行处理时数据量过大的问题,以及后续与数据库中已有场景特征点信息进行匹配时数据量过大的缺陷。
基于第四方面的第十种可实现方式,在第四方面的第十一种可实现的方式中,所述预置角度间隔θ k+1
所述预置角度间隔的计算公式为:θ k+1=θ kk*-α),其中,所述α *为所述预置重合度门限,所述θ k为前一时刻选择图像的角度间隔,所述α为按照角度间隔θ k选择图像时图像之间的重合度。
在本实施例中,对预置角度间隔的计算公式进行了说明,增加了方案的完整性。
基于第四方面及其第四方面的第一种和第二种可实现方式,在第四方面的第十二种可实现的方式中,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息,所述处理单元,具体用于1>对一张所述目标图像进行处理得到场景特征点;
2>利用所述场景特征点、所述场景特征点所属目标图像和所述目标图像对应的所述自 然条件信息构成所述场景特征点信息;
重复执行步骤1>和2>,直至构成得到所述场景特征点信息集合。
在本实施例中,对场景特征点信息集合的确定方式进行了说明,增加了方案的完整性。
基于第四方面及其第四方面的第一种和第二种可实现方式,在第四方面的第十三种可实现的方式中,所述确定单元,还用于确定构建完成后所述数据库中的第三场景特征点信息;
所述数据库构建单元,还用于当所述第三场景特征点信息对应的第三场景特征点的特征数量控制得分FNCS小于预置FNCS门限时,在所述数据库中删除所述第三场景特征点信息。
在本实施例中,对数据库构建完成后,场景特征点的管理进行了说明,可以删除FNCS值低的场景特征点,便于数据库的管理。
基于第四方面的第十三种可实现方式,在第四方面的第十四种可实现的方式中,所述特征数量控制得分FNCS的计算公式为:
Figure PCTCN2019082981-appb-000022
所述
Figure PCTCN2019082981-appb-000023
为所述场景特征点在定位时被使用的概率,所述
Figure PCTCN2019082981-appb-000024
为所述场景特征点的描述子数量占所述场景特征点所属图像中描述子总数的比例。
在本实施例中,对特征数量控制得分FNCS的计算公式进行了说明,增加了方案的可实施性。
本申请的第五方面提供了一种定位方法,所述方法应用于视觉定位系统,所述方法包括:
获取单元,用于获取实时图像;
确定单元,用于根据所述实时图像确定至少一个第一描述子信息,所述第一描述子信息中包括拍摄所述实时图像时的目标自然条件信息;
所述确定单元,还用于将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息,所述数据库中预置的描述子信息由所述网络设备确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及所述目标图像集合中每张图像对应的自然条件信息得到场景特征点信息集合,从所述场景特征点信息集合中选择满足预置生命值要求的第一场景特征点对应的第一场景特征点信息,再根据所述第一场景特征点信息中与目标自然条件信息对应的第二描述子信息构建所述数据库后得到,所述第二描述子信息与所述数据库中预置的描述子信息不匹配,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息;
定位单元,用于利用所述相同的描述子信息对所述实时图像进行定位。
在本实施例中,对数据库构建完成后的定位过程进行说明,增加了方案的实用性。
基于第五方面,在第五方面的第一种可实现方式中,所述定位单元,具体用于确定所述相同描述子信息在数据库中对应的第一场景特征点信息;
根据所述第一场景特征点信息和定位计算公式计算得到拍摄所述实时图像时目标移动设备的位置。
在本实施例中,对实时图像定位的具体方式进行了说明,增加了方案的可实施性和实用性。
基于第五方面的第一种可实现方式,在第五方面的第二种可实现方式中,所述定位计算公式为:拍摄所述实时图像时所述目标移动设备的位置
Figure PCTCN2019082981-appb-000025
其中所述
Figure PCTCN2019082981-appb-000026
为实时图像中所述第一场景特征点的像素坐标,π C为相机的内参矩阵,所述π C用于将3D坐标转换为像素坐标,所述
Figure PCTCN2019082981-appb-000027
为数据库中场景特征点
Figure PCTCN2019082981-appb-000028
所属图像相对于世界坐标系的位姿,所述
Figure PCTCN2019082981-appb-000029
为所述数据库中所述第一场景特征点的像素坐标,所述i的取值为1至n,所述n为正整数,所述第一场景特征点与所述第一场景特征点信息对应。
在本实施例中,对定位公式的具体算法进行了说明,增加了方案的完整性。
基于第五方面及其第五方面的第一种至第二种可实现方式,在第五方面的得第三种可实现方式中,所述确定单元,还用于将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的描述子信息;
所述网络设备还包括数据库构建单元;
所述数据库构建单元,具体用于根据所述不同的描述子信息构建所述数据库。
在本实施例中,定位过程中可以根据不同描述子信息更显数据库,使得数据库更加完善,从而使得定位更加准确。
基于第五方面的第三种可实现方式,在第五方面的第四种可实现方式中,当所述数据库中不存在所述不同描述子信息所属的第二场景特征点信息时,所述数据库构建单元,具体用于在所述数据库中增加包含所述不同描述子信息的所述第二场景特征点信息。
在本实施例中,对根据不同描述子信息构建数据库的一种情况进行了说明,增加了方案的实用性。
基于第五方面的第三种可实现方式,在第五方面的第五种可实现方式中,当所述数据库中存在所述不同描述子信息所属的第二场景特征点信息时,所述构建单元,具体用于在所述数据库的第二场景特征点信息中增加所述不同描述子信息。
在本实施例中,对根据不同描述子信息构建数据库的另一种情况进行了说明,增加了方案的完整性。
基于第五方面的第三种可实现方式,在第五方面的第六种可实现方式中,所述确定单元,还用于确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
在本实施例中,对数据库中是否存在第二场景特征点的判断方式进行了说明,增加了 方案的可实施性。
本申请的第六方面提供了一种网络设备,其特征在于,所述网络设备包括:存储器、收发器、处理器以及总线系统;
其中,所述存储器用于存储程序;
所述处理器用于执行所述存储器中的程序,包括如下步骤:
确定满足预置图像重合度要求的目标图像集合,所述目标图像集合中包括至少一张图像,且每张图像对应一种自然条件信息;
根据所述目标图像集合和所述每张图像对应的自然条件信息得到场景特征点信息合,所述场景特征点集合中包括至少一个场景特征点信息;
确定所述场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息,所述生命值的大小用于表示所述场景特征点为静态场景特征点的概率;
确定所述第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息;
当所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据所述第二场景特征点信息构建所述数据库;
所述总线系统用于连接所述存储器以及所述处理器,以使所述存储器以及所述处理器进行通信。
在本实施例中,按照上述方式筛选得到第二场景特征点信息后,当数据库中不存在与某种自然条件信息相关的第二场景特征点信息时,根据该第二场景特征点信息构建该数据库,使得构建完成后的数据库用于定位时,定位更加准确。
基于第六方面,在第六方面的第一种可实现的方式中,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
所述数据库中不存在所述第二场景特征点信息;
所述处理器具体用于:
在数据库中增加所述第二场景特征点信息,所述第二场景特征点信息中包括关于目标自然条件信息的目标描述子信息。
在本实施例中,对根据第二场景特征点信息构建数据库的一种情况进行了说明,增加了方案的可实施性。
基于第六方面,在第六方面的第二种可实现的方式中,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
所述数据库中存在所述第二场景特征点信息,且所述第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息;
所述处理器具体用于:
在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息。
在本实施例中,对根据第二场景特征点信息构建数据库的另一种情况进行了说明,增加了方案实施的灵活性。
基于第六方面的第一种或第二种可实现方式,在第六方面的第三种可实现的方式中,所述确定单元,所述处理器还用于:
确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
在本实施例中,对第二场景特征点是否在数据库中存在的判定方式进行了说明,增加了方案可实施性。
基于第六方面的第二种可实现方式,在第六方面的第四种可实现方式中,所述确定单元,所述处理器还用于:
确定数据库中预置的所述第二场景特征点的至少一个描述子信息;
判断所述至少一个描述子信息中,是否存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限;
若否,确定所述数据库预置的第二场景特征点信息中不包括目标描述子信息。
在本实施例中,对判断目标描述子信息是否在数据库中存在的判断方式进行了说明,增加了方案的可实施性。
基于第六方面及其第六方面的第一种和第二种可实现方式,在第六方面的第五种可实现的方式中,所述场景特征点在单个移动设备的生命值为f,所述f的计算公式为:
Figure PCTCN2019082981-appb-000030
其中,所述n表示所述场景特征点在单个移动设备中被观察到的次数,所述n 0为预置的场景特征点被观察到的次数的平均值,所述σ为预置的场景特征点被观察到的次数的方差。
在本实施例中,对场景特征点在单个移动设备的生命值计算公式进行了说明,增加了方案的可实施性。
基于第六方面的第五种可实现方式,在第六方面的第六种可实现的方式中,所述场景特征点在多个移动设备的生命值为F,所述F的计算公式为:
Figure PCTCN2019082981-appb-000031
所述f为所述场景特征点在单个移动设备的生命值,所述B为每个移动设备对应的权重系数,且所述多个移动设备中一个移动设备对应一个权重系数。
在本实施例中,对场景特征点在多个移动设备的生命值计算公式进行了说明,增加了方案的可实施性。
基于第六方面的第六种可实现方式,在第六方面的第七种可实现的方式中,所述β i的计算公式为:β i=γ tgc,所述γ t为所述场景特征点在多个移动设备被观测到 的时间连续性指标,所述γ g为所述场景特征点在多个移动设备被观测到的几何连续性指标,所述γ c为所述场景特征点在多个移动设备被观测到的描述一致性指标。
在本实施例中,对权重系数β i的计算公式进行了说明,增加了方案的可实施性。
基于第六方面及其第六方面的第一种和第二种可实现方式,在第六方面的第八种可实现的方式中,所述处理器具体用于:
按照预置距离间隔选择图像;
当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
在本实施例中,通过选择满足重合度要求的图像,可以避免盲目选择图像而导致对图像进行处理时数据量过大的问题,以及后续与数据库中已有场景特征点信息进行匹配时数据量过大的缺陷。
基于第六方面的第八种可实现方式,在第六方面的第九种可实现的方式中,所述预置距离间隔d k+1
所述预置距离间隔的计算公式为:d k+1=d k+d k*-α),其中,所述α *为所述预置重合度门限,所述d k为前一时刻选择图像的距离间隔,所述α为按照距离间隔d k选择图像时图像之间的重合度。
在本实施例中,对预置距离间隔的计算公式进行了说明,增加了方案的完整性。
基于第六方面及其第六方面的第一种和第二种可实现方式,在第六方面的第十种可实现的方式中,所述处理器具体用于:
按照预置角度间隔选择图像;
当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
在本实施例中,通过选择满足重合度要求的图像,可以避免盲目选择图像而导致对图像进行处理时数据量过大的问题,以及后续与数据库中已有场景特征点信息进行匹配时数据量过大的缺陷。
基于第六方面的第十种可实现方式,在第六方面的第十一种可实现的方式中,所述预置角度间隔θ k+1
所述预置角度间隔的计算公式为:θ k+1=θ kk*-α),其中,所述α *为所述预置重合度门限,所述θ k为前一时刻选择图像的角度间隔,所述α为按照角度间隔θ k选择图像时图像之间的重合度。
在本实施例中,对预置角度间隔的计算公式进行了说明,增加了方案的完整性。
基于第六方面及其第六方面的第一种和第二种可实现方式,在第六方面的第十二种可实现的方式中,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息,所述处理器具体用于:
1>对一张所述目标图像进行处理得到场景特征点;
2>利用所述场景特征点、所述场景特征点所属目标图像和所述目标图像对应的所述自然条件信息构成所述场景特征点信息;
重复执行步骤1>和2>,直至构成得到所述场景特征点信息集合。
在本实施例中,对场景特征点信息集合的确定方式进行了说明,增加了方案的完整性。
基于第六方面及其第六方面的第一种和第二种可实现方式,在第六方面的第十三种可实现的方式中,所述处理器还用于:
确定构建完成后所述数据库中的第三场景特征点信息;
当所述第三场景特征点信息对应的第三场景特征点的特征数量控制得分FNCS小于预置FNCS门限时,在所述数据库中删除所述第三场景特征点信息。
在本实施例中,对数据库构建完成后,场景特征点的管理进行了说明,可以删除FNCS值低的场景特征点,便于数据库的管理。
基于第六方面的第十三种可实现方式,在第六方面的第十四种可实现的方式中,所述特征数量控制得分FNCS的计算公式为:
Figure PCTCN2019082981-appb-000032
所述
Figure PCTCN2019082981-appb-000033
为所述场景特征点在定位时被使用的概率,所述
Figure PCTCN2019082981-appb-000034
为所述场景特征点的描述子数量占所述场景特征点所属图像中描述子总数的比例。
在本实施例中,对特征数量控制得分FNCS的计算公式进行了说明,增加了方案的可实施性。
本申请的第七方面提供了一种网络设备,所述网络设备属于视觉定位系统,所述网络设备包括:存储器、收发器、处理器以及总线系统;
所述收发器,用于获取实时图像;
其中,所述存储器用于存储程序;
所述处理器用于执行所述存储器中的程序,包括如下步骤:
根据所述实时图像确定至少一个第一描述子信息,所述第一描述子信息中包括拍摄所述实时图像时的目标自然条件信息;
将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息,所述数据库中预置的描述子信息由所述网络设备确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及所述目标图像集合中每张图像对应的自然条件信息得到场景特征点信息集合,从所述场景特征点信息集合中选择满足预置生命值要求的第一场景特征点对应的第一场景特征点信息,再根据所述第一场景特征点信息中与目标自然条件信息对应的第二描述子信息构建所述数据库后得到,所述第二描述子信息与所述数据库中预置的描述子信息不匹配,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息;
利用所述相同的描述子信息对所述实时图像进行定位;
所述总线系统用于连接所述存储器以及所述处理器,以使所述存储器以及所述处理器进行通信。
基于第在本实施例中,对数据库构建完成后的定位过程进行说明,增加了方案的实用性。
基于第七方面,在第七方面的第一种可实现方式中,所述处理器具体用于:
确定所述相同描述子信息在数据库中对应的第一场景特征点信息;
根据所述第一场景特征点信息和定位计算公式计算得到拍摄所述实时图像时目标移动设备的位置。
在本实施例中,对实时图像的定位的具体方式进行了说明,增加了方案的可实施性。
基于第七方面的第一种可实现方式,在第七方面的第二种可实现方式中,所述定位计算公式为:拍摄所述实时图像时所述目标移动设备的位置
Figure PCTCN2019082981-appb-000035
其中所述
Figure PCTCN2019082981-appb-000036
为实时图像中所述第一场景特征点的像素坐标,π C为相机的内参矩阵,所述π C用于将3D坐标转换为像素坐标,所述
Figure PCTCN2019082981-appb-000037
为数据库中场景特征点
Figure PCTCN2019082981-appb-000038
所属图像相对于世界坐标系的位姿,所述
Figure PCTCN2019082981-appb-000039
为所述数据库中所述第一场景特征点的像素坐标,所述i的取值为1至n,所述n为正整数,所述第一场景特征点与所述第一场景特征点信息对应。
在本实施例中,对定位公式的具体算法进行了说明,增加了方案的完整性。
基于第七方面及其第七方面的第一种至第二种可实现方式,在第七方面的得第三种可实现方式中,所述处理器还用于:
将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的描述子信息;
根据所述不同的描述子信息构建所述数据库。
在本实施例中,定位过程中可以根据不同描述子信息更显数据库,使得数据库更加完善,从而使得定位更加准确。
基于第七方面的第三种可实现方式,在第七方面的第四种可实现方式中,当所述数据库中不存在所述不同描述子信息所属的第二场景特征点信息时,所述处理器具体用于:
在所述数据库中增加包含所述不同描述子信息的所述第二场景特征点信息。
在本实施例中,对根据不同描述子信息构建数据库的一种情况进行了说明,增加了方案的实用性。
基于第七方面的第三种可实现方式,在第七方面的第五种可实现方式中,当所述数据库中存在所述不同描述子信息所属的第二场景特征点信息时,所述处理器具体用于:
在所述数据库的第二场景特征点信息中增加所述不同描述子信息。
在本实施例中,对根据不同描述子信息构建数据库的另一种情况进行了说明,增加了方案的完整性。
基于第七方面的第三种可实现方式,在第七方面的第六种可实现方式中,所述处理器还用于:
确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
在本实施例中,对数据库中是否存在第二场景特征点的判断方式进行了说明,增加了方案的可实施性。
本申请的第八方面提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
本申请的第九方面提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
附图说明
图1为本申请图像数据库中数据结构和数据类型的关系示意图;
图2为本申请实施例应用于车端时的示意图;
图3为本申请视觉定位系统的一种结构示意图;
图4为本申请视觉定位系统的另一种结构示意图;
图5为本申请数据库构建方法的一种实施例示意图;
图6为本申请选择目标图像集合的实施例示意图;
图7为本申请选择第二场景特征点信息的实施例示意图;
图8为本申请场景特征点生命值与场景特征点被观察次数的关系示意图;
图9为本申请数据库构建方法的另一种实施例示意图;
图10为本申请判别场景特征点在数据库中是否存在的一种实施例示意图;
图11为本申请定位方法的一种实施例示意图;
图12(a)为本申请图像中场景特征点信息与数据库中预置场景特征点信息不匹配的一种情况;
图12(b)为本申请图像中场景特征点信息与数据库中预置场景特征点信息不匹配的另一种情况;
图13为本申请定位方法的另一种实施例示意图;
图14为本申请网络设备的一种结构示意图;
图15为本申请网络设备的另一种结构示意图;
图16为本申请网络设备的另一种结构示意图。
具体实施方式
在本申请实施例中,如图1所示,数据库中储存有场景关键帧信息、场景特征点信息和描述子信息,三者之间具有关联关系。场景关键帧信息包括图像、位置和姿态,一个场景关键帧中具有至少一个场景特征点,场景特征点信息中包括场景特征点所属场景关键帧的ID信息,像素坐标、3D坐标以及描述子信息,一个场景特征点中具有至少一个描述子,描述子信息一部分是传统视觉领域中的场景特征点描述子ξ,另一部分是采集该场景特征点时场景的自然条件属性E。当自然条件属性E发生变化时,描述子ξ也会发生变化,例如同 一个图像分别在阴天和晴天被拍摄时,所拍摄图像的描述子不同。像素坐标、3D坐标以及场景特征点所属场景关键帧的ID信息是场景特征点的静态属性,不会因为外界环境改变而发生变化。
在本申请实施例中,不同的自然条件下描述子信息不同,不同的自然条件是指不同视角方向、不同天气和/或不同光照条件,不同的自然条件还可能为其他情况,具体此处不作限定。
本申请实施例主要应用于视觉定位系统,视觉定位的原理是将所拍摄到的图像的场景特征点与数据库中的场景特征点进行比对,若所拍摄到的图像的场景特征点与数据库中对应场景特征点比对一致,则认为是同一场景特征点,再利用数据库中比对一致的场景特征点的3D坐标进行定位。本申请可应用于无人机、V2X车端以及手机等移动设备移动过程中的定位。
如图2所示,以车辆在运行过程中的实时定位为例:首先确定车辆A在行驶过程中的实时图像信息以及通过GPS等非视觉定位方式定位得到的定位信息。随后车辆A将实时图像发送至服务器,同时车辆A可以将定位信息发送至服务器,服务器接收后确定出该位置的自然条件信息,或者车辆A根据定位信息确定得到自然条件信息后将自然条件信息发送至服务器,随后服务器在实时图像中找到关于该自然条件信息的多个描述子信息,再将确定得到的多个描述子与数据库中预置储存的描述子进行比对,数据库属于该服务器,用于储存场景关键帧及其场景特征点信息。
当数据库中的描述子与实时图像中的描述子比对相同时,找到数据库中比对成功的描述子,描述子比对成功,证明该描述子所属场景特征点为同一个场景特征点,找到相同场景特征点的3D坐标,车端可以利用相同场景特征点的3D坐标进行定位。实时图像中的描述子可能与数据库中的描述子完全相同,此时直接利用完全相同的描述子所属场景特征点的3D坐标进行定位,若实时图像中只存在部分描述子能在数据库中找到相对应的描述子,先用部分相同描述子所属场景特征点的3D坐标进行定位,定位完成后得到不相同部分描述子信息,再将不相同部分描述子信息更新到数据库中,以便实现数据库的优化,从而在后续进行定位时利用优化后的数据库定位更加准确。
在本应用场景实现定位之前,还存在数据库的构建过程,数据库的构建过程是以大量图像为基础,按照生命值算法选择图像中的场景特征点,得到大量的场景特征点信息,随后对比数据库中已经存在的场景特征点信息,对数据库进行更新优化,将数据库中不存在的场景特征点信息更新进入数据库。其中生命值算法能准确的将具有代表性的场景特征点筛选出来,使得数据库用于视觉定位更加准确。
图3所示为视觉定位系统的一种可能的结构,其中,定位器用于获取移动设备的定位信息,可选的,还可以获取姿态信息;图像获取器用于对移动设备进行图像捕捉;移动设备用于接收图像获取器发送的图像和定位器发送的定位信息后,再发送至网络设备,网络设备也可以不通过移动设备直接获取图像以及定位信息,即网络设备直接与图像获取器以及定位器连接,具体此处不作限定。网络设备用于接收图像后进行场景特征点比对从而实现定位,同时还可以对自身的数据库进行更新与管理。
可选的,一种可能的情况是上述所述移动设备发送定位信息至网络设备,另一种可能的情况是移动设备根据定位信息确定自然条件信息后,将自然条件信息发送至网络设备,而不作定位信息的发送,具体此处不作限定。
基于图3所示的视觉定位系统,一种可能的实体结构如图4所示:
定位器具体可以为:全球定位系统、相机、激光雷达、毫米波雷达和惯性测量单元,IMU可以获取定位信息,还可以获取移动设备的姿态。定位器可以是移动设备的组成部分,也可以是与移动设备相连接的外部设备,具体此处不作限定。
移动设备具体可以为:车辆、手机和无人机等。
图像获取器具体可以为:相机。图像获取器可以是移动设备的组成部分,也可以是与移动设备相连接的外部设备,具体此处不作限定。
网络设备具体可以为云端服务器,也可以为具有数据处理能力的移动设备,具体此处不作限定。网络设备的数据库中预置有如图1所示的用于视觉定位的数据模型,该数据模型介绍了场景关键帧、场景特征点以及描述子的关系。
基于上述网络设备与移动设备的结构,本申请实施例提出了一种数据库构建方法以及一种定位方法,本申请包括两部分,一部分是网络设备侧数据库的构建过程,其目的是通过合理的管理数据库,使得数据库更好的用于视觉定位。另一部分是数据库构建完成好进行视觉定位的过程。下面将分别对这两部分进行介绍,其中数据库的构建过程如图5所示:
501、确定满足预置图像重合度要求的目标图像集合。
网络设备获取数据信息,在本实施例中,数据信息可以是图像信息、位置信息,也可以是姿态,还可以是自然条件信息,具体此处不作限定。
其中,网络设备获取移动设备在行驶过程中的图像信息的方法为:可以通过在移动设备上安装摄像机从而与获取摄像头拍摄的图像,移动设备也可以具备图像获取功能,则网络设备获取移动设备拍摄的图像。移动设备在运行过程中,每隔一定时长进行一次图像拍摄,获取的图像主要是移动设备移动过程中周围环境的图像信息,选取的时长由人为设定,可以为0.01S或0.001S等,具体此处不作限定。
图像信息中包括至少一幅图像,每幅图像拍摄时移动设备的姿态和实时位置信息都是不一样的,姿态表示移动设备的行驶角度和方向,移动设备的实时位置信息可以通过全球定位系统GPS、激光雷达、毫米波雷达和/或惯性测量单元IMU获取。
在本实施例中,每幅图像及拍摄该图像时移动设备所在位置及移动设备的姿态是一一对应的。如下表1所示:
表1
Figure PCTCN2019082981-appb-000040
Figure PCTCN2019082981-appb-000041
得到图像信息后,根据数据信息选择满足预置重合度要求的目标图像集合。可选的,确定目标图像集合的过程可以是:移动设备可以先将获取到的图像按照预置重合度要求进行筛选,再将筛选结果发送至网络设备,目标图像的筛选过程也可以是网络设备执行,即网络设备获取图像后进行筛选得到目标图像集合,具体此处不作限定。
需要说明的是,移动设备在直行和弯行时,确定目标图像集合的依据是不一样的,汽车在直行时,需要按照一定的距离间隔确定满足要求的目标图像集合,汽车在弯行时,需要按照一定的角度间隔确定满足要求的目标图像集合,具体步骤如下图6所示:
A、按距离间隔(或角度间隔)d k选择图像。
事先定义一个距离间隔或角度间隔,按照该间隔确定预计选择的图像,例如直行道路上,汽车每行驶1m获取一张图像。或者在弯行道路上,汽车行驶角度每变化5度获取一张图像。
B、确定当前距离间隔(或角度间隔)d k下图像之间的重合度α。
图像选择完成后,计算所选择图像中相邻两图像之间的重合度,计算公式为:
图像之间的重合度α=当前图像与相邻图像相同场景特征点数量的n old/当前图像与相邻图像不同场景特征点的数量n new。当前图像中的场景特征点数量n total,且n total=n old+n new
C、判断所选择图像的重合度与预置重合度门限的差值是否在预置精度范围内。
计算公式为:|α *-α|<Δ α
其中,α *为预置重合度门限,α *一般取1,Δ α为预置精度值,Δ α的取值范围为0.1至0.2,α *和Δ α也可以取其他的值,具体此处不作限定。预置精度范围为0-至Δ α
D、若否,重新计算距离间隔(或角度间隔)。
若所选择图像的重合度与预置重合度门限的差值不在预置精度范围内,则重新定义选择图像的距离间隔。首先确定还需要增加了距离间隔(或角度间隔)Δd k
当移动设备直行时,Δd k=d k*-α);
其中,d k即为上一次选择图像的距离间隔,d k以及α *与α已在上述步骤中得出。
得到新的进行场景关键帧选取的距离间隔d k+k=d k+Δd k
同时将d k+1再次确定为获取场景关键帧图像的距离间隔,返回到步骤A重新执行上述流程,直至得到一个距离间隔d k+n,按照此距离间隔选取图像时重合度满足预置条件。
E、若是,确定选择的所述图像属于目标图像集合。
若所选择图像的重合度与预置重合度门限的差值在预置精度范围内,则按照距离间隔d k选择的图像即为目标图像,按照d k选择多张目标图像得到目标图像集合。
当移动设备弯行时,先确定出满足重合度要求的角度间隔,再筛选图像,具体过程与按照距离间隔筛选图像的方式类似,具体此处不再赘述。
502、根据目标图像集合和每张图像对应的自然条件信息得到场景特征点信息集合。
网络设备对目标图像集合进行处理得到场景特征点,场景特征点可以认为是目标图像中与其他像素点灰度值差别较大的像素点,再根据拍摄每张目标图像时移动设备的位置确定该位置的自然条件信息,建立场景特征点与自然条件信息的对应关系从而得到场景特征点信息。可以理解的是,场景特征点信息中除包含自然条件信息外,还包括该场景特征点的3D坐标、像素坐标和描述子信息。
在本实施例中,可能有多张图像都包括同一场景特征点,因此场景特征点与自然条件信息的对应关系可以为一一对应,也可以为一个场景特征点对应多种自然条件信息,描述子信息随自然条件信息的变化而变化,因此一个场景特征点信息中可能包括多个描述子信息。
例如目标图像集合中存在目标图像1和目标图像2,目标图像1在晴天且光照强度为400lx时拍摄,目标图像2在阴天,且光照强度为300lx时拍摄,目标图像1中有场景特征点1和场景特征点2,目标图像2中有场景特征点2和场景特征点3,对目标图像集合进行解析得到场景特征点1、场景特征点2和场景特征点3,场景特征点中有1个描述子,与目标图像1的自然条件信息对应,场景特征点2中有两个描述子,分别与目标图像1的自然条件信息和目标图像2的自然条件信息对应,场景特征点3中有1个描述子,与目标图像2的自然条件信息对应。
得到场景特征点集合后,选择出具有代表性的场景特征点,可以理解的是,静止物体用于定位是定位更加准确,因此代表场景特征点可以为与道路上的静止的标志牌、路标和建筑物等物体相关的场景特征点。具体的,选取代表场景特征点的方式为依据场景特征点的生命值进行选取,生命值大小可以表示场景特征点为静态场景特征点的概率,生命值越大,场景特征点为静态场景特征点的概率越大。首先从单个车端的角度计算场景特征点的生命值,进行场景特征点一次筛选,其次,由于一个场景特征点一般会被多个移动设备观察到,需要将一次筛选得到的场景特征点从多个移动设备的角度计算生命值,进行二次筛选,多个移动设备指至少两个移动设备。具体如下实施例步骤503和504所示:
503、确定场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息。
请参照图7,场景特征点的筛选过程包括:
A、根据预置模型关系计算场景特征点在单个车端被观察到的第一生命值。
首先进行模型训练得到如图8所示场景特征点被观察到的次数与生命值之间的关系。 可见,场景特征点被观察到的次数n与其生命值f之间呈现正态曲线,从单个移动设备的角度,当某一场景特征点只在数量比较少的几帧中出现,极有可能是噪声,需要丢弃掉;当其在数量比较多的几帧中出现,则它极有可能为与本实施例中移动设备同步运动的另一移动设备的图像,也需要舍弃。
根据图8确定场景特征点被观察到的次数的平均值n 0和方差σ。随后分别计算场景特征点集合中每个场景特征点的第一生命值。计算公式为:
Figure PCTCN2019082981-appb-000042
其中,n表示场景特征点集合中某个场景特征点在单个车端中被观察到的次数。
B、判断第一生命值是否大于第一预置门限值。
当某个场景特征点的第一生命值大于第一预置门限时,该场景特征点即为第一场景特征点,再从多个移动设备的角度判断经过一次筛选后的第一场景特征点是否满足生命值要求。
若某个场景特征点的第一生命值小于或等于第一预置门限时,表示该场景特征点的第一生命值过低,丢弃该场景特征点。
504、确定第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息。
C、计算场景特征点在多个移动设备被观察到时的第二生命值。
确定被多个移动设备都获取到的第一场景特征点,可以根据场景特征点的3D坐标或像素坐标确定,也可以通过其他方式确定多个移动设备获取到的场景特征点是否为同一场景特征点,具体此处不作限定。例如多个移动设备获取的场景特征点中,3D坐标相同或3D坐标差值在预置差值范围内的场景特征点属于同一场景特征点。
计算被多个移动设备观测到的第一场景特征点的第二生命值,计算公式为:
Figure PCTCN2019082981-appb-000043
f为场景特征点在单个移动设备的生命值,B为每个移动设备对应的权重系数。某一场景特征点对多个移动设备而言,每个移动设备对应的权重系数一般是不同的。权重系数β i的计算公式为:β i=γ tgc,γ t为场景特征点在多个移动设备被观测到的时间连续性指标,γ g为场景特征点在多个移动设备被观测到的几何连续性指标,γ c为场景特征点在多个移动设备被观测到的描述一致性指标。
其中,
Figure PCTCN2019082981-appb-000044
Δ t为不同移动设备观测到同一场景特征点的时间间隔,Δ 1和Δ 2为预置值,可见,不同移动设备观测到同一场景特征点的时间间隔与其γ t呈负相关。
γ g和γ c的计算过程与γ t类似,具体此处不再赘述。需要说明的是,在计算几何连续性指标γ g时,Δ定义为不同移动设备观测到同一场景特征点之间的欧式距离。在计算描述一致 性γ c时,Δ定义为不同移动设备观测到同一场景特征点之间的描述距离。
D、判断第二生命值是否大于第二预置门限。
确定第二生命值大于或等于第二预置门限的第一场景特征点为第二场景特征点,该第二场景特征点为具有代表性的成熟场景特征点,准备将第二场景特征点的信息加入数据库。
若第一场景特征点的生命值小于第二预置生命值门限,舍弃该第一场景特征点。
505、当第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据第二场景特征点信息构建数据库。
确定待加入数据库的第二场景特征点信息后,还需要判断数据库中是否已经存在该第二场景特征点信息,以避免数据库中场景特征点信息的重复加入。
将第二场景特征点信息与数据库中预置的场景特征点信息进行对比,若数据库中预置的场景特征点信息与第二场景特征点信息不匹配,则根据第二场景特征点信息构建数据库。
在本实施例中,按照上述方式筛选得到第二场景特征点信息后,当数据库中不存在与某种自然条件信息相关的第二场景特征点信息时,根据该第二场景特征点信息构建该数据库,使得构建完成后的数据库用于定位时,定位更加准确。
在本实施例中,一个场景特征点信息包括场景特征点的3D坐标、像素坐标、与自然条件相关的描述子信息和场景特征点所属关键帧ID,其中3D坐标、像素坐标和场景特征点所属关键帧ID表示场景特征点的静态指标,一般是固定不变的,而描述子信息属于动态指标,会随着自然条件变化而发生变化,因此数据库中预置的场景特征点信息与第二场景特征点信息不匹配可能的情况是,数据库中不存在第二场景特征点信息,或数据库中存在第二场景特征点信息,但是数据库的第二场景特征点信息与图像中确定的第二场景特征点信息所包含的描述子信息不同,请参照图9,下面将进行说明。
901、确定满足预置图像重合度要求的目标图像集合。
902、根据目标图像集合和每张图像对应的自然条件信息得到场景特征点信息集合。
903、确定场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息。
904、确定第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息。
905、当数据库中不存在第二场景特征点信息时,在数据库中增加第二场景特征点信息。
判断第二场景特征点在数据库中是否存在,若不存在,则对应的第二场景特征点信息在数据库中也不存在,将第二场景特征点的信息加入数据库中。
请参照图10,判断第二场景特征点在数据库中是否存在具体包括如下步骤:
A、确定与第二场景特征点信息对应的第二场景特征点的3D坐标。
首先确定与第二场景特征点信息对应的第二场景特征点的3D坐标,由于场景特征点可能被多个移动设备观察到,首先获取从多个移动设备观察某一场景特征点时,多个移动设备所观察到的该场景特征点的3D坐标,随后计算多个3D坐标的均值
Figure PCTCN2019082981-appb-000045
和标准差σ,然后将每个车端测量得到的场景特征点的3D坐标与均值
Figure PCTCN2019082981-appb-000046
进行比较,当两者之间的欧式距离大于3σ时,说明该车端测量的3D坐标误差较大,删除此3D坐标。之后使用剩余的3D坐标重新 计算场景特征点3D坐标的均值
Figure PCTCN2019082981-appb-000047
和标准差σ,并判断各3D坐标与均值
Figure PCTCN2019082981-appb-000048
的欧式距离是否小于3σ。如此重复,直至所有剩余的3D坐标与均值
Figure PCTCN2019082981-appb-000049
之间的欧式距离都小于3σ,输出这时的坐标均值
Figure PCTCN2019082981-appb-000050
作为该场景特征点的3D坐标。
例如车端个数为N个,N个车端观察到的同一场景特征点的3D坐标分别为3D1、3D2、3D3和3Dn。
首先计算得到
Figure PCTCN2019082981-appb-000051
判断
Figure PCTCN2019082981-appb-000052
Figure PCTCN2019082981-appb-000053
Figure PCTCN2019082981-appb-000054
若是,则
Figure PCTCN2019082981-appb-000055
即为该场景特征点的3D坐标,若其中至少一个3D坐标(例如3D1)到
Figure PCTCN2019082981-appb-000056
之间的欧式距离大于3σ,则删除3D1,再用3D2、3D3至3Dn取平均得到
Figure PCTCN2019082981-appb-000057
后重复执行上述步骤。需要说明的是,3σ是固定值,由系统预置,3σ的值具体此处不作限定。
B、根据第二场景特征点的3D坐标判断第二场景特征点在数据库中是否存在。
计算得到场景特征点的3D坐标后,将其与图像数据库中任意一个场景特征点的3D坐标进行比较,当两者之间的欧氏距离小于σ d时,判断其与数据库中的场景特征点属于同一个场景特征点,当数据库中预置的每个场景特征点的3D坐标与该场景特征点的3D坐标进行比较,欧式距离均大于第一预置门限值时,判断该场景特征点为新的场景特征点,并把新的场景特征点信息(即第二场景特征点信息)加入数据库。σ d的具体取值此处不作限定。同时,在本实施例中,新场景特征点的个数也不做限定。
在数据库中增加的新场景特征点信息中包括:新场景特征点的像素坐标、3D坐标、所属关键帧ID和目标描述子信息,新场景特征点信息中还可以包括除目标描述子信息以外的其他描述子信息,具体此处不作限定。
请参照图12(a)所示,例如相对于图像,数据库中不存在场景特征点4,进而也不存在场景特征点4的描述子10,描述子10即为本实施例的不同的描述子。此时,在数据库中增加包含不同描述子的第二场景特征点的信息,即场景特征点4的信息。
或,
906、当数据库中存在第二场景特征点信息,且第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息,在数据库预置的第二场景特征点信息中增加目标描述子信息。
按照上述实施例步骤905中A和B判断第二场景特征点在数据库中是否存在后,若数据库中存在第二场景特征点信息,则判断第二场景特征点中是否包含于目标自然条件信息相关的目标描述子信息。具体为:
C、确定数据库中预置的第二场景特征点的至少一个描述子信息。
当第二场景特征点在数据库中存在时,由于第二场景特征点在不同的自然条件信息下有不同的描述子,数据库预置的第二场景特征点信息中包含至少一个描述子信息。
D、判断所述至少一个描述子信息中,是否存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限。
判断数据库的至少一个描述子信息中,是否存在一个描述子信息所对应的描述子,与图像中确定的第二场景特征点的目标描述子信息对应的描述子的距离小于预置距离门限。
在本实施例中,目标描述子信息为关于目标自然条件信息的描述子信息。一种可能的情况是,目标描述子信息对应的描述子为所属场景特征点的所有描述子中与其他描述子距离之和最小的描述子,另一种可能的情况是目标描述子信息对应的描述子为所属场景特征点的所有描述子中任意一个描述子。
当目标描述子与数据库中至少一个描述子之间的距离都大于预置距离门限时,判断目标描述子为新的描述子,当目标描述子与数据库中某一描述子距离小于或等于预置距离门限时,则判断为同一描述子。
若目标描述子为新描述子,将目标描述子的信息储存于数据库中。若为同一描述子,不做任何更新处理。在本实施例中,新描述子的个数此处不做限定。
请参照图12(b)所示,例如数据库中包括与目标图像相同的场景特征点1、场景特征点2和场景特征点3,但是数据库的场景特征点3中不包含图像的场景特征点3中的目标描述子,即描述子9。此时,在数据库中增加不同的描述子9对应的描述数子信息。
在本实施例中,对图像中场景特征点与数据库中场景特征点匹配的两种情况进行了说明,可以在更新完善数据库信息的同时,只更新与数据库中场景特征点信息不同的部分,避免了数据库中数据量的大量增加,便于数据库管理。
进一步的,在场景特征点或描述子更新完成后,为了优化数据库,使得数据库更易于管理,还可以通过筛选条件删除一部分场景特征点,只保留满足要求的场景特征点,场景特征点的保留遵循两个原则:
a、尽可能的保留在定位时经常被用到的场景特征点。用指标MNI表示,MNI的计算公式为:
Figure PCTCN2019082981-appb-000058
M为在场景特征点所在位置进行定位的总次数,m i表示在进行定位时该场景特征点被使用的次数。
b、尽可能保留动态描述子信息少的场景特征点。用指标FNI表示,DNI的计算公式为:
Figure PCTCN2019082981-appb-000059
Figure PCTCN2019082981-appb-000060
为某个场景特征点中的描述子数量占该场景特征点所属场景关键帧中描述子总数的比例。
综合上述两个标准,得到所述特征数量控制得分FNCS的计算公式为:
Figure PCTCN2019082981-appb-000061
场景特征点的FNCS值越大,表示该场景特征点经常被用到,而且具有较少的描述子,说明它对自然条件变化比较鲁棒,因此进行场景特征点管理时需要删除FNCS得分低的场景特征点。
上面对数据库的构建过程进行了说明,请参照下图11,下面将对数据库在定位时被使 用的情况及其定位时对数据库的同步更新情况进行说明。
1101、获取实时图像。
目标移动设备在行驶过程中,通过采用在目标移动设备上安装摄像头,或者道路上每隔一定的距离间隔安装一个摄像头,实时获取目标移动设备行驶过程中的图像信息。可以理解的是,获取的实时图像为车辆行驶过程中周围道路、环境的图片。
摄像机拍摄实时图像后可以直接发送至网络设备,也可以经由目标移动设备发送至网络设备,具体此处不作限定。同时目标移动设备本身也可以具有图像获取功能。
1102、根据实时图像确定至少一个第一描述子信息。
网络设备对实时图像进行处理得到至少一个第一描述子信息,该第一描述子信息中包括拍摄实时图像时的目标自然条件信息,目标自然条件信息可以由网络设备确定,也可以由移动设备确定后再发送至网络设备,具体此处不作限定。
其中,拍摄实时图像时的目标自然条件信息是根据移动设备的实时定位信息确定的,而移动设备的实时定位信息可以通过全球定位系统GPS、激光雷达和/或毫米波雷达,也可以通过和惯性测量单元IMU获取,具体此处不作限定。获取实时定位信息后,确定该位置的自然条件信息即为目标自然条件信息。
1103、将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息。
同一场景在不同视角方向、不同天气、不同光照条件下的成像是有差别的。比如天气晴朗时道路上某一标志牌角点周围的像素与天气阴暗时该角点周围的像素差别很明显;再比如该标志牌正面角点周围的像素和背面角点周围的像素差别也很明显;这样同一位置的标志牌角点在不同天气下、不同光照下、不同视角下的描述子差异很大。因此,可以理解的是,在目标自然条件信息对应的自然条件状况下拍摄的实时图像,其自然条件状况是唯一的,因此实时图像中的一个场景特征点信息只包含一种描述子信息,但是一张实时图像具有多个场景特征点,因此实时图像中存在至少一个描述子信息中包括目标自然条件信息,一种可能的情况是,实时图像中存在N个第一描述子信息,N个第一描述子信息中有M个第一描述子信息中包括目标自然条件信息,N和M均为正整数且M小于等于N。
将至少一个第一描述子信息中的每个第一描述子信息分别与数据库中预置的描述子信息进行比对,确定相同的描述子信息。例如实时图像中包括描述子1、描述子2…描述子N,分别于数据库中的描述子进行比对,发现描述子1、描述子5…描述子N-1和描述子N在数据库中存在相同的描述子。
判断描述子与数据库中描述子是否相同的方式与数据库构建时判断描述子是否相同的方式类似,即根据描述子的距离判断,具体此处不再赘述。
在本实施例中,数据库中预置的描述子信息由按照实施例步骤501至步骤505构建数据库后得到。具体此处不再赘述。
1104、利用相同的描述子信息对所述实时图像进行定位。
首先确定得到相同的描述子,在数据库中查找相同描述子所属场景特征点,找到这些场景特征点的3D坐标,再利用这些场景特征点的3D坐标进行定位。
例如图12(a)中,实时图像与数据库对比发现,描述子1和描述子4都能在数据库中找到对应描述子,则确定描述子1所属场景特征点1与数据库中描述子1所属场景特征点1为相同的场景特征点,同时描述子4所属场景特征点4与数据库中描述子4所属场景特征点4为相同的场景特征点,找到相同场景特征点的3D坐标:3D坐标1和3D坐标2。再利用3D坐标1和3D坐标2进行定位计算。
网络设备利用相同的场景特征点信息进行定位计算。确定数据库中相同场景特征点后,按照预置算法得到移动设备位姿,在本实施例中,定位的计算公式为:
Figure PCTCN2019082981-appb-000062
Figure PCTCN2019082981-appb-000063
为需要求解的移动设备位姿,
Figure PCTCN2019082981-appb-000064
为实时图像中场景特征点的像素坐标,该像素坐标为相对于实时图像的像素坐标,
Figure PCTCN2019082981-appb-000065
为地图数据库中与
Figure PCTCN2019082981-appb-000066
相匹配的场景特征点的像素坐标,该像素坐标为相对于数据库的像素坐标。i从1到n表示,实时图像中共有n个场景特征点与数据库中的场景特征点相匹配。π C为相机的内参矩阵,可以把3D坐标转换为像素坐标。
Figure PCTCN2019082981-appb-000067
通过(π C) -1转换得到相对于汽车的3D坐标,然后通过
Figure PCTCN2019082981-appb-000068
转换得到相对于世界坐标系的3D坐标,接着再经过
Figure PCTCN2019082981-appb-000069
转换得到相对于数据库的像素坐标,
Figure PCTCN2019082981-appb-000070
为数据库中场景特征点
Figure PCTCN2019082981-appb-000071
所属关键帧相对于世界坐标系的位姿。转换得到的像素坐标与数据库中相匹配的场景特征点的像素坐标
Figure PCTCN2019082981-appb-000072
一致,把两者相减即得到了重投影误差模型。最后通过最优化方法使重投影误差模型的值最小,就可以得到汽车的实时位姿
Figure PCTCN2019082981-appb-000073
移动设备根据计算结果进行定位。网络设备计算得到定位结果后将定位结果返回移动设备,以便移动设备执行定位操作。
需要说明的是,定位计算也可以由移动设备执行,网络设备确定数据库中相同场景特征点后,将相同场景特征点的信息发送至移动设备,移动设备按照预置算法得到位姿信息并执行定位操作。
在本实施例中,网络设备发送至移动设备的相同场景特征点信息具体包括:场景特征点的像素坐标、场景特征点所属关键帧位姿,还可以包括场景特征点的3D坐标,具体此处不作限定。
在本实施中,对数据库构建后用于定位的具体过程进行了说明,通过按照图5所示构建得到数据库后,由于该数据库中包含更多不同自然条件的描述子信息,因此将该数据库用于定位时,实时图像可以与数据库匹配到更多相同的描述子信息,从而使得定位更加准确。
需要说明的是,在定位过程中,还可以根据不同的描述子信息更新数据库,以便数据库储存信息更加完善。
根据实时图像确定至少一个第一描述子信息之后,还包括将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的描述子信息,并根据不同的描述子信息构建所述数据库。根据不同的描述子信息构建数据库具体包括两种情况:
一、数据库中不存在不同描述子信息所属的第二场景特征点信息时。
网络设备判断自身数据库中是否存在第二场景特征点,判断的方式与数据库构建时判断场景特征点是否相同的方式类似,即根据3D坐标进行判断,具体此处不再赘述。若数据库中不存在第二场景特征点,显然数据库中也不存在第二场景特征点信息。
请参照图12(a)所示,例如相对于实时图像,数据库中不存在场景特征点4,进而也不存在场景特征点4的描述子10,描述子10即为本实施例的不同的描述子。此时,在数据库中增加包含不同描述子的第二场景特征点的信息,即场景特征点4的信息。
二、数据库中存在不同描述子所属的第二场景特征点信息,但是数据库的第二场景特征点信息不包括目标描述子信息。
当判断得到数据库中存在第二场景特征点信息时,此时第二场景特征点信息中不包括上述确定的不同描述子信息,请参照图12(b)所示,例如数据库中包括与实时图像相同的场景特征点1、场景特征点2和场景特征点3,但是数据库的场景特征点3的信息中不包含实时图像场景特征点3中的目标描述子,即描述子9。此时,在数据库中增加不同的描述子9对应的描述数子信息。
需要说明的是,若需要在数据库中增加目标描述子所属的第二场景特征点,则需要将第二场景特征点的3D坐标同步更新到数据库,由于实时图像信息中只包含描述子信息和像素坐标信息,不包含3D坐标,此时先利用数据库中相同的那部分描述子进行定位后,再在数据库中增加不同场景特征点的3D坐标。确定不同场景特征点的3D坐标的方式是:利用相同描述子进行定位得到实时图像的定位结果后,通过双目相机确定不同场景特征点的3D坐标,也可以通过单目相机与IMU共同确定,不同场景特征点3D坐标的确定方式具体此处不作限定。
在本实施例中,在实时定位过程中,根据不同的描述子信息不断完善更新数据库,有利用数据库更好的用于定位。
在本实施例中,实时定位的过程是移动设备和网络设备进行数据交互的过程,请参照图13,下面将进行说明:
1301、移动设备发送实时图像至网络设备。
移动设备除发送实时图像至网络设备外,还可以发送拍摄实时图像时移动设备的位置信息或移动设备所在位置的自然条件状况至网络设备。
1302、网络设备根据实时图像确定至少一个第一描述子信息。
1303、网络设备将数据库中预置的描述子信息与至少一个第一描述子信息进行对比,确定相同的描述子信息和不同的描述子信息。
1304、网络设备利用相同的描述子信息对实时图像进行定位。
1305、将定位结果发送至移动设备。
移动设备根据网络设备确定的定位计算结果执行定位操作。在本实施例中,定位计算操作也可以由网络设备执行,具体此处不作限定。
1306、网络设备根据不同的描述子信息构建数据库。
在本实施例中,实施例步骤1301至1306与上述图11所示实施例步骤类似,具体此处不再赘述。
上面从定位方法和数据库构建方法的角度对本申请实施例进行了叙述,下面对本申请实施例中网络设备的结构进行说明。
基于上述数据库构建方法,网络设备的一种可能的结构如图14所示,包括:
确定单元1401,用于确定满足预置图像重合度要求的目标图像集合,所述目标图像集合中包括至少一张图像,且每张图像对应一种自然条件信息;
处理单元1402,用于根据所述目标图像集合和所述每张图像对应的自然条件信息得到场景特征点信息集合,所述场景特征点集合中包括至少一个场景特征点信息;
所述确定单元1401,还用于确定所述场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息,所述生命值的大小用于表示所述场景特征点为静态场景特征点的概率;
所述确定单元1401,还用于确定所述第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息;
数据库构建单元1403,用于当所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据所述第二场景特征点信息构建所述数据库。
可选的,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
所述数据库中不存在所述第二场景特征点信息;
所述数据库构建单元1403,具体用于在数据库中增加所述第二场景特征点信息,所述第二场景特征点信息中包括关于目标自然条件信息的目标描述子信息。
可选的,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
所述数据库中存在所述第二场景特征点信息,且所述第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息;
所述数据库构建单元1403具体用于在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息。
可选的,所述确定单元1401,还用于与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
可选的,所述确定单元1401,还用于确定数据库中预置的所述第二场景特征点的至少一个描述子信息;
所述网络设备还包括:
判断单元1404,用于判断所述至少一个描述子信息中,是否存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限;
所述确定单元1401,还用于若所述至少一个描述子信息中,不存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限,确定所述数据库预置的第二场景特征点信息中不包括目标描述子信息。
可选的,在第四方面的第五种可实现的方式中,所述场景特征点在单个移动设备的生命值为f,所述f的计算公式为:
Figure PCTCN2019082981-appb-000074
其中,所述n表示所述场景特征点在单个移动设备中被观察到的次数,所述n 0为预置的场景特征点被观察到的次数的平均值,所述σ为预置的场景特征点被观察到的次数的方差。
可选的,所述场景特征点在多个移动设备的生命值为F,所述F的计算公式为:
Figure PCTCN2019082981-appb-000075
所述f为所述场景特征点在单个移动设备的生命值,所述B为每个移动设备对应的权重系数,且所述多个移动设备中一个移动设备对应一个权重系数。
可选的,所述β i的计算公式为:β i=γ tgc,所述γ t为所述场景特征点在多个移动设备被观测到的时间连续性指标,所述γ g为所述场景特征点在多个移动设备被观测到的几何连续性指标,所述γ c为所述场景特征点在多个移动设备被观测到的描述一致性指标。
可选的,所述确定单元1401具体用于,按照预置距离间隔选择图像;
当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
可选的,所述预置距离间隔d k+1
所述预置距离间隔的计算公式为:d k+1=d k+d k*-α),其中,所述α *为所述预置重合度门限,所述d k为前一时刻选择图像的距离间隔,所述α为按照距离间隔d k选择图像时图像之间的重合度。
可选的,所述确定单元1401,具体用于按照预置角度间隔选择图像;
当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
可选的,所述预置角度间隔θ k+1
所述预置角度间隔的计算公式为:θ k+1=θ kk*-α),其中,所述α *为所述预置重合度门限,所述θ k为前一时刻选择图像的角度间隔,所述α为按照角度间隔θ k选择图像时图像之间的重合度。
可选的,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息,所述处理单元1402,具体用于1>对一张所述目标图像进行处理得到场景特征点;
2>利用所述场景特征点、所述场景特征点所属目标图像和所述目标图像对应的所述自然条件信息构成所述场景特征点信息;
重复执行步骤1>和2>,直至构成得到所述场景特征点信息集合。
可选的,所述确定单元1401,还用于确定构建完成后所述数据库中的第三场景特征点信息;
所述数据库构建单元1403,还用于当所述第三场景特征点信息对应的第三场景特征点的特征数量控制得分FNCS小于预置FNCS门限时,在所述数据库中删除所述第三场景特征点信息。
可选的,所述特征数量控制得分FNCS的计算公式为:
Figure PCTCN2019082981-appb-000076
所述
Figure PCTCN2019082981-appb-000077
为所述场景特征点在定位时被使用的概率,所述
Figure PCTCN2019082981-appb-000078
为所述场景特征点的描述子数量占所述场景特征点所属图像中描述子总数的比例。
基于上述的定位方法,网络设备的另一种可能的结构如图15所示:
获取单元1501,用于获取实时图像;
确定单元1502,用于根据所述实时图像确定至少一个第一描述子信息,所述第一描述子信息中包括拍摄所述实时图像时的目标自然条件信息;
所述确定单元1502,还用于将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息,所述数据库中预置的描述子信息由所述网络设备确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及所述目标图像集合中每张图像对应的自然条件信息得到场景特征点信息集合,从所述场景特征点信息集合中选择满足预置生命值要求的第一场景特征点对应的第一场景特征点信息,再根据所述第一场景特征点信息中与目标自然条件信息对应的第二描述子信息构建所述数据库后得到,所述第二描述子信息与所述数据库中预置的描述子信息不匹配,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息;
定位单元1503,用于利用所述相同的描述子信息对所述实时图像进行定位。
可选的,所述定位单元1503,具体用于确定所述相同描述子信息在数据库中对应的第一场景特征点信息;
根据所述第一场景特征点信息和定位计算公式计算得到拍摄所述实时图像时目标移动设备的位置。
可选的,所述定位计算公式为:拍摄所述实时图像时所述目标移动设备的位置
Figure PCTCN2019082981-appb-000079
其中所述
Figure PCTCN2019082981-appb-000080
为实时图像中所述第一场景特征点的像素坐标,π C为相机的内参矩阵,所述π C用于将3D坐标转换为像素坐标,所述
Figure PCTCN2019082981-appb-000081
为数据库中场景特征点
Figure PCTCN2019082981-appb-000082
所属图像相对于世界坐标系的位姿,所述
Figure PCTCN2019082981-appb-000083
为所述数据库中所述第一场景特征点的像素坐标,所述i的取值为1至n,所述n为正整数,所述第一场景 特征点与所述第一场景特征点信息对应。
可选的,所述确定单元1502,还用于将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的描述子信息;
所述网络设备还包括数据库构建单元1504;
所述数据库构建单元1504,具体用于根据所述不同的描述子信息构建所述数据库。
可选的,当所述数据库中不存在所述不同描述子信息所属的第二场景特征点信息时,所述数据库构建单元1504,具体用于在所述数据库中增加包含所述不同描述子信息的所述第二场景特征点信息。
可选的,当所述数据库中存在所述不同描述子信息所属的第二场景特征点信息时,所述数据库构建单元1504,具体用于在所述数据库的第二场景特征点信息中增加所述不同描述子信息。
可选的,所述确定单元1502,还用于确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
需要说明的是,上述装置各模块/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其带来的技术效果与本申请方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储有程序,该程序执行包括上述方法实施例中记载的部分或全部步骤。
接下来介绍本申请实施例提供的另一种网络设备,请参阅图16所示,网络设备1600包括:
接收器1601、发射器1602、处理器1603和存储器1604(其中网络设备1600中的处理器1603的数量可以一个或多个,图16中以一个处理器为例)。在本申请的一些实施例中,接收器1601、发射器1602、处理器1603和存储器1604可通过总线或其它方式连接,其中,图16中以通过总线连接为例。
存储器1604可以包括只读存储器和随机存取存储器,并向处理器1603提供指令和数据。存储器1604的一部分还可以包括非易失性随机存取存储器(英文全称:Non-Volatile Random Access Memory,英文缩写:NVRAM)。存储器1604存储有操作系统和操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。操作系统可包括各种系统程序,用于实现各种基础业务以及处理基于硬件的任务。
处理器1603控制网络设备的操作,处理器1603还可以称为中央处理单元(英文全称:Central Processing Unit,英文简称:CPU)。具体的应用中,网络设备的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线 和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。
上述本申请实施例揭示的方法可以应用于处理器1603中,或者由处理器1603实现。处理器1603可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1603中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1603可以是通用处理器、数字信号处理器(英文全称:digital signal processing,英文缩写:DSP)、专用集成电路(英文全称:Application Specific Integrated Circuit,英文缩写:ASIC)、现场可编程门阵列(英文全称:Field-Programmable Gate Array,英文缩写:FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1604,处理器1603读取存储器1604中的信息,结合其硬件完成上述方法的步骤。
接收器1601可用于接收输入的数字或字符信息,以及产生与网络设备的相关设置以及功能控制有关的信号输入,发射器1602可包括显示屏等显示设备,发射器1602可用于通过外接接口输出数字或字符信息。
本申请实施例中,处理器1603,用于执行前述数据库构建方法以及定位方法。
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。

Claims (46)

  1. 一种数据库构建方法,其特征在于,包括:
    确定满足预置图像重合度要求的目标图像集合,所述目标图像集合中包括至少一张图像,且每张图像对应一种自然条件信息;
    根据所述目标图像集合和所述每张图像对应的自然条件信息得到场景特征点信息集合,所述场景特征点集合中包括至少一个场景特征点信息;
    确定所述场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息,所述生命值的大小用于表示所述场景特征点为静态场景特征点的概率;
    确定所述第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息;
    当所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据所述第二场景特征点信息构建所述数据库。
  2. 根据权利要求1所述的方法,其特征在于,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
    所述数据库中不存在所述第二场景特征点信息;
    所述根据所述第二场景特征点信息构建所述数据库包括:
    在数据库中增加所述第二场景特征点信息,所述第二场景特征点信息中包括关于目标自然条件信息的目标描述子信息。
  3. 根据权利要求1所述的方法,其特征在于,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
    所述数据库中存在所述第二场景特征点信息,且所述第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息;
    所述根据所述第二场景特征点信息构建所述数据库包括:
    在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息。
  4. 根据权利要求2或3所述的方法,其特征在于,所述在数据库中增加所述第二场景特征点信息之前,所述方法还包括:
    确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
    当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
    当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
  5. 根据权利要求3所述的方法,其特征在于,所述在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息之前,所述方法还包括:
    确定数据库中预置的所述第二场景特征点的至少一个描述子信息;
    判断所述至少一个描述子信息中,是否存在一个描述子信息对应的描述子与目标描述 子信息对应的描述子的距离小于预置距离门限;
    若否,确定所述数据库预置的第二场景特征点信息中不包括目标描述子信息。
  6. 根据权利要求1至3中任一项所述的方法,其特征在于,所述场景特征点在单个移动设备的生命值为f,所述f的计算公式为:
    Figure PCTCN2019082981-appb-100001
    其中,所述n表示所述场景特征点在单个移动设备中被观察到的次数,所述n 0为预置的场景特征点被观察到的次数的平均值,所述σ为预置的场景特征点被观察到的次数的方差。
  7. 根据权利要求6所述的方法,其特征在于,所述场景特征点在多个移动设备的生命值为F,所述F的计算公式为:
    Figure PCTCN2019082981-appb-100002
    所述f为所述场景特征点在单个移动设备的生命值,所述B为每个移动设备对应的权重系数,且所述多个移动设备中一个移动设备对应一个权重系数。
  8. 根据权利要求7所述的方法,其特征在于,所述β i的计算公式为:β i=γ tgc,所述γ t为所述场景特征点在多个移动设备被观测到的时间连续性指标,所述γ g为所述场景特征点在多个移动设备被观测到的几何连续性指标,所述γ c为所述场景特征点在多个移动设备被观测到的描述一致性指标。
  9. 根据权利要求1至3中任一项所述的方法,其特征在于,所述确定满足预置图像重合度要求的目标图像集合包括:
    按照预置距离间隔选择图像;
    当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
  10. 根据权利要求9所述的方法,其特征在于,所述预置距离间隔d k+1
    所述预置距离间隔的计算公式为:d k+1=d k+d k*-α),其中,所述α *为所述预置重合度门限,所述d k为前一时刻选择图像的距离间隔,所述α为按照距离间隔d k选择图像时图像之间的重合度。
  11. 根据权利要求1至3中任一项所述的方法,其特征在于,所述确定满足预置图像重合度要求的目标图像集合包括:
    按照预置角度间隔选择图像;
    当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
  12. 根据权利要求11所述的方法,其特征在于,所述预置角度间隔θ k+1
    所述预置角度间隔的计算公式为:θ k+1=θ kk*-α),其中,所述2 *为所述预置重合度门限,所述θ k为前一时刻选择图像的角度间隔,所述2为按照角度间隔θ k选择图像时 图像之间的重合度。
  13. 根据权利要求1至3中任一项所述的方法,其特征在于,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息,所述根据所述目标图像集合和所述自然条件信息得到场景特征点信息集合包括:
    1>对一张所述目标图像进行处理得到场景特征点;
    2>利用所述场景特征点、所述场景特征点所属目标图像和所述目标图像对应的所述自然条件信息构成所述场景特征点信息;
    重复执行步骤1>和2>,直至构成得到所述场景特征点信息集合。
  14. 根据权利要求1至3中任一项所述的方法,其特征在于,所述根据所述第二场景特征点信息构建所述数据库之后,所述方法还包括:
    确定构建完成后所述数据库中的第三场景特征点信息;
    当所述第三场景特征点信息对应的第三场景特征点的特征数量控制得分FNCS小于预置FNCS门限时,在所述数据库中删除所述第三场景特征点信息。
  15. 根据权利要求14所述的方法,其特征在于,所述特征数量控制得分FNCS的计算公式为:
    Figure PCTCN2019082981-appb-100003
    所述
    Figure PCTCN2019082981-appb-100004
    为所述场景特征点在定位时被使用的概率,所述
    Figure PCTCN2019082981-appb-100005
    为所述场景特征点的描述子数量占所述场景特征点所属图像中描述子总数的比例。
  16. 一种定位方法,其特征在于,所述方法应用于视觉定位系统,所述方法包括:
    获取实时图像;
    根据所述实时图像确定至少一个第一描述子信息,所述第一描述子信息中包括拍摄所述实时图像时的目标自然条件信息;
    将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息,所述数据库中预置的描述子信息由所述网络设备确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及所述目标图像集合中每张图像对应的自然条件信息得到场景特征点信息集合,从所述场景特征点信息集合中选择满足预置生命值要求的第一场景特征点对应的第一场景特征点信息,再根据所述第一场景特征点信息中与目标自然条件信息对应的第二描述子信息构建所述数据库后得到,所述第二描述子信息与所述数据库中预置的描述子信息不匹配,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息;
    利用所述相同的描述子信息对所述实时图像进行定位。
  17. 根据权利要求16所述的方法,其特征在于,所述利用所述相同的描述子信息对所述实时图像进行定位包括:
    确定所述相同描述子信息在数据库中对应的第一场景特征点信息;
    根据所述第一场景特征点信息和定位计算公式计算得到拍摄所述实时图像时目标移动设备的位置。
  18. 根据权利要求17所述的方法,其特征在于,所述定位计算公式为:
    拍摄所述实时图像时所述目标移动设备的位置
    Figure PCTCN2019082981-appb-100006
    Figure PCTCN2019082981-appb-100007
    其中所述
    Figure PCTCN2019082981-appb-100008
    为实时图像中所述第一场景特征点的像素坐标,π C为相机的内参矩阵,所述π C用于将3D坐标转换为像素坐标,所述
    Figure PCTCN2019082981-appb-100009
    为数据库中场景特征点
    Figure PCTCN2019082981-appb-100010
    所属图像相对于世界坐标系的位姿,所述
    Figure PCTCN2019082981-appb-100011
    为所述数据库中所述第一场景特征点的像素坐标,所述i的取值为1至n,所述n为正整数,所述第一场景特征点与所述第一场景特征点信息对应。
  19. 根据权利要求16至18中任一项所述的方法,其特征在于,所述根据所述实时图像确定至少一个第一描述子信息之后,所述方法还包括:
    将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的描述子信息;
    根据所述不同的描述子信息构建所述数据库。
  20. 根据权利要求19所述的方法,其特征在于,当所述数据库中不存在所述不同描述子信息所属的第二场景特征点信息时,所述根据所述不同的描述子信息构建所述数据库包括:
    在所述数据库中增加包含所述不同描述子信息的所述第二场景特征点信息。
  21. 根据权利要求19所述的方法,其特征在于,当所述数据库中存在所述不同描述子信息所属的第二场景特征点信息时,所述根据所述不同的描述子信息构建所述数据库包括:
    在所述数据库的第二场景特征点信息中增加所述不同描述子信息。
  22. 根据权利要求19所述的方法,其特征在于,所述根据所述不同的描述子信息构建所述数据库之前,所述方法还包括:
    确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
    当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
    当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
  23. 一种网络设备,其特征在于,所述网络设备包括:存储器、收发器、处理器以及总线系统;
    其中,所述存储器用于存储程序;
    所述处理器用于执行所述存储器中的程序,包括如下步骤:
    确定满足预置图像重合度要求的目标图像集合,所述目标图像集合中包括至少一张图像,且每张图像对应一种自然条件信息;
    根据所述目标图像集合和所述每张图像对应的自然条件信息得到场景特征点信息集合,所述场景特征点集合中包括至少一个场景特征点信息;
    确定所述场景特征点信息集合中,在单个移动设备的生命值大于第一预置生命值门限的场景特征点对应的第一场景特征点信息,所述生命值的大小用于表示所述场景特征点为 静态场景特征点的概率;
    确定所述第一场景特征点信息中,在多个移动设备的生命值大于第二预置生命值门限的场景特征点对应的第二场景特征点信息;
    当所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配时,根据所述第二场景特征点信息构建所述数据库;
    所述总线系统用于连接所述存储器以及所述处理器,以使所述存储器以及所述处理器进行通信。
  24. 根据权利要求23所述的网络设备,其特征在于,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
    所述数据库中不存在所述第二场景特征点信息;
    所述处理器具体用于:
    在数据库中增加所述第二场景特征点信息,所述第二场景特征点信息中包括关于目标自然条件信息的目标描述子信息。
  25. 根据权利要求23所述的网络设备,其特征在于,所述第二场景特征点信息与数据库中预置的场景特征点信息不匹配包括:
    所述数据库中存在所述第二场景特征点信息,且所述第二场景特征点信息中不包括关于目标自然条件信息的目标描述子信息;
    所述处理器具体用于:
    在所述数据库预置的第二场景特征点信息中增加所述目标描述子信息。
  26. 根据权利要求24或25所述的网络设备,其特征在于,所述处理器还用于:
    确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
    当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
    当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
  27. 根据权利要求25所述的网络设备,其特征在于,所述处理器还用于:
    确定数据库中预置的所述第二场景特征点的至少一个描述子信息;
    判断所述至少一个描述子信息中,是否存在一个描述子信息对应的描述子与目标描述子信息对应的描述子的距离小于预置距离门限;
    若否,确定所述数据库预置的第二场景特征点信息中不包括目标描述子信息。
  28. 根据权利要求23至25中任一项所述的网络设备,其特征在于,所述场景特征点在单个移动设备的生命值为f,所述f的计算公式为:
    Figure PCTCN2019082981-appb-100012
    其中,所述n表示所述场景特征点在单个移动设备中被观察到的次数,所述n 0为预置的场景特征点被观察到的次数的平均值,所述σ为预置的场景特征点被观察到的次数的方差。
  29. 根据权利要求28所述的网络设备,其特征在于,所述场景特征点在多个移动设备的生命值为F,所述F的计算公式为:
    Figure PCTCN2019082981-appb-100013
    所述f为所述场景特征点在单个移 动设备的生命值,所述B为每个移动设备对应的权重系数,且所述多个移动设备中一个移动设备对应一个权重系数。
  30. 根据权利要求29所述的网络设备,其特征在于,所述β i的计算公式为:β i=γ tgc,所述γ t为所述场景特征点在多个移动设备被观测到的时间连续性指标,所述γ g为所述场景特征点在多个移动设备被观测到的几何连续性指标,所述γ c为所述场景特征点在多个移动设备被观测到的描述一致性指标。
  31. 根据权利要求23至25中任一项所述的网络设备,其特征在于,所述处理器具体用于:
    按照预置距离间隔选择图像;
    当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
  32. 根据权利要求31所述的网络设备,其特征在于,所述预置距离间隔d k+1
    所述预置距离间隔的计算公式为:d k+1=d ++d k*-α),其中,所述α *为所述预置重合度门限,所述d k为前一时刻选择图像的距离间隔,所述α为按照距离间隔d k选择图像时图像之间的重合度。
  33. 根据权利要求23至25中任一项所述的网络设备,其特征在于,所述处理器具体用于:
    按照预置角度间隔选择图像;
    当选择的所述图像的重合度与预置重合度门限的差值在预置精度范围时,确定选择的所述图像属于目标图像集合。
  34. 根据权利要求33所述的网络设备,其特征在于,所述预置角度间隔θ k+1
    所述预置角度间隔的计算公式为:θ k+1=θ kk*-α),其中,所述α *为所述预置重合度门限,所述θ k为前一时刻选择图像的角度间隔,所述α为按照角度间隔θ k选择图像时图像之间的重合度。
  35. 根据权利要求23至25中任一项所述的网络设备,其特征在于,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息,所述处理器具体用于:
    1>对一张所述目标图像进行处理得到场景特征点;
    2>利用所述场景特征点、所述场景特征点所属目标图像和所述目标图像对应的所述自然条件信息构成所述场景特征点信息;
    重复执行步骤1>和2>,直至构成得到所述场景特征点信息集合。
  36. 根据权利要求23至25中任一项所述的网络设备,其特征在于,所述处理器还用于:
    确定构建完成后所述数据库中的第三场景特征点信息;
    当所述第三场景特征点信息对应的第三场景特征点的特征数量控制得分FNCS小于预置FNCS门限时,在所述数据库中删除所述第三场景特征点信息。
  37. 根据权利要求36所述的网络设备,其特征在于,所述特征数量控制得分FNCS的 计算公式为:
    Figure PCTCN2019082981-appb-100014
    所述
    Figure PCTCN2019082981-appb-100015
    为所述场景特征点在定位时被使用的概率,所述
    Figure PCTCN2019082981-appb-100016
    为所述场景特征点的描述子数量占所述场景特征点所属图像中描述子总数的比例。
  38. 一种网络设备,其特征在于,所述网络设备属于视觉定位系统,所述网络设备包括:存储器、收发器、处理器以及总线系统;
    所述收发器,用于获取实时图像;
    其中,所述存储器用于存储程序;
    所述处理器用于执行所述存储器中的程序,包括如下步骤:
    根据所述实时图像确定至少一个第一描述子信息,所述第一描述子信息中包括拍摄所述实时图像时的目标自然条件信息;
    将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定相同的描述子信息,所述数据库中预置的描述子信息由所述网络设备确定满足预置图像重合度要求的目标图像集合后,根据所述目标图像集合以及所述目标图像集合中每张图像对应的自然条件信息得到场景特征点信息集合,从所述场景特征点信息集合中选择满足预置生命值要求的第一场景特征点对应的第一场景特征点信息,再根据所述第一场景特征点信息中与目标自然条件信息对应的第二描述子信息构建所述数据库后得到,所述第二描述子信息与所述数据库中预置的描述子信息不匹配,所述场景特征点信息中包括与所述自然条件信息对应的描述子信息;
    利用所述相同的描述子信息对所述实时图像进行定位;
    所述总线系统用于连接所述存储器以及所述处理器,以使所述存储器以及所述处理器进行通信。
  39. 根据权利要求38所述的网络设备,其特征在于,所述处理器具体用于:
    确定所述相同描述子信息在数据库中对应的第一场景特征点信息;
    根据所述第一场景特征点信息和定位计算公式计算得到拍摄所述实时图像时目标移动设备的位置。
  40. 根据权利要求39所述的网络设备,其特征在于,所述定位计算公式为:
    拍摄所述实时图像时所述目标移动设备的位置
    Figure PCTCN2019082981-appb-100017
    Figure PCTCN2019082981-appb-100018
    其中所述
    Figure PCTCN2019082981-appb-100019
    为实时图像中所述第一场景特征点的像素坐标,π C为相机的内参矩阵,所述π C用于将3D坐标转换为像素坐标,所述
    Figure PCTCN2019082981-appb-100020
    为数据库中场景特征点
    Figure PCTCN2019082981-appb-100021
    所属图像相对于世界坐标系的位姿,所述
    Figure PCTCN2019082981-appb-100022
    为所述数据库中所述第一场景特征点的像素坐标,所述i的取值为1至n,所述n为正整数,所述第一场景特征点与所述第一场景特征点信息对应。
  41. 根据权利要求38至40中任一项所述的网络设备,其特征在于,所述处理器还用于:
    将数据库中预置的描述子信息与所述至少一个第一描述子信息进行对比,确定不同的 描述子信息;
    根据所述不同的描述子信息构建所述数据库。
  42. 根据权利要求41所述的网络设备,其特征在于,当所述数据库中不存在所述不同描述子信息所属的第二场景特征点信息时,所述处理器具体用于:
    在所述数据库中增加包含所述不同描述子信息的所述第二场景特征点信息。
  43. 根据权利要求41所述的网络设备,其特征在于,当所述数据库中存在所述不同描述子信息所属的第二场景特征点信息时,所述处理器具体用于:
    在所述数据库的第二场景特征点信息中增加所述不同描述子信息。
  44. 根据权利要求41所述的网络设备,其特征在于,所述处理器还用于:
    确定与所述第二场景特征点信息对应的所述第二场景特征点的3D坐标;
    当所述数据库中预置的每个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值均大于第一预置门限值时,确定所述数据库中不存在所述第二场景特征点信息;
    当所述数据库中预置的任意一个场景特征点的3D坐标与所述第二场景特征点的3D坐标的差值小于所述第一预置门限值时,确定所述数据库中存在所述第二场景特征点信息。
  45. 一种计算机可读存储介质,包括指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1至15中任意一项所述的方法。
  46. 一种计算机可读存储介质,包括指令,当所述指令在计算机上运行时,使得计算机执行如权利要求16至22中任意一项所述的方法。
PCT/CN2019/082981 2018-06-20 2019-04-17 一种数据库构建方法、一种定位方法及其相关设备 WO2019242392A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19822178.0A EP3800443B1 (en) 2018-06-20 2019-04-17 Database construction method, positioning method and relevant device therefor
BR112020025901-2A BR112020025901B1 (pt) 2018-06-20 2019-04-17 Método de construção de banco de dados, método de posicionamento, dispositivo de rede e meio de armazenamento legível por computador
US17/126,908 US11644339B2 (en) 2018-06-20 2020-12-18 Database construction method, positioning method, and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810642562.4A CN110688500B (zh) 2018-06-20 2018-06-20 一种数据库构建方法、一种定位方法及其相关设备
CN201810642562.4 2018-06-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/126,908 Continuation US11644339B2 (en) 2018-06-20 2020-12-18 Database construction method, positioning method, and related device

Publications (1)

Publication Number Publication Date
WO2019242392A1 true WO2019242392A1 (zh) 2019-12-26

Family

ID=68983245

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082981 WO2019242392A1 (zh) 2018-06-20 2019-04-17 一种数据库构建方法、一种定位方法及其相关设备

Country Status (5)

Country Link
US (1) US11644339B2 (zh)
EP (1) EP3800443B1 (zh)
CN (2) CN113987228A (zh)
BR (1) BR112020025901B1 (zh)
WO (1) WO2019242392A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113987228A (zh) * 2018-06-20 2022-01-28 华为技术有限公司 一种数据库构建方法、一种定位方法及其相关设备
CN111238497B (zh) 2018-11-29 2022-05-06 华为技术有限公司 一种高精度地图的构建方法及装置
US11128539B1 (en) * 2020-05-05 2021-09-21 Ciena Corporation Utilizing images to evaluate the status of a network system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102707724A (zh) * 2012-06-05 2012-10-03 清华大学 一种无人机的视觉定位与避障方法及系统
EP3018448A1 (en) * 2014-11-04 2016-05-11 Volvo Car Corporation Methods and systems for enabling improved positioning of a vehicle
CN106447585A (zh) * 2016-09-21 2017-02-22 武汉大学 城市地区和室内高精度视觉定位系统及方法
CN106931963A (zh) * 2017-04-13 2017-07-07 高域(北京)智能科技研究院有限公司 环境数据共享平台、无人飞行器、定位方法和定位系统

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101998136B (zh) * 2009-08-18 2013-01-16 华为技术有限公司 单应矩阵的获取方法、摄像设备的标定方法及装置
JP5062498B2 (ja) * 2010-03-31 2012-10-31 アイシン・エィ・ダブリュ株式会社 風景マッチング用参照データ生成システム及び位置測位システム
CN104715479A (zh) * 2015-03-06 2015-06-17 上海交通大学 基于增强虚拟的场景复现检测方法
US20160363647A1 (en) 2015-06-15 2016-12-15 GM Global Technology Operations LLC Vehicle positioning in intersection using visual cues, stationary objects, and gps
WO2017114581A1 (en) * 2015-12-30 2017-07-06 Telecom Italia S.P.A. System for generating 3d images for image recognition based positioning
US10838601B2 (en) * 2016-06-08 2020-11-17 Huawei Technologies Co., Ltd. Processing method and terminal
CN106295512B (zh) * 2016-07-27 2019-08-23 哈尔滨工业大学 基于标识的多纠正线室内视觉数据库构建方法以及室内定位方法
US10339708B2 (en) * 2016-11-01 2019-07-02 Google Inc. Map summarization and localization
US9940729B1 (en) * 2016-11-18 2018-04-10 Here Global B.V. Detection of invariant features for localization
CN108121764B (zh) * 2016-11-26 2022-03-11 星克跃尔株式会社 图像处理装置、图像处理方法、电脑程序及电脑可读取记录介质
CN106851231B (zh) * 2017-04-06 2019-09-06 南京三宝弘正视觉科技有限公司 一种视频监控方法及系统
CN109325978B (zh) * 2017-07-31 2022-04-05 深圳市腾讯计算机系统有限公司 增强现实显示的方法、姿态信息的确定方法及装置
CN108615247B (zh) * 2018-04-27 2021-09-14 深圳市腾讯计算机系统有限公司 相机姿态追踪过程的重定位方法、装置、设备及存储介质
CN113987228A (zh) * 2018-06-20 2022-01-28 华为技术有限公司 一种数据库构建方法、一种定位方法及其相关设备
CN110660254B (zh) * 2018-06-29 2022-04-08 北京市商汤科技开发有限公司 交通信号灯检测及智能驾驶方法和装置、车辆、电子设备
CN111488773B (zh) * 2019-01-29 2021-06-11 广州市百果园信息技术有限公司 一种动作识别方法、装置、设备及存储介质
CN112348885A (zh) * 2019-08-09 2021-02-09 华为技术有限公司 视觉特征库的构建方法、视觉定位方法、装置和存储介质
CN112348886B (zh) * 2019-08-09 2024-05-14 华为技术有限公司 视觉定位方法、终端和服务器
WO2021121306A1 (zh) * 2019-12-18 2021-06-24 北京嘀嘀无限科技发展有限公司 视觉定位方法和系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102707724A (zh) * 2012-06-05 2012-10-03 清华大学 一种无人机的视觉定位与避障方法及系统
EP3018448A1 (en) * 2014-11-04 2016-05-11 Volvo Car Corporation Methods and systems for enabling improved positioning of a vehicle
CN106447585A (zh) * 2016-09-21 2017-02-22 武汉大学 城市地区和室内高精度视觉定位系统及方法
CN106931963A (zh) * 2017-04-13 2017-07-07 高域(北京)智能科技研究院有限公司 环境数据共享平台、无人飞行器、定位方法和定位系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3800443A4

Also Published As

Publication number Publication date
BR112020025901B1 (pt) 2022-11-16
US20210103759A1 (en) 2021-04-08
EP3800443A4 (en) 2021-10-27
CN110688500B (zh) 2021-09-14
BR112020025901A2 (pt) 2021-03-16
CN110688500A (zh) 2020-01-14
CN113987228A (zh) 2022-01-28
US11644339B2 (en) 2023-05-09
EP3800443B1 (en) 2023-01-18
EP3800443A1 (en) 2021-04-07

Similar Documents

Publication Publication Date Title
US11776280B2 (en) Systems and methods for mapping based on multi-journey data
CN110567469B (zh) 视觉定位方法、装置、电子设备及系统
CN113989450B (zh) 图像处理方法、装置、电子设备和介质
WO2019242392A1 (zh) 一种数据库构建方法、一种定位方法及其相关设备
WO2021139176A1 (zh) 基于双目摄像机标定的行人轨迹跟踪方法、装置、计算机设备及存储介质
CN111260779B (zh) 地图构建方法、装置及系统、存储介质
WO2021027692A1 (zh) 视觉特征库的构建方法、视觉定位方法、装置和存储介质
WO2023065657A1 (zh) 地图构建方法、装置、设备、存储介质及程序
CN114419165B (zh) 相机外参校正方法、装置、电子设备和存储介质
CN112946679B (zh) 一种基于人工智能的无人机测绘果冻效应检测方法及系统
CN114565863A (zh) 无人机图像的正射影像实时生成方法、装置、介质及设备
CN112270709B (zh) 地图构建方法及装置、计算机可读存储介质和电子设备
CN113029128A (zh) 视觉导航方法及相关装置、移动终端、存储介质
CN113379748B (zh) 一种点云全景分割方法和装置
CN115880555B (zh) 目标检测方法、模型训练方法、装置、设备及介质
US20230053952A1 (en) Method and apparatus for evaluating motion state of traffic tool, device, and medium
CN115049792B (zh) 一种高精度地图构建处理方法及系统
WO2023116327A1 (zh) 基于多类型地图的融合定位方法及电子设备
CN114998629A (zh) 卫星地图与航拍图像模板匹配方法、无人机定位方法
WO2024083010A1 (zh) 一种视觉定位方法及相关装置
CN112884834A (zh) 视觉定位方法及系统
CN118310500A (zh) 地图数据采集方法、高精地图的更新方法、车辆及服务器
CN116612184A (zh) 一种基于无人机视觉的相机位姿确定方法
CN115995026A (zh) 地图生成方法、装置、电子设备和存储介质
CN114858156A (zh) 即时定位与地图构建方法及无人移动设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19822178

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112020025901

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2019822178

Country of ref document: EP

Effective date: 20201230

ENP Entry into the national phase

Ref document number: 112020025901

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20201217