CN112783995B - V-SLAM map checking method, device and equipment - Google Patents


Info

Publication number: CN112783995B (application CN202011628132.0A)
Authority: CN (China)
Prior art keywords: map, map node, positioning data, image, score
Legal status: Active (an assumption, not a legal conclusion)
Application number: CN202011628132.0A
Other languages: Chinese (zh)
Other versions: CN112783995A
Inventor: 崔蓝月
Current assignee: Hangzhou Hikrobot Co Ltd
Original assignee: Hangzhou Hikrobot Technology Co Ltd
Application filed by Hangzhou Hikrobot Technology Co Ltd
Priority application: CN202011628132.0A, published as CN112783995A
PCT application: PCT/CN2021/142280, published as WO2022143713A1
Application granted and published as CN112783995B

Classifications

    • G06F16/29 — Information retrieval of structured data; geographical information databases
    • G06F16/215 — Design, administration or maintenance of databases; improving data quality; data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06T7/11 — Image analysis; segmentation; region-based segmentation
    • G06T2207/30168 — Indexing scheme for image analysis; image quality inspection

Abstract

Embodiments of the invention provide a V-SLAM map checking method, device and equipment. The method comprises: for each map node in a V-SLAM map, acquiring the image feature points corresponding to the map node; evaluating the image feature points to obtain an evaluation score; if the evaluation score lies in a first preset interval, determining that the map node passes verification; if the evaluation score lies in a second preset interval, supplementing or reconstructing the map node; and if the evaluation score lies in a third preset interval, deleting the map node, wherein the first, second and third preset intervals are consecutive and non-overlapping. The scheme thus verifies the accuracy of a V-SLAM map according to the image feature points corresponding to its map nodes.

Description

V-SLAM map checking method, device and equipment
Technical Field
The invention relates to the technical field of computer vision, and in particular to a V-SLAM map checking method, device and equipment.
Background
V-SLAM (Visual Simultaneous Localization and Mapping) can be used to locate and navigate a robot or other mobile smart device. V-SLAM can be understood as the following scheme: a camera configured on the robot acquires scene images, and the robot is located and navigated in real time by matching those scene images against a pre-generated V-SLAM map.
A V-SLAM map is composed of multiple map nodes, which can be understood as the smallest data units constituting the map. In the V-SLAM map, each map node corresponds to a set of image feature points. If the map contains wrong map nodes or map nodes of low accuracy, positioning and navigation become abnormal. It is therefore desirable to provide a scheme that can verify the accuracy of a V-SLAM map.
A V-SLAM map also differs from a general electronic map. In a general electronic map, each map node corresponds to geographical location information, so whether the displayed content is accurate can be checked against that information. In a V-SLAM map, however, each map node corresponds only to a group of image feature points, not to geographical location information, so that checking scheme cannot be applied.
Disclosure of Invention
The embodiment of the invention aims to provide a V-SLAM map checking method, device and equipment so as to provide a scheme capable of checking the accuracy of a V-SLAM map.
To achieve the above object, an embodiment of the present invention provides a V-SLAM map verification method, comprising:
for each map node in the V-SLAM map, acquiring the image feature points corresponding to the map node;
evaluating the image feature points to obtain an evaluation score;
if the evaluation score lies in a first preset interval, determining that the map node passes verification;
if the evaluation score lies in a second preset interval, supplementing or reconstructing the map node; and
if the evaluation score lies in a third preset interval, deleting the map node, wherein the first, second and third preset intervals are consecutive and non-overlapping.
To achieve the above object, an embodiment of the present invention further provides a V-SLAM map verification apparatus, comprising: a first acquisition module, an evaluation module, a first determination module and a deletion module, and further comprising a supplement module and/or a reconstruction module; wherein:
the first acquisition module is used for acquiring image feature points corresponding to each map node in the V-SLAM map;
the evaluation module is used for evaluating the image feature points to obtain an evaluation score; if the evaluation score lies in a first preset interval, triggering the first determination module; if the evaluation score lies in a second preset interval, triggering the supplement module and/or the reconstruction module; and if the evaluation score lies in a third preset interval, triggering the deletion module, wherein the first, second and third preset intervals are consecutive and non-overlapping;
the first determining module is used for determining that the map node passes the verification;
the supplement module is used for supplementing the map nodes;
the reconstruction module is used for reconstructing the map node;
and the deleting module is used for deleting the map node.
In order to achieve the above object, an embodiment of the present invention further provides an electronic device, including a processor and a memory;
a memory for storing a computer program;
and the processor is configured to implement any of the above V-SLAM map checking methods when executing the program stored in the memory.
By applying embodiments of the invention, for each map node in the V-SLAM map, the image feature points corresponding to the map node are acquired and evaluated to obtain an evaluation score; if the evaluation score lies in a first preset interval, the map node is determined to pass verification; if it lies in a second preset interval, the map node is supplemented or reconstructed; and if it lies in a third preset interval, the map node is deleted, the three preset intervals being consecutive and non-overlapping. The scheme thus verifies the accuracy of the V-SLAM map according to the image feature points corresponding to its map nodes.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first method for checking a V-SLAM map according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of various image segmentation methods according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a second method for checking a V-SLAM map according to an embodiment of the present invention;
fig. 4 is a third flowchart illustrating a V-SLAM map verification method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a V-SLAM map verification apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
To achieve the above object, embodiments of the present invention provide a V-SLAM map verification method, device and equipment. The method and device may be applied to various electronic devices, and also to mobile smart devices such as robots; the application is not specifically limited.
The V-SLAM map is composed of a plurality of map nodes, the map nodes can be understood as the minimum data unit forming the V-SLAM map or can be understood as landmark points in the V-SLAM map, and each map node corresponds to one position area in the real physical space. The V-SLAM generally includes a map generation phase and a smart device location phase. The smart device may be a robot, an AGV (Automated Guided Vehicle), or other smart devices capable of moving. The smart device is able to navigate using a V-SLAM map.
In the map generation stage, aiming at the position area corresponding to each map node, image acquisition is carried out in the position area, the acquired image is analyzed to obtain a group of image characteristic points, and the group of image characteristic points and the map node are correspondingly stored.
In the intelligent device positioning stage, an image collector (such as a camera in the intelligent device) configured in the intelligent device is used for collecting a scene image, feature points in the scene image are extracted, the extracted feature points are matched with feature points corresponding to map nodes in a V-SLAM map, and positioning is carried out based on matching results.
Embodiments of the invention provide two V-SLAM map checking methods. The first may be applied in the map generation stage or after it; the second may be applied in the positioning stage, for example during navigation. The application stages described here are merely illustrative and do not limit the embodiments of the present invention.
The first V-SLAM map verification method will be described in detail below. Fig. 1 is a first flowchart of a V-SLAM map verification method provided in an embodiment of the present invention, including:
s101: and aiming at each map node in the V-SLAM map, acquiring an image feature point corresponding to the map node.
The V-SLAM map is composed of a plurality of map nodes, which can be understood as the smallest data unit constituting the V-SLAM map. For example, in the process of generating a V-SLAM map, image acquisition is performed at a position 1, an image acquired at the position 1 is analyzed to obtain a group of image feature points, the position 1 corresponds to a map node, and the group of image feature points and the map node are correspondingly stored; and acquiring an image at the position 2, analyzing the image acquired at the position 2 to obtain another group of image feature points, wherein the position 2 corresponds to another map node, and the another group of image feature points is correspondingly stored with the another map node. In the V-SLAM map, each map node corresponds to a set of image feature points. Therefore, when the V-SLAM map is verified, the image feature points corresponding to each map node in the V-SLAM map can be directly acquired.
The image feature points may be expressed as a data set, each element in the set corresponds to an image feature point, the value of the element may include position information, a feature value, and the like of the image feature point, and each element may be expressed in a vector or a matrix form, and the specific form is not limited.
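As an illustrative sketch of the data set described above, a map node and its feature points might be represented as follows; the `FeaturePoint` and `MapNode` names and fields are hypothetical, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeaturePoint:
    # Position information plus a feature value; here the feature value is a
    # descriptor vector (e.g. a binary ORB descriptor) stored as bytes.
    x: float
    y: float
    descriptor: bytes = b""

@dataclass
class MapNode:
    # One map node of the V-SLAM map with its corresponding group of
    # image feature points.
    node_id: int
    feature_points: List[FeaturePoint] = field(default_factory=list)

# A node with a single feature point at pixel (12, 34).
node = MapNode(node_id=1, feature_points=[FeaturePoint(12.0, 34.0)])
```

Each element could equally be a vector or matrix, as the text notes; the specific form is not limited.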
In one case, the image feature points may be feature points extracted by the ORB (Oriented FAST and Rotated BRIEF) algorithm, or by other algorithms; the specific feature-extraction algorithm is not limited.
The verification process of each map node in the V-SLAM map is similar, and the following steps are all explained for the same map node.
S102: evaluate the image feature points to obtain an evaluation score.
For example, S102 may include: evaluating the distribution uniformity and/or the quality of the image feature points to obtain the evaluation score.
The evaluation of distribution uniformity is introduced first:
in one case, the uniformity analysis algorithm may be used to calculate the distribution uniformity of the image feature points, and the specific algorithm is not limited.
In one embodiment, for each preset segmentation mode, the image in which the image feature points are located may be segmented in that mode to obtain a plurality of segmented regions; the distribution uniformity score corresponding to the mode is calculated from the number of image feature points in each segmented region; and the distribution uniformity scores of all modes are summed to obtain the evaluation score of distribution uniformity.
For example, referring to fig. 2, the dividing manner may include: division from the vertical direction, division from the horizontal direction, division from the 45-degree direction, division from the 135-degree direction, division of the central region and the peripheral region, and the like. For each segmentation mode, the image where the image feature points are located may be subjected to region segmentation by using the segmentation mode, so as to obtain two segmentation regions. In the present embodiment, any division method shown in fig. 2 may be adopted, or another division method may be adopted, and the specific division method is not limited.
For example, if the image in which the image feature points are located is segmented into two regions, the evaluation score of distribution uniformity may be calculated using the following Equation 1:

s_d = sqrt( Σ_{i=1..m} ( N_i − N̄_i )² )    (Equation 1)

where s_d denotes the evaluation score of distribution uniformity, m denotes the number of segmentation modes, i indexes the segmentation modes, N_i denotes the number of image feature points in either one of the two regions obtained when the image is segmented by the i-th mode, and N̄_i denotes the average number of image feature points over the two regions under the i-th mode. The smaller s_d is, the more uniform the distribution of the image feature points.

Suppose the image is segmented by the i-th mode, segmented region 1 contains 680 image feature points, and segmented region 2 contains 720. The average over the two regions is then 700, and the absolute difference between either region's count and the average is 20 in both cases; N_i may therefore be taken as the count of either of the two regions.

In Equation 1, the distribution uniformity scores corresponding to all segmentation modes are summed and the square root of the sum is taken as the evaluation score of distribution uniformity. Alternatively, the sum itself may be used directly as the evaluation score. Equation 1 is merely an example; the specific way of calculating the evaluation score of distribution uniformity is not limited.
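The segment-and-count procedure behind Equation 1 can be sketched in Python. The five predicates below approximate the segmentation modes of Fig. 2 (vertical, horizontal, two diagonals, central vs. peripheral); their exact geometry is an illustrative assumption, not the patent's specification:

```python
import math

def uniformity_score(points, width, height):
    """Distribution uniformity score s_d per Equation 1 (smaller = more uniform).

    `points` is a list of (x, y) pixel coordinates. Each segmentation mode
    splits the image into two regions; since every point falls in exactly one
    region, the per-mode mean count is half the total, and either region's
    count may serve as N_i.
    """
    cx, cy = width / 2.0, height / 2.0
    modes = [
        lambda x, y: x < cx,                           # vertical split
        lambda x, y: y < cy,                           # horizontal split
        lambda x, y: (y / height) < (x / width),       # 45-degree split
        lambda x, y: (y / height) < 1 - (x / width),   # 135-degree split
        # central region vs. peripheral region
        lambda x, y: abs(x - cx) < width / 4 and abs(y - cy) < height / 4,
    ]
    mean = len(points) / 2.0
    total = 0.0
    for in_region_1 in modes:
        n1 = sum(1 for (x, y) in points if in_region_1(x, y))
        total += (n1 - mean) ** 2
    return math.sqrt(total)
```

Points spread across the image score lower (more uniform) than the same number of points clustered in one corner.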
The following describes evaluating the quality of image feature points:
for example, the quality of the feature points of the image may be evaluated by using a corner detection algorithm, such as FAST (Features From Accelerated Segment Test) algorithm, or Harris corner detection algorithm, and the like, and the specific algorithm is not limited.
In one embodiment, the quality of the image feature points can be evaluated by using different corner detection algorithms respectively, so as to obtain quality scores of the image feature points corresponding to each corner detection algorithm respectively; and obtaining an evaluation score based on the quality scores of the image characteristic points respectively corresponding to each corner detection algorithm.
For example, a first quality score may be obtained by evaluating the quality of each image feature point using a first corner detection algorithm; evaluating the quality of each image feature point by using a second corner detection algorithm to obtain a second quality score; and obtaining a quality evaluation score of the image feature point according to the first quality score and the second quality score.
In one case, the average of the first quality score and the second quality score may be used as the quality evaluation score of the image feature point. In another case, the first quality score and the second quality score may be weighted, and the weighted result may be used as the quality evaluation score of the image feature point.
In one embodiment, the quality of each image feature point may be evaluated by using a first corner point detection algorithm to obtain a first score for each image feature point; selecting a target first score from the first scores by sorting the first scores, wherein the target first score is used as a quality score (which can be called as a first quality score) of the image feature point corresponding to the first corner detection algorithm; evaluating the quality of each image feature point by using a second corner detection algorithm to obtain a second score of each image feature point; and sorting the second scores, and selecting a target second score from the second scores as a quality score (which may be referred to as a second quality score) of the image feature point corresponding to the second corner detection algorithm. An evaluation score may then be derived based on the target first score and the target second score.
For example, the first score may be ranked from low to high, or the first score may be ranked from high to low; the ranking can be performed according to the second score from low to high, or the ranking can be performed according to the second score from high to low; the specific ordering is not limited.
A first score near the middle position can be selected from the sorted first scores as a target first score; a second score near the middle position may be selected from the sorted second scores as a target second score.
For example, assuming that there are 1000 feature points, the quality of each image feature point is evaluated by using the FAST algorithm, a first score of each image feature point is obtained, and the first score of the image feature point ranked at 500 th is selected as a target first score by ranking the first scores from high to low. And evaluating the quality of each image feature point by using a Harris corner detection algorithm to obtain a second score of each image feature point, and selecting the second score of the image feature point arranged at the 500 th position as a target second score according to the sequence from high to low of the second score. The image feature point with the first score ranked at 500 th and the image feature point with the second score ranked at 500 th may be different feature points or may be the same feature point.
Or, the first score of the designated position may be selected from the sorted first scores as the target first score; a second score for the specified location may be selected from the ranked second scores as a target second score. When the target score is selected from the ranked scores, the selected specific position may be set according to the actual requirement, which is not limited herein.
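The sort-and-select step above can be sketched as follows; the per-point score values and the choice of the exact middle rank are illustrative assumptions (real scores would come from FAST and Harris corner responses):

```python
def detector_quality_score(scores):
    # Sort the per-point scores from high to low and take the score near the
    # middle position as the detector's quality score for this map node.
    ranked = sorted(scores, reverse=True)
    return ranked[len(ranked) // 2]

# Hypothetical per-point scores from two corner detectors.
fast_scores = [0.9, 0.4, 0.7, 0.2, 0.6]    # e.g. FAST responses
harris_scores = [0.8, 0.5, 0.3, 0.4, 0.7]  # e.g. Harris responses

s_f = detector_quality_score(fast_scores)    # first quality score
s_h = detector_quality_score(harris_scores)  # second quality score
```

The middle-ranked score is a robust summary: unlike the maximum or mean, it is not dominated by a few outlier feature points.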
The following describes evaluating the distribution uniformity and quality of image feature points:
for example, the distribution uniformity of the image feature points can be evaluated to obtain a distribution uniformity score; and evaluating the quality of the image feature points to obtain a quality score. The average of the distribution uniformity score and the quality score may be used as an evaluation score of the image feature point. Alternatively, the distribution uniformity score and the quality score may be weighted, and the weighted result may be used as the evaluation score of the image feature point.
In one embodiment, the distribution uniformity of the image feature points can be evaluated to obtain a distribution uniformity score; evaluating the quality of each image feature point by utilizing a first corner point detection algorithm to obtain a first quality score; evaluating the quality of each image feature point by using a second corner detection algorithm to obtain a second quality score; and obtaining an evaluation score according to the distribution uniformity score, the first quality score and the second quality score.
For example, the mean of the distribution uniformity score, the first quality score, and the second quality score may be used as the evaluation score of the image feature point. Alternatively, the distribution uniformity score, the first quality score, and the second quality score may be weighted, and the weighted result may be used as the evaluation score of the image feature point.
For example, if the distribution uniformity score is calculated using Equation 1 above, then the more uniform the distribution of the image feature points, the lower the distribution uniformity score; and if the quality of each image feature point is evaluated using the FAST algorithm and the Harris corner detection algorithm, then the higher the quality of the image feature points, the higher the first and second quality scores.
In this case, in one example, the product of the distribution uniformity score and a first preset weight may be calculated as a first product; the product of the first quality score, the second quality score and a second preset weight may be calculated as a second product; and the second product may be subtracted from the first product to obtain the evaluation score.

In such an example, the evaluation score may be calculated using the following Equation 2:

S = α × S_d − β × S_f × S_h    (Equation 2)

where S denotes the evaluation score, α denotes the first preset weight, β denotes the second preset weight, S_d denotes the distribution uniformity score, S_f denotes the first quality score, and S_h denotes the second quality score. α and β may be set according to the actual situation; the specific values are not limited.

In another example, the product of the distribution uniformity score and a first preset weight may be calculated as a first product; the product of the first quality score, the second quality score and a second preset weight may be calculated as a second product; and the first product may be subtracted from the second product to obtain the evaluation score.

In such an example, the evaluation score may be calculated using the following Equation 3:

S = β × S_f × S_h − α × S_d    (Equation 3)

where the symbols have the same meanings as in Equation 2, and α and β may again be set according to the actual situation.
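Equation 2 combines the two kinds of scores into one evaluation score; a minimal sketch, with α and β left as assumed example weights:

```python
def evaluation_score(s_d, s_f, s_h, alpha=1.0, beta=1.0):
    # Equation 2: S = alpha * S_d - beta * S_f * S_h. Lower is better here,
    # since a small S_d means a uniform distribution and large S_f, S_h mean
    # high-quality feature points. Negating the result gives Equation 3,
    # where higher is better.
    return alpha * s_d - beta * s_f * s_h

# Example: uniformity score 2.0, both quality scores 0.5.
s = evaluation_score(s_d=2.0, s_f=0.5, s_h=0.5)
```

The default weights of 1.0 are placeholders; in practice α and β would be tuned so neither term dominates.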
S103: judge whether the evaluation score lies in the first, second or third preset interval, the three intervals being consecutive and non-overlapping. If the evaluation score lies in the first preset interval, execute S104; if it lies in the second preset interval, execute S105; if it lies in the third preset interval, execute S106.
S104: and determining that the map node passes the verification.
S105: the map node is supplemented or reconstructed.
S106: the map node is deleted.
In the embodiment above, the distribution uniformity and/or quality of the image feature points are evaluated to obtain an evaluation score. In this embodiment, the more uniform the distribution of the image feature points and the higher their quality, the closer the corresponding evaluation score is to the first preset interval.
In the example above where the evaluation score is S = α × S_d − β × S_f × S_h, the more uniform the distribution of the image feature points and the higher their quality, the lower the evaluation score. Thus, the first preset interval may be the interval of values not greater than a first preset threshold, the second preset interval may be the interval of values greater than the first preset threshold and smaller than a second preset threshold, and the third preset interval may be the interval of values not smaller than the second preset threshold, where the first preset threshold is smaller than the second preset threshold. That is, in this example, if the evaluation score is not greater than the first preset threshold, the map node is determined to pass verification; if the evaluation score is greater than the first preset threshold and smaller than the second preset threshold, the map node is supplemented or reconstructed; and if the evaluation score is not smaller than the second preset threshold, the map node is deleted.
In this example, the three intervals of the evaluation score may be divided with reference to the following Equation 4:

result = { pass verification,          if S ≤ T1
           supplement or reconstruct,  if T1 < S < T2
           delete,                     if S ≥ T2 }    (Equation 4)

where S denotes the evaluation score, T1 denotes the first preset threshold, and T2 denotes the second preset threshold. If S ≤ T1, the map node passes verification; if S ≥ T2, the map node fails verification; and if T1 < S < T2, the map node is of poor quality and may be supplemented or reconstructed.
In the other example above, where the evaluation score is S = β × S_f × S_h − α × S_d, the more uniform the distribution of the image feature points and the higher their quality, the higher the evaluation score. In this case, the first preset interval may be the interval of values not smaller than the second preset threshold, the second preset interval may be the interval of values greater than the first preset threshold and smaller than the second preset threshold, and the third preset interval may be the interval of values not greater than the first preset threshold, where the first preset threshold is smaller than the second preset threshold. That is, in this example, if the evaluation score is not smaller than the second preset threshold, the map node is determined to pass verification; if the evaluation score is greater than the first preset threshold and smaller than the second preset threshold, the map node is supplemented or reconstructed; and if the evaluation score is not greater than the first preset threshold, the map node is deleted.
In this example, the three intervals of the evaluation score may be divided with reference to the following Equation 5:

result = { pass verification,          if S ≥ T2
           supplement or reconstruct,  if T1 < S < T2
           delete,                     if S ≤ T1 }    (Equation 5)

where S denotes the evaluation score, T1 denotes the first preset threshold, and T2 denotes the second preset threshold. If S ≥ T2, the map node passes verification; if S ≤ T1, the map node fails verification; and if T1 < S < T2, the map node is of poor quality and may be supplemented or reconstructed.
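The three-interval decision can be sketched as a small function. This follows Equation 4's orientation (lower score = better); Equation 5 is its mirror image. The threshold values in the usage note are arbitrary examples:

```python
def check_map_node(score, t1, t2):
    """Classify a map node by its evaluation score per Equation 4.

    Assumes t1 < t2 and a score where lower is better (Equation 2):
    pass if S <= T1, delete if S >= T2, otherwise the node is of poor
    quality and should be supplemented or reconstructed.
    """
    if score <= t1:
        return "pass"
    if score >= t2:
        return "delete"
    return "supplement_or_reconstruct"
```

For instance, with T1 = 1.0 and T2 = 2.0, a score of 0.5 passes, 1.5 triggers supplementing or reconstruction, and 3.0 triggers deletion.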
Alternatively, other examples may adopt different evaluation manners, and the corresponding verification-pass condition may be set according to the actual situation. The idea behind setting the condition can be summarized as: the higher the quality of the image feature points and the more uniformly they are distributed, the more likely the map node is to pass verification. A more uniform distribution of image feature points is more favorable to accurate feature-point matching in the subsequent positioning stage. The specific verification condition is not limited.
Supplementing may be understood as: setting a supplementary node near the map node, acquiring images in the position area corresponding to the supplementary node, analyzing the acquired images to obtain a group of image feature points, and storing that group of image feature points in correspondence with the supplementary node.
In one embodiment, supplementing the map node may include: determining supplementary map nodes around the map node according to a first preset rule; indicating intelligent equipment to acquire images in a position area corresponding to the supplementary map node, and analyzing the images acquired by the intelligent equipment to obtain image characteristic points serving as first image characteristic points; correspondingly storing the first image feature point and the supplementary map node into the V-SLAM map.
For example, the first preset rule may be: determine supplementary map nodes 20 cm in front of and behind the map node. Alternatively, supplementary map nodes may be determined 10 cm to the left and right of the map node. The specific direction and distance of a supplementary node from the original map node are not limited.
Reconstruction may be understood as: acquiring images again in the position area corresponding to the map node, analyzing the acquired images to obtain a set of image feature points, and replacing the map node's original image feature points with this set.
In one embodiment, reconstructing the map node includes: determining a reconstructed map node around the map node according to a second preset rule; instructing the smart device to acquire images in the position area corresponding to the reconstructed map node, and analyzing the images acquired by the smart device to obtain image feature points as second image feature points; storing the second image feature points in correspondence with the reconstructed map node in the V-SLAM map; and deleting the original map node and its corresponding image feature points from the V-SLAM map.
For example, the second preset rule may be: determine the reconstructed map node 20 cm in front of or behind the map node. Alternatively, the reconstructed map node may be determined 10 cm to the left or right of the map node; the specific direction and distance of the reconstructed node from the original map node are not limited. The second preset rule may be the same as or different from the first preset rule.
Alternatively, in an embodiment, the second preset interval may be subdivided into a first subinterval and a second subinterval: if the evaluation score lies in the first subinterval, the map node is supplemented; if the evaluation score lies in the second subinterval, the map node is reconstructed. The first subinterval and the second subinterval are sequentially contiguous and non-overlapping, with the first subinterval closer to the first preset interval and the second subinterval closer to the third preset interval.
It will be appreciated that the closer the evaluation score is to the first preset interval, the more desirable the image feature points, in which case supplementation is performed while preserving the original map node; conversely, the closer the evaluation score is to the third preset interval, the less desirable the image feature points, in which case reconstruction is performed without preserving the original map node.
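Under the assumption that the second preset interval (T1, T2) is split at some point M into the two subintervals described above (the split point and names are illustrative, not fixed by the patent), the choice between supplementing and reconstructing could be sketched as:

```python
def repair_action(score, t1, t2, m):
    """Choose a repair action for a score inside the second preset interval
    (t1, t2), split at m: scores above m (first subinterval, nearer the first
    preset interval) -> supplement; scores at or below m (second subinterval,
    nearer the third preset interval) -> reconstruct."""
    assert t1 < m < t2 and t1 < score < t2
    return "supplement" if score > m else "reconstruct"
```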
By applying the embodiment shown in FIG. 1 of the present invention, for each map node in a V-SLAM map, the image feature points corresponding to the map node are obtained and evaluated to produce an evaluation score; if the evaluation score lies in the first preset interval, the map node is determined to pass the check; if it lies in the second preset interval, the map node is supplemented or reconstructed; and if it lies in the third preset interval, the map node is deleted, where the first, second and third preset intervals are sequentially contiguous and non-overlapping. In this way, the accuracy of the V-SLAM map is checked according to the image feature points corresponding to its map nodes.
An embodiment of the present invention further provides another V-SLAM map checking method. Fig. 3 is a second flowchart of the V-SLAM map checking method provided in an embodiment of the present invention, which includes:
S301: acquire a verified V-SLAM map, where the map nodes in the verified V-SLAM map are map nodes that passed the check, supplemented map nodes, or reconstructed map nodes.
The verified V-SLAM map obtained in S301 may be a V-SLAM map obtained by performing verification using the embodiment shown in fig. 1.
S302: generate a moving path for the smart device, where the moving path covers every map node in the verified V-SLAM map.
For example, the smart device may be a robot, an AGV (Automated Guided Vehicle), or another movable smart device. The electronic device executing the embodiment shown in fig. 3 may be a background processing device communicatively connected to the smart device; the specific device type is not limited. For convenience, the electronic device that executes the embodiment shown in fig. 3 is referred to below simply as the electronic device.
In one embodiment, the electronic device may instruct the smart device to move along the movement path at a fixed linear and angular velocity. In this embodiment, the electronic device may first generate a moving path that traverses all map nodes (all map nodes in the checked V-SLAM map), and then instruct the intelligent device to move along the moving path at a fixed linear velocity and angular velocity, so that the trial run stage may be smoother, and the obtained trial run data (the first positioning data and the second positioning data in the subsequent content) is more suitable for checking the V-SLAM map.
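The patent does not specify how the traversal path is generated. As one illustrative choice (node ids and 2-D node positions are assumptions), a greedy nearest-neighbour ordering visits every map node exactly once:

```python
import math

def traversal_path(nodes):
    """Order map nodes into a moving path that covers every node, by
    repeatedly moving to the nearest unvisited node.
    `nodes` maps node id -> (x, y) position."""
    remaining = dict(nodes)
    node_id, pos = next(iter(remaining.items()))  # start at the first node
    path = [node_id]
    del remaining[node_id]
    while remaining:
        # pick the unvisited node closest to the current position
        node_id = min(remaining, key=lambda n: math.dist(pos, remaining[n]))
        pos = remaining.pop(node_id)
        path.append(node_id)
    return path
```

The smart device would then be instructed to follow this ordering at a fixed linear and angular velocity.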
S303: instruct the smart device to move along the moving path, and acquire the first positioning data corresponding to each map node, calculated by the smart device based on the verified V-SLAM map during movement. The first positioning data corresponding to one map node is the first positioning data obtained while the smart device is located in the position area corresponding to that map node.
The electronic device instructs the smart device to move along the moving path, and the smart device can navigate with the verified V-SLAM map during movement. The first positioning data can thus be understood as the positioning data produced while the smart device navigates with the verified V-SLAM map. In the embodiment of fig. 3, each map node of the verified V-SLAM map is checked based on the positioning data from this navigation. For example, the smart device may perform a navigation trial run using the map, and each map node may be checked based on the positioning data from the trial-run stage. The check process is similar for every map node in the verified V-SLAM map, so the following steps are described for a single map node.
Each map node corresponds to a position area in real physical space. While the smart device navigates with the verified V-SLAM map, whenever it moves into the position area corresponding to a map node, the positioning data calculated based on the verified V-SLAM map can be acquired. To distinguish it from positioning data generated in other ways below, the positioning data calculated based on the verified V-SLAM map is referred to as first positioning data.
While the smart device navigates with the verified V-SLAM map, scene images can be collected with an image collector configured on the smart device, for example a camera; the specific image collector is not limited. Feature points are extracted from the scene images, the extracted feature points are matched against the feature points corresponding to the map nodes in the verified V-SLAM map, and positioning is performed based on the matching result.
For example, the acquired scene images may be video frame images: feature points are extracted from each frame image and matched against the feature points corresponding to the map nodes in the verified V-SLAM map, so that each frame image yields one positioning result. The first positioning data corresponding to one map node thus includes the positioning result corresponding to each frame image, and may also include the confidence of each positioning result; each frame image is an image acquired by the image collector configured on the smart device while the smart device is in the position area corresponding to the map node.
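A minimal sketch of what the first positioning data might look like when grouped per map node (the field names and data shapes are assumptions for illustration):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FrameResult:
    node_id: str       # map node whose position area the device was in
    pose: tuple        # positioning result for this frame, e.g. (x, y, theta)
    confidence: float  # confidence of the positioning result

def group_first_positioning_data(frames):
    """Group per-frame positioning results by map node; the list collected
    for one node constitutes that node's 'first positioning data'."""
    data = defaultdict(list)
    for f in frames:
        data[f.node_id].append(f)
    return data
```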
S304: for each map node in the verified V-SLAM map, perform accuracy analysis on the first positioning data corresponding to the map node to obtain an analysis result.
In one embodiment, the first positioning data corresponding to one map node includes the positioning result and the positioning-result confidence corresponding to each frame image, where each frame image is an image acquired by the image collector configured on the smart device while the smart device is in the position area corresponding to the map node.
In this embodiment, S304 may count, in the first positioning data corresponding to the map node, the number of frames whose positioning-result confidence is greater than a fourth preset threshold, as the effective frame number; it then determines whether the effective frame number is greater than a first preset frame-number threshold, taking the resulting first determination result as the accuracy analysis result of the first positioning data. Alternatively: determine the total number of images acquired by the smart device over the whole course of passing through the position area corresponding to the map node; calculate the ratio of the effective frame number to the total frame number as a first ratio; and determine whether the first ratio is greater than a first preset ratio threshold, taking the resulting second determination result as the accuracy analysis result of the first positioning data.
The confidence threshold, i.e., the fourth preset threshold, may be set according to the actual situation; the specific value is not limited. If a positioning result's confidence is greater than the fourth preset threshold, the positioning result is considered usable; otherwise it is considered unusable.
For example, in one case, the number of frames whose positioning-result confidence is greater than the fourth preset threshold is counted as the effective frame number, and whether the effective frame number is greater than a preset frame-number threshold is determined, the determination result being taken as the accuracy analysis result of the first positioning data. In another case, the effective frame number is counted in the same way; the total number of images acquired by the smart device over the whole course of passing through the position area corresponding to the map node is determined; the ratio of the effective frame number to the total frame number is calculated; and whether the ratio is greater than a preset ratio threshold is determined, the determination result being taken as the accuracy analysis result of the first positioning data.
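Both counting variants above can be sketched as follows (the function and parameter names are illustrative assumptions):

```python
def valid_frame_check(confidences, conf_threshold, min_frames=None, min_ratio=None):
    """Accuracy analysis by effective frame number: count frames whose
    positioning-result confidence exceeds the (fourth preset) threshold,
    then compare either the absolute count or its ratio to the total."""
    valid = sum(1 for c in confidences if c > conf_threshold)
    if min_frames is not None:
        return valid > min_frames                 # first variant: absolute count
    return valid / len(confidences) > min_ratio   # second variant: ratio of total
```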
Alternatively, in another embodiment, the first positioning data corresponding to one map node includes the positioning result corresponding to each frame image, where each frame image is an image acquired by the image collector configured on the smart device while the smart device is in the position area corresponding to the map node.

In this embodiment, S304 may count the smart device's use of the positioning result corresponding to each frame image to obtain the used frame number; it then determines whether the used frame number is greater than a second preset frame-number threshold, taking the resulting third determination result as the accuracy analysis result of the first positioning data. Alternatively: determine the total number of images acquired by the smart device over the whole course of passing through the position area corresponding to the map node; calculate the ratio of the used frame number to the total frame number as a second ratio; and determine whether the second ratio is greater than a second preset ratio threshold, taking the resulting fourth determination result as the accuracy analysis result of the first positioning data.
In some cases, the smart device may obtain two positioning results: one calculated based on the verified V-SLAM map and one calculated based on its own sensors. The smart device may judge whether the map-based positioning result is accurate; if so, it uses the map-based result, and if not, it uses the sensor-based result.
In this embodiment, the used frame number is obtained by counting for how many frame images the smart device used the map-based positioning result (i.e., counting the smart device's use of the positioning result corresponding to each frame image). In one case, whether the used frame number is greater than a preset frame-number threshold is determined, and the determination result is taken as the accuracy analysis result of the first positioning data. In another case, the total number of image frames acquired by the smart device over the whole course of passing through the position area corresponding to the map node is determined; the ratio of the used frame number to the total frame number is calculated; and whether the ratio is greater than a preset ratio threshold is determined, the determination result being taken as the accuracy analysis result of the first positioning data.
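Analogously, the used-frame variants might be sketched as (names are illustrative assumptions):

```python
def used_frame_check(used_flags, min_used=None, min_ratio=None):
    """Accuracy analysis by used frame number: used_flags[i] is True when
    the device actually used the map-based positioning result for frame i
    rather than falling back to its own sensors."""
    used = sum(1 for u in used_flags if u)
    if min_used is not None:
        return used > min_used                 # absolute-count variant
    return used / len(used_flags) > min_ratio  # ratio-of-total variant
```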
Alternatively, in another embodiment, after the smart device is instructed to move along the moving path, second positioning data corresponding to each map node, calculated by the smart device based on its own sensors during movement, may also be acquired. The second positioning data corresponding to one map node is the second positioning data obtained while the smart device is located in the position area corresponding to that map node.
In order to distinguish descriptions, positioning data calculated by the intelligent device based on the verified V-SLAM map is called first positioning data, and positioning data calculated by the intelligent device based on a sensor configured by the intelligent device is called second positioning data.
In S304, a position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node may be determined as an accuracy analysis result of the first positioning data.
As described above, in some cases, the smart device may obtain two positioning results, one is based on the verified V-SLAM map, and the other is based on the self-configured sensor. In this embodiment, the two positioning results may be compared, that is, the position deviation between the two positioning results is determined, and the position deviation is used as the accuracy analysis result of the first positioning data.
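A simple way to realise this comparison, assuming 2-D positions per frame (the averaging choice is an assumption; the patent does not fix one), is the Euclidean distance between the two estimates:

```python
import math

def position_deviation(first_positions, second_positions):
    """Mean Euclidean distance between the map-based (first) and
    sensor-based (second) position estimates over the frames collected
    in one node's position area."""
    dists = [math.dist(p1, p2)
             for p1, p2 in zip(first_positions, second_positions)]
    return sum(dists) / len(dists)
```

The resulting deviation would then be compared against the preset threshold to decide the check result.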
In one example, it may first be determined whether the smart device passes straight through the map node; if not, the step of determining the position deviation between the first positioning data and the second positioning data corresponding to the map node is executed, and the position deviation is taken as the accuracy analysis result of the first positioning data.

If the smart device passes straight through the map node, the driving angle of the smart device over the whole course of passing through the position area corresponding to the map node is calculated according to the positioning results, and it is determined whether the driving angle meets a preset angle condition. If not, the map node is determined to fail the check; if so, the step of determining the position deviation between the first positioning data and the second positioning data corresponding to the map node is executed as the accuracy analysis result of the first positioning data.

For example, whether the smart device passes straight through the map node may be determined from the motion trajectory of the smart device; how the trajectory is obtained is not limited. If the pass is not straight, the above embodiment is executed directly: the position deviation between the first positioning data and the second positioning data is determined and taken as the accuracy analysis result of the first positioning data.
If the smart device passes straight through the map node, the driving angle of the smart device over the whole course of passing through the position area corresponding to the map node is calculated. In one embodiment, the straight-line angle deviation and/or variance of the smart device over this course may be calculated. For example, a coordinate system may be preset, and the driving angle of the smart device in this coordinate system calculated based on the verified V-SLAM map. Each frame image corresponds to one driving angle, and the mean driving angle over all frame images acquired while the smart device passes through the position area can be calculated. For each frame image, the difference between that frame's driving angle and the mean driving angle is the straight-line angle deviation; performing a variance operation over the driving angles against the mean yields the straight-line angle variance. Alternatively, the standard deviation can be used instead of the variance.
Taking the calculation of both the straight-line angle deviation and variance as an example, a deviation threshold and a variance threshold may be set. It is determined whether the straight-line angle deviation exceeds the deviation threshold and whether the straight-line angle variance exceeds the variance threshold; if neither is exceeded, the above embodiment is executed, determining the position deviation between the first positioning data and the second positioning data as the accuracy analysis result of the first positioning data; otherwise, the map node is determined to fail the check.
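The straight-line angle check might be sketched as follows (threshold names are assumptions; the patent allows using the deviation, the variance, or both):

```python
def straight_angle_check(angles, dev_threshold, var_threshold):
    """Per-frame driving angles collected while the device passes straight
    through a node's position area. The check passes when no frame's
    deviation from the mean angle exceeds dev_threshold and the variance
    of the angles does not exceed var_threshold."""
    mean = sum(angles) / len(angles)
    max_dev = max(abs(a - mean) for a in angles)
    variance = sum((a - mean) ** 2 for a in angles) / len(angles)
    return max_dev <= dev_threshold and variance <= var_threshold
```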
In S304, the accuracy analysis of the first positioning data corresponding to the map node may adopt any one of the above embodiments, or combine them in any logical way; the specific combination and execution order are not limited.
S305: and determining a verification result of the map node based on the analysis result.
In the first case above, where the effective frame number is counted and compared with a preset frame-number threshold, the map node may be determined to pass the check when the effective frame number is greater than the preset frame-number threshold.

In the second case, where the ratio of the effective frame number to the total frame number is compared with a preset ratio threshold, the map node may be determined to pass the check when the ratio is greater than the preset ratio threshold.

In the case where the used frame number is counted and compared with a preset frame-number threshold, the map node may be determined to pass the check when the used frame number is greater than the preset frame-number threshold.

In the case where the ratio of the used frame number to the total frame number is compared with a preset ratio threshold, the map node may be determined to pass the check when the ratio is greater than the preset ratio threshold.

In the embodiment where the position deviation between the first positioning data and the second positioning data is taken as the analysis result, the map node may be determined to pass the check when the position deviation is smaller than a preset threshold.
For example, if a map node fails verification, the map node may be deleted, supplemented or reconstructed. Supplementation may be understood as: and setting an augmentation node near the map node, carrying out image acquisition in a position area corresponding to the augmentation node, analyzing the acquired image to obtain a group of image characteristic points, and correspondingly storing the group of image characteristic points and the augmentation node. Reconstruction can be understood as: and acquiring the image again in the position area corresponding to the map node, analyzing the acquired image to obtain a group of image characteristic points, and replacing the original image characteristic points of the map node with the group of image characteristic points.
By applying the embodiment shown in fig. 3 of the present invention: the verified V-SLAM map is acquired, whose map nodes are map nodes that passed the check, supplemented map nodes, or reconstructed map nodes; a moving path covering every map node in the verified V-SLAM map is generated for the smart device; the smart device is instructed to move along the moving path, and the first positioning data corresponding to each map node, calculated by the smart device based on the verified V-SLAM map during movement, is acquired, the first positioning data for one map node being the positioning data obtained while the smart device is in that node's position area; for each map node in the verified V-SLAM map, accuracy analysis is performed on the corresponding first positioning data to obtain an analysis result; and the check result of the map node is determined based on the analysis result. In this scheme, the accuracy of the V-SLAM map is thus checked according to the positioning data corresponding to its map nodes. Moreover, the already-verified V-SLAM map undergoes a second check, further improving check accuracy.
As mentioned above, the embodiments described may be combined in any logical way; one specific combination is described below:
The first positioning data corresponding to one map node includes the positioning result and the positioning-result confidence corresponding to each frame image, where each frame image is an image acquired by the image collector configured on the smart device while the smart device is in the position area corresponding to the map node.

In addition, the second positioning data corresponding to each map node, calculated by the smart device based on its own sensors during movement, is acquired; the second positioning data corresponding to one map node is the second positioning data obtained while the smart device is located in the position area corresponding to that map node.
For each map node in the verified V-SLAM map, S304 may include the following steps:

counting the number of frames whose positioning-result confidence is greater than a fourth preset threshold, as the effective frame number;

determining whether the effective frame number is greater than a fifth preset threshold;

if the effective frame number is greater than the fifth preset threshold, counting the smart device's use of the positioning result corresponding to each frame image to obtain the used frame number;

determining whether the used frame number is greater than a sixth preset threshold;

if the used frame number is greater than the sixth preset threshold, determining whether the smart device passes straight through the map node;

if the smart device passes straight through the map node, calculating, according to the positioning results, the driving angle of the smart device over the whole course of passing through the position area corresponding to the map node;

determining whether the driving angle meets a preset angle condition;

and if so, determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node.

S305 then includes: determining whether the position deviation is greater than a seventh preset threshold; if not, determining that the map node passes the check.
Various threshold values related in the embodiment of the invention can be set according to actual conditions, and specific numerical values are not limited.
The following describes, with reference to fig. 4 and taking any map node in the verified V-SLAM map as an example, the process of performing accuracy analysis on the first positioning data of a map node using the above combination:
S401: acquire the first positioning data corresponding to the map node, which includes the positioning result and the positioning-result confidence corresponding to each frame image; each frame image is an image acquired by the image collector configured on the smart device while the smart device is in the position area corresponding to the map node.
S402: count the number of frames whose positioning-result confidence is greater than the fourth preset threshold, as the effective frame number; determine whether the effective frame number is greater than P1; if so, execute S403; if not, execute S407.

S403: obtain the used frame number by counting the smart device's use of the positioning result corresponding to each frame image; determine whether the used frame number is greater than P2; if so, execute S404; if not, execute S407.

S404: determine whether the smart device passes straight through the map node; if so, execute S405; if not, execute S406.

S405: calculate the straight-line angle deviation and variance of the smart device over the whole course of passing through the position area corresponding to the map node; determine whether the straight-line angle deviation is greater than P3 and whether the straight-line angle variance is greater than P4; if neither is greater, execute S406; otherwise, execute S407.

S406: determine the position deviation between the positioning result obtained by the smart device based on the verified V-SLAM map and the positioning result obtained by the smart device based on its own sensors; determine whether the position deviation is greater than P5; if so, execute S407; if not, execute S408.
S407: and determining that the map node check fails.
S408: and determining that the map node passes the verification.
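The whole S401–S408 flow could be sketched as a single function (the per-frame field names and all P thresholds are illustrative assumptions, not part of the patent):

```python
import math

def check_node(frames, straight, conf_threshold, p1, p2, p3, p4, p5):
    """Multi-dimensional check of one map node per S401-S408. Each frame is
    a dict with keys 'confidence', 'used', 'angle', 'map_pos', 'sensor_pos'."""
    # S402: effective frame number must exceed P1
    if sum(1 for f in frames if f["confidence"] > conf_threshold) <= p1:
        return False                                           # S407
    # S403: used frame number must exceed P2
    if sum(1 for f in frames if f["used"]) <= p2:
        return False                                           # S407
    # S404/S405: straight-line pass -> check angle deviation and variance
    if straight:
        angles = [f["angle"] for f in frames]
        mean = sum(angles) / len(angles)
        var = sum((a - mean) ** 2 for a in angles) / len(angles)
        if max(abs(a - mean) for a in angles) > p3 or var > p4:
            return False                                       # S407
    # S406: deviation between map-based and sensor-based positions
    dev = max(math.dist(f["map_pos"], f["sensor_pos"]) for f in frames)
    return dev <= p5                                           # S408 / S407
```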
By applying this combination, the map node is checked along multiple dimensions, making the check result more reliable.
The two V-SLAM map checking methods, provided by the embodiments shown in fig. 1 and fig. 3 respectively, can be used in combination to check the accuracy of a V-SLAM map more effectively. For example, the combined scheme may include: during or after the map generation stage, check the V-SLAM map using the first checking method, delete map nodes that fail the check, and supplement or reconstruct map nodes of poor quality; then, in the positioning stage, check the V-SLAM map using the second checking method, again deleting map nodes that fail the check and supplementing or reconstructing map nodes of poor quality.
For a V-SLAM navigation system, the quality of navigation in actual operation depends on the accuracy of the V-SLAM map. If abnormal map nodes exist in the map, such as erroneous map nodes or map nodes of poor quality, positioning anomalies occur during navigation, such as failure to position or inaccurate positioning. Moreover, because of coupling factors in the navigation process, confirming and diagnosing such positioning anomalies is itself difficult.
By applying the embodiments of the present invention, the accuracy of a V-SLAM map can be checked before AGVs use it in volume, so that map nodes failing the check can be repaired in time. Positioning anomalies during AGV operation are thereby reduced, which on one hand improves the performance indicators and product competitiveness of the V-SLAM navigation system, and on the other hand identifies the map nodes that failed the check, facilitating the confirmation and diagnosis of positioning anomalies.
Corresponding to the above method embodiments, an embodiment of the present invention further provides a V-SLAM map checking apparatus, including a first obtaining module, an evaluation module, a first determining module and a deleting module, and further including a supplementing module and/or a reconstruction module. Referring to fig. 5, the apparatus includes:
a first obtaining module 501, configured to obtain, for each map node in the V-SLAM map, an image feature point corresponding to the map node;
an evaluation module 502, configured to evaluate the image feature points to obtain an evaluation score; if the evaluation score is located in a first preset interval, trigger the first determining module; if the evaluation score is located in a second preset interval, trigger the supplementing module and/or the reconstruction module; if the evaluation score is located in a third preset interval, trigger the deleting module, wherein the first preset interval, the second preset interval and the third preset interval are sequentially continuous and do not overlap;
a first determining module 503, configured to determine that the map node passes verification;
a supplementing module 504, configured to supplement the map node;
a reconstruction module 505, configured to reconstruct the map node;
a deleting module 506, configured to delete the map node.
In one embodiment, the evaluation module 502 is specifically configured to:
evaluating the distribution uniformity and/or quality of the image feature points to obtain an evaluation score; the more uniform the distribution of the image feature points, the closer the corresponding evaluation score is to the first preset interval, and the higher the quality, the closer the corresponding evaluation score is to the first preset interval.
In one embodiment, the evaluation module 502 includes:
a uniformity evaluation sub-module (not shown in the figure) for performing region segmentation on the image where the image feature points are located by using each preset segmentation mode to obtain a plurality of segmentation regions; calculating a distribution uniformity score corresponding to the segmentation mode according to the number of the image feature points in each segmentation region; and obtaining an evaluation score by calculating the sum of the distribution uniformity scores corresponding to all the segmentation modes.
In one embodiment, the plurality of divided regions are two divided regions; the segmentation mode includes any one or more of the following: division along the vertical direction, division along the horizontal direction, division along the 45-degree direction, division along the 135-degree direction, and division into a central region and a peripheral region.
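As an illustrative sketch of the uniformity evaluation described above — assuming, since the embodiment leaves the formula open, that the per-mode distribution uniformity score is the absolute difference between the feature-point counts of the two divided regions (a lower score indicating a more uniform distribution):

```python
# Sketch of the distribution-uniformity evaluation. The per-mode score used
# here (absolute difference of the two regions' feature-point counts) is an
# assumption; the embodiment only requires a score computed from the counts.

def split_counts(points, width, height, mode):
    """Count the feature points falling into each of the two divided regions."""
    cx, cy = width / 2.0, height / 2.0
    n_first = 0
    for x, y in points:
        if mode == "vertical":      # left half vs right half
            in_first = x < cx
        elif mode == "horizontal":  # top half vs bottom half
            in_first = y < cy
        elif mode == "diag45":      # split along the 45-degree diagonal
            in_first = y * width < x * height
        elif mode == "diag135":     # split along the 135-degree diagonal
            in_first = y * width < (width - x) * height
        elif mode == "center":      # central region vs peripheral region
            in_first = abs(x - cx) < width / 4.0 and abs(y - cy) < height / 4.0
        else:
            raise ValueError(mode)
        n_first += in_first
    return n_first, len(points) - n_first

def uniformity_score(points, width, height):
    """Sum the per-mode uniformity scores over all preset segmentation modes."""
    modes = ("vertical", "horizontal", "diag45", "diag135", "center")
    return sum(abs(a - b)
               for a, b in (split_counts(points, width, height, m) for m in modes))
```

Under this sketch, four feature points spread near the four image corners score 8 on a 100x100 image, while the same four points clustered in one corner score 18, so a clustered distribution lands further from the first preset interval.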
In one embodiment, the evaluation module 502 includes:
a quality evaluation submodule (not shown in the figure) for evaluating the quality of the image feature points by using different corner detection algorithms respectively to obtain quality scores of the image feature points corresponding to each corner detection algorithm respectively; and obtaining an evaluation score based on the quality scores of the image characteristic points respectively corresponding to each corner detection algorithm.
In one embodiment, the quality evaluation sub-module is specifically configured to:
evaluating the quality of each image feature point by using a first corner detection algorithm to obtain a first score of each image feature point; sorting the first scores, and selecting a target first score from the first scores as the quality score of the image feature points corresponding to the first corner detection algorithm;
evaluating the quality of each image feature point by using a second corner detection algorithm to obtain a second score of each image feature point; sorting the second scores, and selecting a target second score from the second scores as a quality score of the image feature point corresponding to the second corner detection algorithm;
and obtaining an evaluation score based on the quality scores of the image characteristic points respectively corresponding to each corner detection algorithm.
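As an illustrative sketch of the two-algorithm quality evaluation above — the concrete algorithms (the Harris response and the Shi-Tomasi minimum-eigenvalue response) and the choice of the median of the sorted per-point scores as the "target" score are assumptions, since the embodiment leaves both the algorithms and the selection rule open:

```python
import math

def structure_tensor(img, x, y, win=2):
    """Sums of gradient products (Ix^2, IxIy, Iy^2) in a window around (x, y)."""
    a = b = c = 0.0
    for v in range(y - win, y + win + 1):
        for u in range(x - win, x + win + 1):
            ix = (img[v][u + 1] - img[v][u - 1]) / 2.0  # central differences
            iy = (img[v + 1][u] - img[v - 1][u]) / 2.0
            a, b, c = a + ix * ix, b + ix * iy, c + iy * iy
    return a, b, c

def harris_score(img, pt, k=0.04):
    """First corner detection algorithm: Harris response det(M) - k*trace(M)^2."""
    a, b, c = structure_tensor(img, *pt)
    return a * c - b * b - k * (a + c) ** 2

def shi_tomasi_score(img, pt):
    """Second corner detection algorithm: smaller eigenvalue of the 2x2 tensor."""
    a, b, c = structure_tensor(img, *pt)
    return (a + c) / 2.0 - math.sqrt(((a - c) / 2.0) ** 2 + b * b)

def quality_scores(img, points):
    """Per-algorithm quality score: here, the median of the sorted per-point scores."""
    first = sorted(harris_score(img, p) for p in points)
    second = sorted(shi_tomasi_score(img, p) for p in points)
    return first[len(first) // 2], second[len(second) // 2]
```

Both responses are high only at true corners: a flat region scores zero under both, and an edge point scores at most zero (negative under Harris), which is why combining two corner detectors gives a robust quality judgment.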
In one embodiment, the evaluation module 502 includes: a first evaluation sub-module, a second evaluation sub-module, a third evaluation sub-module, and a fourth evaluation sub-module (not shown), wherein,
the first evaluation submodule is used for evaluating the distribution uniformity of the image characteristic points to obtain distribution uniformity scores, and the more uniform the distribution of the image characteristic points is, the lower the distribution uniformity scores are;
the second evaluation submodule is used for evaluating the quality of each image feature point by utilizing a first corner detection algorithm to obtain a first quality score, wherein the higher the quality of the image feature points is, the higher the first quality score is;
the third evaluation submodule is used for evaluating the quality of each image feature point by utilizing a second corner detection algorithm to obtain a second quality score, wherein the higher the quality of the image feature points is, the higher the second quality score is;
the fourth evaluation submodule is used for calculating the product of the distribution uniformity score and the first preset weight to be used as a first product; calculating the product of the first quality score and the second quality score and a second preset weight as a second product; and calculating a numerical value obtained by subtracting the second product from the first product to serve as an evaluation score.
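The combination performed by the fourth evaluation submodule can be sketched as follows; the weight values are hypothetical. Note the sign convention: a more uniform distribution lowers the first product and higher quality raises the second product, so better map nodes receive lower evaluation scores and fall toward the first preset interval.

```python
# Sketch of the combined evaluation score. The weights w1 and w2 are
# hypothetical preset values; the embodiment fixes only the structure
# score = w1 * uniformity - w2 * (quality1 * quality2).
def evaluation_score(uniformity, quality1, quality2, w1=1.0, w2=0.01):
    first_product = w1 * uniformity              # lower when more uniform
    second_product = w2 * quality1 * quality2    # higher when better quality
    return first_product - second_product        # lower score = better node
```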
In one embodiment, the supplementing module 504 is specifically configured to supplement the map node if the evaluation score is located in a first subinterval of the second preset interval;

the reconstruction module 505 is specifically configured to reconstruct the map node if the evaluation score is located in a second subinterval of the second preset interval, wherein the first subinterval and the second subinterval are sequentially continuous and do not overlap, the first subinterval is closer to the first preset interval, and the second subinterval is closer to the third preset interval.
In one embodiment, the supplementing module 504 is further configured to determine a supplementary map node around the map node according to a first preset rule; instruct the intelligent device to acquire images in the position area corresponding to the supplementary map node, and analyze the images acquired by the intelligent device to obtain image feature points as first image feature points; and correspondingly store the first image feature points and the supplementary map node into the V-SLAM map.
In one embodiment, the reconstruction module 505 is further configured to determine a reconstructed map node around the map node according to a second preset rule; instruct the intelligent device to acquire images in the position area corresponding to the reconstructed map node, and analyze the images acquired by the intelligent device to obtain image feature points as second image feature points; and correspondingly store the second image feature points and the reconstructed map node into the V-SLAM map, and delete the map node and its corresponding image feature points stored in the V-SLAM map.
In one embodiment, the apparatus further comprises: a second acquisition module, a generation module, a third acquisition module, an analysis module and a second determination module (not shown in the figures), wherein,
a second obtaining module, configured to obtain a verified V-SLAM map, where map nodes in the verified V-SLAM map are: checking passed map nodes, supplemented map nodes or reconstructed map nodes;
the generation module is used for generating a moving path of the intelligent equipment, wherein the moving path comprises each map node in the verified V-SLAM map;
the third acquisition module is used for indicating the intelligent equipment to move along the moving path and acquiring first positioning data corresponding to each map node calculated by the intelligent equipment based on the verified V-SLAM map in the moving process; the first positioning data corresponding to one map node is as follows: first positioning data when the intelligent equipment is located in a position area corresponding to the map node;
the analysis module is used for carrying out accuracy analysis on first positioning data corresponding to each map node in the verified V-SLAM map to obtain an analysis result;
and the second determination module is used for determining the verification result of the map node based on the analysis result.
In one embodiment, the first positioning data corresponding to one map node includes: the corresponding positioning result and the confidence coefficient of the positioning result of each frame of image; each frame of image is as follows: when the intelligent device moves to the position area corresponding to the map node, an image collector configured in the intelligent device acquires an image according to the position area corresponding to the map node;
the analysis module is specifically configured to:
counting the number of frames of the image with the confidence coefficient of the positioning result larger than a fourth preset threshold value in the first positioning data corresponding to the map node as an effective frame number;
judging whether the effective frame number is greater than a first preset frame number threshold to obtain a first judgment result as the accuracy analysis result of the first positioning data; or, determining the total number of image frames acquired by the intelligent device in the whole process of passing through the position area corresponding to the map node; calculating the ratio of the effective frame number to this total frame number as a first ratio; and judging whether the first ratio is greater than a first preset ratio threshold to obtain a second judgment result as the accuracy analysis result of the first positioning data.
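A minimal sketch of the effective-frame analysis above; the concrete confidence and ratio thresholds (standing in for the "fourth preset threshold" and the "first preset ratio threshold") are hypothetical values:

```python
# Sketch of the effective-frame ratio check; threshold values are hypothetical.
def analyze_effective_frames(frames, conf_threshold=0.8, ratio_threshold=0.5):
    """frames: list of (positioning_result, confidence) pairs, one pair per
    image acquired while the device is in the map node's position area."""
    effective = sum(1 for _, conf in frames if conf > conf_threshold)
    first_ratio = effective / len(frames)  # effective frames / total frames
    return first_ratio > ratio_threshold   # True: first positioning data accurate

frames = [((1.0, 2.0), 0.90), ((1.1, 2.0), 0.95),
          ((0.0, 0.0), 0.10), ((1.2, 2.1), 0.85)]  # 3 of 4 frames effective
```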
In one embodiment, the first positioning data corresponding to one map node includes: positioning results corresponding to each frame of image; each frame of image is as follows: when the intelligent device moves to the position area corresponding to the map node, an image collector configured in the intelligent device acquires an image according to the position area corresponding to the map node;
the analysis module is further configured to:
counting how many of the per-frame positioning results are actually used by the intelligent device, to obtain a used frame number;

judging whether the used frame number is greater than a second preset frame number threshold to obtain a third judgment result as the accuracy analysis result of the first positioning data; or, determining the total number of image frames acquired by the intelligent device in the whole process of passing through the position area corresponding to the map node; calculating the ratio of the used frame number to this total frame number as a second ratio; and judging whether the second ratio is greater than a second preset ratio threshold to obtain a fourth judgment result as the accuracy analysis result of the first positioning data.
In one embodiment, the apparatus further comprises:
a fourth obtaining module (not shown in the figure), configured to obtain second positioning data corresponding to each map node, calculated by the intelligent device based on its own sensor during the movement; the second positioning data corresponding to one map node is: the second positioning data when the intelligent device is located in the position area corresponding to the map node;
the analysis module is further configured to:
determining the position deviation between first positioning data corresponding to the map node and second positioning data corresponding to the map node, and taking the position deviation as an accuracy analysis result of the first positioning data;
the apparatus further includes a judging module (not shown in the figure), configured to judge whether the intelligent device passes through the map node in a straight line; if not, execute the step of determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as the accuracy analysis result of the first positioning data; if so, calculate, according to the positioning results, the driving angle of the intelligent device in the whole process of passing through the position area corresponding to the map node, and judge whether the driving angle satisfies a preset angle condition; if not, determine that the map node fails verification; if so, execute the step of determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as the accuracy analysis result of the first positioning data.
in one embodiment, the apparatus further comprises:
a fifth obtaining module (not shown in the figure), configured to obtain second positioning data corresponding to each map node, calculated by the intelligent device based on its own sensor during the movement; the second positioning data corresponding to one map node is: the second positioning data when the intelligent device is located in the position area corresponding to the map node;
the analysis module is further configured to:
counting the frame number of the image with the confidence coefficient of the positioning result larger than a fourth preset threshold value as an effective frame number;
judging whether the effective frame number is greater than a fifth preset threshold value or not;
if the effective frame number is greater than the fifth preset threshold, counting how many of the per-frame positioning results are actually used by the intelligent device, to obtain a used frame number;
judging whether the number of the using frames is larger than a sixth preset threshold value or not;
if the used frame number is greater than the sixth preset threshold, judging whether the intelligent device passes through the map node in a straight line;
if the intelligent device passes through the map node in a straight line, calculating, according to the positioning results, the driving angle of the intelligent device in the whole process of passing through the position area corresponding to the map node;
judging whether the driving angle meets a preset angle condition or not;
if yes, determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node;
the determining a verification result of the map node based on the analysis result includes:
judging whether the position deviation is larger than a seventh preset threshold value or not;
if not, determining that the map node passes the verification;
the third obtaining module is further configured to instruct the intelligent device to move along the moving path at a fixed linear velocity and a fixed angular velocity.
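The cascaded per-node check of this embodiment can be sketched as follows; all thresholds and the straight-line/angle flags are hypothetical placeholders for the fifth to seventh preset thresholds and the corresponding judgments:

```python
# Sketch of the cascaded map-node verification; all thresholds and the
# went_straight / driving_angle_ok flags are hypothetical placeholders.
def verify_map_node(frames, used_count, went_straight, driving_angle_ok,
                    position_deviation, conf_threshold=0.8,
                    effective_min=10, used_min=10, deviation_max=0.05):
    effective = sum(1 for _, conf in frames if conf > conf_threshold)
    if effective <= effective_min:          # too few high-confidence frames
        return False
    if used_count <= used_min:              # positioning results rarely used
        return False
    if went_straight and not driving_angle_ok:
        return False                        # straight pass, but angle drifts
    # Final check: deviation between the V-SLAM-based first positioning data
    # and the sensor-based second positioning data
    return position_deviation <= deviation_max
```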
By applying the embodiment shown in fig. 5 of the present invention, for each map node in the V-SLAM map, the image feature points corresponding to the map node are obtained and evaluated to obtain an evaluation score; if the evaluation score is located in the first preset interval, the map node is determined to pass verification; if the evaluation score is located in the second preset interval, the map node is supplemented or reconstructed; if the evaluation score is located in the third preset interval, the map node is deleted, wherein the first, second and third preset intervals are sequentially continuous and do not overlap. Therefore, according to this scheme, the accuracy of the V-SLAM map is verified according to the image feature points corresponding to the map nodes.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601 and a memory 602,
a memory 602 for storing a computer program;
the processor 601 is configured to implement any one of the above V-SLAM map verification methods when executing the program stored in the memory 602.
The electronic device may be a computer, a server, or other devices, and may also be a robot, an AGV (Automated Guided Vehicle), or other intelligent devices capable of moving, which is not limited specifically.
The Memory mentioned in the above electronic device may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In still another embodiment of the present invention, there is further provided a computer-readable storage medium having a computer program stored therein, the computer program, when executed by a processor, implementing any one of the above V-SLAM map verification methods.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any one of the above V-SLAM map verification methods.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, apparatus embodiments, device embodiments, computer-readable storage medium embodiments, and computer program product embodiments are described for simplicity as they are substantially similar to method embodiments, where relevant, reference may be made to some descriptions of method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (20)

1. A V-SLAM map verification method is characterized by comprising the following steps:
aiming at each map node in the V-SLAM map, acquiring an image feature point corresponding to the map node;
evaluating the image characteristic points to obtain an evaluation score;
if the evaluation score is located in a first preset interval, determining that the map node passes verification;
if the evaluation score is located in a second preset interval, supplementing or reconstructing the map node;
and if the evaluation score is located in a third preset interval, deleting the map node, wherein the first preset interval, the second preset interval and the third preset interval are sequentially continuous and do not overlap.
2. The method of claim 1, wherein said evaluating said image feature points to obtain an evaluation score comprises:
evaluating the distribution uniformity and/or quality of the image feature points to obtain an evaluation score; the more uniform the distribution of the image feature points, the closer the corresponding evaluation score is to the first preset interval, and the higher the quality, the closer the corresponding evaluation score is to the first preset interval.
3. The method of claim 2, wherein evaluating the uniformity of distribution of the image feature points to obtain an evaluation score comprises:
aiming at each preset segmentation mode, carrying out region segmentation on the image where the image feature points are located by using the segmentation mode to obtain a plurality of segmentation regions; calculating a distribution uniformity score corresponding to the segmentation mode according to the number of the image feature points in each segmentation region;
and obtaining an evaluation score by calculating the sum of the distribution uniformity scores corresponding to all the segmentation modes.
4. The method according to claim 3, wherein the plurality of divided regions are two divided regions; the segmentation mode comprises any one or more of the following modes: division from the vertical direction, division from the horizontal direction, division from the 45-degree direction, division from the 135-degree direction, and division of the central region and the peripheral region.
5. The method of claim 2, wherein evaluating the quality of the image feature points to obtain an evaluation score comprises:
evaluating the quality of the image characteristic points by using different angular point detection algorithms respectively to obtain quality scores of the image characteristic points corresponding to each angular point detection algorithm respectively;
and obtaining an evaluation score based on the quality scores of the image characteristic points respectively corresponding to each corner detection algorithm.
6. The method according to claim 5, wherein the evaluating the quality of the image feature points by using different corner detection algorithms respectively to obtain a quality score of the image feature points corresponding to each corner detection algorithm respectively comprises:
evaluating the quality of each image feature point by using a first corner point detection algorithm to obtain a first score of each image feature point; selecting a target first score from the first scores by sorting the first scores, wherein the target first score is used as a quality score of an image feature point corresponding to the first corner point detection algorithm;
evaluating the quality of each image feature point by using a second corner detection algorithm to obtain a second score of each image feature point; and sorting the second scores, and selecting a target second score from the second scores as a quality score of the image feature point corresponding to the second corner point detection algorithm.
7. The method according to claim 2, wherein the evaluating the distribution uniformity and/or quality of the image feature points to obtain an evaluation score comprises:
evaluating the distribution uniformity of the image feature points to obtain a distribution uniformity score, wherein the more uniform the distribution of the image feature points is, the lower the distribution uniformity score is;
evaluating the quality of each image feature point by using a first corner detection algorithm to obtain a first quality score, wherein the higher the quality of the image feature points is, the higher the first quality score is;
evaluating the quality of each image feature point by using a second corner detection algorithm to obtain a second quality score, wherein the higher the quality of the image feature points is, the higher the second quality score is;
calculating the product of the distribution uniformity score and a first preset weight as a first product;
calculating the product of the first quality score and the second quality score and a second preset weight as a second product;
and calculating a numerical value obtained by subtracting the second product from the first product to serve as an evaluation score.
8. The method of claim 1, wherein the supplementing or reconstructing the map node if the evaluation score is within a second predetermined interval comprises:
if the evaluation score is located in a first subinterval in the second preset interval, supplementing the map node;
and if the evaluation score is located in a second subinterval in the second preset interval, reconstructing the map node, wherein the first subinterval and the second subinterval are sequentially continuous and do not overlap, the first subinterval is closer to the first preset interval, and the second subinterval is closer to the third preset interval.
9. Method according to claim 1 or 8, characterized in that the map node is supplemented by:
determining supplementary map nodes around the map node according to a first preset rule;
indicating intelligent equipment to acquire images in a position area corresponding to the supplementary map node, and analyzing the images acquired by the intelligent equipment to obtain image characteristic points serving as first image characteristic points;
correspondingly storing the first image feature point and the supplementary map node into the V-SLAM map;
reconstructing the map node, including:
determining a reconstructed map node around the map node according to a second preset rule;
indicating intelligent equipment to acquire images in a position area corresponding to the reconstructed map node, and analyzing the images acquired by the intelligent equipment to obtain image characteristic points serving as second image characteristic points;
and correspondingly storing the second image feature point and the reconstruction map node into the V-SLAM map, and deleting the map node and the corresponding image feature point stored in the V-SLAM map.
10. The method of claim 1, further comprising:
acquiring a verified V-SLAM map, wherein map nodes in the verified V-SLAM map are as follows: checking passed map nodes, supplemented map nodes or reconstructed map nodes;
generating a moving path of the intelligent equipment, wherein the moving path comprises each map node in the verified V-SLAM map;
indicating the intelligent equipment to move along the moving path, and acquiring first positioning data corresponding to each map node calculated by the intelligent equipment based on the verified V-SLAM map in the moving process; the first positioning data corresponding to one map node is as follows: first positioning data when the intelligent equipment is located in a position area corresponding to the map node;
for each map node in the verified V-SLAM map, performing accuracy analysis on first positioning data corresponding to the map node to obtain an analysis result;
and determining a verification result of the map node based on the analysis result.
11. The method of claim 10, wherein the first positioning data corresponding to one map node comprises: positioning results and positioning result confidence degrees corresponding to each frame of image; each frame of image is as follows: when the intelligent device moves to the position area corresponding to the map node, an image collector configured in the intelligent device acquires an image according to the position area corresponding to the map node;
the accuracy analysis is performed on the first positioning data corresponding to the map node to obtain an analysis result, and the analysis result includes:
counting the frame number of the image with the confidence coefficient of the positioning result larger than a fourth preset threshold value in the first positioning data corresponding to the map node as an effective frame number;
judging whether the effective frame number is greater than a first preset frame number threshold value or not to obtain a first judgment result which is used as an accuracy analysis result of the first positioning data; or, determining the total number of the images acquired by the intelligent equipment in the whole process of passing through the position area corresponding to the map node; calculating the ratio of the effective frame number to the total frame number as a first ratio; and judging whether the first ratio is larger than a first preset ratio threshold value or not to obtain a second judgment result which is used as an accuracy analysis result of the first positioning data.
12. The method of claim 10, wherein the first positioning data corresponding to one map node comprises: positioning results corresponding to each frame of image; each frame of image is as follows: when the intelligent device moves to the position area corresponding to the map node, an image collector configured in the intelligent device acquires an image according to the position area corresponding to the map node;
the accuracy analysis is performed on the first positioning data corresponding to the map node to obtain an analysis result, and the analysis result comprises the following steps:
counting the use condition of the intelligent equipment on the positioning result corresponding to each frame of image to obtain the number of use frames;
judging whether the number of the used frames is larger than a second preset frame number threshold value or not to obtain a third judgment result which is used as an accuracy analysis result of the first positioning data; or, determining the total number of the images acquired by the intelligent equipment in the whole process of passing through the position area corresponding to the map node; calculating the ratio of the using frame number to the total frame number to serve as a second ratio; and judging whether the second ratio is larger than a second preset ratio threshold value or not to obtain a fourth judgment result which is used as an accuracy analysis result of the first positioning data.
13. The method of claim 10, further comprising, after instructing the smart device to move along the movement path:
acquiring second positioning data corresponding to each map node, calculated by the smart device during the movement based on a sensor configured on the smart device; the second positioning data corresponding to one map node being the second positioning data obtained when the smart device is located in the position area corresponding to the map node;
performing accuracy analysis on the first positioning data corresponding to the map node to obtain an analysis result comprises:
determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node, as an accuracy analysis result of the first positioning data.
14. The method of claim 13, wherein the first positioning data corresponding to a map node comprises a positioning result corresponding to each frame of image, each frame of image being an image acquired by an image collector configured on the smart device in the position area corresponding to the map node when the smart device moves to that position area;
before determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as an accuracy analysis result of the first positioning data, the method further comprises:
judging whether the smart device passes through the map node in a straight line;
if not, executing the step of determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as an accuracy analysis result of the first positioning data;
if so, calculating, according to the positioning results, the heading angle of the smart device while passing through the position area corresponding to the map node; judging whether the heading angle meets a preset angle condition; if not, determining that the map node fails the verification; and if so, executing the step of determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as an accuracy analysis result of the first positioning data.
15. The method of claim 10, wherein the first positioning data corresponding to one map node comprises a positioning result corresponding to each frame of image and a confidence of each positioning result, each frame of image being an image acquired by an image collector configured on the smart device in the position area corresponding to the map node when the smart device moves to that position area;
the method further comprises: acquiring second positioning data corresponding to each map node, calculated by the smart device during the movement based on a sensor configured on the smart device; the second positioning data corresponding to one map node being the second positioning data obtained when the smart device is located in the position area corresponding to the map node;
performing accuracy analysis on the first positioning data corresponding to the map node to obtain an analysis result comprises:
counting the number of frames of image whose positioning-result confidence is greater than a fourth preset threshold, as an effective frame number;
judging whether the effective frame number is greater than a fifth preset threshold;
if the effective frame number is greater than the fifth preset threshold, counting how many of the positioning results corresponding to the frames of image were used by the smart device, to obtain a used frame number;
judging whether the used frame number is greater than a sixth preset threshold;
if the used frame number is greater than the sixth preset threshold, judging whether the smart device passes through the map node in a straight line;
if the smart device passes through the map node in a straight line, calculating, according to the positioning results, the heading angle of the smart device while passing through the position area corresponding to the map node;
judging whether the heading angle meets a preset angle condition;
if so, determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node;
determining the verification result of the map node based on the analysis result comprises:
judging whether the position deviation is greater than a seventh preset threshold;
and if not, determining that the map node passes the verification.
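The sequential gating of claim 15 (confidence gate, usage gate, straight-line heading gate, then position-deviation comparison) can be summarized as the following sketch. `check_map_node`, the dictionary layout, and every threshold value are hypothetical, and the "preset angle condition" is modeled simply as a bounded heading spread:

```python
import math

def check_map_node(frames, second_positions, straight_through,
                   conf_min=0.6, valid_min=10, used_min=5,
                   angle_tol_deg=5.0, deviation_max=0.1):
    """Illustrative claim-15 gate sequence; names and thresholds are invented.

    frames: list of dicts with keys 'pos' (x, y fix from the V-SLAM map),
            'conf' (confidence of that positioning result), 'used' (whether the
            device used it), 'heading' (heading angle from the positioning result).
    second_positions: matching (x, y) fixes from the on-board sensor.
    """
    # gate 1: enough frames with a confident positioning result
    valid = [f for f in frames if f['conf'] > conf_min]
    if len(valid) <= valid_min:
        return False
    # gate 2: enough of those results were actually used by the device
    if sum(1 for f in frames if f['used']) <= used_min:
        return False
    # gate 3: on a straight-line pass, the heading must stay within a preset band
    if straight_through:
        headings = [f['heading'] for f in frames]
        if max(headings) - min(headings) > angle_tol_deg:
            return False
    # gate 4: the V-SLAM fixes must agree with the sensor fixes on average
    deviations = [math.dist(f['pos'], q) for f, q in zip(frames, second_positions)]
    return sum(deviations) / len(deviations) <= deviation_max
```

A node passes only when every gate is cleared, which matches the claim's chain of "if greater than ... then" conditions ending in the seventh-threshold deviation check.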
16. The method of claim 10, wherein the instructing the smart device to move along the movement path comprises:
instructing the smart device to move along the movement path at a fixed linear and angular velocity.
17. A V-SLAM map verification apparatus, comprising: a first obtaining module, an evaluation module, a first determining module and a deletion module, and further comprising a supplement module and/or a reconstruction module; wherein:
the first obtaining module is configured to obtain, for each map node in the V-SLAM map, image feature points corresponding to the map node;
the evaluation module is configured to evaluate the image feature points to obtain an evaluation score; trigger the first determining module if the evaluation score falls within a first preset interval; trigger the supplement module and/or the reconstruction module if the evaluation score falls within a second preset interval; and trigger the deletion module if the evaluation score falls within a third preset interval, wherein the first preset interval, the second preset interval and the third preset interval are sequentially continuous and do not overlap;
the first determining module is configured to determine that the map node passes the verification;
the supplement module is configured to supplement the map node;
the reconstruction module is configured to reconstruct the map node;
and the deletion module is configured to delete the map node.
18. The apparatus according to claim 17, wherein the evaluation module is specifically configured to:
evaluate the distribution uniformity and/or quality of the image feature points to obtain the evaluation score, wherein the more uniform the distribution of the image feature points, the closer the corresponding evaluation score is to the first preset interval, and the higher the quality, the closer the corresponding evaluation score is to the first preset interval;
the evaluation module comprises:
a uniformity evaluation sub-module configured to: perform region segmentation on the image where the image feature points are located using each preset segmentation mode to obtain a plurality of segmentation regions; calculate a distribution uniformity score corresponding to the segmentation mode according to the number of image feature points in each segmentation region; and obtain the evaluation score by summing the distribution uniformity scores corresponding to all the segmentation modes;
wherein the plurality of segmentation regions are two segmentation regions, and the segmentation mode comprises any one or more of: segmentation in the vertical direction, segmentation in the horizontal direction, segmentation in the 45-degree direction, segmentation in the 135-degree direction, and segmentation into a central region and a peripheral region;
the evaluation module comprises:
a quality evaluation sub-module configured to: evaluate the quality of the image feature points using different corner detection algorithms to obtain a quality score of the image feature points corresponding to each corner detection algorithm; and obtain the evaluation score based on the quality scores of the image feature points corresponding to each corner detection algorithm;
the quality evaluation sub-module being specifically configured to:
evaluate the quality of each image feature point using a first corner detection algorithm to obtain a first score of each image feature point; sort the first scores and select a target first score from the first scores as the quality score of the image feature points corresponding to the first corner detection algorithm;
evaluate the quality of each image feature point using a second corner detection algorithm to obtain a second score of each image feature point; sort the second scores and select a target second score from the second scores as the quality score of the image feature points corresponding to the second corner detection algorithm;
and obtain the evaluation score based on the quality scores of the image feature points corresponding to each corner detection algorithm;
the evaluation module comprises:
a first evaluation sub-module configured to evaluate the distribution uniformity of the image feature points to obtain a distribution uniformity score, wherein the more uniform the distribution of the image feature points, the lower the distribution uniformity score;
a second evaluation sub-module configured to evaluate the quality of each image feature point using a first corner detection algorithm to obtain a first quality score, wherein the higher the quality of the image feature points, the higher the first quality score;
a third evaluation sub-module configured to evaluate the quality of each image feature point using a second corner detection algorithm to obtain a second quality score, wherein the higher the quality of the image feature points, the higher the second quality score;
a fourth evaluation sub-module configured to: calculate the product of the distribution uniformity score and a first preset weight, as a first product;
calculate the product of the first quality score, the second quality score and a second preset weight, as a second product;
and calculate the value obtained by subtracting the second product from the first product, as the evaluation score;
the supplement module being specifically configured to supplement the map node if the evaluation score falls within a first sub-interval of the second preset interval;
the reconstruction module being specifically configured to reconstruct the map node if the evaluation score falls within a second sub-interval of the second preset interval, wherein the first sub-interval and the second sub-interval are continuous and do not overlap, the first sub-interval is closer to the first preset interval, and the second sub-interval is closer to the third preset interval;
the supplement module being further configured to: determine a supplementary map node around the map node according to a first preset rule; instruct the smart device to acquire an image in the position area corresponding to the supplementary map node, and analyze the image acquired by the smart device to obtain image feature points as first image feature points; and store the first image feature points and the supplementary map node correspondingly into the V-SLAM map;
the reconstruction module being further configured to: determine a reconstructed map node around the map node according to a second preset rule; instruct the smart device to acquire an image in the position area corresponding to the reconstructed map node, and analyze the image acquired by the smart device to obtain image feature points as second image feature points; and store the second image feature points and the reconstructed map node correspondingly into the V-SLAM map, and delete the map node and its corresponding image feature points stored in the V-SLAM map.
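One way to read the uniformity scoring and weighted combination of claim 18 is the sketch below. The two-region splits and the final "first product minus second product" combination follow the claim text, while the function names, weights, and the tie-breaking of points on a split boundary are illustrative assumptions (the central/peripheral split is omitted for brevity):

```python
def uniformity_score(points, shape):
    """Split the image into two regions in several ways; the score grows with
    imbalance, so a lower score means a more uniform feature-point spread."""
    h, w = shape
    score = 0
    splits = [
        lambda x, y: x < w / 2,            # vertical split
        lambda x, y: y < h / 2,            # horizontal split
        lambda x, y: y * w < x * h,        # 45-degree diagonal split
        lambda x, y: y * w < (w - x) * h,  # 135-degree diagonal split
    ]
    for in_region in splits:
        a = sum(1 for (x, y) in points if in_region(x, y))
        score += abs(a - (len(points) - a))  # 0 when perfectly balanced
    return score

def evaluation_score(uniformity, q1, q2, w1=1.0, w2=1.0):
    """Claim-18 combination: (uniformity * w1) - (q1 * q2 * w2).

    Lower uniformity means a more even spread and higher q1/q2 mean better
    corners, so under this convention a lower evaluation score is better.
    """
    return uniformity * w1 - q1 * q2 * w2
```

In a real pipeline `q1` and `q2` would come from two different corner detectors (for instance Harris and Shi-Tomasi responses), sorted per point with a target score selected, as the claim describes.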
19. The apparatus of claim 17, further comprising:
a second obtaining module configured to obtain a verified V-SLAM map, wherein the map nodes in the verified V-SLAM map are map nodes that passed the verification, supplemented map nodes, or reconstructed map nodes;
a generation module configured to generate a movement path of the smart device, the movement path comprising each map node in the verified V-SLAM map;
a third obtaining module configured to instruct the smart device to move along the movement path and obtain first positioning data corresponding to each map node, calculated by the smart device during the movement based on the verified V-SLAM map; the first positioning data corresponding to one map node being the first positioning data obtained when the smart device is located in the position area corresponding to the map node;
an analysis module configured to perform accuracy analysis on the first positioning data corresponding to each map node in the verified V-SLAM map to obtain an analysis result;
a second determining module configured to determine the verification result of the map node based on the analysis result;
wherein the first positioning data corresponding to one map node comprises a positioning result corresponding to each frame of image and a confidence of each positioning result, each frame of image being an image acquired by an image collector configured on the smart device in the position area corresponding to the map node when the smart device moves to that position area;
the analysis module is specifically configured to:
counting, in the first positioning data corresponding to the map node, the number of frames of image whose positioning-result confidence is greater than a fourth preset threshold, as an effective frame number;
judging whether the effective frame number is greater than a first preset frame number threshold, and taking the obtained first judgment result as an accuracy analysis result of the first positioning data; or, determining the total number of images acquired by the smart device while passing through the position area corresponding to the map node, as a total frame number; calculating the ratio of the effective frame number to the total frame number as a first ratio; and judging whether the first ratio is greater than a first preset ratio threshold, and taking the obtained second judgment result as an accuracy analysis result of the first positioning data;
the analysis module being further configured to:
count how many of the positioning results corresponding to the frames of image were used by the smart device, to obtain a used frame number;
judge whether the used frame number is greater than a second preset frame number threshold, and take the obtained third judgment result as an accuracy analysis result of the first positioning data; or, determine the total number of images acquired by the smart device while passing through the position area corresponding to the map node, as a total frame number; calculate the ratio of the used frame number to the total frame number as a second ratio; and judge whether the second ratio is greater than a second preset ratio threshold, and take the obtained fourth judgment result as an accuracy analysis result of the first positioning data;
the apparatus further comprises:
a fourth obtaining module configured to obtain second positioning data corresponding to each map node, calculated by the smart device during the movement based on a sensor configured on the smart device; the second positioning data corresponding to one map node being the second positioning data obtained when the smart device is located in the position area corresponding to the map node;
the analysis module is further configured to:
determine the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node, as an accuracy analysis result of the first positioning data;
the apparatus further comprises:
a judging module configured to judge whether the smart device passes through the map node in a straight line;
if not, execute the step of determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as an accuracy analysis result of the first positioning data;
if so, calculate, according to the positioning results, the heading angle of the smart device while passing through the position area corresponding to the map node; judge whether the heading angle meets a preset angle condition; if not, determine that the map node fails the verification; and if so, execute the step of determining the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node as an accuracy analysis result of the first positioning data;
the apparatus further comprises:
a fifth obtaining module configured to obtain second positioning data corresponding to each map node, calculated by the smart device during the movement based on a sensor configured on the smart device; the second positioning data corresponding to one map node being the second positioning data obtained when the smart device is located in the position area corresponding to the map node;
the analysis module is further configured to:
count the number of frames of image whose positioning-result confidence is greater than a fourth preset threshold, as an effective frame number;
judge whether the effective frame number is greater than a fifth preset threshold;
if the effective frame number is greater than the fifth preset threshold, count how many of the positioning results corresponding to the frames of image were used by the smart device, to obtain a used frame number;
judge whether the used frame number is greater than a sixth preset threshold;
if the used frame number is greater than the sixth preset threshold, judge whether the smart device passes through the map node in a straight line;
if the smart device passes through the map node in a straight line, calculate, according to the positioning results, the heading angle of the smart device while passing through the position area corresponding to the map node;
judge whether the heading angle meets a preset angle condition;
if so, determine the position deviation between the first positioning data corresponding to the map node and the second positioning data corresponding to the map node;
wherein determining the verification result of the map node based on the analysis result comprises:
judging whether the position deviation is greater than a seventh preset threshold;
and if not, determining that the map node passes the verification;
the third obtaining module being further configured to instruct the smart device to move along the movement path at a fixed linear velocity and a fixed angular velocity.
20. An electronic device, comprising a processor and a memory;
the memory being configured to store a computer program;
the processor being configured to implement the method steps of any one of claims 1 to 16 when executing the program stored in the memory.
CN202011628132.0A 2020-12-31 2020-12-31 V-SLAM map checking method, device and equipment Active CN112783995B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011628132.0A CN112783995B (en) 2020-12-31 2020-12-31 V-SLAM map checking method, device and equipment
PCT/CN2021/142280 WO2022143713A1 (en) 2020-12-31 2021-12-29 V-slam map verification method and apparatus, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011628132.0A CN112783995B (en) 2020-12-31 2020-12-31 V-SLAM map checking method, device and equipment

Publications (2)

Publication Number Publication Date
CN112783995A CN112783995A (en) 2021-05-11
CN112783995B true CN112783995B (en) 2022-06-03

Family

ID=75754577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011628132.0A Active CN112783995B (en) 2020-12-31 2020-12-31 V-SLAM map checking method, device and equipment

Country Status (1)

Country Link
CN (1) CN112783995B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022143713A1 (en) * 2020-12-31 2022-07-07 杭州海康机器人技术有限公司 V-slam map verification method and apparatus, and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732518A (en) * 2015-01-19 2015-06-24 北京工业大学 PTAM improvement method based on ground characteristics of intelligent robot
CN110706282A (en) * 2019-10-31 2020-01-17 镁佳(北京)科技有限公司 Automatic calibration method and device for panoramic system, readable storage medium and electronic equipment
CN111047579A (en) * 2019-12-13 2020-04-21 中南大学 Characteristic quality evaluation method and image characteristic uniform extraction method
CN111144483A (en) * 2019-12-26 2020-05-12 歌尔股份有限公司 Image feature point filtering method and terminal
CN111819838A (en) * 2018-03-06 2020-10-23 富士胶片株式会社 Photographic evaluation chart, photographic evaluation chart generation device, photographic evaluation chart generation method, and photographic evaluation chart generation program
CN111915532A (en) * 2020-08-07 2020-11-10 北京字节跳动网络技术有限公司 Image tracking method and device, electronic equipment and computer readable medium
CN112115953A (en) * 2020-09-18 2020-12-22 南京工业大学 Optimized ORB algorithm based on RGB-D camera combined with plane detection and random sampling consistency algorithm


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Visual odometry feature matching method based on improved ORB algorithm; Yin Xinkai; Software; 2020-04-30; Vol. 41, No. 4; full text *

Also Published As

Publication number Publication date
CN112783995A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN106646338B (en) A kind of quickly accurate indoor orientation method
CN109658454B (en) Pose information determination method, related device and storage medium
CN108182695B (en) Target tracking model training method and device, electronic equipment and storage medium
US20140253543A1 (en) Performance prediction for generation of point clouds from passive imagery
CN112668480B (en) Head attitude angle detection method and device, electronic equipment and storage medium
CN112132853B (en) Method and device for constructing ground guide arrow, electronic equipment and storage medium
CN111444294A (en) Track completion method and device and electronic equipment
CN112783995B (en) V-SLAM map checking method, device and equipment
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN116484036A (en) Image recommendation method, device, electronic equipment and computer readable storage medium
CN111932545A (en) Image processing method, target counting method and related device thereof
CN111540202B (en) Similar bayonet determining method and device, electronic equipment and readable storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
KR102260556B1 (en) Deep learning-based parking slot detection method and apparatus integrating global and local information
CN115482425A (en) Key point identification method, model training method, device and storage medium
CN115830342A (en) Method and device for determining detection frame, storage medium and electronic device
CN115797310A (en) Method for determining inclination angle of photovoltaic power station group string and electronic equipment
CN115311652A (en) Object detection method and device, electronic equipment and readable storage medium
CN112199984B (en) Target rapid detection method for large-scale remote sensing image
CN115311680A (en) Human body image quality detection method and device, electronic equipment and storage medium
WO2022143713A1 (en) V-slam map verification method and apparatus, and device
CN112833912B (en) V-SLAM map verification method, device and equipment
CN112861689A (en) Searching method and device of coordinate recognition model based on NAS technology
CN113269678A (en) Fault point positioning method for contact network transmission line
CN112967399A (en) Three-dimensional time sequence image generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.
