US11989908B2 - Visual positioning method, mobile machine using the same, and computer readable storage medium - Google Patents
Visual positioning method, mobile machine using the same, and computer readable storage medium Download PDFInfo
- Publication number
- US11989908B2 (application US17/488,343 / US202117488343A)
- Authority
- US
- United States
- Prior art keywords
- feature point
- feature points
- instructions
- corner
- adjacent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the present disclosure relates to image data processing technology, and particularly to a visual positioning method, a mobile machine using the same, and a computer readable storage medium.
- a robot equipped with a vision sensor that performs mapping and navigation in the scene where it is located cannot evaluate the positioning reliability in the scene in real time based on the regional distribution characteristics of common visual plane features. Moreover, since the feature points of the visual plane features obtained during the mapping and navigation are random, it is impossible to measure the current robustness of the mapping and navigation of the robot, and impossible to take corresponding early warning and remedial measures in time by, for example, disposing specific markers (e.g., two-dimensional codes) in the scene or adding other perception assistance equipment in advance, so as to urgently avoid dangerous areas with sparse visual features or poor positioning stability. Therefore, it merely relies on humans to check the quality of the built map or to monitor over a long period whether the robot has abnormal navigation behavior caused by positioning drift or loss, which is costly and inefficient.
- FIG. 1 is a flow chart of a visual positioning method according to an embodiment of the present disclosure.
- FIG. 2 is a schematic block diagram of a visual positioning apparatus according to an embodiment of the present disclosure.
- FIG. 3 is a schematic block diagram of a mobile machine according to an embodiment of the present disclosure.
- FIG. 4 is a flow chart of step S 4 in FIG. 1 .
- FIG. 5 is a flow chart of step S 45 in FIG. 4 .
- FIG. 6 is a flow chart of step S 46 in FIG. 4 .
- FIG. 7 is a flow chart of step S 5 in FIG. 1 .
- a visual positioning method is provided.
- the visual positioning method is a computer-implemented method executable for a processor, which may be applied to a mobile machine (e.g. a robot or a vehicle) having a camera.
- the method may be implemented through a visual positioning apparatus shown in FIG. 2 or a mobile machine shown in FIG. 3 .
- the method may include the following steps.
- the corner feature points refer to the feature points in an image that are used for positioning, for example, the intersections of various objects with different colors in the image (a white wall is disadvantageous to positioning because no corner points can be extracted from it and its features are too indistinct).
- the above-mentioned corner feature points may include FAST (features from accelerated segment test), ORB (oriented FAST and rotated BRIEF), Harris, SIFT (scale-invariant feature transform), SURF (speeded up robust features), and the like.
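As an illustration of how a corner response distinguishes corners from edges (a Harris-style sketch, not the detector the disclosure mandates), the following Python/NumPy snippet computes the classic response det(M) − k·trace(M)². The window size, the constant k, and the synthetic test image are all illustrative assumptions; real detectors such as FAST, ORB, SIFT, or SURF add non-maximum suppression and scale handling.

```python
import numpy as np

def box3(a):
    """Sum each pixel's 3x3 neighborhood (zero padding): the windowing
    step that turns per-pixel gradients into a local structure tensor."""
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Per-pixel Harris corner response det(M) - k*trace(M)^2, where M is
    the windowed structure tensor of the image gradients."""
    img = img.astype(np.float64)
    Ix = np.gradient(img, axis=1)   # horizontal gradient
    Iy = np.gradient(img, axis=0)   # vertical gradient
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square on a dark background. Its corners score
# positive, while points in the middle of an edge score negative.
img = np.zeros((16, 16))
img[4:12, 4:12] = 255.0
resp = harris_response(img)
```

On this synthetic image, the square's corner pixel (4, 4) produces a positive response and the edge midpoint (4, 8) a negative one, which is exactly the distinction the detector exploits.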
- since the extraction of the above-mentioned corner feature points depends only on the grayscale changes of local areas of the planar image and is not limited by spatial constraints, the distribution of the corner feature points will be overly dense in some local areas with dramatic grayscale changes, such as the vicinity of objects and the periphery of lamps, which leads to weak positioning robustness.
- the visual positioning of the mobile machine during mapping and navigation is realized on the basis of the above-mentioned corner feature points. Every time an image corresponding to the current moment is collected, a frame of current image is obtained, and the above-mentioned corner feature points are extracted from the current image and then cached. The total number of the corner feature points is denoted as N. Then, it determines whether N is larger than a number threshold for meeting the requirement of stable positioning. If N is not larger than the number threshold, it will be considered as not meeting the requirement of stable positioning, the positioning reliability of the current image will be set to zero and output, and the image will not be used in the subsequent positioning.
- if N is larger than the number threshold, a corner point classification is performed on the corner feature points in the current image, so that the corner feature points with the same positioning effect are classified into the same cluster set. Then, in each cluster set, the corner feature points with uniform distribution and large pixel pitch are selected as valid feature points, thereby improving the robustness of positioning.
- the positioning reliability of visual positioning is determined to measure the robustness of positioning, because when the total number of the corner feature points is fixed, the more evenly the corner feature points are distributed and the larger the pixel pitch between them, the stronger the robustness of positioning.
- the above-mentioned overly dense corner feature points not only cannot produce an equivalent gain in the robustness of positioning, but also cause positioning degeneration because they easily introduce matching errors.
- the corner feature points are classified and analyzed, and the valid feature points with uniform distribution and large pixel pitch are determined through downsampling, and the ratio of the valid feature points to all the extracted corner feature points is used as the positioning reliability, thereby measuring the robustness of positioning.
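The reliability measure described above can be sketched in a few lines; the number threshold of 50 below is an illustrative assumption, not a value from the disclosure.

```python
def positioning_reliability(num_valid, num_total, number_threshold=50):
    """Ratio of valid (evenly distributed, well-spaced) corner feature
    points to all extracted corner feature points. Returns 0.0 when too
    few corners were extracted to meet the stable-positioning requirement."""
    if num_total <= number_threshold:
        return 0.0
    return num_valid / num_total
```

For example, 60 valid points out of 120 extracted corners give a reliability of 0.5, while any frame with 50 or fewer corners is reported as 0.0 regardless of how its points are spread.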
- the step S 4 of obtaining the cluster set(s) of the corner feature points includes:
- the above-mentioned first pixel pitch S 0 is a threshold of the minimum pixel pitch to be sampled.
- the above-mentioned predetermined area is centered on the cluster center; its S0×S0 rectangular area is determined, and the grayscale mean of the above-mentioned rectangular area is calculated to serve as the criterion for screening the valid feature points.
- the above-mentioned second pixel pitch L 0 is a threshold of the maximum pixel pitch to be retrieved, and all the corner feature points within the second pixel pitch are retrieved to take as the adjacent feature points.
- a pixel deviation calculation is performed between the above-mentioned adjacent feature points and the cluster center, and clustering is performed based on the pixel deviation, so that the corner feature points whose absolute pixel deviation is within a predetermined deviation range are taken as belonging to the same category as the cluster center and merged into the same cluster set.
- if the adjacent feature points and the cluster center belong to the cluster sets of different positioning-equivalent corner points (each corner feature point is regarded as an independent positioning-equivalent corner point at the beginning of the clustering), the adjacent feature points are merged into the cluster set to which the cluster center belongs.
- the step S 45 of screening the designated adjacent feature point belonging to the same cluster set as the first feature point based on the pixel deviation includes:
- each corner feature point is treated as an independent cluster to process by determining the cluster center, the grayscale mean corresponding to the cluster center, and the set of adjacent feature points corresponding to the cluster center, so as to realize the clustering of the cluster set corresponding to the cluster center, and so on until the number of the cluster sets of all the corner feature points corresponding to the entire current image no longer changes; the cluster analysis of the corner feature points is then terminated.
- the grayscale mean corresponding to each feature point is the mean of the pixels in the S0×S0 rectangular area centered thereon.
- the above-mentioned predetermined deviation range is the pixel grayscale deviation threshold for determining whether two feature points belong to the boundary of the same object.
- the absolute values of the deviations of the grayscale means corresponding to the two feature points are compared, and if the above-mentioned absolute value is less than or equal to the pixel grayscale deviation threshold, the two feature points will be considered to be in a similar environment and on the boundary of the same object. Compared with determining the environmental similarity of the two feature points through the grayscale of each feature point itself, it has better robustness.
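This window-mean criterion can be sketched as follows, assuming S0 = 5 pixels and a deviation threshold of 10 gray levels (both illustrative values, not taken from the disclosure):

```python
import numpy as np

def grayscale_mean(img, center, s0=5):
    """Mean gray value of the s0-by-s0 window centered on a feature point
    (s0 corresponds to the first pixel pitch S0; 5 is an assumed value)."""
    r, c = center
    h = s0 // 2
    return float(img[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1].mean())

def same_boundary(img, p, q, deviation_threshold=10.0):
    """Treat two feature points as lying on the boundary of the same object
    when their window means deviate by at most the grayscale deviation
    threshold -- more robust than comparing single-pixel grayscales."""
    return abs(grayscale_mean(img, p) - grayscale_mean(img, q)) <= deviation_threshold

# Two flat regions: points on the same side of the boundary agree,
# points on opposite sides do not.
img = np.zeros((16, 16))
img[:, :8] = 200.0
img[:, 8:] = 20.0
```

Here `same_boundary(img, (8, 3), (8, 5))` holds because both 5×5 windows average 200, while a point at column 12 averages 20 and fails the test.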
- the “first”, “second” and the like are used for distinction only, not for limitation or ranking, and similar terms in other places have the same function and will not be repeated herein.
- the corner feature points may be respectively stored in a k-d tree, and the first feature point is stored in a designated node of the k-d tree.
- the step S 46 of obtaining the cluster set(s) of the corner feature points by performing the cluster analysis on all the corner feature points according to the clustering process with the first feature point as the cluster center may include:
- the corner feature points of the current image are cached on one k-d tree (short for k-dimensional tree), which facilitates retrieval and avoids missed detection.
- the root node of the k-d tree is used as the starting node, and the analysis is carried out step by step to the leaf node.
- the process of the cluster analysis of the corner feature point corresponding to each node is as described above, which will not be repeated herein.
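The cache-and-retrieve pattern can be sketched with a minimal pure-Python 2-d k-d tree; the sample coordinates and the retrieval radius L0 = 6 below are illustrative assumptions.

```python
import math

def build_kdtree(pts, depth=0):
    """Build a 2-d k-d tree over pixel coordinates; each node stores one
    cached corner feature point."""
    if not pts:
        return None
    axis = depth % 2
    pts = sorted(pts, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid],
            "left": build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

def range_query(node, center, radius, depth=0, found=None):
    """Retrieve all cached points within `radius` (the second pixel pitch
    L0) of `center`, pruning subtrees the radius cannot reach."""
    if found is None:
        found = []
    if node is None:
        return found
    p = node["point"]
    if math.dist(p, center) <= radius:
        found.append(p)
    axis = depth % 2
    diff = center[axis] - p[axis]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    range_query(node[near], center, radius, depth + 1, found)
    if abs(diff) <= radius:  # the splitting plane is within reach
        range_query(node[far], center, radius, depth + 1, found)
    return found

pts = [(10, 10), (12, 11), (40, 40), (11, 13), (90, 5)]
tree = build_kdtree(pts)
```

Querying around (10, 10) with radius 6 retrieves only the three nearby points, without scanning the far branches, which is the benefit of caching the corners on a k-d tree rather than a flat list.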
- the step S 5 of the screening the plurality of valid feature points from the cluster set may include:
- if the starting feature point used when calculating the minimum circumscribed polygon is involved in the downsampling set, the starting feature point is considered to be a valid feature point, and the feature points on the minimum circumscribed polygon that are adjacent to the starting feature point are then evaluated, that is, the pixel distance between each adjacent feature point and the starting feature point is calculated.
- the preset threshold of the above-mentioned pixel pitch is 5. Because the pixel pitch between feature point 1 and feature point 2 is 4 which is less than the preset threshold of 5, feature point 2 will be discarded. Moreover, because the pixel pitch between feature point 1 and feature point 3 is calculated to be 8 which is larger than the preset threshold of 5, feature point 3 is involved in the downsampling set. At this time, the downsampling set includes feature point 1 and feature point 3 , then feature point 4 is analyzed according to feature point 1 and feature point 3 , and then the pixel pitch between feature point 1 and feature point 4 is calculated to be 10.
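The worked example above can be checked with a short sketch; the coordinates below are hypothetical points chosen only so that the pixel pitches match the stated values of 4, 8, and 10.

```python
import math

def downsample_by_pitch(points, threshold=5.0):
    """Greedy downsampling along the polygon periphery: keep a point only
    if its pixel pitch to every already-kept point exceeds the threshold."""
    kept = []
    for p in points:
        if all(math.dist(p, q) > threshold for q in kept):
            kept.append(p)
    return kept

# Collinear points giving pitches |p1p2| = 4, |p1p3| = 8, |p1p4| = 10.
p1, p2, p3, p4 = (0, 0), (0, 4), (0, 8), (0, 10)
```

As in the example, point 2 is discarded (pitch 4 is not larger than 5) and point 3 is kept (pitch 8 > 5); point 4, although 10 pixels from point 1, is then discarded in this sketch because it lies only 2 pixels from the already-kept point 3.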
- the obtaining module 4 may include:
- the clustering unit may include:
- the screening module 5 may include:
- screening module 5 may further include:
- FIG. 3 is a schematic block diagram of a mobile machine according to an embodiment of the present disclosure.
- a mobile machine e.g., a robot or a vehicle
- the computing device includes a processor 61 , a storage, a network interface 63 , and a database 64 which are connected via a system bus.
- the processor 61 is for realizing calculations and controls.
- the storage includes a non-volatile storage medium and an internal memory 621 .
- the non-volatile storage medium stores an operating system, a computer program, and the database 64 .
- the internal memory 621 provides an environment for the execution of the operating system and computer program in the non-volatile storage medium.
- the database 64 is for storing all the data required during visual positioning.
- the network interface 63 is for connecting and communicating with an external terminal via a network. When the computer program is executed by the processor 61 , the above-mentioned visual positioning method is implemented.
- the processor 61 executes the above-mentioned visual positioning method which includes: extracting a plurality of corner feature points corresponding to a current image captured through the camera; determining whether a distance between each pair of the plurality of corner feature points is less than a first preset threshold; determining whether a grayscale value of each of the plurality of corner feature points with the distance less than the first preset threshold is within a second preset threshold range, in response to the distance between each pair of the plurality of corner feature points being less than the first preset threshold; obtaining one or more cluster sets of the corner feature points, in response to the grayscale value of the corner feature point with the distance less than the first preset threshold being within the second preset threshold range; screening a plurality of valid feature points from the one or more cluster sets, where the valid feature points are the evenly distributed corner feature points with an interval of a specified amount of pixels to each other in the same cluster set; determining a positioning reliability based on a ratio of an amount of the valid feature points to an amount of the plurality of corner feature points; and performing a visual positioning on the mobile machine based on the positioning reliability, in response to the positioning reliability being within a preset range.
- FIG. 3 is only a part of the structure related to the solution of the present disclosure, and does not constitute a limitation on the mobile machine to which the solution of the present disclosure is applied.
- a non-transitory computer readable storage medium storing a computer program is provided.
- a visual positioning method is realized. The method includes: extracting a plurality of corner feature points corresponding to a current image captured through a camera of a mobile machine (e.g., the above-mentioned mobile machine of FIG. 3).
- the above-mentioned computer-readable storage medium classifies and analyzes the corner feature points, performs downsampling through a preset algorithm so as to determine the valid feature points with uniform distribution and large pixel pitch, and uses the ratio of the valid feature points to all the extracted corner feature points as the positioning reliability to measure the robustness of positioning.
- Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM) or external cache memory.
- RAM can be in a variety of formats such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), rambus direct RAM (RDRAM), and direct rambus DRAM (DRDRAM).
Abstract
Description
-
- S1: extracting a plurality of corner feature points corresponding to a current image captured through the camera;
- S2: determining whether a distance between each pair of the plurality of corner feature points is less than a first preset threshold;
- S3: determining whether a grayscale value of each of the plurality of corner feature points with the distance less than the first preset threshold is within a second preset threshold range, in response to the distance between each pair of the plurality of corner feature points being less than the first preset threshold;
- S4: obtaining cluster set(s) of the corner feature points, in response to the grayscale value of the corner feature point with the distance less than the first preset threshold being within the second preset threshold range;
- S5: screening a plurality of valid feature points from the cluster set(s), where the valid feature points are the evenly distributed corner feature points with an interval of a specified amount of pixels to each other in the same cluster set;
- S6: determining a positioning reliability based on a ratio of an amount of the valid feature points to an amount of the plurality of corner feature points; and
- S7: performing a visual positioning on the mobile machine based on the positioning reliability, in response to the positioning reliability being within a preset range.
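The steps S1-S7 above can be sketched as a pipeline. Here `extract_clusters` and `screen_valid` are hypothetical callables standing in for the clustering steps (S2-S4) and the screening step (S5), and the preset range (0.3, 1.0) is an assumed value, not one from the disclosure.

```python
def visual_positioning_reliability(corners, extract_clusters, screen_valid,
                                   preset_range=(0.3, 1.0)):
    """Skeleton of steps S2-S7; S1 (corner extraction from the current
    image) is assumed to have produced `corners` already."""
    if not corners:
        return 0.0, False
    clusters = extract_clusters(corners)             # S2-S4: cluster sets
    valid = screen_valid(clusters)                   # S5: valid feature points
    reliability = len(valid) / len(corners)          # S6: ratio
    usable = preset_range[0] <= reliability <= preset_range[1]  # S7 gate
    return reliability, usable
```

With four extracted corners of which two survive screening, the reliability is 0.5 and falls inside the assumed preset range, so positioning would proceed.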
-
- S41: selecting a first feature point as a cluster center, where the first feature point is any of the corner feature points with the distance less than the first preset threshold;
- S42: calculating a grayscale mean within a predetermined area with a first pixel pitch from the cluster center;
- S43: determining whether there are adjacent feature points within a distance range of a second pixel pitch from the cluster center, where the second pixel pitch is larger than the first pixel pitch, and the second pixel pitch is less than or equal to the first preset threshold;
- S44: calculating a pixel deviation between the adjacent feature point and the grayscale mean, in response to there being adjacent feature points within the distance range of the second pixel pitch from the cluster center;
- S45: screening a designated adjacent feature point belonging to the same cluster set with the first feature point based on the pixel deviation (the objects in the same cluster set that have high similarity usually represent the same object/target); and
- S46: obtaining the cluster set(s) of the corner feature points by performing a cluster analysis on all the corner feature points according to a clustering process with the first feature point as the cluster center.
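Steps S41-S46 can be sketched with a union-find merge over the corner points; the values s0 = 5, l0 = 6, and the deviation threshold of 10 gray levels are illustrative assumptions.

```python
import numpy as np

def cluster_corners(img, pts, s0=5, l0=6.0, dev_thresh=10.0):
    """Sketch of steps S41-S46: merge corner points into one cluster set
    when they lie within the second pixel pitch l0 of each other and
    their s0-window grayscale means deviate by at most dev_thresh."""
    h = s0 // 2

    def win_mean(p):
        r, c = p
        return float(img[max(r - h, 0):r + h + 1,
                         max(c - h, 0):c + h + 1].mean())

    parent = list(range(len(pts)))  # each point starts as its own cluster (S41)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    means = [win_mean(p) for p in pts]               # S42: window grayscale means
    for i, p in enumerate(pts):
        for j, q in enumerate(pts):
            if i < j and np.hypot(p[0] - q[0], p[1] - q[1]) <= l0 \
                    and abs(means[i] - means[j]) <= dev_thresh:  # S43-S45
                parent[find(i)] = find(j)            # S46: merge cluster sets
    return [find(i) for i in range(len(pts))]
```

On a two-region test image, the two points on each side of a boundary merge with each other but not across it, so two cluster sets remain.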
-
- S451: obtaining a first pixel deviation corresponding to a first adjacent feature point, where the first adjacent feature point is any of all the adjacent feature points within the distance range of the second pixel pitch;
- S452: determining whether an absolute value of the first pixel deviation is less than or equal to a pixel grayscale deviation threshold;
- S453: determining whether the first adjacent feature point is in the cluster set to which the first feature point belongs, in response to the absolute value of the first pixel deviation being less than or equal to the pixel grayscale deviation threshold; and
-
- S461: marking the designated node as a starting point of a retrieval path;
- S462: determining a lower-level node connected to the starting point of the retrieval path;
- S463: obtaining a second feature point stored in correspondence with the lower-level node;
- S464: performing a cluster analysis on the second feature point according to the clustering process with the first feature point as the cluster center; and
- S465: obtaining each cluster set by traversing from the second feature point to a leaf node of the k-d tree.
-
- S51: obtaining pixel coordinates corresponding to all the corner feature points in a designated cluster set, where the designated cluster set is any of the cluster sets;
- S52: calculating a minimum circumscribed polygon based on the pixel coordinates corresponding to each of all the corner feature points; and
- S53: involving the corner feature points positioned on a periphery of the minimum circumscribed polygon in the cluster set to which the valid feature points belong.
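The minimum circumscribed polygon of a cluster set's pixel coordinates can be sketched with Andrew's monotone chain convex hull, a standard construction (the disclosure does not specify which algorithm is used); its vertices are the periphery points of steps S52-S53.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; the hull vertices play the
    role of the minimum circumscribed polygon of a cluster set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive when o->a->b turns counter-clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate the two chains; each endpoint appears in the other chain.
    return lower[:-1] + upper[:-1]
```

For a square of corner points with one interior point, only the four outer corners land on the polygon periphery, so only they are candidates for the valid-feature screening.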
-
- S521: obtaining a starting feature point when calculating the minimum circumscribed polygon;
- S522: adding the starting feature point to a downsampling set;
- S523: obtaining a second adjacent feature point adjacent to the starting feature point when calculating the minimum circumscribed polygon;
- S524: calculating a pixel pitch between the starting feature point and the second adjacent feature point;
- S525: determining whether the pixel pitch between the starting feature point and the second adjacent feature point is larger than a third preset threshold;
- S526: adding the second adjacent feature point to the downsampling set, in response to the pixel pitch between the starting feature point and the second adjacent feature point being larger than the third preset threshold;
- S527: obtaining a down-sampled downsampling set by performing downsampling on all the corner feature points on the periphery of the minimum circumscribed polygon according to a downsampling process of the second adjacent feature points; and
- S528: involving the feature points in the down-sampled downsampling set in the cluster set to which the valid feature points belong.
-
- obtaining a third adjacent feature point adjacent to the second adjacent feature point when calculating the minimum circumscribed polygon;
- calculating a pixel pitch between the starting feature point and the third adjacent feature point, and a pixel pitch between the second adjacent feature point and the third adjacent feature point;
- determining whether the pixel pitch between the starting feature point and the third adjacent feature point and that between the second adjacent feature point and the third adjacent feature point are both larger than the third preset threshold;
- if yes, adding the third adjacent feature point to the downsampling set; and
- obtaining the down-sampled downsampling set by downsampling all the corner feature points on the periphery of the minimum circumscribed polygon according to the downsampling process of the second adjacent feature point and the third adjacent feature point.
-
- an extraction module 1 configured to extract a plurality of corner feature points corresponding to a current image captured through the camera;
- a first determination module 2 configured to determine whether a distance between each pair of the plurality of corner feature points is less than a first preset threshold;
- a second determination module 3 configured to determine whether a grayscale value of each of the plurality of corner feature points with the distance less than the first preset threshold is within a second preset threshold range, in response to the distance between each pair of the plurality of corner feature points being less than the first preset threshold;
- an obtaining module 4 configured to obtain one or more cluster sets of the corner feature points, in response to the grayscale value of the corner feature point with the distance less than the first preset threshold being within the second preset threshold range;
- a screening module 5 configured to screen a plurality of valid feature points from the one or more cluster sets, where the valid feature points are the evenly distributed corner feature points with an interval of a specified amount of pixels to each other in the same cluster set;
- a calculation module 6 configured to determine a positioning reliability based on a ratio of an amount of the valid feature points to an amount of the plurality of corner feature points; and
- a determination module 7 configured to perform a visual positioning on the mobile machine based on the positioning reliability, in response to the positioning reliability being within a preset range.
-
- a selection unit configured to select a first feature point as a cluster center, where the first feature point is any of the corner feature points with the distance less than the first preset threshold;
- a first calculation unit configured to calculate a grayscale mean within a predetermined area with a first pixel pitch from the cluster center;
- a determination unit configured to determine whether there are adjacent feature points within a distance range of a second pixel pitch from the cluster center, where the second pixel pitch is larger than the first pixel pitch, and the second pixel pitch is less than or equal to the first preset threshold;
- a second calculation unit configured to calculate a pixel deviation between the adjacent feature point and the grayscale mean, in response to there being adjacent feature points within the distance range of the second pixel pitch from the cluster center;
- a screening unit configured to screen a designated adjacent feature point belonging to the same cluster set with the first feature point based on the pixel deviation; and
- a clustering unit configured to obtain the one or more cluster sets of the corner feature points by performing a cluster analysis on all the corner feature points according to a clustering process with the first feature point as the cluster center.
-
- a first obtaining subunit configured to obtain a first pixel deviation corresponding to a first adjacent feature point, where the first adjacent feature point is any of all the adjacent feature points within the distance range of the second pixel pitch;
- a first determination subunit configured to determine whether an absolute value of the first pixel deviation is less than or equal to a pixel grayscale deviation threshold;
- a second determination subunit configured to determine whether the first adjacent feature point is in the cluster set to which the first feature point belongs, in response to the absolute value of the first pixel deviation being less than or equal to the pixel grayscale deviation threshold; and
- a merging subunit configured to use the first adjacent feature point as the designated adjacent feature point to merge into the cluster set that the first feature point belongs to, in response to the first adjacent feature point not being in the cluster set to which the first feature point belongs.
-
- a marking subunit configured to mark the designated node as a starting point of a retrieval path;
- a determining subunit configured to determine a lower-level node connected to the starting point of the retrieval path;
- a second obtaining subunit configured to obtain a second feature point stored in correspondence with the lower-level node;
- a clustering subunit configured to perform a cluster analysis on the second feature point according to the clustering process with the first feature point as the cluster center; and
- a traversing subunit configured to obtain each cluster set by traversing from the second feature point to a leaf node of the k-d tree.
-
- a first obtaining unit configured to obtain pixel coordinates corresponding to all the corner feature points in a designated cluster set, where the designated cluster set is any of the cluster sets;
- a third calculation unit configured to calculate a minimum circumscribed polygon based on the pixel coordinates corresponding to each of all the corner feature points; and
- a first involving unit configured to involve the corner feature points positioned on a periphery of the minimum circumscribed polygon in the cluster set to which the valid feature points belong.
-
- a second obtaining unit configured to obtain a starting feature point when calculating the minimum circumscribed polygon;
- a first adding unit configured to add the starting feature point to a downsampling set;
- a fourth calculation unit configured to obtain a second adjacent feature point adjacent to the starting feature point when calculating the minimum circumscribed polygon;
- a fifth calculation unit configured to calculate a pixel pitch between the starting feature point and the second adjacent feature point;
- a second determining unit configured to determine whether the pixel pitch between the starting feature point and the second adjacent feature point is larger than a third preset threshold;
- a second adding unit configured to add the second adjacent feature point to the downsampling set, in response to the pixel pitch between the starting feature point and the second adjacent feature point being larger than the third preset threshold;
- a down-sampled downsampling set obtaining unit configured to obtain a down-sampled downsampling set by performing downsampling on all the corner feature points on the periphery of the minimum circumscribed polygon according to a downsampling process of the second adjacent feature point; and
- a second involving unit configured to involve the feature points in the down-sampled downsampling set in the cluster set to which the valid feature points belong.
-
- a third obtaining unit configured to obtain a third adjacent feature point adjacent to the second adjacent feature point when calculating the minimum circumscribed polygon;
- a sixth calculation unit configured to calculate a pixel pitch between the starting feature point and the third adjacent feature point, and a pixel pitch between the second adjacent feature point and the third adjacent feature point;
- a third determination unit configured to determine whether the pixel pitch between the starting feature point and the third adjacent feature point and that between the second adjacent feature point and the third adjacent feature point are both larger than the third preset threshold;
- a third adding unit configured to add the third adjacent feature point to the downsampling set, in response to the pixel pitch between the starting feature point and the third adjacent feature point and that between the second adjacent feature point and the third adjacent feature point being both larger than the third preset threshold; and
- a downsampling unit configured to obtain the down-sampled downsampling set by downsampling all the corner feature points on the periphery of the minimum circumscribed polygon according to the downsampling process of the second adjacent feature point and the third adjacent feature point.
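Taken together, the pitch-based downsampling described by the units above might be sketched as follows, generalizing the second/third-adjacent-point checks to every point on the periphery and assuming a Euclidean pixel pitch (`math.dist`); the function name and list representation are illustrative assumptions:

```python
import math

def downsample_periphery(hull_points, pitch_threshold):
    """Walk the periphery of the minimum circumscribed polygon, keeping a
    point only when its pixel pitch to every already-kept point exceeds the
    third preset threshold; the starting feature point is always kept."""
    if not hull_points:
        return []
    kept = [hull_points[0]]                 # starting feature point
    for candidate in hull_points[1:]:       # second, third, ... adjacent points
        if all(math.dist(candidate, k) > pitch_threshold for k in kept):
            kept.append(candidate)
    return kept
```

This keeps the retained corner feature points spatially spread out, which matches the uniform-distribution goal described elsewhere in the disclosure.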
Claims (20)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110618509.2A CN113077513B (en) | 2021-06-03 | 2021-06-03 | Visual positioning method and device and computer equipment |
CN202110618509.2 | 2021-06-03 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220392103A1 (en) | 2022-12-08 |
US11989908B2 (en) | 2024-05-21 |
Family
ID=76616984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/488,343 Active 2042-10-06 US11989908B2 (en) | 2021-06-03 | 2021-09-29 | Visual positioning method, mobile machine using the same, and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
US (1) | US11989908B2 (en) |
CN (1) | CN113077513B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150212521A1 (en) * | 2013-05-23 | 2015-07-30 | Irobot Corporation | Simultaneous Localization And Mapping For A Mobile Robot |
CN105678304A (en) | 2015-12-30 | 2016-06-15 | 浙江宇视科技有限公司 | Vehicle-logo identification method and apparatus |
CN109376734A (en) | 2018-08-13 | 2019-02-22 | 东南大学 | Direct-bearing towing and guidance method for roadside assistance equipment based on license plate corner features |
US20200226762A1 (en) * | 2019-01-15 | 2020-07-16 | Nvidia Corporation | Graphical fiducial marker identification suitable for augmented reality, virtual reality, and robotics |
CN111899334A (en) * | 2020-07-28 | 2020-11-06 | 北京科技大学 | Visual synchronous positioning and map building method and device based on point-line characteristics |
US11348269B1 (en) * | 2017-07-27 | 2022-05-31 | AI Incorporated | Method and apparatus for combining data to construct a floor plan |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5754813B2 (en) * | 2012-01-10 | 2015-07-29 | Kddi株式会社 | Feature point detection program for detecting feature points in an image, image processing apparatus and method |
CN104008387B (en) * | 2014-05-19 | 2017-02-15 | 山东科技大学 | Lane line detection method based on feature point piecewise linear fitting |
CN107403424B (en) * | 2017-04-11 | 2020-09-18 | 阿里巴巴集团控股有限公司 | Vehicle loss assessment method and device based on image and electronic equipment |
CN108734743A (en) * | 2018-04-13 | 2018-11-02 | 深圳市商汤科技有限公司 | Method, apparatus, medium and electronic equipment for demarcating photographic device |
CN109344710B (en) * | 2018-08-30 | 2020-12-18 | 东软集团股份有限公司 | Image feature point positioning method and device, storage medium and processor |
CN109344742B (en) * | 2018-09-14 | 2021-03-16 | 腾讯科技(深圳)有限公司 | Feature point positioning method and device, storage medium and computer equipment |
CN111538855B (en) * | 2020-04-29 | 2024-03-08 | 浙江商汤科技开发有限公司 | Visual positioning method and device, electronic equipment and storage medium |
CN112365470A (en) * | 2020-11-12 | 2021-02-12 | 中运科技股份有限公司 | SIFT-based automatic matching evaluation method for advertisement materials and live photos, storage medium and computer equipment |
-
2021
- 2021-06-03 CN CN202110618509.2A patent/CN113077513B/en active Active
- 2021-09-29 US US17/488,343 patent/US11989908B2/en active Active
Non-Patent Citations (1)
Title |
---|
Bellavia, F., Tegolo, D., & Valenti, C. (2011). Improving harris corner selection strategy. IET Computer Vision, 5(2), 87-96. Retrieved from https://www.proquest.com/scholarly-journals/improving-harris-corner-selection-strategy/docview/1626671171/se-2 (Year: 2011). * |
Also Published As
Publication number | Publication date |
---|---|
CN113077513A (en) | 2021-07-06 |
CN113077513B (en) | 2021-10-29 |
US20220392103A1 (en) | 2022-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109035299B (en) | Target tracking method and device, computer equipment and storage medium | |
US11403839B2 (en) | Commodity detection terminal, commodity detection method, system, computer device, and computer readable medium | |
US10762376B2 (en) | Method and apparatus for detecting text | |
CN107609557B (en) | Pointer instrument reading identification method | |
CN110414507B (en) | License plate recognition method and device, computer equipment and storage medium | |
CN108537286B (en) | Complex target accurate identification method based on key area detection | |
CN114037637B (en) | Image data enhancement method and device, computer equipment and storage medium | |
CN110738236B (en) | Image matching method and device, computer equipment and storage medium | |
CN110751149B (en) | Target object labeling method, device, computer equipment and storage medium | |
CN110596121A (en) | Keyboard appearance detection method and device and electronic system | |
US11657644B2 (en) | Automatic ruler detection | |
CN111739020B (en) | Automatic labeling method, device, equipment and medium for periodic texture background defect label | |
CN109255802B (en) | Pedestrian tracking method, device, computer equipment and storage medium | |
CN111814740B (en) | Pointer instrument reading identification method, device, computer equipment and storage medium | |
CN112613506A (en) | Method and device for recognizing text in image, computer equipment and storage medium | |
CN111354038B (en) | Anchor detection method and device, electronic equipment and storage medium | |
US11989908B2 (en) | Visual positioning method, mobile machine using the same, and computer readable storage medium | |
CN116958604A (en) | Power transmission line image matching method, device, medium and equipment | |
CN112580499A (en) | Text recognition method, device, equipment and storage medium | |
CN115631199B (en) | Pin needle defect detection method, device, equipment and storage medium | |
CN115601564B (en) | Colloid contour detection method and system based on image recognition | |
CN112036232A (en) | Image table structure identification method, system, terminal and storage medium | |
CN115100541B (en) | Satellite remote sensing data processing method, system and cloud platform | |
CN110689556A (en) | Tracking method and device and intelligent equipment | |
CN111652034A (en) | Ship retrieval method and device based on SIFT algorithm |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: UBTECH ROBOTICS CORP LTD, CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUO, RUI;BI, ZHANJIA;XIONG, YOUJUN;REEL/FRAME:057631/0885. Effective date: 20210826 |
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| AS | Assignment | Owner name: UBKANG (QINGDAO) TECHNOLOGY CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:UBTECH ROBOTICS CORP LTD;REEL/FRAME:062350/0007. Effective date: 20230105 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| ZAAB | Notice of allowance mailed | Free format text: ORIGINAL CODE: MN/=. |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |