CN117671011B - AGV positioning precision improving method and system based on improved ORB algorithm

AGV positioning precision improving method and system based on improved ORB algorithm

Info

Publication number
CN117671011B
Authority
CN
China
Prior art keywords
improved
descriptors
depth level
matching
agv
Prior art date
Legal status
Active
Application number
CN202410128865.XA
Other languages
Chinese (zh)
Other versions
CN117671011A (en)
Inventor
侯梦魁
贾凯龙
李新宇
马永鑫
付周
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202410128865.XA
Publication of CN117671011A
Application granted
Publication of CN117671011B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an AGV positioning accuracy improving method and system based on an improved ORB algorithm, belonging to the technical field of robot navigation and positioning. A quadtree division is performed around each corner point to form a plurality of depth levels; each depth level corresponds to a different concatenation of description elements, and the concatenation results of all depths are arranged with the low depth levels first and the high depth levels last to obtain an improved descriptor in binary form. The improved descriptors are reduced in dimensionality, the improved descriptors of any two adjacent binocular image frames are matched level by level from the lowest depth level to the highest, the matching of the two frames' improved descriptors is completed once every depth level matches successfully, and the AGV is positioned according to the matching result. The invention copes better with feature matching in scenes with severe illumination changes, so that the AGV is positioned more accurately during operation.

Description

AGV positioning precision improving method and system based on improved ORB algorithm
Technical Field
The invention relates to the technical field of robot navigation and positioning, in particular to an AGV positioning precision improving method and system based on an improved ORB algorithm.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
AGVs (Automated Guided Vehicles) are used in factories and on production lines for handling raw materials, transporting semi-finished products, and sorting and shipping finished products. They improve production efficiency and reduce the need for manual handling. Currently, visual SLAM (Simultaneous Localization and Mapping) algorithms based on ORB (Oriented FAST and Rotated BRIEF) features are widely applied to AGV positioning in factories and remain an active research direction.
However, these algorithms still need further improvement in positioning accuracy, real-time performance, and robustness. The main limiting factors are: (1) because of changes in the factory environment, AGV motion, and similar factors, feature points at the edges of images captured by the camera are not salient enough, which makes feature detection and extraction in these regions difficult and reduces the number of feature points; (2) when describing the extracted feature points, the descriptors in the conventional ORB algorithm use only a single kind of image information, and their limited distinctiveness lowers feature-matching accuracy; (3) the existing ORB algorithm involves a large number of bit comparisons between descriptors in the feature-matching stage, which reduces the computational efficiency of the algorithm to some extent.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides an AGV positioning accuracy improving method and system based on an improved ORB algorithm. It introduces an improved descriptor that copes better with feature matching in scenes with severe illumination changes, improves the accuracy of feature matching, achieves better motion estimation, and thus positions the AGV more accurately during operation.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for improving AGV positioning accuracy based on an improved ORB algorithm.
An AGV positioning accuracy improving method based on an improved ORB algorithm comprises the following steps:
extracting corner points from the preprocessed binocular images, and determining the principal direction of each corner point using the gray centroid method;
for any corner point, rotating the sampling area around it to the principal direction to obtain a new sampling area, and performing quadtree division within the new sampling area to form a plurality of depth levels;
each depth level corresponding to a different concatenation of description elements, and arranging the concatenation results of all depths with the low depth levels first and the high depth levels last, so as to obtain an improved descriptor in binary form;
reducing the dimensionality of the improved descriptors, matching the improved descriptors of any two adjacent binocular image frames level by level from the lowest depth level to the highest, completing the matching of the two frames' improved descriptors once every depth level matches successfully, and positioning the AGV according to the matching result.
As a further limitation of the first aspect of the present invention, after preprocessing the binocular image and before extracting the corner points, the method further comprises the following steps:
and constructing an image pyramid according to the preprocessed binocular images, extracting corner points on each layer of the image pyramid, and matching improved descriptors on corresponding pyramid levels of two adjacent binocular image frames.
As a further definition of the first aspect of the invention, the preprocessing is bilateral filtering preprocessing.
As a further limitation of the first aspect of the present invention, the concatenation of description elements comprises one or more of: a descriptor reflecting pixel intensity information, a descriptor reflecting pixel horizontal-gradient information, a descriptor reflecting pixel vertical-gradient information, a descriptor reflecting pixel gradient direction, and a descriptor reflecting pixel depth information.
As a further limitation of the first aspect of the invention, the high depth level comprises a larger number of descriptor categories than the low depth level.
As a further limitation of the first aspect of the present invention, matching is performed sequentially from the lowest depth level to the highest depth level, including:
calculating the Hamming distance between the two improved descriptors at the lowest depth level; if the Hamming distance is greater than or equal to a set threshold, judging the two improved descriptors to be dissimilar, otherwise proceeding to the next step;
calculating the Hamming distance between the two improved descriptors at the next higher depth level; if the Hamming distance is greater than or equal to a set threshold, judging the two improved descriptors to be dissimilar, otherwise proceeding to the next step;
judging the Hamming distance between the improved descriptors at each depth level in turn until the highest depth level is reached; the two improved descriptors are judged to be successfully matched when the Hamming distance between them at every depth level is smaller than the set threshold.
As a further limitation of the first aspect of the present invention, the improved descriptors are reduced in dimensionality using kernel principal component analysis.
In a second aspect, the present invention provides an AGV positioning accuracy improvement system based on an improved ORB algorithm.
An improved ORB algorithm based AGV positioning accuracy improvement system comprising:
a corner principal-direction determining module configured to: extract corner points from the preprocessed binocular images and determine the principal direction of each corner point using the gray centroid method;
a depth-level partitioning module configured to: for any corner point, rotate the sampling area around it to the principal direction to obtain a new sampling area, and perform quadtree division within the new sampling area to form a plurality of depth levels;
an improved-descriptor generating module configured to: assign each depth level its own concatenation of description elements, and arrange the concatenation results of all depths with the low depth levels first and the high depth levels last, giving an improved descriptor in binary form;
a descriptor matching module configured to: reduce the dimensionality of the improved descriptors, match the improved descriptors of any two adjacent binocular image frames level by level from the lowest depth level to the highest, complete the matching of the two frames' improved descriptors once every depth level matches successfully, and position the AGV according to the matching result.
In a third aspect, the present invention provides a computer readable storage medium having stored thereon a program which when executed by a processor performs the steps of the AGV positioning accuracy improvement method according to the first aspect of the present invention based on the improved ORB algorithm.
In a fourth aspect, the present invention provides an electronic device including a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the AGV positioning accuracy improving method based on the improved ORB algorithm according to the first aspect of the present invention.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an improved descriptor that copes better with feature matching in scenes with severe illumination changes, thereby improving the accuracy of feature matching, yielding better motion estimation, and positioning the factory AGV more accurately during operation.
2. By adopting different forms of descriptors at different depth levels, the invention enables fast hierarchical matching in the feature-matching stage, which improves the real-time performance of the algorithm and hence the efficiency and accuracy of AGV positioning in the factory.
3. The invention adds an image preprocessing thread based on bilateral filtering before the tracking thread of the ORB_SLAM2 algorithm and uses the processed images as the input of the tracking thread; this increases the number and quality of extracted edge feature points, reduces the influence of image noise, raises the number and accuracy of matched feature-point pairs, and thereby improves the positioning accuracy of the AGV in the factory.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a schematic flow chart of the improved ORB_SLAM2 algorithm provided in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the binocular image frame preprocessing flow provided in embodiment 1 of the present invention;
FIG. 3 is a schematic flow chart of improved-descriptor construction at a certain depth according to embodiment 1 of the present invention;
FIG. 4 is a diagram of the grid division at 4 depth levels provided in embodiment 1 of the present invention;
FIG. 5 is a schematic flow chart of the fast hierarchical matching of improved descriptors according to embodiment 1 of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
Example 1:
The invention uses AGV hardware equipped with a binocular camera sensor and builds on the ORB_SLAM2 algorithm framework. The improved ORB_SLAM2 framework is shown in FIG. 1; the improvements are the added preprocessing thread and the improved ORB extraction. The other parts of the tracking thread (map initialization, constant-velocity tracking model, reference key-frame tracking, relocalization tracking, local-map tracking, and deciding whether to create a new key frame), as well as the local mapping thread and the loop-closure detection thread, follow the existing algorithm framework and are not repeated here.
In the invention, bilateral filtering is applied to the input binocular images to enhance image edge feature points and remove image noise; an image pyramid is constructed, and corner points are extracted and descriptors computed on every layer, so that the improved ORB features are scale invariant; the principal direction of the improved descriptor is determined with the gray centroid method used in the traditional Oriented FAST; feature points are described with the improved descriptor to make the improved ORB features insensitive to illumination; the improved descriptors, after dimensionality reduction based on KPCA (Kernel Principal Component Analysis), undergo fast hierarchical matching of feature points, which improves the real-time performance of the algorithm; and the pose of the binocular camera is estimated from the matched point pairs after feature matching, completing the tracking process of the binocular visual odometry.
Specifically, the method comprises the following steps:
s1: and a set of image preprocessing module is added, so that the accuracy and the speed of feature matching are improved.
Due to the influence of the environment, the picture shot by the binocular camera is inevitably influenced by noise, and the extraction quality of the feature points is influenced; and feature point information of the image edge is easily ignored, so that the edge feature points are difficult to extract, and the number of feature matching is affected.
In order to solve the problem, before extracting and matching the feature points in the original image, the original image is subjected to image preprocessing, so that adverse effects of the environment are reduced, the quality of the image is improved, and the image preprocessing flow is shown in fig. 2.
Bilateral filtering is a filtering method that considers spatial distance and pixel value similarity. The method can smooth the image while maintaining the edge, is suitable for denoising and edge maintenance, and has obvious influence on the accuracy of feature matching due to the number and quality of edge feature points in the ORB algorithm, so that the method is used for preprocessing the image firstly instead of directly inputting the original binocular image frame in combination with the filtering mode. After the processing, the noise of the image can be obviously reduced, the edge of the image is enhanced, and the quantity and the accuracy of feature matching are improved.
The bilateral filtering formula is:

g(i, j) = (1 / W_p) * Σ_{(k, l) ∈ S} f(k, l) * w_s(i, j, k, l) * w_r(i, j, k, l)    (1)

where g(i, j) is the filtered pixel value, f(k, l) is the pixel value in the original image, W_p is a normalization weight used to normalize the filtering result, S denotes the neighborhood window of the filter (typically a window of fixed size), w_s is the spatial-domain kernel function used to compute the spatial-distance weights between pixels, and w_r is the pixel-value-domain kernel function used to compute the similarity weights between pixels.
To improve the positioning accuracy of the AGV while keeping the algorithm sufficiently real-time, and considering the extra time the added preprocessing step costs the original ORB_SLAM2 algorithm, the image preprocessing is split off as a separate thread, the bilateral filtering is divided over several parallel threads, the filter window S is kept small, and GPU acceleration and similar measures are adopted when necessary, so that the overall running speed of the algorithm is improved. The images obtained through these steps are used as the input of the Tracking thread, which speeds up the preprocessing, ensures higher-quality images, and limits the impact on the real-time performance of the SLAM system.
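As a concrete illustration of this preprocessing step, the sketch below applies an edge-preserving bilateral filter to a stereo pair with OpenCV. It is a minimal sketch: the kernel diameter and the two sigma values are illustrative assumptions, since the patent only states that bilateral filtering with a reduced filter window is used, without giving concrete parameter values.

import cv2

def preprocess_stereo_pair(left_img, right_img, d=7, sigma_color=50, sigma_space=50):
    # Edge-preserving denoising of both frames of a stereo pair.
    # d, sigma_color and sigma_space are illustrative values, not taken from the patent.
    left_f = cv2.bilateralFilter(left_img, d, sigma_color, sigma_space)
    right_f = cv2.bilateralFilter(right_img, d, sigma_color, sigma_space)
    return left_f, right_f

# Usage: the filtered pair, rather than the raw frames, is fed to the tracking thread.
# left_f, right_f = preprocess_stereo_pair(cv2.imread("left.png", 0), cv2.imread("right.png", 0))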
S2: based on the Oriented FAST corner, feature matching is performed in combination with the improvement descriptor.
The feature descriptors are mathematical representations that are used to describe features of regions surrounding a key point in an image. These descriptors are typically vectors that capture information such as structure and texture of the image region. In a visual SLAM system, descriptors of feature points are used to match between different frames in order to determine the motion of the camera. In the feature matching stage ORB_SLAM2 algorithm, rotated brief descriptors are adopted to describe corner features, but the image information utilized by the descriptors is not abundant, only the gray information of corner areas is considered, the uniqueness is not strong, and the accuracy of feature matching is greatly reduced especially under the condition of severe illumination change.
Aiming at the problems, the invention provides an improved descriptor for describing the corner characteristics of an Oriented FAST, which aims to improve the adaptability of the descriptor to intense illumination changes and the capability of distinguishing different characteristic points.
The extraction of FAST corner points is consistent with the original orb_slam2 algorithm and will not be described in detail. Firstly, determining the main direction of the corner point by using a gray centroid method, and then rotating the improved descriptor to the main direction to ensure that the descriptor has rotation invariance.
The moment of the area image is defined as:

m_pq = Σ_{(x, y) ∈ R} x^p * y^q * I(x, y)    (2)

where I(x, y) denotes the gray value at pixel coordinates (x, y).

In a circular image area of radius R, the image moments along the x and y coordinate axes are, respectively:

m_10 = Σ_{(x, y) ∈ R} x * I(x, y)    (3)

m_01 = Σ_{(x, y) ∈ R} y * I(x, y)    (4)

The sum of the gray values of all pixels in the circular area is:

m_00 = Σ_{(x, y) ∈ R} I(x, y)    (5)

The centroid coordinates of the image are:

C = (m_10 / m_00, m_01 / m_00)    (6)

The principal direction of the keypoint can be expressed as the direction vector OC from the geometric center O of the circular area to the centroid C; the rotation angle of the keypoint is then:

θ = arctan(m_01 / m_10)    (7)
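For reference, a minimal sketch of the gray-centroid orientation computation of formulas (2) to (7); masking the square patch to its inscribed circle is an assumed realization of the circular area of radius R.

import numpy as np

def gray_centroid_orientation(patch):
    # Principal direction of a corner via the gray centroid method, formulas (2)-(7).
    # patch: square grayscale patch centered on the corner.
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xs -= (w - 1) / 2.0                      # geometric center O becomes the origin
    ys -= (h - 1) / 2.0
    r = (min(h, w) - 1) / 2.0
    mask = (xs ** 2 + ys ** 2) <= r ** 2     # keep only the inscribed circular area
    I = patch.astype(np.float64) * mask
    m10 = np.sum(xs * I)                     # moment along x, formula (3)
    m01 = np.sum(ys * I)                     # moment along y, formula (4)
    return np.arctan2(m01, m10)              # rotation angle theta, formula (7)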
When the illumination changes drastically, the relative relationships between pixels in the same local area of different images remain essentially unchanged. Based on this assumption, the descriptor of the Oriented FAST corner in ORB_SLAM2 is modified to replace the original Rotated BRIEF descriptor.
Assume that K FAST corner points are extracted from one image and take one corner point p_k as an example: a neighborhood range S is formed centered on p_k with radius r. At the same time, to give the improved descriptor rotation invariance, the sampling area around the corner point is rotated to the previously computed principal direction θ, yielding a new sampling area. Descriptor construction for the corner point is then performed on the new sampling area; the construction flow is shown in FIG. 3 (taking the construction for one corner point at one depth as an example: at depth n, pixel information 1, pixel information 2, ..., pixel information j are obtained, each kind of pixel information corresponds to one bit string, e.g. pixel information 1 corresponds to bit1, pixel information 2 to bit2, and pixel information j to bitj, and j×4^n bits are obtained after concatenation).
Performing quadtree division on the sampling area forms a plurality of depth levels; as shown in FIG. 4, the grids obtained by 4 successive quadtree divisions have sizes 2×2, 4×4, 8×8 and 16×16, respectively.
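A small sketch of this grid division: the rotated sampling patch is split into 2×2, 4×4, 8×8 and 16×16 cells for depth levels 1 to 4 and the mean of each cell is computed; splitting by reshaping into equal-sized cells is an assumption made for illustration.

import numpy as np

def depth_level_block_means(patch, max_depth=4):
    # Mean value of every grid cell at each quadtree depth level.
    # Depth g splits the sampling patch into a 2**g x 2**g grid (cf. FIG. 4).
    means = {}
    for g in range(1, max_depth + 1):
        n = 2 ** g
        h, w = patch.shape[0] // n, patch.shape[1] // n
        cells = patch[:n * h, :n * w].astype(np.float64).reshape(n, h, n, w)
        means[g] = cells.mean(axis=(1, 3))   # (n, n) array of per-cell means
    return means

The same helper can be applied to any pixel-information channel (intensity, horizontal gradient, vertical gradient, gradient direction or depth) to prepare the per-cell averages used below.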
The corner point is described in binary form; first consider the situation at a single depth level:
Take a mapping function f(x), where x represents the average value of one kind of pixel information over the local pixel block at the corresponding depth. The binary assignment rules are:

D_I(i) = 1 if I_i > f(I_mean), and 0 otherwise    (8)

D_Gx(i) = 1 if Gx_i > f(Gx_mean), and 0 otherwise    (9)

D_Gy(i) = 1 if Gy_i > f(Gy_mean), and 0 otherwise    (10)

D_θ(i) = 1 if θ_i > f(θ_mean), and 0 otherwise    (11)

D_d(i) = 1 if d_i > f(d_mean), and 0 otherwise    (12)

where i indexes the four small pixel blocks within each local pixel block and ranges over [1, 4]; the binary assignment is made by comparing the value of each small block with the mapping-function value f of the local-block average; D_I is the descriptor element reflecting pixel intensity information; D_Gx reflects the pixel horizontal-gradient information; D_Gy reflects the pixel vertical-gradient information; D_θ reflects the pixel gradient direction, where the gradient direction is θ = arctan(Gy / Gx); D_d reflects the pixel depth information; each of these elements consists of 4 binary bits.
The description of the corner point at a certain depth is obtained by concatenating the binary descriptor elements of all the kinds of pixel information used at that depth; denoting it des_n, it is expressed as:

des_n = [ D_I, D_Gx, D_Gy, D_θ, D_d ]    (13)

More generally, when more kinds of pixel information are considered, des_n can be written in extended form:

des_n = [ D_1, D_2, …, D_m ]    (14)

where D_1, D_2, …, D_m are the descriptor elements of the m kinds of pixel information considered.
Other kinds of pixel information may include, for example, the color information of pixels when an RGBD camera is used.
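Continuing the sketch above, the snippet below illustrates the binary assignment rule of formulas (8) to (12) for one information channel: the four sub-blocks of each local block are compared against the mapping-function value of that block's average. Treating the mapping function f as the identity (comparing against the plain block mean) and deriving the local-block averages from the previous depth level's grid are assumptions made purely for illustration.

import numpy as np

def depth_descriptor(channel_means, depth):
    # 4**depth bits for one information channel at one depth level: each local block
    # (a cell of the previous level) is split into four sub-blocks, and sub-block i
    # gets bit 1 iff its value exceeds the local-block average (f assumed to be identity).
    grid = channel_means[depth]                              # 2**depth x 2**depth cell means
    if depth > 1:
        parent = channel_means[depth - 1]                    # local-block averages
    else:
        parent = np.full((1, 1), grid.mean())                # whole patch is the local block
    bits = []
    n = grid.shape[0]
    for r in range(0, n, 2):
        for c in range(0, n, 2):
            avg = parent[r // 2, c // 2]
            bits += [1 if v > avg else 0 for v in grid[r:r + 2, c:c + 2].ravel()]
    return bits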
The improved binary descriptor is then represented over all depth levels. Descriptors of larger dimension are used in the partitions at higher depth levels, because they capture fine image features better; descriptors of smaller dimension are used in the partitions at lower depth levels, which avoids capturing excessive detail and keeps the algorithm focused on the main features.
On this basis, the invention builds a description for each depth level and then concatenates them:
Des_g = [ D_1, D_2, …, D_{k_g} ]    (15)

where g denotes the depth level and k_g denotes the number of kinds of pixel information used at depth level g.
The meaning of the above formula is: after the image is divided by the quadtree, when the depth level is 1, 2, 3, 4, the number of kinds of pixel information used is 1, 3, 4, 5 in turn, and the scheme is extensible.
Using more kinds of pixel information or more depth levels improves descriptor performance, but the descriptor size grows exponentially. To balance descriptor size and performance, the invention empirically sets the maximum depth G = 4 and the maximum number of pixel-information kinds k_max = 5, using the five kinds of pixel information: intensity I, horizontal gradient Gx, vertical gradient Gy, gradient direction θ, and depth d.
The binary descriptor elements at all depths are arranged with the low depth levels first and the high depth levels last to obtain the final binary descriptor; the number of binary bits N of the improved descriptor is:

N = Σ_{g=1}^{G} 4^g * k_g    (16)

which evaluates to N = 4×1 + 16×3 + 64×4 + 256×5 = 1588.
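As a check of this total, a hypothetical helper that concatenates the per-level, per-channel bits low depth first and high depth last, reusing depth_level_block_means and depth_descriptor from the sketches above; which k_g of the five channels are used at each level is not specified in the text, so taking the first k_g in a fixed order is an assumption.

def improved_descriptor(channel_means_by_kind, kinds_per_level=(1, 3, 4, 5)):
    # channel_means_by_kind: one depth_level_block_means() result per information kind,
    # assumed ordered (intensity, Gx, Gy, gradient direction, depth).
    # kinds_per_level[g-1] = k_g, the number of kinds used at depth level g.
    bits = []
    for g, k_g in enumerate(kinds_per_level, start=1):       # low depth first, high depth last
        for kind in range(k_g):
            bits += depth_descriptor(channel_means_by_kind[kind], g)
    return bits                                              # 4*1 + 16*3 + 64*4 + 256*5 = 1588 bits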
This multi-depth-level formulation replaces the approach of using the same dimension at every depth level, balancing computational efficiency, storage efficiency and algorithm performance, and making the algorithm more general and adaptable.
S3: Dimensionality reduction of the improved descriptor.
The nonlinear KPCA dimensionality-reduction method can markedly shrink the descriptor, reduce the memory footprint of the algorithm and improve feature-matching efficiency without severely degrading descriptor performance; therefore the 1588-bit improved descriptor is reduced and 256 bits are retained as the feature representation.
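A hedged sketch of this step using scikit-learn's KernelPCA is shown below; the RBF kernel, the training sample, and the sign-based re-binarization of the 256 projected components are assumptions, since the patent only states that KPCA reduces the 1588-bit descriptor to a 256-bit feature representation.

import numpy as np
from sklearn.decomposition import KernelPCA

def fit_kpca(training_descriptors, n_components=256):
    # Fit KPCA on a sample of 1588-bit descriptors (one descriptor per row).
    kpca = KernelPCA(n_components=n_components, kernel="rbf")   # kernel choice is an assumption
    kpca.fit(np.asarray(training_descriptors, dtype=np.float64))
    return kpca

def reduce_descriptors(kpca, descriptors):
    # Project to 256 components and re-binarize by sign so that Hamming-distance
    # matching can still be applied; the re-binarization step is an assumption.
    projected = kpca.transform(np.asarray(descriptors, dtype=np.float64))
    return (projected > 0).astype(np.uint8)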
Through the description of the Oriented FAST corners with the improved descriptor, the improved features are less sensitive to illumination; even when the ambient illumination changes markedly, the improved algorithm still matches feature points accurately, enhancing the robustness of the original ORB_SLAM2 algorithm.
S4: Fast hierarchical matching of feature points based on the improved descriptors.
Feature matching between two sets of improved binary descriptors involves a large number of bit comparisons. Since the improved descriptors are constructed over multiple depth levels of the feature point's pixel neighborhood and encode the feature point from coarse to fine, two improved descriptors form a valid matching pair only if their binary bits are similar at every depth. The fast hierarchical matching flow is shown in FIG. 5.
The specific strategy is to first compare the bits of the improved descriptors at the first depth level, which filters out many dissimilar descriptors, and to use the bits of the deeper levels for exact matching among the remainder. Concretely, only descriptor pairs whose Hamming distance at the current depth is smaller than a certain proportion of the bits at that depth are passed down to the next depth for comparison; this proportion can be set according to the actual situation. This fast hierarchical matching speeds up feature matching, guarantees the real-time performance of the algorithm while still fully guaranteeing matching accuracy, and provides a good initial value for the subsequent rejection of mismatches.
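The sketch below illustrates this coarse-to-fine rejection on the level-structured bit layout (low depth first, high depth last); the per-level rejection ratios are illustrative, and operating on the pre-reduction bit layout is an assumption made so that the depth-level boundaries remain explicit.

import numpy as np

# Bits contributed by each depth level (low to high): 4**g cells times k_g information kinds.
LEVEL_BITS = (4 * 1, 16 * 3, 64 * 4, 256 * 5)                 # = (4, 48, 256, 1280)

def hierarchical_match(desc_a, desc_b, ratios=(0.25, 0.25, 0.25, 0.25)):
    # Coarse-to-fine comparison: reject as soon as one depth level differs too much.
    # desc_a / desc_b: 0/1 arrays laid out low depth first, high depth last.
    # ratios: maximum allowed fraction of differing bits per level (illustrative values).
    desc_a, desc_b = np.asarray(desc_a), np.asarray(desc_b)
    start = 0
    for n_bits, ratio in zip(LEVEL_BITS, ratios):
        a = desc_a[start:start + n_bits]
        b = desc_b[start:start + n_bits]
        if np.count_nonzero(a != b) >= ratio * n_bits:        # Hamming distance at this level
            return False                                      # dissimilar at this level: stop early
        start += n_bits
    return True                                               # similar at every level: a valid match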
Example 2:
Embodiment 2 of the invention provides an AGV positioning accuracy improving system based on an improved ORB algorithm, comprising:
a corner principal-direction determining module configured to: extract corner points from the preprocessed binocular images and determine the principal direction of each corner point using the gray centroid method;
a depth-level partitioning module configured to: for any corner point, rotate the sampling area around it to the principal direction to obtain a new sampling area, and perform quadtree division within the new sampling area to form a plurality of depth levels;
an improved-descriptor generating module configured to: assign each depth level its own concatenation of description elements, and arrange the concatenation results of all depths with the low depth levels first and the high depth levels last, giving an improved descriptor in binary form;
a descriptor matching module configured to: reduce the dimensionality of the improved descriptors, match the improved descriptors of any two adjacent binocular image frames level by level from the lowest depth level to the highest, complete the matching of the two frames' improved descriptors once every depth level matches successfully, and position the AGV according to the matching result.
The working method of each module of the system is the same as the corresponding steps in the AGV positioning accuracy improving method based on the improved ORB algorithm provided in embodiment 1, and will not be repeated here.
Example 3:
embodiment 3 of the present invention provides a computer readable storage medium having a program stored thereon, which when executed by a processor, implements the steps in the AGV positioning accuracy improvement method based on the improved ORB algorithm according to embodiment 1 of the present invention.
Example 4:
Embodiment 4 of the present invention provides an electronic device including a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the AGV positioning accuracy improving method based on the improved ORB algorithm according to embodiment 1 of the present invention.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An AGV positioning accuracy improving method based on an improved ORB algorithm is characterized by comprising the following steps:
extracting corner points from the preprocessed binocular images, and determining the principal direction of each corner point using the gray centroid method;
for any corner point, rotating the sampling area around it to the principal direction to obtain a new sampling area, and performing quadtree division within the new sampling area to form a plurality of depth levels;
each depth level corresponding to a different concatenation of description elements, and arranging the concatenation results of all depths with the low depth levels first and the high depth levels last, so as to obtain an improved descriptor in binary form;
reducing the dimensionality of the improved descriptors, matching the improved descriptors of any two adjacent binocular image frames level by level from the lowest depth level to the highest, completing the matching of the two frames' improved descriptors once every depth level matches successfully, and positioning the AGV according to the matching result;
wherein the concatenation of description elements comprises one or more of: a descriptor reflecting pixel intensity information, a descriptor reflecting pixel horizontal-gradient information, a descriptor reflecting pixel vertical-gradient information, a descriptor reflecting pixel gradient direction, and a descriptor reflecting pixel depth information;
the high depth level contains a greater number of descriptor categories than the low depth level.
2. The AGV positioning accuracy improving method based on the improved ORB algorithm according to claim 1, wherein,
After preprocessing the binocular image and before extracting the corner points, the method further comprises the following steps:
and constructing an image pyramid according to the preprocessed binocular images, extracting corner points on each layer of the image pyramid, and matching improved descriptors on corresponding pyramid levels of two adjacent binocular image frames.
3. The AGV positioning accuracy improving method based on the improved ORB algorithm according to claim 1, wherein,
The preprocessing is bilateral filtering preprocessing.
4. An AGV positioning accuracy improving method based on an improved ORB algorithm according to any one of claims 1 to 3 wherein,
Matching is carried out sequentially from the lowest depth level to the highest depth level, and the method comprises the following steps:
calculating the Hamming distance between the two improved descriptors at the lowest depth level; if the Hamming distance is greater than or equal to a set threshold, judging the two improved descriptors to be dissimilar, otherwise proceeding to the next step;
calculating the Hamming distance between the two improved descriptors at the next higher depth level; if the Hamming distance is greater than or equal to a set threshold, judging the two improved descriptors to be dissimilar, otherwise proceeding to the next step;
judging the Hamming distance between the improved descriptors at each depth level in turn until the highest depth level is reached; the two improved descriptors are judged to be successfully matched when the Hamming distance between them at every depth level is smaller than the set threshold.
5. An AGV positioning accuracy improving method based on an improved ORB algorithm according to any one of claims 1 to 3 wherein,
kernel principal component analysis is used to reduce the dimensionality of the improved descriptors.
6. An improved ORB algorithm based AGV positioning accuracy improvement system comprising:
a corner principal-direction determining module configured to: extract corner points from the preprocessed binocular images and determine the principal direction of each corner point using the gray centroid method;
a depth-level partitioning module configured to: for any corner point, rotate the sampling area around it to the principal direction to obtain a new sampling area, and perform quadtree division within the new sampling area to form a plurality of depth levels;
an improved-descriptor generating module configured to: assign each depth level its own concatenation of description elements, and arrange the concatenation results of all depths with the low depth levels first and the high depth levels last, giving an improved descriptor in binary form;
a descriptor matching module configured to: reduce the dimensionality of the improved descriptors, match the improved descriptors of any two adjacent binocular image frames level by level from the lowest depth level to the highest, complete the matching of the two frames' improved descriptors once every depth level matches successfully, and position the AGV according to the matching result;
wherein the concatenation of description elements comprises one or more of: a descriptor reflecting pixel intensity information, a descriptor reflecting pixel horizontal-gradient information, a descriptor reflecting pixel vertical-gradient information, a descriptor reflecting pixel gradient direction, and a descriptor reflecting pixel depth information;
the high depth level contains a greater number of descriptor categories than the low depth level.
7. A computer readable storage medium having a program stored thereon, which when executed by a processor performs the steps in the improved ORB algorithm based AGV positioning accuracy improvement method according to any of claims 1-5.
8. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps in the AGV positioning accuracy improvement method based on the improved ORB algorithm according to any one of claims 1-5.
CN202410128865.XA 2024-01-31 2024-01-31 AGV positioning precision improving method and system based on improved ORB algorithm Active CN117671011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410128865.XA CN117671011B (en) 2024-01-31 2024-01-31 AGV positioning precision improving method and system based on improved ORB algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410128865.XA CN117671011B (en) 2024-01-31 2024-01-31 AGV positioning precision improving method and system based on improved ORB algorithm

Publications (2)

Publication Number Publication Date
CN117671011A CN117671011A (en) 2024-03-08
CN117671011B (en) 2024-05-28

Family

ID=90071591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410128865.XA Active CN117671011B (en) 2024-01-31 2024-01-31 AGV positioning precision improving method and system based on improved ORB algorithm

Country Status (1)

Country Link
CN (1) CN117671011B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3214444A1 (en) * 2016-04-12 2017-10-19 Quidient, Llc Quotidian scene reconstruction engine
CN114096994A (en) * 2020-05-29 2022-02-25 北京小米移动软件有限公司南京分公司 Image alignment method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961417A (en) * 2017-12-26 2019-07-02 广州极飞科技有限公司 Image processing method, device and mobile device control method
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB
CN112396595A (en) * 2020-11-27 2021-02-23 广东电网有限责任公司肇庆供电局 Semantic SLAM method based on point-line characteristics in dynamic environment
CN113012212A (en) * 2021-04-02 2021-06-22 西北农林科技大学 Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN114199205A (en) * 2021-11-16 2022-03-18 河北大学 Binocular ranging method based on improved quadtree ORB algorithm
WO2023116327A1 (en) * 2021-12-22 2023-06-29 华为技术有限公司 Multi-type map-based fusion positioning method and electronic device
CN116429082A (en) * 2023-02-28 2023-07-14 厦门大学 Visual SLAM method based on ST-ORB feature extraction
CN117036623A (en) * 2023-10-08 2023-11-10 长春理工大学 Matching point screening method based on triangulation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xin Chen et al., "Research on Feature Extraction and Matching Algorithm in Visual SLAM", 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 2021-04-05, pp. 522-526 *
Sun Cong, "Symbol recognition algorithm for vector engineering drawings based on a two-layer structure" (基于双层结构的矢量工程图符号识别算法), Journal of Computer-Aided Design & Computer Graphics, 2017-12-15, vol. 29, no. 12, pp. 2171-2179 *
Liu Qiang, "Research on stitching technology for UAV aerial images" (无人机航拍图像拼接技术研究), China Master's Theses Full-text Database (Engineering Science and Technology II), 2020-06-15, vol. 2020, no. 06, C031-182 *

Also Published As

Publication number Publication date
CN117671011A (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN110853100B (en) Structured scene vision SLAM method based on improved point-line characteristics
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN111767960A (en) Image matching method and system applied to image three-dimensional reconstruction
CN113034600B (en) Template matching-based texture-free planar structure industrial part identification and 6D pose estimation method
CN111998862B (en) BNN-based dense binocular SLAM method
CN111738320B (en) Shielded workpiece identification method based on template matching
Wang et al. An improved ORB image feature matching algorithm based on SURF
CN111310688A (en) Finger vein identification method based on multi-angle imaging
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN111709893B (en) ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN114241372A (en) Target identification method applied to sector-scan splicing
CN115239882A (en) Crop three-dimensional reconstruction method based on low-light image enhancement
CN116429082A (en) Visual SLAM method based on ST-ORB feature extraction
CN117011381A (en) Real-time surgical instrument pose estimation method and system based on deep learning and stereoscopic vision
CN114463397A (en) Multi-modal image registration method based on progressive filtering
CN117870659A (en) Visual inertial integrated navigation algorithm based on dotted line characteristics
CN117671011B (en) AGV positioning precision improving method and system based on improved ORB algorithm
CN111161219B (en) Robust monocular vision SLAM method suitable for shadow environment
CN117292076A (en) Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery
CN110322476B (en) Target tracking method for improving STC and SURF feature joint optimization
CN117079272A (en) Bullet bottom socket mark feature identification method combining manual features and learning features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant