CN117011704A - Feature extraction method based on point-line feature fusion and adaptive threshold
- Publication number: CN117011704A (application CN202310835924.2A)
- Authority: CN (China)
- Filed: 2023-07-07; published: 2023-11-07
- Prior art keywords: image, sub-image, points, feature, extracted
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a feature extraction method based on point-line feature fusion and an adaptive threshold. The method belongs to the field of automatic driving and aims to solve the poor positioning accuracy, weak robustness, and related problems that vision-SLAM-based autonomous navigation of unmanned vehicles suffers in low-texture scenes and during short bursts of rapid motion. An image is first input and set to a fixed resolution; the image is then processed in two parallel threads, one performing line feature extraction and the other performing homogenized ORB point feature extraction with an adaptive threshold; finally the two feature sets are fused, achieving feature extraction in low-texture and low-illumination scenes. The method is significant for feature extraction in complex low-texture scenes and for accurate localization and mapping in fast-motion scenes.
Description
Technical Field
The invention belongs to the field of automatic driving and specifically relates to a feature extraction method based on point-line feature fusion and an adaptive threshold.
Background
Visual SLAM means that a mobile device can estimate its pose from image information while simultaneously building a map of its environment; it is widely applied in fields such as three-dimensional reconstruction, robotics, and autonomous driving of unmanned vehicles. However, the actual operating scenes of unmanned vehicles are varied and complex. In particular, under short bursts of rapid motion and in low-texture environments, motion blur and too little overlap between consecutive frames can occur, causing low positioning accuracy and poor stability. Solving the poor positioning accuracy and weak robustness of vision-SLAM-based autonomous navigation of unmanned vehicles in low-texture, briefly fast-moving scenes has accordingly become one of the research hot spots of recent years.
In recent years, through researchers' continued innovation and study, visual SLAM has gradually matured in both theory and technology, but it is built on ideal conditions such as clear images and rich texture. Few real environments reach such an ideal state, and most widely deployed visual SLAM systems today are built on point features, so their positioning accuracy in low-texture scenes is poor, mainly because not enough point features can be extracted to satisfy the requirements of pose estimation. How to improve the positioning accuracy of visual SLAM in low-texture scenes has therefore long been one of the focuses of researchers.
Disclosure of Invention
In view of this, the present invention aims to design a feature extraction method based on point-line feature fusion and an adaptive threshold to solve the problems described in the Background above, in particular that in low-texture scenes not enough feature points can be extracted to satisfy pose estimation.
The point-line feature fusion extraction method disclosed by the invention is realized by the following scheme:
Step 1: image input, in preparation for feature extraction: surrounding images are acquired in real time by a binocular camera, frames are sampled at a fixed frame interval, and the image resolution is set to 752×480; a minimal acquisition sketch follows.
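As an illustration only, a minimal Python sketch of this acquisition step is given below; the capture sources, grayscale conversion, and the frame interval of 5 are assumptions, not details fixed by the filing.

```python
# Hypothetical acquisition sketch: sample stereo frames at a fixed interval
# and force the 752x480 resolution used by the method.
import cv2

TARGET_SIZE = (752, 480)        # (width, height) used throughout the method

def grab_stereo_frames(left_cap, right_cap, frame_interval=5):
    """Yield resized grayscale stereo pairs at a fixed frame interval."""
    idx = 0
    while True:
        ok_l, left = left_cap.read()
        ok_r, right = right_cap.read()
        if not (ok_l and ok_r):
            break                                    # stream exhausted
        if idx % frame_interval == 0:
            left = cv2.resize(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY), TARGET_SIZE)
            right = cv2.resize(cv2.cvtColor(right, cv2.COLOR_BGR2GRAY), TARGET_SIZE)
            yield left, right
        idx += 1
```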
Step 2: extract ORB feature points with an adaptive threshold, using the feature point extraction algorithm of ORB-SLAM3;
Further, extracting the ORB feature points in step 2 with the adaptive threshold, using the feature point extraction algorithm in ORB-SLAM3, further comprises the following steps:
Step a: divide the input image into four regions, and define a different FAST corner detection threshold for each region according to the degree of gray-value disorder of the corresponding sub-image;
Step b: define image I_1 as the image from which feature points are to be extracted, with height h_i and width w_i; to improve the homogenization of the ORB features, divide I_1 into four sub-images, each of height h_i/2 and width w_i/2, as in the sketch below;
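A minimal sketch of the division in steps a and b, assuming a NumPy grayscale image whose dimensions are even (as with 752×480):

```python
# Split an image into its four equal sub-images (steps a-b).
import numpy as np

def split_into_quadrants(img: np.ndarray):
    """Return the four (h/2, w/2) sub-images of a grayscale image."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h2, :w2],    # region one   (top-left)
            img[:h2, w2:],    # region two   (top-right)
            img[h2:, :w2],    # region three (bottom-left)
            img[h2:, w2:]]    # region four  (bottom-right)
```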
Step c: describe the dispersion of the gray values of all pixels in each sub-image with the coefficient of variation, where the coefficient of variation s is the ratio of the standard deviation of a set of data to its mean:

s = sqrt( (1/t) · Σ_{i=1..t} (g_i - ḡ)² ) / ḡ     (1)

where g_i is the gray value of a pixel in the sub-image, ḡ is the average gray value of the sub-image, and t is the number of pixels; the larger the coefficient of variation, the higher the degree of gray-value disorder. Defining g_s = s × 30, the initial corner detection threshold g_th is then obtained from g_s (formula (2)); a sketch follows.
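The sketch below computes the coefficient of variation of a sub-image and derives a FAST threshold from it. Because formula (2) for g_th is not reproduced in this text, the clamp to the range [7, 20] is a hypothetical choice for a workable FAST threshold, not the filing's exact rule.

```python
# Adaptive FAST threshold from the coefficient of variation (step c).
import numpy as np

def adaptive_fast_threshold(sub_img: np.ndarray) -> int:
    g = sub_img.astype(np.float64)
    s = g.std() / max(g.mean(), 1e-9)   # coefficient of variation, formula (1)
    g_s = 30.0 * s                      # g_s = s * 30 as defined above
    return int(np.clip(g_s, 7, 20))    # assumed bounds standing in for formula (2)
```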
Step d: after setting an adaptive detection threshold according to the variation coefficient s, constructing eight layers of image pyramids for the sub-images in order to ensure scale invariance, extracting characteristic points for each layer of pyramids, and defining the requirement of the sub-images from I 1 The total number of ORB feature points extracted from the method is x t Scaling factor gamma of image pyramid s The number x of feature points to be extracted in the ith layer of each sub-image ti Can be expressed as:
where n is the number of pyramid layers, inv (gamma s ) Representing the inverse of the scaling factor;
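A sketch of the per-level allocation of formula (3). The even split of x_t over the four sub-images and the scale factor of 1.2 (ORB-SLAM3's default) are assumptions.

```python
# Geometric distribution of feature counts over an 8-level pyramid (step d).
def features_per_level(x_t: int, scale: float = 1.2, n: int = 8, n_sub: int = 4):
    inv = 1.0 / scale                             # inv(gamma_s)
    per_sub = x_t / n_sub                         # assumed even split per sub-image
    geo = (1.0 - inv) / (1.0 - inv ** n)          # normalizer of the geometric series
    return [round(per_sub * geo * inv ** i) for i in range(n)]

# e.g. features_per_level(1000) allocates ~250 points per sub-image,
# decreasing level by level as resolution shrinks.
```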
Step e: after the number of feature points required on each pyramid level has been computed, partition each level into square cells with a side length of 30 pixels and extract FAST corners inside every cell. If the number of corners extracted in a cell is 0, reduce the detection threshold of that region to g_th/2 and extract again; if corners are still not extracted, discard the cell. This guarantees the number of feature points; the operation is repeated until the corners in all cells have been extracted (a sketch follows);
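A sketch of the gridded extraction with the g_th/2 retry, using OpenCV's stock FAST detector; border cells and duplicate suppression are simplified.

```python
# FAST corners in 30x30 cells with a half-threshold retry (step e).
import cv2

def detect_in_grid(level_img, g_th: int, cell: int = 30):
    keypoints = []
    h, w = level_img.shape[:2]
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            patch = level_img[y:y + cell, x:x + cell]
            for thr in (g_th, max(1, g_th // 2)):          # retry at g_th/2
                fast = cv2.FastFeatureDetector_create(threshold=thr)
                kps = fast.detect(patch, None)
                if kps:
                    for kp in kps:                          # back to level coords
                        kp.pt = (kp.pt[0] + x, kp.pt[1] + y)
                    keypoints.extend(kps)
                    break                                   # cell done
            # if both passes fail, the cell is discarded
    return keypoints
```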
Step f: after all corners have been extracted, manage them with a quadtree. Define the root node of the quadtree as the whole sub-image and divide the sub-image into 4 regions serving as the root's child nodes. If a child node contains 2 or more corners, continue splitting it as a quadtree; if it contains exactly 1 corner, keep the node without further splitting; if it contains 0 corners, delete it. Proceed until the number of feature points extracted on each pyramid level reaches the set threshold, keep the corner with the highest Harris response in each node, then merge the FAST corners extracted from the four sub-images and compute the corresponding descriptors, completing the adaptive-threshold homogenized extraction of ORB feature points. A compact sketch follows.
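A compact sketch of the quadtree culling; the OpenCV keypoint response field stands in for the Harris score, and boundary handling is simplified.

```python
# Quadtree management of corners (step f): split until the number of leaves
# reaches the per-level target, then keep the strongest corner per leaf.
def quadtree_cull(kps, x0, y0, x1, y1, target):
    if not kps:
        return []
    nodes = [(x0, y0, x1, y1, kps)]
    while len(nodes) < target:
        nodes.sort(key=lambda n: -len(n[4]))       # split the densest node first
        xa, ya, xb, yb, pts = nodes.pop(0)
        if len(pts) <= 1 or (xb - xa) <= 1:        # nothing splittable remains
            nodes.append((xa, ya, xb, yb, pts))
            break
        xm, ym = (xa + xb) / 2.0, (ya + yb) / 2.0
        for quad in ((xa, ya, xm, ym), (xm, ya, xb, ym),
                     (xa, ym, xm, yb), (xm, ym, xb, yb)):
            inside = [k for k in pts
                      if quad[0] <= k.pt[0] < quad[2] and quad[1] <= k.pt[1] < quad[3]]
            if inside:                             # nodes with 0 corners are deleted
                nodes.append((*quad, inside))
    return [max(pts, key=lambda k: k.response) for *_, pts in nodes]
```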
Step 3: extract line features based on the EDLines algorithm;
Further, the line feature extraction based on the EDLines algorithm in step 3 further comprises the following steps:
Step I: to improve the recognition effect, apply distortion correction to the input image, with the distortion parameters set to the camera's factory calibration values;
Step II: image smoothing: remove noise with a Gaussian filter, suppressing the noise in the image by filtering, with a 5×5 Gaussian kernel and σ = 1;
Step III: compute the gradient magnitude and direction of the image with the Sobel operator:

g(x, y) = sqrt( g_x(x, y)² + g_y(x, y)² ),  angle(x, y) = arctan( g_y(x, y) / g_x(x, y) )     (4)

where I(x, y) is the pixel value of the image at (x, y), g_x and g_y are the horizontal and vertical Sobel responses, g(x, y) is the gradient magnitude, and angle(x, y) is the angle with the horizontal (a sketch covering steps II and III follows);
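A sketch of steps II and III together, Gaussian smoothing followed by Sobel gradients; arctan2 is used for a full-range angle, a small deviation from the two-quadrant arctan of formula (4).

```python
# Gaussian smoothing (5x5, sigma = 1) and Sobel gradient map (steps II-III).
import cv2
import numpy as np

def gradient_map(img):
    smooth = cv2.GaussianBlur(img, (5, 5), 1)
    gx = cv2.Sobel(smooth, cv2.CV_64F, 1, 0)       # horizontal response g_x
    gy = cv2.Sobel(smooth, cv2.CV_64F, 0, 1)       # vertical response g_y
    magnitude = np.sqrt(gx ** 2 + gy ** 2)         # g(x, y), formula (4)
    angle = np.arctan2(gy, gx)                     # angle(x, y), in radians
    return magnitude, angle
```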
Step IV: traverse every pixel and select the pixels whose gradient value is greater than or equal to the gradient values of their neighbors along the gradient direction; define these pixels as anchor points;
Step V: select an anchor point as the starting point, choose the neighboring pixel with the largest gradient magnitude as the next anchor point by comparing the amplitudes of adjacent pixels, and finally connect adjacent anchor points to form an edge pixel chain;
Step VI: split one or more line segments out of each edge pixel chain: traverse the pixels in order and fit a line to them by least squares, with the root-mean-square fitting error

E = sqrt( (1/m) · Σ_{i=1..m} d(x_i, y_i)² )     (5)

where (x_i, y_i) are the pixel coordinates and d(x_i, y_i) is the distance of pixel i to the fitted line. Pixels are added until the error exceeds a threshold (for example, one pixel), at which point the segment is cut off, and the process continues until all pixels are processed. Segment fitting involves a maximum root-mean-square fitting error, computed with formula (5), and a shortest line length, computed as

l_min = ceil( -4 · log(N) / log(p) )     (6)

where N is the width of the input image and p = 0.125 is the alignment probability used by EDLines (a sketch follows).
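A sketch of step VI under stated assumptions: the chain is traversed in order, a line y = ax + b is refit by least squares as pixels are added, and the chain is cut when the RMS error exceeds one pixel. Near-vertical chains (which EDLines fits as x = ay + b) are not handled, and min_line_length follows formula (6) with the standard EDLines value p = 0.125.

```python
# Incremental least-squares segment fitting along an edge chain (step VI).
import numpy as np

def min_line_length(width: int, p: float = 0.125) -> int:
    return int(np.ceil(-4.0 * np.log(width) / np.log(p)))   # formula (6)

def split_chain(chain, max_rms: float = 1.0, min_len: int = 2):
    """chain: ordered list of (x, y) edge pixels -> list of line segments."""
    segments, start, end = [], 0, 2
    while end <= len(chain):
        xs = np.array([pt[0] for pt in chain[start:end]], dtype=float)
        ys = np.array([pt[1] for pt in chain[start:end]], dtype=float)
        a, b = np.polyfit(xs, ys, 1)                         # fit y = a x + b
        rms = np.sqrt(np.mean((ys - (a * xs + b)) ** 2))     # formula (5)
        if rms > max_rms:                                    # error too large: cut
            if end - 1 - start >= min_len:
                segments.append(chain[start:end - 1])
            start, end = end - 1, end + 1                    # restart fit at the cut
        else:
            end += 1
    if len(chain) - start >= min_len:
        segments.append(chain[start:])
    return segments
```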
Step 4: extract the features in two threads and fuse the point and line features; a minimal sketch follows.
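A sketch of the dual-thread extraction and fusion. OpenCV's stock ORB and the contrib module's FastLineDetector (an EDLines-style detector, available via opencv-contrib-python) stand in for the adaptive-threshold point extractor and the EDLines line extractor described above; this is an illustration, not the claimed implementation.

```python
# Run point and line extraction concurrently, then fuse the results (step 4).
import cv2
from concurrent.futures import ThreadPoolExecutor

def extract_point_line_features(gray):
    orb = cv2.ORB_create(nfeatures=1000)
    fld = cv2.ximgproc.createFastLineDetector()    # requires opencv-contrib
    with ThreadPoolExecutor(max_workers=2) as pool:
        points_future = pool.submit(orb.detectAndCompute, gray, None)
        lines_future = pool.submit(fld.detect, gray)
    keypoints, descriptors = points_future.result()
    lines = lines_future.result()                  # Nx1x4 array of segments, or None
    return {"points": keypoints, "descriptors": descriptors, "lines": lines}
```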
Compared with the prior art, the technical scheme provided by the invention has the following technical effects:
1) In the feature extraction method based on point-line feature fusion and an adaptive threshold, low-texture scenes still contain abundant line features, and line features are largely unaffected by lighting, occlusion, viewpoint changes, and similar factors compared with point features, so they suit a wide variety of scenes. Introducing abundant line features in such scenes and fusing the point and line features therefore compensates for insufficient feature matches and improves the positioning accuracy and robustness of the system.
2) To address visual odometry's insensitivity to moving objects and the accuracy loss caused by redundant feature points, the image is divided into regions in the feature extraction stage, the feature point extraction threshold is set adaptively from the coefficient of variation of each region's gray values, and the feature points are managed with a quadtree structure, realizing homogenized extraction of ORB (Oriented FAST and Rotated BRIEF) features.
3) Features are extracted by point-line fusion. First, in normal-texture scenes the adaptive point extraction yields evenly distributed feature points. Second, in low-texture scenes, where line features are more advantageous, point features are still extracted with the adaptive threshold while the broadly applicable line features supplement the description of the scene, providing better parameters for robot pose estimation and more accuracy and robustness for subsequent feature matching, localization, and mapping. Finally, the point and line features are extracted concurrently in two threads and fused.
Drawings
FIG. 1 is a flow chart of the feature extraction method based on point-line feature fusion and an adaptive threshold;
FIG. 2 is a flow chart of the ORB feature point extraction method with an adaptive threshold;
FIG. 3 is a schematic view of sub-image region division;
FIG. 4 is a schematic diagram of a build sub-image pyramid;
FIG. 5 is a schematic representation of feature point extraction based on a quadtree;
FIG. 6 is a feature extraction experimental test chart;
FIG. 7 is a flow chart of a line feature extraction method based on the EDLines algorithm;
FIG. 8 is a schematic diagram of the distribution of point and line features in a low-texture scene.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described below through specific examples with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. The structures, proportions, and sizes shown in the drawings are provided only to help those skilled in the art understand and read this disclosure, and do not limit the conditions under which the invention can be practiced; any structural modification, change of proportion, or adjustment of size that does not affect the efficacy or purpose achievable by the invention still falls within the scope covered by the claims. In addition, descriptions of well-known structures and techniques are omitted below so as not to obscure the present invention unnecessarily.
Likewise, to avoid obscuring the invention with unnecessary detail, the drawings show only the structures and processing steps closely related to the solution of the present invention, omitting details of little relevance.
As shown in FIG. 1, a specific embodiment of the present invention includes the following steps:
Step 1: image input, in preparation for feature extraction: surrounding images are acquired in real time by the binocular camera, the binocular images are sampled at a fixed frame interval, and the image resolution is set to 752×480. As FIG. 1 shows, the whole pipeline runs in parallel, processing the point and line features separately, which gives better robustness on low-texture images and supplements the features, making the pose estimation of the unmanned vehicle more accurate.
Step 2: performing ORB feature point extraction by adopting a feature point extraction algorithm in ORBSLAM3 according to the self-adaptive threshold;
Further, as shown in FIG. 2, extracting the ORB feature points in step 2 with the adaptive threshold, using the feature point extraction algorithm in ORB-SLAM3, further comprises the following steps:
Step a: as shown in FIG. 3, divide the input image into four regions (region one, region two, region three, and region four), and define a different FAST corner detection threshold for each region according to the degree of gray-value disorder of the corresponding sub-image;
Step b: define image I_1 as the image from which feature points are to be extracted, with height h_i and width w_i; to improve the homogenization of the ORB features, divide I_1 into four sub-images, each of height h_i/2 and width w_i/2;
Step c: describe the dispersion of the gray values of all pixels in each sub-image with the coefficient of variation, where the coefficient of variation s is the ratio of the standard deviation of a set of data to its mean:

s = sqrt( (1/t) · Σ_{i=1..t} (g_i - ḡ)² ) / ḡ     (1)

where g_i is the gray value of a pixel in the sub-image, ḡ is the average gray value of the sub-image, and t is the number of pixels; the larger the coefficient of variation, the higher the degree of gray-value disorder. Defining g_s = s × 30, the initial corner detection threshold g_th is then obtained from g_s (formula (2)).
Step d: as shown in FIG. 4, after setting an adaptive detection threshold according to the coefficient of variation s, to ensure scale invariance, constructing eight layers of image pyramids for sub-images, and then extracting feature points from each layer of pyramids to define the requirement of I 1 The total number of ORB feature points extracted from the method is x t Scaling factor gamma of image pyramid s The number x of feature points to be extracted in the ith layer of each sub-image ti Can be expressed as:
where n is the number of pyramid layers, inv (gamma s ) Representing the inverse of the scaling factor;
Step e: after the number of feature points required on each pyramid level has been computed, partition each level into square cells with a side length of 30 pixels and extract FAST corners inside every cell. If the number of corners extracted in a cell is 0, reduce the detection threshold of that region to g_th/2 and extract again; if corners are still not extracted, discard the cell. This guarantees the number of feature points; the operation is repeated until the corners in all cells have been extracted;
Step f: as shown in FIG. 5, after all corners have been extracted, manage them with a quadtree: define the root node as the whole sub-image and divide the sub-image into 4 regions serving as the root's child nodes; if a child node contains 2 or more corners, continue splitting it as a quadtree; if it contains exactly 1 corner, keep it without further splitting; if it contains 0 corners, delete it; proceed until the number of feature points extracted on each pyramid level reaches the set threshold, and keep the corner with the highest Harris response in each node. Finally, merge the FAST corners extracted from the four sub-images and compute the corresponding descriptors, completing the adaptive-threshold homogenized extraction of ORB feature points. The standard ORB feature extraction algorithm shipped with the computer vision library OpenCV and the improved algorithm of the present invention were run with the number of ORB feature points to extract set to 500 and 1000 respectively; the extraction results are shown in Table 1:
TABLE 1 feature extraction experimental results
As the table shows, the improved feature point extraction algorithm achieves homogenized feature extraction and strengthens the feature points' ability to describe the image; the number of finally extracted feature points is slightly larger than the set number because the final extraction on each pyramid level may exceed its target.
Step 3: extracting line characteristics based on an EDLines algorithm;
Further, as shown in FIG. 7, the line feature extraction based on the EDLines algorithm in step 3 further comprises the following steps:
Step I: to improve the recognition effect, apply distortion correction to the input image, with the distortion parameters set to the camera's factory calibration values;
Step II: image smoothing: remove noise with a Gaussian filter, suppressing the noise in the image by filtering, with a 5×5 Gaussian kernel and σ = 1;
Step III: compute the gradient magnitude and direction of the image with the Sobel operator:

g(x, y) = sqrt( g_x(x, y)² + g_y(x, y)² ),  angle(x, y) = arctan( g_y(x, y) / g_x(x, y) )     (4)

where I(x, y) is the pixel value of the image at (x, y), g_x and g_y are the horizontal and vertical Sobel responses, g(x, y) is the gradient magnitude, and angle(x, y) is the angle with the horizontal;
Step IV: traverse every pixel and select the pixels whose gradient value is greater than or equal to the gradient values of their neighbors along the gradient direction; define these pixels as anchor points;
Step V: select an anchor point as the starting point, choose the neighboring pixel with the largest gradient magnitude as the next anchor point by comparing the amplitudes of adjacent pixels, and finally connect adjacent anchor points to form an edge pixel chain;
Step VI: split one or more line segments out of each edge pixel chain: traverse the pixels in order and fit a line to them by least squares, with the root-mean-square fitting error

E = sqrt( (1/m) · Σ_{i=1..m} d(x_i, y_i)² )     (5)

where (x_i, y_i) are the pixel coordinates and d(x_i, y_i) is the distance of pixel i to the fitted line. Pixels are added until the error exceeds a threshold (for example, one pixel), at which point the segment is cut off, and the process continues until all pixels are processed. Segment fitting involves a maximum root-mean-square fitting error, computed with formula (5), and a shortest line length, computed as

l_min = ceil( -4 · log(N) / log(p) )     (6)

where N is the width of the input image and p = 0.125 is the alignment probability used by EDLines.
Step 4: extract the features in two threads and fuse the point and line features.
As shown in FIG. 8, in low-texture or weakly illuminated scenes, sparse point features cannot satisfy the actual feature requirements, so system performance is hard to guarantee; only by fusing point and line features can the limitations of point features alone be avoided. Because the two extractions run in parallel, the method improves the positioning accuracy and robustness of the algorithm as well as the mapping effect of the system, yielding a more intuitive map.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments and that it may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments are therefore to be considered in all respects as illustrative and not restrictive.
Furthermore, it should be understood that the foregoing examples only illustrate the technical scheme of the invention. Although the invention has been described in detail with reference to these examples, those skilled in the art will understand that the technical schemes of the individual examples may be combined appropriately to form other embodiments, and that such modifications, made without departing from the spirit and scope of the technical scheme of the invention, remain covered by the scope of its claims.
Claims (3)
1. A feature extraction method based on point-line feature fusion and an adaptive threshold, characterized by comprising the following steps:
Step 1: input an image and prepare for feature extraction;
Step 2: extract ORB feature points with an adaptive threshold, using the feature point extraction algorithm of ORB-SLAM3;
Step 3: extract line features based on the EDLines algorithm;
Step 4: extract the features in two threads and fuse the point and line features.
2. The feature extraction method based on point-line feature fusion and an adaptive threshold according to claim 1, characterized in that extracting the ORB feature points in step 2 with the adaptive threshold, using the feature point extraction algorithm in ORB-SLAM3, further comprises the following steps:
Step a: divide the input image into four regions, and define a different FAST corner detection threshold for each region according to the degree of gray-value disorder of the corresponding sub-image;
Step b: define image I_1 as the image from which feature points are to be extracted, with height h_i and width w_i; to improve the homogenization of the ORB features, divide I_1 into four sub-images, each of height h_i/2 and width w_i/2;
Step c: describe the dispersion of the gray values of all pixels in each sub-image with the coefficient of variation, where the coefficient of variation s is the ratio of the standard deviation of a set of data to its mean:

s = sqrt( (1/t) · Σ_{i=1..t} (g_i - ḡ)² ) / ḡ     (1)

where g_i is the gray value of a pixel in the sub-image, ḡ is the average gray value of the sub-image, and t is the number of pixels; the larger the coefficient of variation, the higher the degree of gray-value disorder. Defining g_s = s × 30, the initial corner detection threshold g_th is then obtained from g_s (formula (2)).
Step d: after setting an adaptive detection threshold according to the variation coefficient s, constructing eight layers of image pyramids for the sub-images in order to ensure scale invariance, extracting characteristic points for each layer of pyramids, and defining the requirement of the sub-images from I 1 The total number of ORB feature points extracted from the method is x t Scaling factor gamma of image pyramid s The number x of feature points to be extracted in the ith layer of each sub-image ti Can be expressed as:
where n is the number of pyramid layers, inv (gamma s ) Representing the inverse of the scaling factor;
Step e: after the number of feature points required on each pyramid level has been computed, partition each level into square cells with a side length of 30 pixels and extract FAST corners inside every cell. If the number of corners extracted in a cell is 0, reduce the detection threshold of that region to g_th/2 and extract again; if corners are still not extracted, discard the cell. This guarantees the number of feature points; the operation is repeated until the corners in all cells have been extracted;
Step f: after all corners have been extracted, manage them with a quadtree: define the root node of the quadtree as the whole sub-image and divide the sub-image into 4 regions serving as the root's child nodes; if a child node contains 2 or more corners, continue splitting it as a quadtree; if it contains exactly 1 corner, keep the node without further splitting; if it contains 0 corners, delete it; proceed until the number of feature points extracted on each pyramid level reaches the set threshold, keep the corner with the highest Harris response in each node, then merge the FAST corners extracted from the four sub-images and compute the corresponding descriptors, completing the adaptive-threshold homogenized extraction of ORB feature points.
3. The feature extraction method based on point-line feature fusion and an adaptive threshold according to claim 1, characterized in that the line feature extraction based on the EDLines algorithm in step 3 further comprises the following steps:
Step I: to improve the recognition effect, apply distortion correction to the input image, with the distortion parameters set to the camera's factory calibration values;
Step II: image smoothing: remove noise with a Gaussian filter, suppressing the noise in the image by filtering, with a 5×5 Gaussian kernel and σ = 1;
Step III: compute the gradient magnitude and direction of the image with the Sobel operator:

g(x, y) = sqrt( g_x(x, y)² + g_y(x, y)² ),  angle(x, y) = arctan( g_y(x, y) / g_x(x, y) )     (4)

where I(x, y) is the pixel value of the image at (x, y), g_x and g_y are the horizontal and vertical Sobel responses, g(x, y) is the gradient magnitude, and angle(x, y) is the angle with the horizontal;
Step IV: traverse every pixel and select the pixels whose gradient value is greater than or equal to the gradient values of their neighbors along the gradient direction; define these pixels as anchor points;
Step V: select an anchor point as the starting point, choose the neighboring pixel with the largest gradient magnitude as the next anchor point by comparing the amplitudes of adjacent pixels, and finally connect adjacent anchor points to form an edge pixel chain;
Step VI: split one or more line segments out of each edge pixel chain: traverse the pixels in order and fit a line to them by least squares, with the root-mean-square fitting error

E = sqrt( (1/m) · Σ_{i=1..m} d(x_i, y_i)² )     (5)

where (x_i, y_i) are the pixel coordinates and d(x_i, y_i) is the distance of pixel i to the fitted line. Pixels are added until the error exceeds a threshold (for example, one pixel), at which point the segment is cut off, and the process continues until all pixels are processed. Segment fitting involves a maximum root-mean-square fitting error, computed with formula (5), and a shortest line length, computed as

l_min = ceil( -4 · log(N) / log(p) )     (6)

where N is the width of the input image and p = 0.125 is the alignment probability used by EDLines.