CN113989308A - Polygonal target segmentation method based on Hough transform and template matching - Google Patents

Polygonal target segmentation method based on Hough transform and template matching Download PDF

Info

Publication number
CN113989308A
CN113989308A
Authority
CN
China
Prior art keywords
template
matching
straight line
target
sliding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111299363.6A
Other languages
Chinese (zh)
Inventor
刘山 (Liu Shan)
周彦宏 (Zhou Yanhong)
应永康 (Ying Yongkang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202111299363.6A priority Critical patent/CN113989308A/en
Publication of CN113989308A publication Critical patent/CN113989308A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a polygonal target segmentation method based on Hough transform and template matching. First, edge detection is performed on the polygonal objects in a scene image, and the Hough transform is used to screen out, for each object, a cluster of straight lines with similar slopes corresponding to its reference edge. Contours are then extracted from the edge image, the number of objects is computed using the known contour area of a single-object template, and a K-means clustering algorithm yields straight lines, in one-to-one correspondence with the objects, that represent each object's direction. Finally, the template is slid along each line in turn using a sparse-then-dense strategy: before a target is found the sliding step is set larger to speed up matching; after a target is found it is set smaller, while the template is rotated and translated over a small range to further improve matching precision, until the best match of each object is found, achieving target segmentation. The method improves computational efficiency while maintaining high recognition and segmentation accuracy.

Description

Polygonal target segmentation method based on Hough transform and template matching
Technical Field
The invention relates to a polygonal target segmentation method based on Hough transform and template matching, and belongs to the field of pattern recognition and image processing.
Background
In industrial automation and in everyday production and living scenarios, many target-recognition and image-segmentation problems take objects with relatively regular geometric features as their subject, for example part recognition and inspection in intelligent manufacturing, or traffic-sign recognition in intelligent transportation. Because the surrounding environment is relatively complex (illumination conditions, limited viewing angles, objects that are mutually adjacent or irregularly overlapping, and so on), recognizing and segmenting such objects is difficult. To address this, geometric-feature detection methods and template matching from the image-processing field can be used to capture the geometric characteristics of the objects in the image and perform template matching on that basis, enabling recognition and segmentation of geometrically regular targets in complex environments.
The Hough transform is a geometric-feature detection method widely used in image analysis: provided the target's geometry can be described by a mathematical expression, the algorithm votes in the expression's parameter space to detect the target's position, shape, and related information. In practice the image is first preprocessed and edge-detected, converting the original image into a binary image containing only edge information. Owing to scene complexity and other interference, pixels may be missing from the edge image, or noise may shift a detected edge away from the true boundary, so targets cannot be recognized and segmented directly from the Hough result alone.
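The voting idea can be made concrete with a minimal pure-Python sketch (an illustration only, not the patent's implementation; the function name and parameters are my own, and a real system would use an accumulator array such as OpenCV's cv2.HoughLines):

```python
import math
from collections import defaultdict

def hough_lines(edge_points, rho_step=1.0, theta_steps=180, min_votes=10):
    """Vote each edge point (x, y) into the (rho, theta) parameter space of
    rho = x*cos(theta) + y*sin(theta); return well-supported cells as lines."""
    acc = defaultdict(int)
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)] += 1      # quantise rho and vote
    return [(r * rho_step, math.pi * t / theta_steps, votes)
            for (r, t), votes in acc.items() if votes >= min_votes]

# ten collinear edge points on y = 5 concentrate their votes in one cell
candidates = hough_lines([(x, 5) for x in range(10)])
```

Collinear points agree on a single (ρ, θ) cell, which is why missing edge pixels are tolerated: the cell still collects votes from the pixels that remain.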
Template matching is a basic pattern-recognition method for locating where a specific object appears in an image, thereby recognizing the target. Typically, several feature vectors are extracted from the image to be recognized and compared with the corresponding feature vectors of a template image under some similarity measure, and the best matching position is then found. Traditional template matching has limitations when used for target recognition, chiefly that exhaustively translating the template over the image is computationally expensive and time-consuming.
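To make the cost concern concrete, here is a brute-force sum-of-squared-differences matcher (a sketch, not the patent's method; names are mine). Its inner loops touch every template pixel at every offset, the O(W·H·w·h) cost that the patent's slide-along-a-line strategy later avoids:

```python
def match_template_ssd(image, tmpl):
    """Exhaustive template matching: slide tmpl over every offset of image
    and return the (x, y) offset minimising the sum of squared differences."""
    H, W = len(image), len(image[0])
    h, w = len(tmpl), len(tmpl[0])
    best_ssd, best_xy = float("inf"), (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = sum((image[y + j][x + i] - tmpl[j][i]) ** 2
                      for j in range(h) for i in range(w))
            if ssd < best_ssd:
                best_ssd, best_xy = ssd, (x, y)
    return best_xy

# a 2x2 block of ones placed at offset (2, 1) in a 5x5 image of zeros
img = [[0] * 5 for _ in range(5)]
img[1][2] = img[1][3] = img[2][2] = img[2][3] = 1
offset = match_template_ssd(img, [[1, 1], [1, 1]])
```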
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a polygonal target segmentation method based on Hough transform and template matching that improves computational efficiency while maintaining high recognition and segmentation accuracy. The method comprises the following steps:
(a) acquire a scene image of polygonal objects that are mutually adjacent or slightly overlapping; perform edge detection on the image; use the Hough transform to find the straight lines corresponding to the edges; designate a reference edge and, by threshold setting, screen out for each object a cluster of straight lines with similar slopes corresponding to its reference edge;
(b) extract contours from the edge image to obtain the contour area; compute the number of objects using the known contour area of a single-object template; then cluster each object's line cluster with a K-means clustering algorithm to obtain, for each object, one straight line representing its direction;
(c) slide the template along each straight line in turn to perform template matching, following a sparse-then-dense strategy: before a target is found the sliding step is set larger to speed up matching; after a target is found the step is set smaller, while the template is additionally rotated over a small range about its center point and translated over a small range perpendicular to the line to further improve matching precision, until the best match of each object is found, achieving target recognition and segmentation.
Further, in step (a), for a regular polygon any side may be designated as the reference edge; for an irregular polygon, the longest side is designated as the reference edge.
Further, step (a) proceeds as follows:
(1) perform edge detection on the mutually adjacent or slightly overlapping polygonal objects, and denote the set of edge points P = {P1, P2, …, Pk};
(2) for convenience of calculation in the image, the coordinate origin is usually placed at the top-left corner of the image, with the horizontal and vertical axes along the image's width and height respectively; a straight line can be expressed as ρ = x·cosθ + y·sinθ, where ρ is the perpendicular distance from the origin to the line and θ is the angle between the perpendicular vector from the origin to the line and the horizontal-axis direction vector;
(3) traverse all edge points, obtain the lines corresponding to the edges via the Hough transform, and screen out by threshold setting a cluster of lines with slopes similar to each object's reference edge, denoted L = {L1, L2, …, LN}, where N is the number of objects; each object corresponds to one cluster of lines representing the direction of its reference edge, denoted Li = {Li1, Li2, …, Lim}, where m is the number of lines in the cluster, and each line is represented as a pair (ρ, θ).
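The slope-similarity screening of step (a)(3) can be sketched as follows (pure Python; the angle tolerance and the ρ-gap rule for splitting the kept lines into per-object clusters Li are my own assumptions, since the patent only specifies "threshold setting"):

```python
def screen_reference_lines(lines, theta_ref, theta_tol=0.08, rho_gap=20.0):
    """Keep Hough lines (rho, theta) whose angle lies within theta_tol
    radians of the reference edge's angle theta_ref, then split them into
    per-object clusters wherever consecutive rho values jump by more than
    rho_gap (nearby parallel edges belong to the same object)."""
    kept = sorted((rho, th) for rho, th in lines
                  if abs(th - theta_ref) <= theta_tol)
    clusters, current = [], []
    for rho, th in kept:
        if current and rho - current[-1][0] > rho_gap:
            clusters.append(current)      # large rho gap: next object
            current = []
        current.append((rho, th))
    if current:
        clusters.append(current)
    return clusters

# two near-parallel lines for one object, one far line for another,
# and one line at an unrelated angle that gets filtered out
lines = [(10.0, 1.55), (12.0, 1.57), (100.0, 1.58), (50.0, 0.30)]
clusters = screen_reference_lines(lines, theta_ref=1.57)
```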
Further, step (b) proceeds as follows:
(1) extract contours from the binary image obtained after edge detection and compute the overall contour area of the mutually adjacent or slightly overlapping objects, denoted S1; given the known contour area S2 of a single-object template, compute the number of objects N;
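The patent states only that N is "obtained by calculation" from S1 and S2; one plausible reading (an assumption on my part, not the patent's stated formula) is the rounded area ratio:

```python
def estimate_object_count(total_contour_area, template_contour_area):
    """Estimate the object count N from the overall contour area S1 of the
    touching objects and the known single-template contour area S2,
    assuming N is approximately S1 / S2."""
    return max(1, round(total_contour_area / template_contour_area))

n = estimate_object_count(3050.0, 1000.0)   # three touching objects
```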
(2) the straight line cluster corresponding to the reference edge of each polygonal object obtained after hough transform may include a plurality of straight lines with similar slopes and positions and approximately same directions as the reference edge, so that the straight line clusters corresponding to each object need to be clustered to obtain straight lines which are in one-to-one correspondence with the objects and can represent direction characteristics of the objects; performing K-means clustering on all the straight line clusters, setting the number of initial clustering centers as N, and respectively obtaining the mean values of the N clusters of straight lines through iterative computation, namely obtaining N straight lines corresponding to the objects one by one, and recording as sigma ═ l1,l2…lN}。
Further, step (c) proceeds as follows:
(1) starting from the image boundary region, slide the template along each of the N lines Σ in turn to perform template matching; rotate the template by an angle according to each line's parameters (ρ, θ) so that its orientation matches the line. Sliding follows a sparse-then-dense strategy: before a target is found the sliding step is set to a larger value of δ1 pixels, to speed up matching; after a target is found the step is reduced to δ2 pixels, i.e. dense matching is performed;
(2) the template-matching metric is the Intersection over Union (IoU) of the object region in the target template and the object region in the actual scene image;
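The patent scores IoU over the actual (polygonal) object regions; for brevity this sketch uses axis-aligned boxes, a simplification on my part rather than the patent's formulation:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))   # intersection 1, union 7
```

For arbitrary polygon masks the same ratio is computed over rasterized regions (e.g. counting pixels in the AND and OR of two binary masks).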
(3) if a target appears in the region of interest during matching, i.e. the IoU exceeds a set threshold σ, the dense matching stage begins: the sliding step of the template is reduced, and after each sliding step the template is translated several times by a step of δ3 pixels perpendicular to the line towards both sides, and simultaneously rotated several times about its center, clockwise and counterclockwise, in steps of angle α, to further improve matching precision;
(4) while sliding, the maximum IoU so far and the corresponding template position information are maintained and recorded at every step; template matching along the current line ends when the IoU falls back below the threshold σ, i.e. the best match of the object in that line's direction has been found; this operation is repeated along each line until the best matches of all objects are found, achieving target recognition and segmentation.
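The sparse-then-dense sliding can be sketched as follows (a toy version under assumed names: score_fn stands in for the IoU of the template placed at a point on the line, and the parametrization of the line's points from (ρ, θ) is mine; the dense stage's extra perpendicular translations and small rotations are omitted and would wrap score_fn in a further loop over ±δ3 and ±α offsets):

```python
import math

def slide_match(line, score_fn, length, coarse=50, fine=3, sigma=0.1):
    """Sparse-then-dense search along a line given as (rho, theta).
    Step `coarse` pixels until score_fn exceeds sigma, then step `fine`
    pixels while tracking the best score; stop once the score falls back
    below sigma (the template has slid past the object)."""
    rho, theta = line
    dx, dy = -math.sin(theta), math.cos(theta)             # direction along the line
    x0, y0 = rho * math.cos(theta), rho * math.sin(theta)  # foot of perpendicular
    best_score, best_pos = 0.0, None
    s, step, found = 0.0, float(coarse), False
    while s <= length:
        x, y = x0 + s * dx, y0 + s * dy
        score = score_fn(x, y)
        if score > sigma:
            found, step = True, float(fine)    # enter the dense stage
            if score > best_score:
                best_score, best_pos = score, (x, y)
        elif found:
            break    # score dropped below sigma: best match already recorded
        s += step
    return best_score, best_pos

# toy score peaking where a hypothetical object sits, at s = 100 on the line
score = lambda x, y: max(0.0, 1 - abs(y - 100) / 30)
best_score, best_pos = slide_match((0.0, 0.0), score, length=200)
```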
Further, in the specific implementation of step (c), the sliding step δ1 takes values in the range [30, 70], the sliding step δ2 in [2, 7], the sliding step δ3 in [2, 7], the rotation-angle step α in [1°, 5°], and the IoU threshold σ in [0.1, 0.3]; the step sizes and threshold can be adjusted flexibly according to the precision requirements of template matching.
The invention has the following beneficial effects:
(1) for the problem of recognizing and segmenting targets with relatively regular geometric features, the method captures directional features and performs recognition and segmentation quickly and accurately; its principle is simple and practical and its computational efficiency is high;
(2) the method quickly captures the geometric features of the objects to be recognized and segmented, finding the lines, in one-to-one correspondence with the objects, that represent their directions;
(3) the method searches for the best matching position by sliding the template along the lines representing the objects' directions, improving the efficiency of template matching;
(4) the method introduces small translations perpendicular to the line and small rotations about the template center during matching, further improving the precision of the best match.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a diagram illustrating a scene in which polygonal objects are disposed adjacent to each other according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of edge joining of objects adjacent to each other after edge detection processing according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a straight line representing direction characteristics of each object obtained through hough transform and K-means clustering in a scene where objects are placed adjacent to each other according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating the target segmentation effect in a scene where objects are placed adjacent to each other according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a small-range overlapping layout of polygon objects according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the connected edges of objects placed with small overlap after edge detection processing according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a straight line representing direction characteristics of each object obtained through hough transform and K-means clustering in a small-range overlapping placement scene according to an embodiment of the present invention;
fig. 9 is a schematic diagram of the target segmentation effect in a small-range overlapping placement scene according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
In this embodiment the sliding step δ1 is set to 50, the sliding step δ2 to 3, the sliding step δ3 to a small value within its range, the rotation-angle step α to 1°, and the IoU threshold σ to 0.1. These parameter settings are merely examples and are not limiting. The polygonal target segmentation method based on Hough transform and template matching specifically comprises the following steps, as shown in FIG. 1:
1) perform Canny edge detection on the mutually adjacent or slightly overlapping polygonal objects, and denote the set of edge points P = {P1, P2, …, Pk}; objects placed adjacent to each other are shown in FIG. 2 and objects placed with overlap in FIG. 6, and the resulting connected edge maps are shown in FIGS. 3 and 7 respectively;
2) traverse all edge points, obtain the lines corresponding to the edges via the Hough transform, and screen out by threshold setting a cluster of lines with slopes similar to each object's reference edge, denoted L = {L1, L2, …, LN}, where N is the number of objects; each object corresponds to one cluster of lines representing the direction of its long side, denoted Li = {Li1, Li2, …, Lim}, where m is the number of lines in the cluster. A straight line can be expressed as ρ = x·cosθ + y·sinθ, where ρ is the perpendicular distance from the origin to the line and θ is the angle between the perpendicular vector from the origin to the line and the horizontal-axis direction vector; each line obtained by the Hough transform is represented as a pair (ρ, θ);
3) extract contours from the binary image obtained after edge detection and compute the contour area of the mutually adjacent or overlapping objects, denoted S1; given the known contour area S2 of a single-object template, compute the number of objects N;
4) apply K-means clustering to all line clusters obtained by the Hough transform with the number of initial cluster centers set to N; iterative computation yields the mean of each of the N clusters, i.e. N lines in one-to-one correspondence with the objects, denoted Σ = {l1, l2, …, lN}; the lines representing each object's direction in the adjacent-placement and small-overlap scenes are shown in FIGS. 4 and 8 respectively;
5) starting from the image boundary region, slide the template along each of the N lines Σ in turn to perform template matching; rotate the template by an angle according to each line's parameters (ρ, θ) so that its orientation matches the line. Sliding follows a sparse-then-dense strategy: before a target is found the sliding step is set to a larger value of δ1 pixels, to speed up matching; after a target is found the step is reduced to δ2 pixels, i.e. dense matching is performed;
6) the template-matching metric is the Intersection over Union (IoU) of the object region in the target template and the object region in the actual scene image;
7) if a target appears in the region of interest during matching, i.e. the IoU exceeds the set threshold σ, the dense matching stage begins: the sliding step of the template is reduced, and after each sliding step the template is translated several times by a step of δ3 pixels perpendicular to the line towards both sides, and simultaneously rotated several times about its center, clockwise and counterclockwise, in steps of angle α, to further improve matching precision;
8) while sliding, the maximum IoU so far and the corresponding template position information are maintained and recorded at every step; template matching along the current line ends when the IoU falls back below the threshold σ, i.e. the best match of the object in that line's direction has been found; this is repeated along each line until the best matches of all objects are found, achieving target recognition and segmentation. The segmentation results for the adjacent-placement and small-overlap scenes are shown in FIGS. 5 and 9 respectively.
In summary, the method applies edge detection and the Hough transform to the objects, obtains lines representing their directions in one-to-one correspondence with them via a K-means clustering algorithm, and then performs template matching by sliding the template along each line direction until the best matching position of each object is found, thereby achieving target recognition and segmentation. The method is simple in principle, fast in operation, and highly accurate in recognition and segmentation.
It will be understood by those skilled in the art that the foregoing describes only preferred embodiments of the invention and does not limit it. Although the invention has been described in detail with reference to the foregoing examples, various changes and modifications may be made without departing from its spirit and scope, and all such modifications and equivalents are intended to fall within the scope of the invention.

Claims (6)

1. A polygonal target segmentation method based on Hough transform and template matching, characterized by comprising the following steps:
(a) acquiring a scene image of polygonal objects that are mutually adjacent or slightly overlapping, performing edge detection on the image, using the Hough transform to find the straight lines corresponding to the edges, designating a reference edge and, by threshold setting, screening out for each object a cluster of straight lines with similar slopes corresponding to its reference edge;
(b) extracting contours from the edge image to obtain the contour area, computing the number of objects using the known contour area of a single-object template, and then clustering each object's line cluster with a K-means clustering algorithm to obtain, for each object, one straight line representing its direction;
(c) sliding the template along each straight line in turn to perform template matching; the sliding follows a sparse-then-dense strategy, namely the sliding step is set larger before a target is found, to speed up matching, and smaller after a target is found, while the template is rotated over a small range about its center point and translated over a small range perpendicular to the line, until the best match of each object is found, achieving target recognition and segmentation.
2. The polygonal target segmentation method according to claim 1, wherein in step (a), for a regular polygon any side is designated as the reference edge, and for an irregular polygon the longest side is designated as the reference edge.
3. The polygonal target segmentation method according to claim 1, wherein step (a) is specifically as follows:
(1) performing edge detection on the mutually adjacent or slightly overlapping polygonal objects, and denoting the set of edge points P = {P1, P2, …, Pk};
(2) defining the origin of the coordinate system at the top-left corner of the image, with the horizontal and vertical axes along the image's width and height respectively; a straight line is expressed as ρ = x·cosθ + y·sinθ, where ρ is the perpendicular distance from the origin to the line and θ is the angle between the perpendicular vector from the origin to the line and the horizontal-axis direction vector;
(3) traversing all edge points, obtaining the lines corresponding to the edges via the Hough transform, and screening out by threshold setting a cluster of lines with slopes similar to each object's reference edge, denoted L = {L1, L2, …, LN}, where N is the number of objects; each object corresponds to one cluster of lines representing the direction of its reference edge, denoted Li = {Li1, Li2, …, Lim}, where m is the number of lines in the cluster, and each line is represented as a pair (ρ, θ).
4. The polygonal target segmentation method according to claim 1, wherein step (b) is specifically as follows:
(1) extracting contours from the binary image obtained after edge detection, computing the contour area and denoting it S1; given the known contour area S2 of a single-object template, computing the number of objects N;
(2) the line cluster corresponding to each polygonal object's reference edge obtained after the Hough transform may contain several lines with similar slopes and positions, all in approximately the same direction as the reference edge, so the clusters corresponding to each object must be clustered to obtain lines, in one-to-one correspondence with the objects, that represent their directions; performing K-means clustering on all line clusters with the number of initial cluster centers set to N, and obtaining by iterative computation the mean of each of the N clusters, i.e. N lines in one-to-one correspondence with the objects, denoted Σ = {l1, l2, …, lN}.
5. The polygonal target segmentation method according to claim 4, wherein step (c) is specifically as follows:
(1) starting from the image boundary region, sliding the template along each of the N lines Σ in turn to perform template matching; rotating the template by an angle according to each line's parameters (ρ, θ) so that its orientation matches the line; the sliding follows a sparse-then-dense strategy, namely the sliding step is set to a larger value of δ1 pixels before a target is found, to speed up matching, and reduced to a smaller value of δ2 pixels after a target is found, i.e. dense matching is performed;
(2) the template-matching metric is the intersection over union of the object region in the target template and the object region in the actual scene image;
(3) if a target appears in the region of interest during matching, i.e. the intersection over union exceeds a set threshold σ, the dense matching stage begins; the sliding step of the template is reduced, and after each sliding step the template is translated several times by a step of δ3 pixels perpendicular to the line towards both sides, and simultaneously rotated several times about its center, clockwise and counterclockwise, in steps of angle α, to further improve matching precision;
(4) while sliding, the maximum intersection over union and the corresponding template position information are maintained and recorded at every step; template matching ends when the intersection over union falls below the threshold σ, i.e. the best match of the object in the current line's direction has been found; this operation is carried out along each line until the best matches of all objects are found, achieving target recognition and segmentation.
6. The polygonal target segmentation method according to claim 5, wherein the sliding step δ1 takes values in the range [30, 70], the sliding step δ2 in [2, 7], the sliding step δ3 in [2, 7], the rotation-angle step α in [1°, 5°], and the intersection-over-union threshold σ in [0.1, 0.3]; the step sizes and threshold can be adjusted flexibly according to the precision requirements of template matching.
CN202111299363.6A 2021-11-04 2021-11-04 Polygonal target segmentation method based on Hough transform and template matching Pending CN113989308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111299363.6A CN113989308A (en) 2021-11-04 2021-11-04 Polygonal target segmentation method based on Hough transform and template matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111299363.6A CN113989308A (en) 2021-11-04 2021-11-04 Polygonal target segmentation method based on Hough transform and template matching

Publications (1)

Publication Number Publication Date
CN113989308A true CN113989308A (en) 2022-01-28

Family

ID=79746398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111299363.6A Pending CN113989308A (en) 2021-11-04 2021-11-04 Polygonal target segmentation method based on Hough transform and template matching

Country Status (1)

Country Link
CN (1) CN113989308A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030120A (en) * 2022-09-09 2023-04-28 北京市计算中心有限公司 Method for identifying and correcting hexagons
CN116030120B (en) * 2022-09-09 2023-11-24 北京市计算中心有限公司 Method for identifying and correcting hexagons
CN115576358A (en) * 2022-12-07 2023-01-06 西北工业大学 Unmanned aerial vehicle distributed control method based on machine vision

Similar Documents

Publication Publication Date Title
Lin et al. Color-, depth-, and shape-based 3D fruit detection
CN108256394B (en) Target tracking method based on contour gradient
CN105740899B (en) A kind of detection of machine vision image characteristic point and match compound optimization method
Choi et al. 3D pose estimation of daily objects using an RGB-D camera
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN106709500B (en) Image feature matching method
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN113989308A (en) Polygonal target segmentation method based on Hough transform and template matching
CN110021029B (en) Real-time dynamic registration method and storage medium suitable for RGBD-SLAM
Konishi et al. Real-time 6D object pose estimation on CPU
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN108550165A (en) A kind of image matching method based on local invariant feature
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN109766850B (en) Fingerprint image matching method based on feature fusion
CN115293287A (en) Vehicle-mounted radar-based target clustering method, memory and electronic device
CN113689365B (en) Target tracking and positioning method based on Azure Kinect
CN113469195B (en) Target identification method based on self-adaptive color quick point feature histogram
CN109977892B (en) Ship detection method based on local saliency features and CNN-SVM
Chen et al. Extracting and matching lines of low-textured region in close-range navigation for tethered space robot
Han et al. Accurate and robust vanishing point detection method in unstructured road scenes
CN110738098A (en) target identification positioning and locking tracking method
CN115034577A (en) Electromechanical product neglected loading detection method based on virtual-real edge matching
CN114049380A (en) Target object positioning and tracking method and device, computer equipment and storage medium
Zhu et al. Visual campus road detection for an ugv using fast scene segmentation and rapid vanishing point estimation
Yu et al. Multimodal urban remote sensing image registration via roadcross triangular feature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination