CN116309837A - Method for identifying and positioning a damaged element by combining feature points and contour points


Publication number
CN116309837A
Authority
CN
China
Prior art keywords
points
point
image
contour
identifying
Prior art date
Legal status
Granted
Application number
CN202310257009.XA
Other languages
Chinese (zh)
Other versions
CN116309837B (en)
Inventor
王禹林
郭茂林
查文彬
慈斌斌
杨小龙
冯永兴
张名成
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202310257009.XA
Publication of CN116309837A
Application granted
Publication of CN116309837B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06V 10/245: Aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries


Abstract

The invention discloses a method for identifying and positioning damaged elements by combining feature points and contour points, which overcomes the limitations of existing identification and positioning algorithms and enables rapid recognition of low-texture damaged elements with simple outlines. First, the corner points and centroid of a template image are extracted and the geometric relationship between them is calculated. Then, contours in the image to be identified are detected to locate candidate damaged elements; all feature points near each candidate position are found, feature points close to one another are integrated into a single point by Gaussian weighting, and the geometric relationship between each integration point and the contour centroid is calculated. Finally, damaged elements satisfying the matching condition are selected by comparison with the template. By detecting features first and then integrating feature points into corner points, and by exploiting the geometric relationships among feature points to strengthen feature matching, the method can identify multiple targets simultaneously using a single figure with the same outline dimensions as the object to be matched as the template, improving recognition efficiency.

Description

Method for identifying and positioning damaged element by combining characteristic points and contour points
Technical Field
The invention relates to the technical field of image processing, in particular to a low-texture multi-target matching method combining feature point detection and contour detection.
Background
With the development of industrial robot technology, robots are replacing traditional manual operation in more and more production and processing fields, and enabling robots to identify workpieces of different shapes and sizes is a problem that must be solved. Workpiece recognition is now widely based on machine vision, and how the shape and size information of the workpiece is exploited directly determines the efficiency and accuracy of workpiece identification and positioning.
At present, damaged elements are mainly sorted and assembled by manual operation, with low efficiency and precision. A search of the prior art finds Chinese patent publication No. CN115423855A, "Image template matching method, device, equipment and medium", which obtains a matching result by acquiring first gray-scale feature information and first structural feature information over the region to be matched and its neighbourhood and comparing the difference between template and image; however, this method is strongly affected by illumination and cannot handle scaled images. Patent publication No. CN115631477A, "Target recognition method and terminal", predicts the target region to be enhanced in the next image from the target region in the current image, amplifies that region, adds the enhanced target region to the next reduced image to obtain a reset image, and finally performs target recognition on the reset image; however, this also amplifies noise and other interference in the image, placing high demands on image quality. Chinese patent publication No. CN115482405A, "Single template matching method based on deep learning", extracts image features with a deep-learning backbone network, fuses features at different scales, and then computes detail and semantic scores separately, locating the target in the image by feature comparison; however, the backbone network must be pre-trained, and learning a new recognition object takes considerable time.
In summary: existing template matching methods need a large number of templates, and hence a large amount of time, when the target undergoes obvious scale or rotation transformations; feature-based target matching algorithms depend heavily on the accuracy and connectivity of line-segment extraction, and much image gradient information is lost if recognition speed is to be improved; and machine-learning approaches require a great deal of training time up front, so their efficiency remains to be improved.
Disclosure of Invention
The invention aims to provide a low-texture multi-target matching method combining feature point detection and contour detection, which can rapidly identify multiple workpieces in the image under test using only a single graphic image with the same appearance as the workpieces, consumes few computing resources, and effectively improves recognition efficiency.
The technical solution for realizing the purpose of the invention is as follows: a method for identifying and positioning a damaged element by combining characteristic points and contour points comprises the following two stages:
the first stage, the feature point integration stage, comprises the following steps:
step 1.1, performing Gaussian blur processing on an image to reduce interference of noise points, and then performing ORB feature point detection;
step 1.2, extracting the contours in the image, calculating the centroid of each contour, and preliminarily locating each damaged element;
step 1.3, comparing the pixel coordinates of the picture's edge points, compressing adjacent elements lying in the same direction, and keeping only the endpoint coordinates in that direction;
step 1.4, randomly selecting a contour point and searching the area nearby; a feature point found within the preset area is considered successfully matched, and if no feature point is identified, the search continues with the next contour point;
step 1.5, after a successfully matched feature point is found, searching for other feature points close to it in distance and forming all qualifying feature points into a set; if the number of other feature points near the feature point is below a threshold, the feature point is regarded as erroneously extracted;
step 1.6, calculating the average of the successfully matched feature point coordinates in the set, computing the Gaussian weight of each feature point in the set with the average coordinate as origin, solving for a new integration point, and recomputing the Gaussian weights with the integration point as origin, repeating until the difference between two successive results is less than one pixel;
the second stage, the feature point-centroid mathematical template matching stage, comprises the following steps:
step 2.1, firstly, drawing a graph with the same appearance as the part to be matched as a template image;
step 2.2, calculating the relative relation between the corner points of the template image and the figure centroids, and calculating the relative relation between each figure integration point and the centroid in the image to be identified;
step 2.3, after the relative relationships have been calculated, comparing the mathematical models of the parts in the actual image with that of the template; when all eight index errors are smaller than the set threshold, the match is judged successful.
Further, in step 1.3, the compressed contour points are calculated as follows:

$$t = \left| (y_2 - y_1) - \frac{(x_2 - x_1)(y_3 - y_1)}{x_3 - x_1} \right|$$

where $x_1, x_2, x_3$ are the pixel abscissae and $y_1, y_2, y_3$ the pixel ordinates of three adjacent edge points; $t$ measures the deviation of the middle point from the line through its two neighbours, and $p_2(x_2, y_2)$ is removed when $t$ is less than the set threshold. The process loops until all qualifying edge points are eliminated.
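The compression loop of step 1.3 can be sketched in Python as follows; $t$ is read here as the offset of the middle point from the line through its two neighbours, and the threshold value and the handling of vertical segments are illustrative assumptions:

```python
import numpy as np

def compress_contour(points, t_thresh=1.0):
    """Drop contour points that are (near-)collinear with their neighbours.

    For each triple of adjacent edge points p1, p2, p3, the deviation t of
    p2 from the line p1-p3 is computed; p2 is removed when t falls below
    t_thresh. Passes repeat until no further points are removed, mirroring
    the loop described in the method.
    """
    pts = [tuple(p) for p in points]
    changed = True
    while changed:
        changed = False
        kept = [pts[0]]
        i = 1
        while i < len(pts) - 1:
            (x1, y1), (x2, y2), (x3, y3) = kept[-1], pts[i], pts[i + 1]
            if x3 != x1:
                t = abs((y2 - y1) - (x2 - x1) * (y3 - y1) / (x3 - x1))
            else:  # vertical segment: deviation is the horizontal offset
                t = abs(x2 - x1)
            if t < t_thresh:
                changed = True          # drop p2 and keep scanning
            else:
                kept.append(pts[i])
            i += 1
        kept.append(pts[-1])
        pts = kept
    return np.array(pts)
```

Running this on a square contour sampled with edge midpoints removes the midpoints and keeps the corner endpoints, which is the behaviour the step relies on.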
Further, in step 1.4, the size of the search area is determined by the image size and the size of the part within the image; the empirically derived search range is 4 to 6 pixel distances.
further, in step 1.5, the threshold is set based on the image quality determination, and the higher the noise interference of the image is, the higher the threshold is, and the empirically-derived value ranges from 3 to 5.
Further, the Gaussian weight in step 1.6 is calculated as follows:

$$f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right]\right)$$

where $\mu_1, \mu_2$ are the mean values of the x and y coordinates of the feature points in the set, $\rho$ is the correlation coefficient of x and y, and $\sigma_1, \sigma_2$ are the standard deviations of the x and y coordinate values. Because the weights of the feature points in the set must sum to 1 when computing the new integration point, each point's weight value is divided by the weight sum to obtain the final weight:

$$w_k = \frac{f(x_k, y_k)}{\sum_{i=1}^{n} f(x_i, y_i)}$$

where $k$ is the index of a feature point in the set and $n$ is the number of points in the set.
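The integration loop of step 1.6 can be sketched as follows, assuming the standard bivariate Gaussian density for the weight (its constant factor cancels under normalisation); the guards for degenerate clusters are illustrative additions:

```python
import numpy as np

def integrate_cluster(points, tol=1.0, max_iter=20):
    """Fuse a cluster of nearby feature points into one integration point.

    Weights come from a bivariate Gaussian centred on the current estimate
    (initially the plain mean) and are normalised to sum to 1; the weighted
    mean becomes the new centre, and the update repeats until it moves by
    less than one pixel.
    """
    pts = np.asarray(points, dtype=float)
    s1, s2 = pts[:, 0].std(), pts[:, 1].std()
    rho = 0.0 if s1 == 0 or s2 == 0 else float(np.corrcoef(pts[:, 0], pts[:, 1])[0, 1])
    centre = pts.mean(axis=0)
    for _ in range(max_iter):
        dx = (pts[:, 0] - centre[0]) / max(s1, 1e-9)
        dy = (pts[:, 1] - centre[1]) / max(s2, 1e-9)
        q = (dx**2 - 2 * rho * dx * dy + dy**2) / (2 * (1 - rho**2) + 1e-12)
        w = np.exp(-q)          # unnormalised Gaussian weight per point
        w /= w.sum()            # weights must sum to 1
        new_centre = w @ pts    # weighted mean = new integration point
        if np.linalg.norm(new_centre - centre) < tol:
            return new_centre
        centre = new_centre
    return centre
```

For a symmetric cluster the integration point coincides with the plain mean; asymmetric noise pulls it toward the denser side, which is the intended effect.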
Further, in step 2.2, the relative relationship between the corner points and the centroid is calculated as follows:

$$\alpha_1 = \angle AOB,\quad \alpha_2 = \angle BOC,\quad \alpha_3 = \angle COD,\quad \alpha_4 = \angle DOA$$

$$l_1 = \frac{|OA|}{|OB|},\quad l_2 = \frac{|OB|}{|OC|},\quad l_3 = \frac{|OC|}{|OD|},\quad l_4 = \frac{|OD|}{|OA|}$$

where $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are the four angles, $l_1, l_2, l_3, l_4$ are the four length ratios, A, B, C, D are the four corner points of the figure, and O is the centroid of the figure.
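One plausible reading of the corner-centroid relationship (the four angles subtended at the centroid O by consecutive corners, and the four ratios of consecutive centroid-to-corner distances) can be sketched as follows; this reading, and taking the centroid as the vertex average, are assumptions of the sketch, not values fixed by the text. Both quantities are invariant to rotation and uniform scaling, which matches the stated advantages of the template:

```python
import numpy as np

def corner_centroid_descriptor(corners):
    """Eight-index descriptor from four corners A, B, C, D (in order)."""
    P = np.asarray(corners, dtype=float)
    O = P.mean(axis=0)                 # centroid taken as vertex average
    v = P - O                          # centroid-to-corner vectors
    d = np.linalg.norm(v, axis=1)      # |OA|, |OB|, |OC|, |OD|
    angles, ratios = [], []
    for i in range(4):
        j = (i + 1) % 4
        cosang = np.dot(v[i], v[j]) / (d[i] * d[j])
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
        ratios.append(d[i] / d[j])
    return np.array(angles), np.array(ratios)
```

For a square all four angles are 90 degrees and all four ratios are 1; a trapezoid with the same pair of opposite corners gives different ratios, which is how the method distinguishes such shapes.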
Further, in step 2.3, the error threshold is set to 0.95-1.05. The narrower the allowed error band, the more accurate the matching result; however, when image quality is poor, too narrow a band may yield no match at all.
Furthermore, the mathematical model of the template in steps 2.1 and 2.2 need only be calculated once; once obtained, the template can be used to identify parts of the same type, and it need not be redesigned when the image under test is replaced.
Compared with the prior art, the invention has the following remarkable advantages. (1) The template is simple to obtain and easy to use, and has rotation and scale invariance, solving the problem that existing template matching methods consume enormous computing resources to identify rotated targets, and remarkably improving recognition efficiency. (2) By combining the geometric relationships among feature points, the invention strengthens feature point matching and screens out isolated feature points erroneously extracted due to noise and other interference, improving the recognition success rate. (3) The invention considers not only the relationships among the corner points of the figure but also the relationship between the corner points and the centroid, so the algorithm can distinguish figures whose corner points share the same relative positions but whose shapes differ, such as tile shapes and trapezoids.
The invention is described in further detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of multi-objective matching based on feature point and contour point combinations.
FIG. 2 is a schematic diagram of the corner-centroid geometric relationships for damaged elements with four different appearances.
FIG. 3 shows the identification results for damaged elements with four different appearances.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Referring to fig. 1, a flowchart of the low-texture multi-target matching method combining feature point detection and contour detection according to the first embodiment of the present application is shown. The method comprises the following steps:
s1, integrating characteristic points, wherein the specific process is as follows:
S1.1, applying Gaussian blur to the image to reduce noise interference, then detecting ORB feature points;
S1.2, extracting the contours in the picture and preliminarily locating each damaged element;
S1.3, searching the area near a contour point, the search area being the region within 4 pixels of the contour point; a feature point found within this preset area is considered successfully matched, and if no feature point is identified, the search continues with the next contour point;
S1.4, after a successfully matched feature point is found, other feature points close to it in distance are found and all such points form a set; if the number of other feature points near the feature point is lower than 3, the feature point is regarded as erroneously extracted;
S1.5, calculating the average of the feature point coordinates in the set, computing the Gaussian weight of each feature point in the set with the average coordinate as origin, solving for a new integration point, then recomputing with the integration point as origin, iterating 3 times. The Gaussian weight is calculated as:

$$f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right]\right)$$

where $\mu_1, \mu_2$ are the mean x and y coordinates of the feature points in the set, $\rho$ is the correlation coefficient of x and y, and $\sigma_1, \sigma_2$ are the standard deviations of the x and y coordinate values. Since the total weight of the feature points in the set must equal 1 to calculate the new integration point, each point's weight is divided by the total weight to obtain the final weight:

$$w_k = \frac{f(x_k, y_k)}{\sum_{i=1}^{n} f(x_i, y_i)}$$

where $k$ is the index of a feature point in the set.
S2, matching a characteristic point-centroid mathematical template, wherein the specific process is as follows:
s2.1, drawing a graph with the same appearance as the part to be matched as a template image;
S2.2, calculating the relative relationship between the corner points of the template and the centroid of the figure, and the relative relationship between each integration point and the centroid in the image to be identified, where the corner-centroid relationship is calculated as:

$$\alpha_1 = \angle AOB,\quad \alpha_2 = \angle BOC,\quad \alpha_3 = \angle COD,\quad \alpha_4 = \angle DOA$$

$$l_1 = \frac{|OA|}{|OB|},\quad l_2 = \frac{|OB|}{|OC|},\quad l_3 = \frac{|OC|}{|OD|},\quad l_4 = \frac{|OD|}{|OA|}$$

where the parameters are as shown in FIG. 2: $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are the four angles, $l_1, l_2, l_3, l_4$ are the four length ratios, A, B, C, D are the four corner points of the figure, and O is the centroid of the figure.
S2.3, after the relative relationships have been calculated, the mathematical models of the parts in the actual image are compared with that of the template; when the ratios of all eight indices lie within the range 0.95-1.05, the match is judged successful.
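The comparison of step S2.3 can be sketched as follows; the function name and the assumption that the index order is already aligned are illustrative (a full implementation would also test the cyclic shifts of the corner labels):

```python
import numpy as np

def match_against_template(angles, ratios, t_angles, t_ratios,
                           lo=0.95, hi=1.05):
    """A candidate matches the template when every one of the eight
    index ratios (four angles, four length ratios) lies within the
    0.95-1.05 band described in the method."""
    idx = np.concatenate([np.asarray(angles, float) / np.asarray(t_angles, float),
                          np.asarray(ratios, float) / np.asarray(t_ratios, float)])
    return bool(np.all((idx > lo) & (idx < hi)))
```

For example, a candidate whose angles and ratios equal the template's matches, while a 10-degree deviation in one angle pushes that index ratio below 0.95 and rejects the candidate.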
Please refer to fig. 3, which is a measurement result diagram of the first embodiment of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for identifying and positioning a damaged element by combining feature points and contour points, characterized by comprising the following two stages:
the first stage, the feature point integration stage, comprises the following steps:
step 1.1, performing Gaussian blur processing on an image to reduce interference of noise points, and then performing ORB feature point detection;
step 1.2, extracting the contours in the image, calculating the centroid of each contour, and preliminarily locating each damaged element;
step 1.3, comparing the pixel coordinates of the picture's edge points, compressing adjacent elements lying in the same direction, and keeping only the endpoint coordinates in that direction;
step 1.4, randomly selecting a contour point and searching the area nearby; a feature point found within the preset area is considered successfully matched, and if no feature point is identified, the search continues with the next contour point;
step 1.5, after a successfully matched feature point is found, searching for other feature points close to it in distance and forming all qualifying feature points into a set; if the number of other feature points near the feature point is below a threshold, the feature point is regarded as erroneously extracted;
step 1.6, calculating the average of the successfully matched feature point coordinates in the set, computing the Gaussian weight of each feature point in the set with the average coordinate as origin, solving for a new integration point, and recomputing the Gaussian weights with the integration point as origin, repeating until the difference between two successive results is less than one pixel;
the second stage, the feature point-centroid mathematical template matching stage, comprising the following steps:
step 2.1, firstly, drawing a graph with the same appearance as the part to be matched as a template image;
step 2.2, calculating the relative relation between the corner points of the template image and the figure centroids, and calculating the relative relation between each figure integration point and the centroid in the image to be identified;
step 2.3, after the relative relationships have been calculated, comparing the mathematical models of the parts in the actual image with that of the template; when all eight index errors are smaller than the set threshold, the match is judged successful.
2. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 1.3 the compressed contour points are calculated as follows:

$$t = \left| (y_2 - y_1) - \frac{(x_2 - x_1)(y_3 - y_1)}{x_3 - x_1} \right|$$

where $x_1, x_2, x_3$ are the pixel abscissae and $y_1, y_2, y_3$ the pixel ordinates of three adjacent edge points; $p_2(x_2, y_2)$ is removed when $t$ is less than the set threshold, and the process loops until all qualifying edge points are eliminated.
3. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 1.4 the search area size is determined by the image size and the size of the part within the image.
4. The method according to claim 3, wherein the empirically derived search range is 4 to 6 pixel distances.
5. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 1.5 the threshold is set according to image quality: the stronger the noise interference in the image, the higher the threshold.
6. The method according to claim 5, wherein the threshold on the number of feature points empirically ranges from 3 to 5.
7. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein the Gaussian weight in step 1.6 is calculated as follows:

$$f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right]\right)$$

where $\mu_1, \mu_2$ are the mean values of the x and y coordinates of the feature points in the set, $\rho$ is the correlation coefficient of x and y, and $\sigma_1, \sigma_2$ are the standard deviations of the x and y coordinate values. Because the weights of the feature points in the set must sum to 1 when computing the new integration point, each point's weight value is divided by the weight sum to obtain the final weight:

$$w_k = \frac{f(x_k, y_k)}{\sum_{i=1}^{n} f(x_i, y_i)}$$

where $k$ is the index of a feature point in the set and $n$ is the number of points in the set.
8. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 2.2 the relative relationship between the corner points and the centroid is calculated as follows:

$$\alpha_1 = \angle AOB,\quad \alpha_2 = \angle BOC,\quad \alpha_3 = \angle COD,\quad \alpha_4 = \angle DOA$$

$$l_1 = \frac{|OA|}{|OB|},\quad l_2 = \frac{|OB|}{|OC|},\quad l_3 = \frac{|OC|}{|OD|},\quad l_4 = \frac{|OD|}{|OA|}$$

where $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are the four angles, $l_1, l_2, l_3, l_4$ are the four length ratios, A, B, C, D are the four corner points of the figure, and O is the centroid of the figure.
9. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 2.3 the error threshold is set to 0.95-1.05.
10. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein the template mathematical model calculation in steps 2.1 and 2.2 need only be carried out once; the obtained template can be used to identify parts of the same type, and the template need not be redesigned when the image under test is replaced.
CN202310257009.XA 2023-03-16 2023-03-16 Method for identifying and positioning damaged element by combining characteristic points and contour points Active CN116309837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310257009.XA CN116309837B (en) 2023-03-16 2023-03-16 Method for identifying and positioning damaged element by combining characteristic points and contour points


Publications (2)

Publication Number Publication Date
CN116309837A true CN116309837A (en) 2023-06-23
CN116309837B CN116309837B (en) 2024-04-26

Family

ID=86781133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310257009.XA Active CN116309837B (en) 2023-03-16 2023-03-16 Method for identifying and positioning damaged element by combining characteristic points and contour points

Country Status (1)

Country Link
CN (1) CN116309837B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005310070A (en) * 2004-04-26 2005-11-04 Canon Inc Device and method of information processing
CN105740899A (en) * 2016-01-29 2016-07-06 长安大学 Machine vision image characteristic point detection and matching combination optimization method
CN110490913A (en) * 2019-07-22 2019-11-22 华中师范大学 Feature based on angle point and the marshalling of single line section describes operator and carries out image matching method
CN110569857A (en) * 2019-07-28 2019-12-13 景德镇陶瓷大学 image contour corner detection method based on centroid distance calculation
CN110569861A (en) * 2019-09-01 2019-12-13 中国电子科技集团公司第二十研究所 Image matching positioning method based on point feature and contour feature fusion
CN111401266A (en) * 2020-03-19 2020-07-10 杭州易现先进科技有限公司 Method, device, computer device and readable storage medium for positioning corner points of drawing book


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
侯学智, 杨平, 赵云松: "Contour feature point extraction algorithm for CCD images", Journal of University of Electronic Science and Technology of China, no. 04, 25 August 2004 *
刘文: "Research on vision and force based positioning control methods for robotic assembly of typical features", China Master's Theses Full-text Database, Information Science and Technology, 31 December 2020 *
孙兴龙; 韩广良; 郭立红; 刘培勋; 许廷发: "Automatic registration of infrared and visible video using contour feature matching", Optics and Precision Engineering, no. 05, 13 May 2020 *

Also Published As

Publication number Publication date
CN116309837B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN109767463B (en) Automatic registration method for three-dimensional point cloud
CN109299720B (en) Target identification method based on contour segment spatial relationship
CN109118473B (en) Angular point detection method based on neural network, storage medium and image processing system
CN104268602A (en) Shielded workpiece identifying method and device based on binary system feature matching
CN110929713B (en) Steel seal character recognition method based on BP neural network
CN111553409A (en) Point cloud identification method based on voxel shape descriptor
CN104851095B (en) The sparse solid matching method of workpiece image based on modified Shape context
CN110084830B (en) Video moving object detection and tracking method
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN106340010A (en) Corner detection method based on second-order contour difference
CN109409356B (en) Multi-direction Chinese print font character detection method based on SWT
CN111401449A (en) Image matching method based on machine vision
CN113379777A (en) Shape description and retrieval method based on minimum circumscribed rectangle vertical internal distance proportion
CN110472651B (en) Target matching and positioning method based on edge point local characteristic value
CN112649793A (en) Sea surface target radar trace condensation method and device, electronic equipment and storage medium
CN107895166B (en) Method for realizing target robust recognition based on feature descriptor by geometric hash method
CN114358166B (en) Multi-target positioning method based on self-adaptive k-means clustering
CN105956581B (en) A kind of quick human face characteristic point initial method
CN107748897B (en) Large-size curved part profile quality detection method based on pattern recognition
CN116309837B (en) Method for identifying and positioning damaged element by combining characteristic points and contour points
CN110163894B (en) Sub-pixel level target tracking method based on feature matching
CN109191489B (en) Method and system for detecting and tracking aircraft landing marks
CN108734059B (en) Object identification method for indoor mobile robot
CN107122783B (en) Method for quickly identifying assembly connector based on angular point detection
CN112862767B (en) Surface defect detection method for solving difficult-to-distinguish unbalanced sample based on metric learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant