CN116309837B - Method for identifying and positioning damaged element by combining characteristic points and contour points - Google Patents

Method for identifying and positioning damaged element by combining characteristic points and contour points

Info

Publication number
CN116309837B
CN116309837B (application CN202310257009.XA)
Authority
CN
China
Prior art keywords
points, point, image, identifying, contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310257009.XA
Other languages
Chinese (zh)
Other versions
CN116309837A (en)
Inventor
王禹林
郭茂林
查文彬
慈斌斌
杨小龙
冯永兴
张名成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202310257009.XA
Publication of CN116309837A
Application granted
Publication of CN116309837B
Legal status: Active
Anticipated expiration legal status


Classifications

    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying and positioning damaged elements by combining feature points and contour points, which overcomes the limitations of existing identification and positioning algorithms and enables rapid identification of low-texture damaged elements with simple outlines. First, the corner points and centroid of a template figure are extracted and the geometric relations between them are calculated. Then the contours in the image to be identified are detected and the positions of candidate damaged elements are selected; all feature points close to each position are found, mutually adjacent feature points are fused into a single point according to Gaussian weights, and the geometric relation between each fused point and the contour centroid is calculated. Finally, the damaged elements satisfying the conditions are selected by comparison with the template. By detecting features first and then fusing feature points into corner points, the method exploits the geometric relations among feature points to strengthen feature matching; using a single figure with the same outline dimensions as the object to be matched as the template, it can identify multiple targets simultaneously and improves recognition efficiency.

Description

Method for identifying and positioning damaged element by combining characteristic points and contour points
Technical Field
The invention relates to the technical field of image processing, in particular to a low-texture multi-target matching method combining feature point detection and contour detection.
Background
With the development of industrial robot technology, robots are replacing traditional manual operation in more and more production and processing fields, and enabling robots to identify workpieces of different shapes and sizes is a problem that must be solved. At present, workpiece recognition is widely based on machine vision, in which the shape and size information of the workpiece directly determines the efficiency and accuracy of recognition and positioning.
At present, sorting and assembly of damaged elements are mainly performed manually, with low efficiency and precision. A search of the prior art shows the following. Chinese patent CN115423855A, "Image template matching method, device, equipment and medium", introduces a template matching method that obtains a matching result by extracting first gray-scale feature information and first structural feature information from the region to be matched and its neighborhood and comparing the difference between template and image; however, this method is strongly affected by lighting and cannot handle scaled images. Chinese patent CN115631477A, "Target recognition method and terminal", describes a target recognition method that predicts the target region to be enhanced in the next image from the target region in the current image, amplifies the region to be enhanced, adds the enhanced region to the next reduced image to obtain a reset image, and finally performs target recognition on the reset image; however, this amplifies noise and other interference in the image and places high demands on image quality. Chinese patent CN115482405A, "Single template matching method based on deep learning", introduces a single template matching method that extracts image features with a deep-learning backbone network, fuses features of different scales, and computes detail and semantic scores separately, locating the target in the image by feature comparison; however, the backbone must be pre-trained, and learning a new recognition object costs considerable time.
In summary, when the target undergoes obvious scale and rotation transformations, existing template matching methods need a large number of templates and therefore a large amount of time; feature-based target matching algorithms depend heavily on the accuracy and connectivity of line-segment extraction, and much image gradient information is lost if recognition speed is to be improved; machine learning requires a great deal of training time up front, so its efficiency remains to be improved.
Disclosure of Invention
The invention aims to provide a low-texture multi-target matching method combining feature point detection and contour detection that can rapidly identify multiple workpieces in the image to be detected using only a single graphic image with the same appearance as the workpiece, consumes few computing resources, and effectively improves recognition efficiency.
The technical solution that realizes the purpose of the invention is as follows: a method for identifying and positioning damaged elements by combining feature points and contour points, comprising the following two stages:
the first stage, the feature point integration stage, comprises the following steps:
step 1.1, performing Gaussian blur on the image to reduce noise interference, then performing ORB feature point detection;
step 1.2, extracting the contours in the image, the number extracted corresponding to the contours used below and the extraction order being arbitrary; calculating the centroid of each contour and preliminarily locating each damaged element (a code sketch of steps 1.1 and 1.2 follows this step list);
step 1.3, comparing the pixel coordinates of the edge points and compressing adjacent collinear points that share the same direction, so that only the endpoint coordinates along each direction are retained;
step 1.4, randomly selecting a contour point and searching the area near it: finding a feature point within the preset area counts as a successful match; if no feature point is identified, searching continues from the next contour point;
step 1.5, after a successfully matched feature point is found, searching for the other feature points close to it in distance and collecting all qualifying feature points into a set; if the number of other feature points near it is below a threshold, the point is treated as a falsely extracted feature point;
step 1.6, calculating the mean of the coordinates of the successfully matched feature points in the set, computing the Gaussian weight of each feature point with the mean coordinate as the origin, solving for a new integration point, then recomputing the Gaussian weights with the integration point as the origin, repeating until two successive results differ by less than one pixel;
the second stage, the feature point-centroid mathematical template matching stage, comprises the following steps:
step 2.1, drawing a figure with the same appearance as the part to be matched as the template image;
step 2.2, calculating the relative relation between the corner points of the template image and the figure centroid, and calculating the relative relation between each figure's integration points and the contour centroid in the image to be identified;
step 2.3, after the relative relations are computed, comparing the mathematical model of each part in the captured image with that of the template; when all eight index errors are below the set threshold, the match is judged successful.
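To make stage one concrete, the following Python sketch illustrates steps 1.1 and 1.2. OpenCV is used here as an assumed implementation library (the invention does not prescribe one), and the blur kernel size, feature count, and Otsu binarization are likewise our assumptions.

```python
import cv2

def detect_features_and_contours(gray):
    """Sketch of steps 1.1 and 1.2: blur, ORB keypoints, contours, centroids."""
    # Step 1.1: Gaussian blur suppresses noise before feature detection.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(blurred, None)

    # Step 1.2: binarize (Otsu, an assumed choice) and take external contours
    # as candidate damaged elements; CHAIN_APPROX_NONE keeps every edge point
    # so that step 1.3 can perform its own compression.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:                      # skip degenerate contours
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return keypoints, contours, centroids
```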
Further, in step 1.3, the method for calculating the compressed contour points is as follows:
t = |(y₂ − y₁) − (x₂ − x₁)(y₃ − y₁)/(x₃ − x₁)|
where x₁, x₂, x₃ are the pixel abscissas of three adjacent edge points p₁, p₂, p₃ and y₁, y₂, y₃ are their pixel ordinates; when t is less than the set threshold, p₂(x₂, y₂) is removed. The process loops until all qualifying edge points are eliminated.
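A minimal sketch of this compression loop, assuming the collinearity measure above and an illustrative one-pixel threshold:

```python
def compress_contour(pts, t_thresh=1.0):
    """Sketch of step 1.3: drop points nearly collinear with their neighbours,
    keeping only the endpoints of each straight run.  Expects an iterable of
    (x, y) pairs (e.g. contour.reshape(-1, 2)); t_thresh is an assumed value."""
    pts = [tuple(p) for p in pts]
    if len(pts) < 3:
        return pts
    changed = True
    while changed:                            # loop until no point qualifies
        changed = False
        kept = [pts[0]]
        for i in range(1, len(pts) - 1):
            (x1, y1), (x2, y2), (x3, y3) = kept[-1], pts[i], pts[i + 1]
            if x3 == x1:                      # vertical run: test x deviation
                t = abs(x2 - x1)
            else:
                t = abs((y2 - y1) - (x2 - x1) * (y3 - y1) / (x3 - x1))
            if t < t_thresh:
                changed = True                # p2 dropped as collinear
            else:
                kept.append(pts[i])
        kept.append(pts[-1])
        pts = kept
    return pts
```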
Further, in step 1.4, the size of the search area is determined by the image size and the size of the part in the image; empirically the search range is 4 to 6 pixels.
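A brute-force sketch of this neighborhood search; the default radius follows the 4 to 6 pixel guidance, and the function name is ours, not the patent's:

```python
import numpy as np

def first_matched_keypoint(contour_pts, keypoints, radius=4):
    """Sketch of step 1.4: return the index of the first ORB keypoint lying
    within `radius` pixels of any contour point, or None if nothing matches."""
    kp_xy = np.array([kp.pt for kp in keypoints])     # (N, 2) keypoint coords
    if len(kp_xy) == 0:
        return None
    for cx, cy in contour_pts:
        d2 = (kp_xy[:, 0] - cx) ** 2 + (kp_xy[:, 1] - cy) ** 2
        nearest = int(np.argmin(d2))
        if d2[nearest] <= radius ** 2:
            return nearest                            # successful match
    return None                                       # no contour point matched
```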
Further, in step 1.5, the threshold is set according to image quality: the stronger the noise interference in the image, the higher the threshold; empirically it ranges from 3 to 5.
Further, the Gaussian weights in step 1.6 are calculated as follows:
wₖ = 1/(2πσ₁σ₂√(1 − ρ²)) · exp{−[(xₖ − μ₁)²/σ₁² − 2ρ(xₖ − μ₁)(yₖ − μ₂)/(σ₁σ₂) + (yₖ − μ₂)²/σ₂²] / (2(1 − ρ²))}
where μ₁, μ₂ are the means of the x and y coordinates of the feature points in the set, ρ is the correlation coefficient of x and y, and σ₁², σ₂² are the variances of the x and y coordinate values. Since computing the new integration point requires the weights of the feature points in the set to sum to 1, the weight value of each point is divided by the weight sum to obtain the final weight:
Wₖ = wₖ / Σⱼ wⱼ
where k denotes the sequence number of the feature point in the set.
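A NumPy sketch of the iteration in steps 1.5 and 1.6; only the exponent of the density matters in code, because the constant factor cancels when the weights are normalized, and the degenerate-cluster guard and iteration cap are our additions:

```python
import numpy as np

def integrate_cluster(points, max_iter=20):
    """Sketch of step 1.6: fuse a cluster of matched feature points into a
    single integration point with normalized bivariate-Gaussian weights."""
    pts = np.asarray(points, dtype=float)             # shape (k, 2)
    mu = pts.mean(axis=0)                             # origin: coordinate mean
    s1, s2 = pts[:, 0].std(), pts[:, 1].std()
    if s1 < 1e-9 or s2 < 1e-9:
        return mu                                     # degenerate cluster
    rho = float(np.clip(np.corrcoef(pts[:, 0], pts[:, 1])[0, 1], -0.99, 0.99))
    for _ in range(max_iter):
        dx, dy = pts[:, 0] - mu[0], pts[:, 1] - mu[1]
        q = (dx / s1) ** 2 - 2 * rho * dx * dy / (s1 * s2) + (dy / s2) ** 2
        w = np.exp(-q / (2 * (1 - rho ** 2)))
        w /= w.sum()                                  # weights sum to 1
        new_mu = np.array([w @ pts[:, 0], w @ pts[:, 1]])
        if np.abs(new_mu - mu).max() < 1.0:           # shift below one pixel
            return new_mu
        mu = new_mu
    return mu
```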
Further, in step 2.2, the relative relation between the corner points and the centroid is calculated as follows:
α₁ = ∠AOB, α₂ = ∠BOC, α₃ = ∠COD, α₄ = ∠DOA
l₁ = |OA|/|OB|, l₂ = |OB|/|OC|, l₃ = |OC|/|OD|, l₄ = |OD|/|OA|
where α₁, α₂, α₃, α₄ are the four included angles at the centroid, l₁, l₂, l₃, l₄ are the four length ratios, A, B, C, D are the four corner points of the figure, and O is the centroid of the figure.
Further, in step 2.3, the error threshold is set to 0.95 to 1.05; the narrower the allowed error range, the more accurate the matching result, but when image quality is poor, a match may fail to be found.
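Under the reading given above (angles between successive centroid-to-corner rays, ratios of successive corner-to-centroid distances), steps 2.2 and 2.3 admit the following sketch; the exact index construction is defined by fig. 2, so this ordering is an assumption:

```python
import numpy as np

def corner_centroid_indices(corners, centroid):
    """Sketch of step 2.2: eight indices (four angles, four length ratios)
    for corners A, B, C, D taken in order around centroid O."""
    O = np.asarray(centroid, dtype=float)
    rays = [np.asarray(c, dtype=float) - O for c in corners]   # OA..OD
    alphas, ratios = [], []
    for i in range(4):
        u, v = rays[i], rays[(i + 1) % 4]
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        cos_a = np.clip(np.dot(u, v) / (nu * nv), -1.0, 1.0)
        alphas.append(float(np.arccos(cos_a)))                 # alpha_i
        ratios.append(nu / nv)                                 # l_i
    return alphas + ratios

def indices_match(scene_idx, template_idx, lo=0.95, hi=1.05):
    """Sketch of step 2.3: success when every scene/template index ratio
    falls inside the allowed error band."""
    return all(lo <= s / t <= hi for s, t in zip(scene_idx, template_idx))
```

Because each index is an angle or a ratio of lengths, the comparison is inherently invariant to rotation and scale, which is what allows a single template to cover rotated and resized parts.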
Furthermore, the calculation of the template's mathematical model in steps 2.1 and 2.2 need only be performed once; once obtained, the template can be used to identify parts of the same type, and the template need not be redesigned when the image to be detected is replaced.
Compared with the prior art, the invention has the following remarkable advantages: (1) The template is simple to obtain and easy to use, and the method has rotation invariance and size invariance, which removes the huge computing cost existing template matching methods incur when identifying rotated targets and markedly improves recognition efficiency. (2) The invention strengthens feature point matching by combining the geometric relations among feature points and screens out isolated feature points falsely extracted due to noise and other interference, which improves the recognition success rate. (3) The invention considers not only the relations among the corner points of the figure but also the relations between the corner points and the centroid, so the algorithm can distinguish figures in which two corner points have the same relative positions but whose shapes differ, such as tile shapes and trapezoids.
The invention is described in further detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of multi-objective matching based on feature point and contour point combinations.
FIG. 2 is a schematic diagram of the corner-centroid geometric relations of four damaged elements with different appearances.
FIG. 3 shows the recognition results for four damaged elements with different appearances.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Referring to fig. 1, which shows the flowchart of the low-texture multi-target matching method combining feature point detection and contour detection according to the first embodiment of the present application, the method comprises the following steps:
S1, feature point integration, the specific process being as follows:
S1.1, applying Gaussian blur to the image to reduce noise interference, then detecting ORB feature points;
S1.2, extracting the contours in the picture and preliminarily locating each damaged element;
S1.3, searching the area near a contour point, the search area being the region within 4 pixels of the contour point; finding a feature point within the preset area counts as a successful match, and if no feature point is identified, searching continues from the next contour point;
S1.4, after a successfully matched feature point is found, finding the other feature points close to it in distance and collecting all found points into a set; if the number of other feature points near it is below 3, the point is treated as a falsely extracted feature point;
S1.5, calculating the mean of the feature point coordinates in the set, computing the Gaussian weight of each feature point with the mean coordinate as the origin, solving for a new integration point, then recomputing with the integration point as the origin, iterating 3 times. The Gaussian weights are calculated as follows:
wₖ = 1/(2πσ₁σ₂√(1 − ρ²)) · exp{−[(xₖ − μ₁)²/σ₁² − 2ρ(xₖ − μ₁)(yₖ − μ₂)/(σ₁σ₂) + (yₖ − μ₂)²/σ₂²] / (2(1 − ρ²))}
where μ₁, μ₂ are the means of the x and y coordinates of the feature points in the set, ρ is the correlation coefficient of x and y, and σ₁², σ₂² are the variances of the x and y coordinate values. Since computing the new integration point requires the total weight of the feature points in the set to equal 1, the weight value of each point is divided by the weight sum to obtain the final weight:
Wₖ = wₖ / Σⱼ wⱼ
where k denotes the sequence number of the feature point in the set.
S2, feature point-centroid mathematical template matching, the specific process being as follows:
S2.1, drawing a figure with the same appearance as the part to be matched as the template image;
S2.2, calculating the relative relation between the template's corner points and the figure centroid, and calculating the relative relation between each figure's integration points and centroid in the image to be identified; the relative relation between corner points and centroid is calculated as follows:
α₁ = ∠AOB, α₂ = ∠BOC, α₃ = ∠COD, α₄ = ∠DOA
l₁ = |OA|/|OB|, l₂ = |OB|/|OC|, l₃ = |OC|/|OD|, l₄ = |OD|/|OA|
where the meaning of each parameter is shown in fig. 2: α₁, α₂, α₃, α₄ are the four included angles, l₁, l₂, l₃, l₄ are the four length ratios, A, B, C and D are the four corner points of the figure, and O is the centroid of the figure.
S2.3, after the relative relations are computed, comparing the mathematical models of the parts in the captured image with that of the template; when the ratio of each of the eight indexes lies within the range 0.95 to 1.05, the match is judged successful.
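A hypothetical driver tying the sketches above together; find_template_corners and integrate_corners stand for the per-contour application of S1.3 to S1.5 and are placeholders, as are the file names:

```python
import cv2

# Hypothetical end-to-end use of the earlier sketches; every name not defined
# above (find_template_corners, integrate_corners, the file names) is a
# placeholder, not part of the patented method's API.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

_, t_contours, t_cents = detect_features_and_contours(template)
template_idx = corner_centroid_indices(find_template_corners(t_contours[0]),
                                       t_cents[0])          # computed once

kps, contours, cents = detect_features_and_contours(scene)
for contour, cent in zip(contours, cents):
    corners = integrate_corners(contour, kps)   # S1.3-S1.5 for this contour
    if indices_match(corner_centroid_indices(corners, cent), template_idx):
        print("matched damaged element at centroid", cent)
```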
Please refer to fig. 3, which shows the recognition results according to the first embodiment of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for identifying and positioning a damaged element by combining feature points and contour points, characterized by comprising the following two stages:
the first stage, the feature point integration stage, comprises the following steps:
step 1.1, performing Gaussian blur on the image to reduce noise interference, then performing ORB feature point detection;
step 1.2, extracting the contours in the image, wherein the number extracted corresponds to the contours used below and the extraction order is arbitrary; calculating the centroid of each contour and preliminarily locating each damaged element;
step 1.3, comparing the pixel coordinates of the edge points and compressing adjacent collinear points that share the same direction, so that only the endpoint coordinates along each direction are retained;
step 1.4, randomly selecting a contour point and searching the area near it: finding a feature point within the preset area counts as a successful match; if no feature point is identified, searching continues from the next contour point;
step 1.5, after a successfully matched feature point is found, searching for the other feature points close to it in distance and collecting all qualifying feature points into a set; if the number of other feature points near it is below a threshold, the point is treated as a falsely extracted feature point;
step 1.6, calculating the mean of the coordinates of the successfully matched feature points in the set, computing the Gaussian weight of each feature point with the mean coordinate as the origin, solving for a new integration point, then recomputing the Gaussian weights with the integration point as the origin, repeating until two successive results differ by less than one pixel;
the second stage, the feature point-centroid mathematical template matching stage, comprises the following steps:
step 2.1, drawing a figure with the same appearance as the part to be matched as the template image;
step 2.2, calculating the relative relation between the corner points of the template image and the figure centroid, and calculating the relative relation between each figure's integration points and the contour centroid in the image to be identified;
step 2.3, after the relative relations are computed, comparing the mathematical model of each part in the captured image with that of the template; when all eight index errors are below the set threshold, the match is judged successful.
2. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 1.3 the compressed contour points are calculated as follows:
t = |(y₂ − y₁) − (x₂ − x₁)(y₃ − y₁)/(x₃ − x₁)|
where x₁, x₂, x₃ are the pixel abscissas of three adjacent edge points p₁, p₂, p₃ and y₁, y₂, y₃ are their pixel ordinates; when t is less than the set threshold, p₂(x₂, y₂) is removed, and the process is cycled until all qualifying edge points are eliminated.
3. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 1.4 the search area size is determined by the image size and the size of the part within the image.
4. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 3, wherein the search area ranges from 4 to 6 pixels.
5. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 1.5 the threshold is set according to image quality: the stronger the noise interference in the image, the higher the threshold.
6. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 5, wherein the threshold ranges from 3 to 5.
7. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein the Gaussian weights in step 1.6 are calculated as follows:
wₖ = 1/(2πσ₁σ₂√(1 − ρ²)) · exp{−[(xₖ − μ₁)²/σ₁² − 2ρ(xₖ − μ₁)(yₖ − μ₂)/(σ₁σ₂) + (yₖ − μ₂)²/σ₂²] / (2(1 − ρ²))}
where μ₁, μ₂ are the means of the x and y coordinates of the feature points in the set, ρ is the correlation coefficient of x and y, and σ₁², σ₂² are the variances of the x and y coordinate values; since computing the new integration point requires the weights of the feature points in the set to sum to 1, the weight value of each point is divided by the weight sum to obtain the final weight:
Wₖ = wₖ / Σⱼ wⱼ
where k denotes the sequence number of the feature point in the set.
8. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 2.2 the relative relation between the corner points and the centroid is calculated as follows:
α₁ = ∠AOB, α₂ = ∠BOC, α₃ = ∠COD, α₄ = ∠DOA
l₁ = |OA|/|OB|, l₂ = |OB|/|OC|, l₃ = |OC|/|OD|, l₄ = |OD|/|OA|
where α₁, α₂, α₃, α₄ denote the four angles, l₁, l₂, l₃, l₄ denote the four length ratios, A, B, C, D denote the four corner points of the figure, and O denotes the centroid of the figure.
9. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein in step 2.3 the error threshold is 0.95 to 1.05.
10. The method for identifying and positioning a damaged element combining feature points and contour points according to claim 1, wherein the calculation of the template's mathematical model in steps 2.1 and 2.2 need only be performed once, the obtained template is used to identify parts of the same type, and the template need not be redesigned when the image to be detected is replaced.
CN202310257009.XA 2023-03-16 2023-03-16 Method for identifying and positioning damaged element by combining characteristic points and contour points Active CN116309837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310257009.XA CN116309837B (en) 2023-03-16 2023-03-16 Method for identifying and positioning damaged element by combining characteristic points and contour points


Publications (2)

Publication Number Publication Date
CN116309837A CN116309837A (en) 2023-06-23
CN116309837B (en) 2024-04-26

Family

ID=86781133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310257009.XA Active CN116309837B (en) 2023-03-16 2023-03-16 Method for identifying and positioning damaged element by combining characteristic points and contour points

Country Status (1)

Country Link
CN (1) CN116309837B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005310070A (en) * 2004-04-26 2005-11-04 Canon Inc Device and method of information processing
CN105740899A (en) * 2016-01-29 2016-07-06 长安大学 Machine vision image characteristic point detection and matching combination optimization method
CN110490913A (en) * 2019-07-22 2019-11-22 华中师范大学 Feature based on angle point and the marshalling of single line section describes operator and carries out image matching method
CN110569861A (en) * 2019-09-01 2019-12-13 中国电子科技集团公司第二十研究所 Image matching positioning method based on point feature and contour feature fusion
CN110569857A (en) * 2019-07-28 2019-12-13 景德镇陶瓷大学 image contour corner detection method based on centroid distance calculation
CN111401266A (en) * 2020-03-19 2020-07-10 杭州易现先进科技有限公司 Method, device, computer device and readable storage medium for positioning corner points of drawing book


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"面向机器人典型特征装配的视觉和力觉定位控制方法研究";刘文;《中国优秀硕士学位论文全文数据库信息科技辑》;20201231;全文 *
CCD图像的轮廓特征点提取算法;侯学智, 杨平, 赵云松;电子科技大学学报;20040825(第04期);全文 *
采用轮廓特征匹配的红外-可见光视频自动配准;孙兴龙;韩广良;郭立红;刘培勋;许廷发;;光学精密工程;20200513(第05期);全文 *

Also Published As

Publication number Publication date
CN116309837A (en) 2023-06-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant