CN113792728A - High-precision visual positioning method - Google Patents

High-precision visual positioning method

Info

Publication number: CN113792728A
Authority: CN (China)
Prior art keywords: image, detected, point set, target, template
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202110902163.9A
Other languages: Chinese (zh)
Inventors: 韦受宁, 刘美美, 蒋键, 郑美华
Current Assignee: Nanning University (listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Nanning University
Priority date: 2021-08-06 (the priority date is an assumption, not a legal conclusion)
Filing date: 2021-08-06
Publication date: 2021-12-14
Application filed by Nanning University

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines

Abstract

The invention discloses a high-precision visual positioning method comprising the following steps: extract the image contour and compute the image gradient; combine the two to obtain the target gradient-shape descriptor (OGSD) of the image; compare the feature point set of the image to be detected with the template image point by point to decide whether they match, and find the set of matching points between the image to be detected and the template image. The perspective-transformation relation between the matched point set and the template's target descriptor is then computed to obtain a transformation matrix, which yields the pose information of the target to be detected. The invention provides a high-precision visual recognition and positioning method aimed at the poor positioning quality of existing visual positioning algorithms, which struggle to meet recognition-rate and accuracy requirements. The OGSD descriptor is robust to noise and illumination change and is scale-invariant.

Description

High-precision visual positioning method
Technical Field
The invention relates to high-precision visual recognition and positioning technology.
Background
With the continuous progress of computer technology and the rapid development of intelligent industrial manufacturing, machine vision technology has been widely applied, giving machines the ability to see with "eyes" and to analyze and decide with a "brain". Machines now replace humans in the automated production, manufacture, control, and decision-making of products, greatly saving labor, improving production efficiency and product quality, and promoting the development of scientific and technological productivity.
Machine vision is an important research area in both engineering and science. It is a comprehensive discipline spanning optics, mechanics, computing, pattern recognition, image processing, artificial intelligence, signal processing, optoelectronic integration, and other fields.
The United States currently seeks to limit and impede the development of China's high-tech sector, particularly semiconductor technology. The semiconductor industry represents the essence of high-tech intelligent manufacturing and embodies a country's scientific and technological strength, and machine vision technology plays an important role in semiconductor manufacturing processes for ensuring product quality and production efficiency.
Visual recognition and positioning is a core basic function in machine vision and artificial intelligence. It is widely used in intelligent inspection equipment for identification and analysis, pose estimation, geometric measurement, appearance-defect inspection, foreign-object detection, and more. For example, a vision-guided robot identifies a product through visual imaging and locates its position to enable automatic control; size measurement and appearance inspection likewise first recognize the product and its position before measuring and inspecting. Visual recognition and positioning is therefore a fundamental problem in automated visual inspection. Industrial equipment with strict accuracy and tolerance requirements needs high-precision visual recognition and positioning, down to the pixel or even sub-pixel level.
Foreign commercial technology companies offer mature and stable high-precision visual recognition and positioning technology, such as MVTec HALCON (Germany), Cognex VisionPro (United States), and Keyence (Japan). Domestic self-developed solutions in this area are still relatively scarce, and the products of many automation-equipment companies are secondary developments and integrations built on foreign technology.
Common visual positioning algorithms include image gray-scale matching (squared-difference matching, standard-deviation matching, NCC correlation matching, etc.), feature-point matching (SIFT, SURF + RANSAC, etc.), histogram matching, shape-based matching, mark positioning, and so on. In practice these methods have various technical shortcomings: their application scenarios are limited and they place strict demands on the operating environment. Under common conditions such as illumination change, object deformation, and high-accuracy requirements, visual recognition and positioning built on these methods performs poorly, which greatly limits practical application.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: existing visual positioning algorithms position poorly and struggle to meet recognition-rate and accuracy requirements, so a high-precision visual recognition and positioning method is provided. The OGSD descriptor is robust to noise and illumination change and is scale-invariant.
The design idea of the invention is as follows:
1. Gradient-shape feature descriptors are extracted and feature point sets are detected using the image's gradient directions; features are extracted in the OGSD manner or in a manner designed or modified along similar lines.
2. The method comprises a template (master) creation (or learning/training) process and a recognition-and-positioning process, realizing coarse recognition and positioning based on OGSD features or an OGSD-like scheme.
3. The second positioning stage is fine positioning: after coarse positioning, the pose is corrected and solved according to the point-cloud registration principle to achieve high-precision positioning.
The technical scheme of the invention is as follows:
A high-precision visual positioning method comprises the following steps: extract the image contour and compute the image gradient; combine the two to obtain the target gradient-shape descriptor of the image; compare the image to be detected with the template image point by point for a match, and find the set of matching points between them.
After the match succeeds, compute the perspective-transformation relation between the matched point set's descriptor and the template's target descriptor to obtain a transformation matrix, yielding the pose information of the target to be detected.
The target gradient-shape descriptor is extracted as follows: first, the image is preprocessed; contour extraction and pyramid downsampling are then performed separately, and the image gradient is computed to form a feature response map; gradients are extracted and accumulated into a histogram, with the horizontal axis being the quantized gradient direction and the vertical axis the count per quantized direction; finally, the gradient-direction histogram, the gradient magnitude at the point, and the contour features are combined to form the feature point set.
The specific recognition method is as follows:
A. create the template's target gradient-shape descriptor: preprocess the template image, extract the contour, compute the image gradient, and combine them to obtain the template's descriptor;
B. create the feature point set of the image to be detected: preprocess the image to be detected, extract the contour, compute the image gradient, and combine them to obtain the to-be-matched feature point set;
C. coarse positioning: compare the template's descriptor with the feature point set of the image to be detected point by point and judge whether they match; on success, proceed to step D;
D. fine positioning: solve the perspective-transformation relation between the template's descriptor and the feature point set of the image to be detected to obtain a transformation matrix and the pose information of the target to be detected.
Image preprocessing comprises Gaussian smoothing and denoising of the two-dimensional image to obtain a color or grayscale image; a three-dimensional image is first reduced to two dimensions for processing.
Whether a match exists in step C is judged as follows: score the to-be-detected feature point set against the template's target gradient-shape descriptor point by point, compute the output confidence, and declare the match successful if the confidence meets the set threshold.
In step C, the feature points in the image to be detected are compared and scored point by point against the template's target gradient-shape descriptor, and the successfully matched points are taken as the feature point set of the matched target.
Step D: let the template's target gradient-shape descriptor point set be P = {p_i | p_i ∈ R³, i = 1, 2, …, n} and the feature point set of the image to be detected be Q = {q_j | q_j ∈ R³, j = 1, 2, …, m}. Let the rotation matrix be R and the translation vector be t, and let f(R, t) denote the error between the source point set P and the target point set Q under the transformation (R, t). The optimal solution Pos2(R, t) satisfying min f(R, t) is found:

$$f(R,t)=\frac{1}{n}\sum_{i=1}^{n}\bigl\|q_i-(R\,p_i+t)\bigr\|^2,\qquad \mathrm{Pos2}(R,t)=\operatorname*{arg\,min}_{R,\,t} f(R,t)$$
Finally, the recognition and positioning result is computed and output: category id, center position, rotation angle, and pose transformation matrix.
The beneficial effects of the invention are as follows:
the invention firstly provides an OGSD feature descriptor, and the key design idea of the OGSD feature descriptor is that image gradient (strength and direction quantization) and object outline shape attribute are simultaneously combined to be used as a descriptor of object (rigid-body) features, the OGSD feature descriptor has high-level abstract packaging and distinguishing characteristics, and the object is identified and positioned from the outline angle according with human eyes.
Compared with other methods, the proposed method achieves a better and more stable detection and recognition rate and accuracy, and suits more complex and variable application scenarios. The accuracy, stability, and efficiency of recognition and positioning are the core competitive advantages of machine-vision inspection equipment: more than 80% of its core functions perform their analyses on top of visual recognition and positioning. High-precision visual recognition and positioning therefore has broad application prospects, and the method proposed here offers good innovation, practicality, and technical application value.
Drawings
FIG. 1 is a logic flow diagram of the present invention.
Detailed Description
Example 1:
The method mainly comprises the following steps:
1. and performing target trapezoidal feature OGSD extraction on the input two-dimensional image img (x, y).
a. Apply preprocessing such as Gaussian smoothing and denoising to the two-dimensional image img(x, y) (a three-dimensional image must first be reduced to two dimensions) to obtain a color or grayscale image input.
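As a minimal Python/OpenCV sketch of this preprocessing step (the helper name preprocess, the kernel size, and the sigma are illustrative choices, not values fixed by the patent):

```python
import cv2

def preprocess(img):
    # Reduce a colour image to grey, then Gaussian-smooth to denoise.
    # Kernel size (5, 5) and sigma 1.0 are illustrative, not patent values.
    if img.ndim == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(img, (5, 5), 1.0)
```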
b. From the preprocessed image input, separately extract the contour shape and perform pyramid downsampling, then compute the image gradients to form the input image's feature response map responses.
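A sketch of step 1.b under the same assumptions; the Canny thresholds, the single pyramid level, and the helper name feature_response are illustrative, since the patent does not specify them:

```python
import cv2

def feature_response(input_img):
    # Contour/edge extraction (thresholds are illustrative).
    edges = cv2.Canny(input_img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # One pyramid-downsampling level for coarser-scale search.
    small = cv2.pyrDown(input_img)
    # Image gradients -> magnitude and direction (0-360 deg) response maps.
    gx = cv2.Sobel(input_img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(input_img, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    return contours, small, mag, ang
```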
c. If the template feature descriptor OGSD has not yet been created (learned), extract the target gradient-shape feature descriptor OGSD from the feature response map of step 1.b and assign it a class name id. Template (master) creation only needs to be done once; if several categories are to be recognized, one template is created per category.
Otherwise, extract the image feature point set OGSD_Dst from the feature response map of step 1.b.
The OGSD extraction is as follows: based on the gradient feature response map of step 1.b, a kernel extracts the gradients in a neighborhood of each point and accumulates them into a histogram, where the horizontal axis is the gradient direction (0 to 360 degrees) quantized into 8 bins and the vertical axis is the count per quantized direction. Finally, the gradient-direction histogram hist, the gradient magnitude mag at the point, and the contour-shape feature are combined to form the image's target gradient-shape descriptor OGSD.
2. Match and score the feature point set OGSD_Dst of the image to be detected against the template's target gradient-shape descriptor OGSD_Src, realizing the coarse positioning process and its solution (the pos1 process); a scoring sketch follows the list below.
A. Score OGSD_Dst against OGSD_Src point by point and compute the output confidence conf.
B. If the confidence conf is below the set threshold Score, there is no matching target and the procedure returns.
C. The successfully matched feature points form the feature point set OGSD_Dst2 of the recognized target.
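The patent does not define the point-level scoring function, so the sketch below assumes a normalized histogram intersection; the helper names and the 0.8 threshold are likewise illustrative:

```python
import numpy as np

def histogram_score(h_src, h_dst):
    # Normalised histogram intersection in [0, 1]; an assumed similarity.
    return np.minimum(h_src, h_dst).sum() / max(h_src.sum(), 1)

def coarse_match(src_feats, dst_feats, score_thresh=0.8):
    # src_feats / dst_feats: lists of (histogram, (x, y)) pairs.
    matched = []
    for h_s, xy_s in src_feats:
        scores = [histogram_score(h_s, h_d) for h_d, _ in dst_feats]
        best = int(np.argmax(scores))
        if scores[best] >= score_thresh:
            matched.append((xy_s, dst_feats[best][1]))
    conf = len(matched) / max(len(src_feats), 1)  # output confidence conf
    return matched, conf
```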
3. Refine the coarse result pos1 into an accurate pose, realizing the fine positioning process and its solution (the pos2 process).
The pos2 solution is the key step for achieving high-precision positioning.
According to the point-cloud registration principle, the details are as follows:
The template OGSD_Src and the detected feature point set OGSD_Dst2 are the two sets of feature points to be registered, where P = {p_i | p_i ∈ R³, i = 1, 2, …, n} is the source point set OGSD_Src and Q = {q_j | q_j ∈ R³, j = 1, 2, …, m} is the target point set OGSD_Dst2. Let the rotation matrix be R and the translation vector be t, with f(R, t) representing the error between the source point set P and the target point set Q under the transformation (R, t). Solving for the optimal transformation then reduces to finding the optimal solution Pos2(R, t) that satisfies min f(R, t):
$$f(R,t)=\frac{1}{n}\sum_{i=1}^{n}\bigl\|q_i-(R\,p_i+t)\bigr\|^2,\qquad \mathrm{Pos2}(R,t)=\operatorname*{arg\,min}_{R,\,t} f(R,t)$$
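One standard closed-form realization of this minimization is the SVD-based Kabsch/Umeyama step sketched below; the patent only names the point-cloud registration principle, so this concrete solver is an assumption:

```python
import numpy as np

def solve_pos2(P, Q):
    # Least-squares (R, t) between matched n x 3 point sets P and Q.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # reflection -> proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    err = np.mean(np.sum((Q - (P @ R.T + t)) ** 2, axis=1))  # f(R, t)
    return R, t, err
```

In an iterative, ICP-style registration loop, the correspondence step and this solve step would alternate until err stops decreasing.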
Finally, the recognition and positioning result is computed and output:
category id, center position center(x, y, z), rotation angle, pose transformation matrix trans_mat, and so on.
Example 2: the software workflow is described as follows.
Template learning process:
Preprocess the template image; extract the contour shape, the OGSD_Src features, and the corresponding target information (such as position and size); if learning succeeds, save the template.
Detection process:
First, load the learned template and initialize the runtime parameters; preprocess the input image and extract the contour shape and the feature point set OGSD_Dst. Second, score OGSD_Src against OGSD_Dst and evaluate the confidence of the output Score.
If Score is greater than Threshold, target recognition is judged successful; the successfully recognized points form OGSD_Dst2.
Solve the perspective-transformation relation between OGSD_Dst2 and OGSD_Src, using the template and the target's corresponding information, to obtain the perspective matrix mat.
Use mat to refine the pose information (position (x, y) and rotation angle) of the detected target and update the final pose information.
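A sketch of these two steps with OpenCV's homography estimation; the RANSAC option and the angle read-out from the upper-left 2x2 block of mat are assumptions beyond the patent text:

```python
import cv2
import numpy as np

def perspective_pose(matched_src, matched_dst, template_center):
    # Perspective (homography) relation between matched 2-D point sets.
    src = np.asarray(matched_src, np.float32).reshape(-1, 1, 2)
    dst = np.asarray(matched_dst, np.float32).reshape(-1, 1, 2)
    mat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Map the template centre into the detected image.
    c = np.asarray([[template_center]], np.float32)
    center = cv2.perspectiveTransform(c, mat)[0, 0]
    # Rotation angle, valid when the transform is close to rigid.
    angle = float(np.degrees(np.arctan2(mat[1, 0], mat[0, 0])))
    return mat, center, angle
```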
The further specific recognition process is as follows:
A. Create the template's target gradient-shape descriptor: preprocess the template image, extract the contour, compute the image gradient, and combine them to obtain the template's descriptor.
Image preprocessing comprises Gaussian smoothing and denoising of the two-dimensional image to obtain a color or grayscale image; a three-dimensional image is first reduced to two dimensions for processing.
From the preprocessed image input, separately extract the contour shape and perform pyramid downsampling, then compute the image gradients to form the input image's feature response map responses. Extract the target gradient-shape feature descriptor OGSD from the feature response map and assign it the class name id 0. Template (master) creation only needs to be done once; if several categories are to be recognized, one template is created per category. Within a fixed neighborhood of each point, a kernel extracts the gradients and accumulates a histogram, where the horizontal axis is the gradient direction (0 to 360 degrees) quantized into 8 bins and the vertical axis is the count per quantized direction. Finally, the gradient-direction histogram hist, the gradient magnitude mag at the point, and the contour features are combined, filtered, and screened to form the feature descriptor OGSD_Src.
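Tying the above together, a compact end-to-end template-learning sketch; the magnitude-based screening rule (0.6 x maximum) and all parameter values are assumptions, not patent specifics:

```python
import cv2
import numpy as np

def learn_template(img, class_id=0):
    # Preprocess, compute gradient maps, quantise directions into 8 bins,
    # and keep only strong-gradient points as the OGSD_Src descriptor.
    grey = cv2.GaussianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (5, 5), 1.0)
    gx = cv2.Sobel(grey, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(grey, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    ys, xs = np.nonzero(mag > 0.6 * mag.max())   # assumed screening rule
    dirs = (ang[ys, xs] / 45.0).astype(int) % 8  # 360 deg / 8 bins
    return {"id": class_id, "points": np.stack([xs, ys], axis=1),
            "dirs": dirs, "mags": mag[ys, xs]}
```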
B. Create the feature point set of the image to be detected: preprocess the image to be detected, extract the contour, compute the image gradient, and combine them to obtain the to-be-matched feature point set.
C. Coarse positioning: compare the template's target gradient-shape descriptor with the feature point set of the image to be detected point by point and judge whether they match; on success, proceed to step D. Match and score the gradient feature point set OGSD_Dst against the template feature descriptor OGSD_Src, realizing the coarse positioning process and solving pos1. Specifically, OGSD_Dst and OGSD_Src are matched and scored point by point, and the output confidence conf is computed. If the confidence conf is below the set threshold Score, there is no matching target and the procedure returns; if the confidence meets the threshold, the match succeeds. The successfully matched points form the feature point set of the recognized target.
Further, the contour data in the gradient-shape descriptor may be omitted during coarse positioning, using only the gradient at each point as the basis for comparison.
D. Accurate positioning: solve the perspective matrix between OGSD_Src and OGSD_Dst2. Let the template's target gradient-shape descriptor point set be P = {p_i | p_i ∈ R³, i = 1, 2, …, n} and the feature point set of the image to be detected be Q = {q_j | q_j ∈ R³, j = 1, 2, …, m}; let the rotation matrix be R and the translation vector be t, with f(R, t) representing the error between the source point set P and the target point set Q under the transformation (R, t); solve for the optimal solution Pos2(R, t) satisfying min f(R, t):

$$f(R,t)=\frac{1}{n}\sum_{i=1}^{n}\bigl\|q_i-(R\,p_i+t)\bigr\|^2,\qquad \mathrm{Pos2}(R,t)=\operatorname*{arg\,min}_{R,\,t} f(R,t)$$
Finally, the recognition and positioning result is computed and output: category id, center position, rotation angle, and pose perspective-transformation matrix.

Claims (8)

1. A high-precision visual positioning method, characterized by comprising the following steps: extracting the image contour and computing the image gradient; combining the two to obtain the target gradient-shape descriptor of the image; comparing the feature point set of the image to be detected with the template image point by point to decide whether they match; and finding the set of matching points between the image to be detected and the template image.
2. The high-precision visual positioning method according to claim 1, characterized in that: the perspective-transformation relation between the matched point set and the template's target gradient-shape descriptor is computed to obtain a transformation matrix, yielding the pose information of the target to be detected.
3. The high-precision visual positioning method according to claim 1 or 2, wherein the target gradient-shape descriptor is extracted as follows: first, the image is preprocessed; contour extraction and pyramid downsampling are then performed separately, and the image gradient is computed to form a feature response map; gradients are extracted and accumulated into a histogram, with the horizontal axis being the quantized gradient direction and the vertical axis the count per quantized direction; finally, the gradient-direction histogram, the gradient magnitude at the point, and the contour feature are combined to form the target gradient-shape descriptor.
4. The high-precision visual positioning method according to claim 3, wherein the specific recognition method is as follows:
A. creating the template's target gradient-shape descriptor: preprocessing the template image, extracting the contour, computing the image gradient, and combining them to obtain the template's descriptor;
B. creating the feature point set of the image to be detected: preprocessing the image to be detected, extracting the contour, computing the image gradient, and combining them to obtain the to-be-matched feature point set;
C. coarse positioning: comparing the template's descriptor with the feature point set of the image to be detected point by point and judging whether they match; on success, proceeding to step D;
D. fine positioning: solving the perspective-transformation relation between the template's descriptor and the target feature point set of the image to be detected to obtain a transformation matrix and the pose information of the target to be detected.
5. The high-precision visual positioning method of claim 3, wherein the image preprocessing comprises: performing Gaussian smoothing and denoising on the two-dimensional image to obtain a color or grayscale image, a three-dimensional image first being reduced to two dimensions for processing.
6. The high-precision visual positioning method according to claim 3, wherein matching in step C is judged as follows: the feature point set of the image to be detected is scored against the template image's feature descriptor point by point, the output confidence is computed, and the match succeeds if the confidence meets the set threshold.
7. The high-precision visual positioning method according to claim 6, wherein in step C the gradient of each point in the image to be detected is compared point by point with that of each point in the template, the successfully matched points are taken as the matching point set, and the gradients and contours of the matching point set form the target gradient-shape descriptor of the target to be detected.
8. The high-precision visual positioning method according to claim 3, wherein step D comprises: letting the template's target gradient-shape descriptor point set be P = {p_i | p_i ∈ R³, i = 1, 2, …, n} and the feature point set of the image to be detected be Q = {q_j | q_j ∈ R³, j = 1, 2, …, m}; letting the rotation matrix be R and the translation vector be t, with f(R, t) representing the error between the source point set P and the target point set Q under the transformation (R, t); and finding the optimal solution satisfying min f(R, t):

$$f(R,t)=\frac{1}{n}\sum_{i=1}^{n}\bigl\|q_i-(R\,p_i+t)\bigr\|^2$$
Finally, the recognition and positioning result is computed and output: category id, center position, rotation angle, and pose transformation matrix.
CN202110902163.9A 2021-08-06 2021-08-06 High-precision visual positioning method Pending CN113792728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110902163.9A 2021-08-06 2021-08-06 High-precision visual positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110902163.9A 2021-08-06 2021-08-06 High-precision visual positioning method

Publications (1)

Publication Number Publication Date
CN113792728A 2021-12-14

Family

ID=79181505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110902163.9A Pending CN113792728A (en) 2021-08-06 2021-08-06 High-precision visual positioning method

Country Status (1)

Country Link
CN (1) CN113792728A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915672B (en) * 2014-03-13 2018-08-03 北京大学 A kind of Rectangle building extracting method and system based on high-resolution remote sensing image
US9792895B2 (en) * 2014-12-16 2017-10-17 Abbyy Development Llc System and method for using prior frame data for OCR processing of frames in video sources
CN104680550A (en) * 2015-03-24 2015-06-03 江南大学 Method for detecting defect on surface of bearing by image feature points
CN107103323A (en) * 2017-03-09 2017-08-29 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of target identification method based on image outline feature
US20210065388A1 (en) * 2019-08-28 2021-03-04 Siemens Healthcare Gmbh Registering a two-dimensional image with a three-dimensional image
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
CN113192054A (en) * 2021-05-20 2021-07-30 清华大学天津高端装备研究院 Method and system for detecting and positioning complex parts based on 2-3D vision fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114851206A (en) * 2022-06-06 2022-08-05 天津中科智能识别有限公司 Method for grabbing stove based on visual guidance mechanical arm
CN114851206B (en) * 2022-06-06 2024-03-29 天津中科智能识别有限公司 Method for grabbing stove based on vision guiding mechanical arm

Similar Documents

Publication Publication Date Title
CN105740899B (en) A kind of detection of machine vision image characteristic point and match compound optimization method
CN106251353A (en) Weak texture workpiece and the recognition detection method and system of three-dimensional pose thereof
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN111062915A (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN105930858A (en) Fast high-precision geometric template matching method enabling rotation and scaling functions
Zheng et al. Research on detecting bearing-cover defects based on improved YOLOv3
CN110929713B (en) Steel seal character recognition method based on BP neural network
Lin et al. Robotic grasping with multi-view image acquisition and model-based pose estimation
CN108171102A (en) A kind of part method for quickly identifying of view-based access control model
CN114863464B (en) Second-order identification method for PID drawing picture information
CN111965197A (en) Defect classification method based on multi-feature fusion
CN110490915B (en) Point cloud registration method based on convolution-limited Boltzmann machine
CN116309847A (en) Stacked workpiece pose estimation method based on combination of two-dimensional image and three-dimensional point cloud
CN114820471A (en) Visual inspection method for surface defects of intelligent manufacturing microscopic structure
CN107895166B (en) Method for realizing target robust recognition based on feature descriptor by geometric hash method
CN113792728A (en) High-precision visual positioning method
Long et al. Cascaded approach to defect location and classification in microelectronic bonded joints: Improved level set and random forest
Zirngibl et al. Approach for the automated analysis of geometrical clinch joint characteristics
CN105139452B (en) A kind of Geologic Curve method for reconstructing based on image segmentation
CN116363177A (en) Three-dimensional point cloud registration method based on geometric transformation and Gaussian mixture model
CN107122783B (en) Method for quickly identifying assembly connector based on angular point detection
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method
Gao et al. Fast corner detection using approximate form of second-order Gaussian directional derivative
Li et al. Online workpieces recognition for the robotic spray-painting production line with a low-cost RGB-D camera
CN114926635A (en) Method for segmenting target in multi-focus image combined with deep learning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination