CN114851206B - Method for grabbing stove based on vision guiding mechanical arm


Info

Publication number
CN114851206B
CN114851206B
Authority
CN
China
Prior art keywords
stove
matching
template
grabbing
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210628058.5A
Other languages
Chinese (zh)
Other versions
CN114851206A (en)
Inventor
张堃博
李亚彬
孟令波
杨程午
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Zhongke Intelligent Identification Co ltd
Original Assignee
Tianjin Zhongke Intelligent Identification Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Zhongke Intelligent Identification Co ltd filed Critical Tianjin Zhongke Intelligent Identification Co ltd
Priority to CN202210628058.5A priority Critical patent/CN114851206B/en
Publication of CN114851206A publication Critical patent/CN114851206A/en
Application granted granted Critical
Publication of CN114851206B publication Critical patent/CN114851206B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1628: Programme controls characterised by the control loop
    • B25J9/163: Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1633: Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1679: Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for grabbing a stove based on a vision-guided mechanical arm, which comprises the following steps: making a matching template of the stove workpiece; preprocessing the image, including image correction and noise elimination, to provide a high-quality image for subsequent processing; calculating the coordinate parameters of the stove workpiece through a matching algorithm; and calculating the grabbing pose of the mechanical arm by computing a compensation amount from multiple measurements using the hand-eye calibration parameters. For reliable, fast and accurate grabbing, a camera and light source system is built, and the camera's high precision is used to obtain high-precision three-dimensional information of the object. To accurately calculate the pose of the stove, the two-dimensional image is filtered and a matching algorithm is then used to accurately match the scene against the model. The stove pose can be determined through a coordinate-system transformation so that the robot can grasp the stove correctly. The system can accurately grasp stoves among objects at various angles and distances, completing the mechanical arm grabbing task reliably, quickly and accurately.

Description

Method for grabbing stove based on vision guiding mechanical arm
Technical Field
The invention relates to the technical field of mechanical arm grabbing control, and in particular to a method for grabbing a stove based on a vision-guided mechanical arm.
Background
In industrial production processes such as stamping, repeated assembly, welding, and paint spraying, few workers are willing to take the jobs because of the harsh working environment, the monotonous nature of the labor, and the low technical content. Enterprises therefore hope to replace manual operation with mechanical arms, reducing the labor intensity of workers, improving product quality, and realizing automated production. Mechanical arms are widely used in industry today, for example in sorting, handling, and assembly. However, the traditional mechanical arm is programmed off-line: its motion path and working actions are planned in advance, it cannot adjust in real time, it can only perform simple actions, and it cannot meet actual production requirements.
Disclosure of Invention
The invention aims to overcome the technical defects in the prior art by providing a method for grabbing a stove based on a vision-guided mechanical arm. Through vision guidance, the mechanical arm becomes more intelligent in daily work, sensing changes in the working environment and making corresponding adjustments; this raises the automation level of enterprises and advances the automation and intelligent transformation of labor-intensive enterprises.
The technical scheme adopted for realizing the purpose of the invention is as follows:
A method for grabbing a stove based on a vision-guided mechanical arm, comprising:
S1, making different grabbing templates according to different stove designs;
S2, preprocessing the acquired image, determining whether a stove piece exists, and performing ROI (region of interest) positioning on the stove image containing the stove;
S3, inputting the grabbing template and the preprocessed stove image into a detection algorithm for matching processing, and outputting a matching result after the matching succeeds;
S4, sending the matching result to the mechanical arm, which grabs the stove according to the output matching result.
In step S2, the ROI region positioning performed on the stove image containing the stove comprises:
enhancing the edge features of the stove image through color-space conversion; removing by preprocessing the salt-and-pepper noise produced by shooting in a field environment; and performing ROI segmentation on the denoised stove image to locate the region of interest.
In step S3, the grabbing template and the preprocessed stove image are input into the detection algorithm for matching processing; the detection algorithm matches the preprocessed stove image against the grabbing template, and the coordinates of the matched stove piece are obtained through an affine transformation under a least-squares constraint.
The affine transformation adopts the following rotation matrix to obtain the coordinates of the matched stove piece:
x' = x·cos θ − y·sin θ
y' = x·sin θ + y·cos θ
wherein x, y are the coordinate values of the grabbing template, x', y' are the coordinates of the matched stove piece, and θ is the rotation angle between the matched stove piece and the grabbing template.
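For illustration, a minimal NumPy sketch of this rotation follows; the function name and the sample values are ours, not taken from the patent:

    import numpy as np

    def rotate_template_coords(x, y, theta):
        """Map grabbing-template coordinates (x, y) to matched
        stove-piece coordinates (x', y') by a 2D rotation of theta radians."""
        c, s = np.cos(theta), np.sin(theta)
        rotation = np.array([[c, -s],
                             [s,  c]])  # the rotation matrix above
        x_p, y_p = rotation @ np.array([x, y])
        return x_p, y_p

    # Example: a template point rotated by 30 degrees
    x_p, y_p = rotate_template_coords(120.0, 45.0, np.deg2rad(30.0))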
In step S4, the mechanical arm grabbing the stove according to the output matching result comprises:
the mechanical arm adjusts its pose according to the coordinates of the matched stove piece, the rotation angle between the stove piece and the grabbing template, and the height difference, and grabs the target stove piece after the pose adjustment.
According to the invention, a vision algorithm guides the mechanical arm, and the working state can be adjusted to the real-time working environment, for example to changes in the grabbing coordinates and angles, making the mechanical arm more intelligent in actual work. The grabbing precision of the mechanical arm is improved, changes in the working environment are handled better, and the method can be applied to industrial-automation fields such as assembly and feeding, improving the performance of existing production lines.
Drawings
FIG. 1 is a flow chart of the method for grabbing a stove with a vision-guided mechanical arm according to the present invention;
FIGS. 2a-2b compare the matching result obtained by the template matching method of the present invention with that of the conventional matching method;
FIGS. 3a-3b compare the edge matching results using spatial transformation with ordinary image matching results;
FIG. 4 is a schematic diagram of the positions of the four corner points;
FIGS. 5a-5b are schematic illustrations of the visual inspection system and the mechanical arm of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
According to the invention, the vision guiding system is combined with the mechanical arm grabbing system, so that the stove piece is located quickly, the grabbing precision is improved, and changes in the position and posture of the stove piece can be accommodated.
The invention can grasp stoves at various angles and distances with an accurate pose, completing the mechanical arm grabbing task reliably, quickly and accurately.
The invention discloses a method for grabbing a stove based on a vision-guided mechanical arm, which comprises the following steps:
making the required matching template for the stove workpiece;
preprocessing the image of the stove piece acquired through machine vision, including image correction and noise elimination, to provide a high-quality image for subsequent processing;
calculating the coordinate parameters of the stove workpiece through a template matching algorithm;
computing a compensation amount from multiple measurements using the hand-eye calibration parameters, and calculating the grabbing pose of the mechanical arm.
For reliable, fast and accurate grabbing, a camera and light source system is built, and the camera's high precision is used to obtain high-precision three-dimensional information of the object (the stove body) for determining the pose of the stove piece. A compensation amount is then computed from multiple measurements using the calibration parameters, so that the grabbing pose of the mechanical arm can be calculated from the computed pose of the stove piece, realizing fast positioning of the stove piece and improving the grabbing precision.
To accurately calculate the posture of the stove, the acquired two-dimensional image of the stove is filtered, and a matching algorithm is then used to accurately match the scene against the model.
The pose of the stove can be determined through a coordinate-system transformation so that the robot can grasp the stove accurately.
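A sketch of such a coordinate-system transformation is given below; the 4x4 hand-eye matrix is a placeholder we assume comes from a prior calibration step, and its identity value is illustrative only:

    import numpy as np

    # Placeholder hand-eye matrix mapping camera coordinates to
    # robot-base coordinates; in practice it comes from calibration.
    T_BASE_CAM = np.eye(4)

    def camera_to_robot(point_cam, t_base_cam=T_BASE_CAM):
        """Transform a 3D point from the camera frame to the robot-base
        frame with a 4x4 homogeneous transformation matrix."""
        p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous
        return (t_base_cam @ p)[:3]

    grasp_point_robot = camera_to_robot([0.12, -0.03, 0.55])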
Specifically, as shown in fig. 1, the method for grasping the stove based on the vision guiding mechanical arm in the embodiment of the invention is carried out by adopting the following steps:
step S1: manufacturing a grabbing template;
the step S1 specifically comprises the following steps:
the method is characterized in that the method is used for dividing the acquired images after dividing according to different types and different styles of stove grabbing pieces, so that the precision and the robustness of an algorithm in the matching process can be improved, the algorithm can have good matching effect and precision in different scenes and different product lines, as shown in fig. 2a, a civil traditional template matching method is shown, matching deviation exists at the edge part, and fig. 2b is the template matching method used by the method, and the matching method can be used for matching well at the edge part where the deviation easily occurs in the traditional matching, and the matching precision and the detection accuracy can be improved to a great extent.
Step S2: and denoising the acquired image, positioning the area, and recording coordinates and height data.
The step S2 specifically comprises the following steps:
First, a preliminary detection is performed on the acquired image to judge whether a stove piece to be grabbed is present; if not, the next image is taken.
If a stove piece to be grabbed is present, the captured image of the stove piece is preprocessed. To reduce the influence of the environment, the edge features of the stove image are enhanced through color-space conversion; the salt-and-pepper noise produced by shooting in a field environment is removed; ROI region segmentation is performed on the denoised image; and the region of interest is located, improving the success rate of the subsequent matching. The edge matching result on an ordinary image is shown in FIG. 3a, and the matching result on spatially converted edges in FIG. 3b; the matching precision and detection accuracy are improved to a large extent.
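A minimal OpenCV sketch of this preprocessing chain follows; the HSV conversion, the 5x5 median kernel, and the function name are our assumptions rather than the patent's exact choices:

    import cv2

    def preprocess_stove_image(img_bgr, roi):
        """Color-space conversion to strengthen edges, median filtering
        against salt-and-pepper noise, then an ROI crop (x, y, w, h)."""
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        value = hsv[:, :, 2]                 # value channel keeps edge contrast
        denoised = cv2.medianBlur(value, 5)  # median filter suits impulse noise
        x, y, w, h = roi
        return denoised[y:y + h, x:x + w]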
Step S3: inputting the segmented images into a matching function, and obtaining coordinates of the stove piece to be grabbed and a rotation angle or a change angle relative to the template through matching calculation;
the step S3 specifically comprises the following steps:
inputting the image preprocessed in the step S2 into a detection function/matching function, using the matching algorithm/detection algorithm of the invention in the corresponding detection function/matching function, searching and matching the ROI region of the image to be matched through a template, and calculating the edge gradient of the ROI region of the image to be matched;
wherein, the operator in the y and x directions for calculating the edge gradient is an operator A and an operator B, I is a denoised image, G Mag For the gradient value, T, of the ROI area of the map to be matched mag Gradient values for the template map, wherein the gradient values are calculated as follows:
and calculating the edge gradient of the search area through the operator to obtain the edge of the area to be matched, and matching with the edge of the template. Obtaining the effect of region matching according to a matching formula, and judging whether the matching result meets the requirements according to the Score value, wherein the matching calculation method is as follows:
T x =B*T
T y =A*T
G x =B*I
G y =A*I
wherein T is x 、T y Gradient values of the template image in the x and y directions respectively; g x 、G y Gradient values of the images to be matched in the x and y directions respectively; n is the number of all gradient values in the template map; score is the Score of the match in the region, and the higher the Score, the better the match effect.
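The sketch below is one reading of this score, using Sobel derivatives as the x- and y-direction operators (the operator choice and the small-magnitude cutoff are our assumptions); it assumes the template and the ROI crop have the same size:

    import cv2
    import numpy as np

    def edge_match_score(template_gray, roi_gray):
        """Mean cosine similarity between template and scene gradient
        vectors: sum(Tx*Gx + Ty*Gy) / (T_mag * G_Mag), averaged over n."""
        tx = cv2.Sobel(template_gray, cv2.CV_64F, 1, 0)  # x-direction operator
        ty = cv2.Sobel(template_gray, cv2.CV_64F, 0, 1)  # y-direction operator
        gx = cv2.Sobel(roi_gray, cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(roi_gray, cv2.CV_64F, 0, 1)
        t_mag = np.hypot(tx, ty)
        g_mag = np.hypot(gx, gy)
        valid = (t_mag > 1e-6) & (g_mag > 1e-6)          # skip flat regions
        cosines = (tx[valid] * gx[valid] + ty[valid] * gy[valid]) / (
            t_mag[valid] * g_mag[valid])
        return float(cosines.mean())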
The final score is compared with a set threshold, and the matching ends when the score exceeds the threshold. If the matching effect does not reach the set threshold, the size of the image to be matched is changed by up- and down-sampling across pyramid levels, the proportions of the edge of the image to be matched to the edge of the template in the length and width directions are computed adaptively, and the image is rescaled accordingly;
wherein W_r, H_r are the size changes in the length and width directions; σ_w, σ_h are the length-width conversion coefficients; and T_1…T_4 are the direction-vector cosine values of the four corner feature points shown in FIG. 4.
Scale adjustment is carried out through the pyramid levels and the matching value Score is recalculated; the matching ends when the Score exceeds the set threshold.
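A rough sketch of this retry loop follows; the uniform pyrDown/pyrUp rescaling and the threshold value stand in for the patent's adaptive length/width scaling, whose exact formula is not reproduced here:

    import cv2

    def pyramid_rematch(template, image, score_fn, threshold=0.85, levels=3):
        """Re-run the matching at several pyramid scales of the image to
        be matched, stopping as soon as the score exceeds the threshold."""
        candidates, down, up = [image], image, image
        for _ in range(levels):
            down = cv2.pyrDown(down)      # halve the resolution
            up = cv2.pyrUp(up)            # double the resolution
            candidates += [down, up]
        best = -1.0
        for cand in candidates:
            resized = cv2.resize(cand, (template.shape[1], template.shape[0]))
            score = score_fn(template, resized)
            best = max(best, score)
            if score > threshold:         # matching ends once threshold met
                break
        return best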
The two-dimensional coordinates (x, y) of the matched stove piece are recorded, and the rotation angle θ (or change angle θ) between the current stove piece and the grabbing template is calculated through affine transformation, giving the record (x, y | θ, H), where H is the difference between the height of the matched stove piece and the template height recorded in step S2 (that is, the distance between the machine-vision camera and the stove piece, and hence the distance the manipulator gripper needs to move). The coordinate values, the rotation or change angle, and the height difference are then sent to the mechanical arm.
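Packaging this record for the arm might look like the following sketch; the dataclass and field names are ours, mirroring the (x, y | θ, H) notation:

    from dataclasses import dataclass

    @dataclass
    class MatchRecord:
        x: float      # matched stove-piece x coordinate
        y: float      # matched stove-piece y coordinate
        theta: float  # rotation/change angle vs. the grabbing template
        h: float      # height difference: matched piece minus template

    def make_record(x, y, theta, piece_height, template_height):
        """Build the (x, y | theta, H) record sent to the mechanical arm."""
        return MatchRecord(x, y, theta, piece_height - template_height)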
Step S4: outputting the detection result of the step S3;
the step S4 specifically includes:
and outputting the pose ((x, y|theta, H)) of the stove piece to be grabbed detected by the S3 to the mechanical arm, and adjusting the pose of the mechanical arm according to the coordinate value, the rotation angle or the change angle and the height difference by the mechanical arm, and accurately grabbing the stove piece to be grabbed after the adjustment is completed.
In practical application, guiding a mechanical arm to grasp workpieces with machine vision is very challenging: the factory working environment presents complications, in particular irregular product parts and workpiece shake, which make the image acquisition and preprocessing of step S2 in FIG. 1 difficult and directly affect the accuracy of the detection result.
Compared with traditional template matching, the template matching method used in grabbing the stove with the vision-guided mechanical arm is more precise and effective; it improves the subsequent discrimination precision, reduces the misjudgment rate of the results, and effectively improves the accuracy, robustness and usability of the identification.
Traditional matching methods are mainly based on gray-level matching or feature matching. Gray-level matching places high demands on the on-site acquisition environment and its accuracy rarely reaches industrial requirements; feature matching gives the best results, but the algorithm is complex and slow, making real-time operation difficult. As shown in FIG. 2a, the conventional matching method produces a large matching error at the edge portion, which distorts the detection result and reduces the accuracy; FIG. 2b shows the matching method used in the invention, which matches well even at the edge portion, improving the subsequent detection accuracy and reducing losses in actual industrial production.
The invention can be used in automated stove production lines. At present, manufacturers mainly rely on forklifts or manual labor for handling and storage. Because product sizes differ, forklift calipers must be adjusted manually for each load, and forklifts generally lack anti-slip devices, so transport inside the factory carries some danger; stoves are also relatively heavy, and several people are needed to carry and unload them, which is likewise hazardous. The method for grabbing the stove based on the vision-guided mechanical arm automates the handling, reduces direct contact between workers and stove pieces, ensures worker safety, shortens workpiece transfer time, and improves working efficiency.
The invention can grab stove pieces automatically and rapidly even on production lines where their positioning is unstable, reducing the environmental requirements of the mechanical arm at work, lowering the usage cost for enterprises, improving practicality, effectively raising production efficiency, and reducing the probability of failed grasps; it can be applied to production lines grabbing various kinds of stove pieces.
The invention can be applied not only to automated stove production lines but also to transport on various other production lines, and therefore has a certain universality. Unlike a conventionally programmed mechanical arm, which demands high mechanical positioning accuracy of the products on the line and cannot work otherwise, the vision-guided mechanical arm of the invention tolerates a certain positional redundancy of the products without affecting its normal handling work.
While the fundamental and principal features and advantages of the invention have been shown and described, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments and may be embodied in other specific forms without departing from its spirit or essential characteristics.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although the present disclosure is described by way of embodiments, this division is made only for clarity; the embodiments may be combined as appropriate to form other implementations that will be apparent to those skilled in the art.

Claims (3)

1. The method for grabbing the stove based on the vision-guided mechanical arm is characterized by comprising the following steps:
S1, making different grabbing templates according to different stove designs;
S2, preprocessing the acquired image, determining whether a stove piece exists, and performing ROI (region of interest) positioning on the stove image containing the stove;
S3, inputting the grabbing template and the preprocessed stove image into a detection algorithm for matching processing, and outputting a matching result after the matching succeeds, comprising:
searching and matching the ROI region of the map to be matched through the template, calculating the edge gradient values of the ROI region of the map to be matched and the gradient values of the template map, and judging whether the matching result meets the requirement by comparing the matching score Score computed in the region with a set threshold:
Score = (1/n) · Σᵢ (T_x,i·G_x,i + T_y,i·G_y,i) / (T_mag,i·G_Mag,i)
wherein T_x, T_y are the gradient values of the template image in the x and y directions respectively; G_x, G_y are the gradient values of the image to be matched in the x and y directions respectively; n is the number of all gradient values in the template map; A and B are the operators for calculating the edge gradient in the y and x directions respectively; I is the denoised image; G_Mag is the gradient value of the ROI region of the map to be matched; and T_mag is the gradient value of the template map;
if the Score is lower than the set threshold, the size of the image to be matched is changed by up- and down-sampling of the pyramid level, the proportional relation between the edge of the image to be matched and the edge of the template is calculated, the proportions in the length and width directions are adaptively scaled, the image is rescaled accordingly, and the Score is recalculated; when the Score exceeds the set threshold, the matching ends;
wherein W_r, H_r are the size changes in the length and width directions; σ_w, σ_h are the length-width conversion coefficients; and T_1…T_4 are the direction-vector cosine values of the four corner feature points;
after the matching succeeds, the two-dimensional coordinates of the matched stove piece are recorded, the rotation angle or change angle between the current stove piece and the grabbing template is calculated through affine transformation, the difference between the height of the matched stove piece and the height of the template is recorded, and the coordinate values, the rotation or change angle, and the height difference are sent to the mechanical arm as the matching result;
S4, the mechanical arm adjusts its pose according to the coordinates of the matched stove piece, the rotation or change angle between the stove piece and the grabbing template, and the difference between the height of the matched stove piece and the height of the template, and grabs the target stove piece after the pose adjustment.
2. The method for grabbing a stove based on a vision-guided mechanical arm according to claim 1, wherein in step S2 the ROI region positioning performed on the stove image containing the stove comprises:
enhancing the edge features of the stove image through color-space conversion; removing by preprocessing the salt-and-pepper noise produced by shooting in a field environment; and performing ROI segmentation on the denoised stove image to locate the region of interest.
3. The method for grabbing a stove based on a vision-guided mechanical arm according to claim 1, wherein the affine transformation is performed with the following rotation matrix to obtain the coordinates of the matched stove piece:
x' = x·cos θ − y·sin θ
y' = x·sin θ + y·cos θ
wherein x, y are the grabbing template coordinate values; x', y' are the coordinates of the matched stove piece; and θ is the rotation angle between the matched stove piece and the grabbing template.
CN202210628058.5A 2022-06-06 2022-06-06 Method for grabbing stove based on vision guiding mechanical arm Active CN114851206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210628058.5A CN114851206B (en) 2022-06-06 2022-06-06 Method for grabbing stove based on vision guiding mechanical arm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210628058.5A CN114851206B (en) 2022-06-06 2022-06-06 Method for grabbing stove based on vision guiding mechanical arm

Publications (2)

Publication Number Publication Date
CN114851206A (en) 2022-08-05
CN114851206B true CN114851206B (en) 2024-03-29

Family

ID=82623858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210628058.5A Active CN114851206B (en) 2022-06-06 2022-06-06 Method for grabbing stove based on vision guiding mechanical arm

Country Status (1)

Country Link
CN (1) CN114851206B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116117800B (en) * 2022-12-19 2023-08-01 广东建石科技有限公司 Machine vision processing method for compensating height difference, electronic device and storage medium

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004118485A (en) * 2002-09-26 2004-04-15 Toshiba Corp Image tracking device and method
JP2010097355A (en) * 2008-10-15 2010-04-30 Nippon Telegr & Teleph Corp <Ntt> Image separation device, image separation method, and image separation program
CN104091325A (en) * 2014-06-16 2014-10-08 哈尔滨工业大学 Image ROI positioning method and device based on dimension invariant feature transformation during automobile instrument panel visual detection
JP2015019958A (en) * 2013-07-22 2015-02-02 株式会社日立製作所 Magnetic resonance imaging apparatus, image processing apparatus and image processing method
CN104751147A (en) * 2015-04-16 2015-07-01 成都汇智远景科技有限公司 Image recognition method
CN104819754A (en) * 2015-05-13 2015-08-05 山东大学 Medicine bottle liquid level detection method based on image processing
CN105046197A (en) * 2015-06-11 2015-11-11 西安电子科技大学 Multi-template pedestrian detection method based on cluster
CN105740899A (en) * 2016-01-29 2016-07-06 长安大学 Machine vision image characteristic point detection and matching combination optimization method
CN105894002A (en) * 2016-04-22 2016-08-24 浙江大学 Instrument reading identification method based on machine vision
CN107203990A (en) * 2017-04-02 2017-09-26 南京汇川图像视觉技术有限公司 A kind of labeling damage testing method based on template matches and image quality measure
CN109767445A (en) * 2019-02-01 2019-05-17 佛山市南海区广工大数控装备协同创新研究院 A kind of high-precision PCB defect intelligent detecting method
CN110315525A (en) * 2018-03-29 2019-10-11 天津工业大学 A kind of robot workpiece grabbing method of view-based access control model guidance
CN110472651A (en) * 2019-06-17 2019-11-19 青岛星科瑞升信息科技有限公司 A kind of object matching and localization method based on marginal point local feature value
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
CN111126174A (en) * 2019-12-04 2020-05-08 东莞理工学院 Visual detection method for robot to grab parts
CN111780781A (en) * 2020-06-23 2020-10-16 南京航空航天大学 Template matching vision and inertia combined odometer based on sliding window optimization
CN112509063A (en) * 2020-12-21 2021-03-16 中国矿业大学 Mechanical arm grabbing system and method based on edge feature matching
CN113792728A (en) * 2021-08-06 2021-12-14 南宁学院 High-precision visual positioning method
CN114331995A (en) * 2021-12-24 2022-04-12 无锡超通智能制造技术研究院有限公司 Multi-template matching real-time positioning method based on improved 2D-ICP

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3494692B2 (en) * 1994-03-07 2004-02-09 富士写真フイルム株式会社 Radiation image alignment method
CN109886124B (en) * 2019-01-23 2021-01-08 浙江大学 Non-texture metal part grabbing method based on wire harness description subimage matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Implementation of an industrial sorting system based on gray-level correlation matching; Cao Lan; Journal of Longyan University; 2022-03-25; Vol. 40, No. 2; pp. 39-45 *
Ma Huibin. Research on Auxiliary Diagnosis Algorithms for Breast Images Based on Machine Learning. Hunan Normal University Press, 2016, p. 148. *

Also Published As

Publication number Publication date
CN114851206A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN110014426B (en) Method for grabbing symmetrically-shaped workpieces at high precision by using low-precision depth camera
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN110509300B (en) Steel hoop processing and feeding control system and control method based on three-dimensional visual guidance
CN113146172B (en) Multi-vision-based detection and assembly system and method
CN112010024B (en) Automatic container grabbing method and system based on laser and vision fusion detection
Chen et al. The autonomous detection and guiding of start welding position for arc welding robot
Zhang et al. On-line path generation for robotic deburring of cast aluminum wheels
CN112529858A (en) Welding seam image processing method based on machine vision
CN110625644B (en) Workpiece grabbing method based on machine vision
CN114851206B (en) Method for grabbing stove based on vision guiding mechanical arm
CN113369761B (en) Method and system for positioning welding seam based on vision guiding robot
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
CN112561886A (en) Automatic workpiece sorting method and system based on machine vision
CN113894481A (en) Method and device for adjusting welding pose of complex space curve welding seam
CN112497219A (en) Columnar workpiece classification positioning method based on target detection and machine vision
CN113269723A (en) Unordered grasping system for three-dimensional visual positioning and mechanical arm cooperative work parts
CN113664826A (en) Robot grabbing method and system in unknown environment
CN210589323U (en) Steel hoop processing feeding control system based on three-dimensional visual guidance
CN115770988A (en) Intelligent welding robot teaching method based on point cloud environment understanding
CN114882108A (en) Method for estimating grabbing pose of automobile engine cover under two-dimensional image
CN114926531A (en) Binocular vision based method and system for autonomously positioning welding line of workpiece under large visual field
JPH02110788A (en) Method for recognizing shape of three-dimensional object
CN112233176A (en) Target posture measurement method based on calibration object
CN111452036B (en) Workpiece grabbing method based on line laser binocular stereoscopic vision
CN117260003B (en) Automatic arranging, steel stamping and coding method and system for automobile seat framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant