CN103486969A - Method and device for aligning machine vision - Google Patents


Info

Publication number
CN103486969A
CN103486969A
Authority
CN
China
Prior art keywords
camera
point
image
aimed
alignment
Prior art date
Legal status
Granted
Application number
CN201310464698.8A
Other languages
Chinese (zh)
Other versions
CN103486969B (en)
Inventor
熊金磊
张建华
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201310464698.8A priority Critical patent/CN103486969B/en
Publication of CN103486969A publication Critical patent/CN103486969A/en
Application granted granted Critical
Publication of CN103486969B publication Critical patent/CN103486969B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a machine vision alignment method. The method mainly comprises the following steps: fixing a first camera and a second camera; determining a coordinate transformation relationship; selecting at least one first feature point and at least two second feature points, and assigning a target physical position coordinate to the first feature point and to each second feature point; acquiring an image of a coarse-alignment mark point on the object to be aligned; calculating the coordinate deviation between the current physical position coordinate of the first feature point and its target physical position coordinate, and performing coarse alignment on the object accordingly; acquiring an image of a fine-alignment mark point on the object; calculating, from that image, the coordinate deviation between the current physical position coordinate of each second feature point and its target physical position coordinate; and performing fine alignment on the object accordingly. The invention further provides a machine vision alignment device. The method and device are low in cost and offer good real-time performance.

Description

Machine vision alignment method and device
Technical field
The present invention relates to the field of industrial equipment positioning, and in particular to a machine vision alignment method and a device using the method.
Background art
At present, machine vision alignment systems are widely used on automated production equipment, robots, medical inspection devices and military hardware. In industry, the precision of a vision alignment system can generally reach 2–5 μm, and the required alignment precision keeps rising; the next generation of vision alignment systems is expected to reach 1 μm or better. Micron-level alignment precision is difficult to achieve by manual means alone, and manual alignment certainly cannot guarantee production efficiency. On automated production equipment in particular, alignment work is highly repetitive; a manual-alignment approach would greatly increase a company's production and labour costs, and the production precision would be hard to guarantee. A machine vision alignment system, produced by equipping motion-control equipment with a vision system, can replace manual work very well.
As camera resolution and other hardware performance keep improving, and as image processing algorithms are continually refined, vision alignment systems can keep meeting the new requirements raised by industry. Past improvements to vision alignment systems have come either from hardware — adopting higher-resolution cameras or other high-performance components — or from better image processing algorithms.
However, improving the alignment method from the hardware side alone greatly increases the cost of the machine vision alignment system. Moreover, for some equipment, such as chip-packaging and flat-panel-packaging machines, the required alignment precision reaches about 1 micron or below, which the resolution of existing cameras cannot satisfy. Improving the alignment method from the software side alone, on the other hand, increases the complexity of the algorithm, and an overly complex algorithm degrades the real-time performance of the vision alignment.
Summary of the invention
Accordingly, it is necessary to provide a machine vision alignment method, and a device using it, that are low in cost and offer good real-time performance.
A machine vision alignment method comprises the following steps:
fixing a first camera and a second camera whose resolution is higher than that of the first camera;
calibrating the first camera and the second camera, and determining the coordinate transformation relationship between the camera coordinate system and the physical position coordinate system of the object to be aligned;
selecting a coarse-alignment mark point and a fine-alignment mark point on the object to be aligned, selecting at least one first feature point on the coarse-alignment mark point and at least two second feature points on the fine-alignment mark point, and assigning target physical position coordinates to the first feature point and the second feature points respectively;
acquiring an image of the coarse-alignment mark point of the object to be aligned with the first camera;
according to the image of the coarse-alignment mark point, computing the current physical position coordinate of the first feature point using region-splitting image preprocessing and an edge extraction algorithm together with the coordinate transformation relationship, and then computing the coordinate deviation between the current physical position coordinate of the first feature point and the target physical position coordinate of the first feature point;
performing coarse alignment on the object to be aligned according to the coordinate deviation between the current physical position coordinate of the first feature point and the target physical position coordinate of the first feature point;
acquiring an image of the fine-alignment mark point of the object to be aligned with the second camera;
according to the image of the fine-alignment mark point, computing the current physical position coordinate of the fine-alignment mark point using region-splitting image preprocessing and an edge extraction algorithm together with the coordinate transformation relationship, and then computing, from the current physical position coordinate of the fine-alignment mark point, the coordinate deviation between the current physical position coordinate of each second feature point and the target physical position coordinate of that second feature point;
performing fine alignment on the object to be aligned according to the coordinate deviation between the current physical position coordinates of the second feature points and their target physical position coordinates.
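The coarse-then-fine flow above can be sketched in Python. This is a minimal sketch only: `grab_image`, `locate_feature` and `move_stage` are hypothetical placeholders standing in for the camera and motion-control interfaces, which the patent does not specify.

```python
# Sketch of the two-stage alignment loop: coarse pass with the first
# (lower-resolution) camera, fine pass with the second camera.
# grab_image, locate_feature and move_stage are hypothetical placeholders.

def align(grab_image, locate_feature, move_stage,
          target_coarse, targets_fine):
    """Run one coarse and one fine alignment pass; returns the fine deviation."""
    # Coarse pass: one first feature point; move by (target - current).
    img = grab_image(camera=1)
    x1, y1 = locate_feature(img, "first")
    move_stage(target_coarse[0] - x1, target_coarse[1] - y1)

    # Fine pass: two second feature points give an averaged translation.
    img = grab_image(camera=2)
    (x2, y2) = locate_feature(img, "second_a")
    (x3, y3) = locate_feature(img, "second_b")
    (xc, yc), (xs, ys) = targets_fine
    dx = ((xc - x2) + (xs - x3)) / 2
    dy = ((yc - y2) + (ys - y3)) / 2
    move_stage(dx, dy)
    return dx, dy
```

A second feature-point pair also yields an angular deviation in the fine pass; that computation is given with the fine-alignment formulas later in the description.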
In one embodiment, the step of calibrating the first camera and the second camera and determining the coordinate transformation relationship between the camera coordinate system and the physical position coordinate system of the object to be aligned comprises:
placing a calibration board in the plane of the object to be aligned, and taking one calibration-board image with each of the first camera and the second camera;
extracting feature points from the calibration-board images, and determining the image coordinates of all extracted feature points;
determining the physical position coordinates of the feature points according to the physical position of the calibration board;
determining the coordinate transformation relationship between the camera coordinate system and the physical position coordinate system of the object to be aligned according to the image coordinates and the physical position coordinates of the feature points.
In one embodiment, the step of acquiring an image of the coarse-alignment mark point of the object to be aligned with the first camera comprises:
performing initial coarse alignment on the object to be aligned according to the coarse-alignment mark point provided on it.
In one embodiment, the step of acquiring an image of the fine-alignment mark point of the object to be aligned with the second camera comprises:
performing initial fine alignment on the object to be aligned according to the fine-alignment mark point provided on it.
In one embodiment, the fine-alignment mark point is an asymmetrical graphic.
In one embodiment, the coarse-alignment mark point and the fine-alignment mark point are provided on two opposite surfaces of the object to be aligned.
In one embodiment, the region-splitting image preprocessing uses a quadtree image splitting method.
In addition, it is also necessary to provide a machine vision alignment device comprising a first camera, a second camera whose resolution is higher than that of the first camera, a processor and a control system, wherein the first camera and the second camera are both connected to the processor, and the processor is connected to the control system.
The first camera acquires an image of the coarse-alignment mark point of the object to be aligned, and the second camera acquires an image of the fine-alignment mark point of the object to be aligned. The processor computes, from the image of the coarse-alignment mark point and a predetermined coordinate transformation relationship, the coordinate deviation between the current physical position coordinate of the first feature point and its target physical position coordinate; it likewise computes, from the image of the fine-alignment mark point and the predetermined coordinate transformation relationship, the coordinate deviation between the current physical position coordinates of the second feature points and their target physical position coordinates. The control system moves the platform holding the object to be aligned, realizing the coarse alignment and the fine alignment.
In one embodiment, the machine vision alignment device also comprises an initial alignment system, connected to the control system, for performing initial coarse alignment and initial fine alignment of the object to be aligned according to the coarse-alignment and fine-alignment mark points provided on it.
In one embodiment, the coarse-alignment mark point is a simple figure such as a cross or a circle, the fine-alignment mark point is an asymmetrical graphic, and the coarse-alignment and fine-alignment mark points are provided on two opposite surfaces of the object to be aligned.
The machine vision alignment method above acquires images with one lower-resolution and one higher-resolution camera. Compared with the two high-resolution cameras in current use, this reduces cost and also effectively reduces the volume of data for subsequent image processing; together, the region-splitting image preprocessing and edge extraction algorithm adopted for the coarse and fine alignment simplify the data to be processed and effectively improve real-time performance. In addition, based on this method, a machine vision alignment device is also provided, which has the advantages of low cost and good real-time performance.
Brief description of the drawings
Fig. 1 is a flowchart of the machine vision alignment method of one embodiment;
Fig. 2 is a flowchart, for one embodiment, of calibrating the first and second cameras and determining the coordinate transformation relationship between the camera coordinate system and the physical position coordinate system of the object to be aligned;
Fig. 3 is a schematic diagram, for one embodiment, of reducing the data volume with the quadtree image splitting method;
Fig. 4 is a structural diagram of the machine vision alignment device of one embodiment;
Fig. 5 is an operational flowchart, for one embodiment, of aligning the position of a sheet of material.
Detailed description of the embodiments
To solve the problems that current machine vision alignment methods and devices are costly and poor in real-time performance, this embodiment provides a machine vision alignment method and a device using it, described below with reference to specific embodiments.
Referring to Fig. 1, the machine vision alignment method provided by this embodiment comprises the following steps:
Step S110: fix a first camera and a second camera whose resolution is higher than that of the first camera. In this step, the positions of the two cameras are fixed so that they photograph the object to be aligned in a fixed camera coordinate system. The first camera and the second camera may be fixed on opposite sides of the object to be aligned.
Step S120: calibrate the first camera and the second camera, and determine the coordinate transformation relationship between the camera coordinate system and the physical position coordinate system of the object to be aligned. In this step, calibration images can be used to determine that relationship. Referring to Fig. 2, step S120 specifically comprises the following steps:
Step S122: place a calibration board in the plane of the object to be aligned, and take one calibration-board image with each of the first camera and the second camera.
Step S124: extract feature points from the calibration-board images, and determine the image coordinates of all extracted feature points. In this step, some points on the calibration-board image are taken as feature points, and their coordinate values in the image are determined.
Step S126: determine the physical position coordinates of the feature points according to the physical position of the calibration board in real space.
Step S128: determine the coordinate transformation relationship between the camera coordinate system and the physical position coordinate system of the object to be aligned according to the image coordinates and the physical position coordinates of the feature points. With the image coordinates from step S124 and the physical position coordinates from step S126, the coordinate transformation relationship can be established; it is used mainly in the subsequent coarse-alignment and fine-alignment steps.
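One common way to realize the calibration just described is to fit a transform between the feature points' image coordinates and physical position coordinates by least squares. The patent does not prescribe a particular camera model, so the affine form used in this sketch is an assumption:

```python
import numpy as np

def fit_affine(image_pts, physical_pts):
    """Least-squares affine map [x_phys, y_phys]^T = A @ [u, v, 1]^T from
    calibration-board feature-point correspondences.
    image_pts, physical_pts: (N, 2) point lists, N >= 3, not all collinear."""
    uv1 = np.hstack([np.asarray(image_pts, float),
                     np.ones((len(image_pts), 1))])          # (N, 3)
    A, *_ = np.linalg.lstsq(uv1, np.asarray(physical_pts, float), rcond=None)
    return A.T                                               # (2, 3) matrix

def to_physical(A, uv):
    """Map one image point (u, v) to physical position coordinates."""
    u, v = uv
    return tuple(A @ np.array([u, v, 1.0]))
```

With a lens showing noticeable distortion, a full camera calibration (intrinsics plus distortion coefficients) would replace the affine fit, but the interface — image coordinates in, physical coordinates out — stays the same.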
Step S130: select a coarse-alignment mark point and a fine-alignment mark point on the object to be aligned, select at least one first feature point on the coarse-alignment mark point and at least two second feature points on the fine-alignment mark point, and assign target physical position coordinates to the first and second feature points respectively. The first feature point, taken from the coarse-alignment mark point, serves the subsequent coarse-alignment step; the second feature points, taken from the fine-alignment mark point, serve the subsequent fine-alignment step. The coarse-alignment and fine-alignment mark points are chosen on the object to be aligned in advance.
Step S140: acquire an image of the coarse-alignment mark point of the object to be aligned with the first camera. The first camera has the lower pixel count and is used to acquire the image of the coarse-alignment mark point, which serves the subsequent coarse-alignment step.
Step S150: according to the image of the coarse-alignment mark point, compute the current physical position coordinate of the first feature point using region-splitting image preprocessing and an edge extraction algorithm together with the coordinate transformation relationship, and then compute the coordinate deviation between the current physical position coordinate of the first feature point and its target physical position coordinate. In this step, based on the coordinate transformation relationship established in step S120, this deviation can be computed from the image of the coarse-alignment mark point obtained in step S140. A simple way to compute it is as follows:
Let the target physical position coordinate of the first feature point be (x', y'), and let its current physical position obtained by image processing be (x1, y1); the coordinate deviation in the coarse-alignment step is then (x' − x1, y' − y1).
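As a minimal sketch, the coarse deviation is a plain componentwise difference:

```python
def coarse_deviation(target, current):
    """Coordinate deviation (x' - x1, y' - y1) used in the coarse-alignment step."""
    (xt, yt), (x1, y1) = target, current
    return (xt - x1, yt - y1)
```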
Step S160: perform coarse alignment on the object to be aligned according to the coordinate deviation between the current and target physical position coordinates of the first feature point. In this step, the control system moves the platform holding the object to be aligned by the deviation (x' − x1, y' − y1) obtained in step S150, completing the coarse-alignment step.
Step S170: acquire an image of the fine-alignment mark point of the object to be aligned with the second camera. The second camera has the higher pixel count and is used to acquire the image of the fine-alignment mark point, which serves the subsequent fine-alignment step.
Step S180: according to the image of the fine-alignment mark point, compute the current physical position coordinate of the fine-alignment mark point using region-splitting image preprocessing and an edge extraction algorithm together with the coordinate transformation relationship, and then compute from it the coordinate deviation between the current and target physical position coordinates of the second feature points. In this step, based on the coordinate transformation relationship established in step S120, this deviation can be computed from the image of the fine-alignment mark point obtained in step S170. A simple way to compute it is as follows:
Let the target physical position coordinates of the two second feature points be (xc, yc) and (xs, ys), and let their current physical positions obtained by image processing be (x2, y2) and (x3, y3). The coordinate deviation between the current and target physical position coordinates of the second feature points is then ((x2 − xc + x3 − xs)/2, (y2 − yc + y3 − ys)/2), and the angular deviation is
Δθ = arctan( ((y2 − yc) + (y3 − ys)) / d ),
where d is the distance between the target physical positions of the two second feature points.
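A minimal sketch of this fine-deviation computation, following the formulas as printed (note the sign convention here is current minus target, the opposite of the coarse step):

```python
import math

def fine_deviation(current_a, current_b, target_a, target_b):
    """Translation and angular deviation from two second feature points.
    Returns ((dx, dy), dtheta) per the fine-alignment formulas."""
    (x2, y2), (x3, y3) = current_a, current_b
    (xc, yc), (xs, ys) = target_a, target_b
    dx = (x2 - xc + x3 - xs) / 2
    dy = (y2 - yc + y3 - ys) / 2
    d = math.hypot(xs - xc, ys - yc)   # distance between the two target positions
    dtheta = math.atan(((y2 - yc) + (y3 - ys)) / d)
    return (dx, dy), dtheta
```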
Note that step S150 and step S180 both include a region-splitting image preprocessing step, whose purpose is to reduce the amount of data to be processed and thereby improve the real-time performance of the machine vision alignment method of this embodiment. Here, the region-splitting preprocessing uses a quadtree image splitting method. Referring to Fig. 3, the image of the coarse-alignment mark point acquired by the first camera, or the image of the fine-alignment mark point acquired by the second camera, is first split evenly into four regions, marked regions 1, 2, 3 and 4. A selected region (e.g. region 1) is then split again into a quadtree and marked (e.g. regions 11, 12, 13, 14); the number of split levels is determined by the actual conditions. Processing then starts from one bottom-level region: the image data of that small region is fed to the edge extraction routine first, the other regions are searched progressively, and finally the edge of the first or second feature point is extracted and its current physical position coordinate determined. In this way the program need not process all the data of the coarse-alignment or fine-alignment mark-point image, which greatly reduces the data volume to be processed and improves computational efficiency. When extracting the edge of a second feature point in this embodiment, the edge extraction algorithm may use a sub-pixel image processing method, such as interpolation or wavelet analysis, to improve image processing accuracy.
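The quadtree splitting can be sketched as a recursive subdivision that stops when a region is "simple". The patent only says the split depth depends on actual conditions, so the stopping criterion (`is_simple`, supplied by the caller — e.g. a near-uniform-intensity test) and the depth limit are assumptions of this sketch:

```python
def quadtree_regions(img, x0, y0, w, h, max_depth, is_simple):
    """Recursively split an image region into quadrants (four children per
    level, as in Fig. 3), returning the leaf regions as (x, y, w, h) tuples
    to be processed one at a time for edge extraction."""
    if max_depth == 0 or min(w, h) <= 1 or is_simple(img, x0, y0, w, h):
        return [(x0, y0, w, h)]
    hw, hh = w // 2, h // 2
    leaves = []
    for dx, dy, rw, rh in [(0, 0, hw, hh),  (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
        leaves += quadtree_regions(img, x0 + dx, y0 + dy, rw, rh,
                                   max_depth - 1, is_simple)
    return leaves
```

Edge extraction then runs region by region over the returned leaves, starting with the smallest, instead of over the whole image at once.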
Step S190: perform fine alignment on the object to be aligned according to the coordinate deviation between the current and target physical position coordinates of the second feature points. In this step, the control system moves the platform holding the object to be aligned by the deviation obtained in step S180, completing the fine-alignment step.
In addition, step S140 above includes a step of performing initial coarse alignment on the object to be aligned according to the coarse-alignment mark point provided on it. The coarse-alignment mark point may use a simple figure such as a circle or a cross. The object can first be coarsely pre-aligned by means of this mark point before step S150 is executed. Similarly, step S170 includes a step of performing initial fine alignment according to the fine-alignment mark point provided on the object. The fine-alignment mark point uses an asymmetrical figure, where "asymmetrical" here covers both the non-centrosymmetric and the non-rotationally-symmetric cases. Since at least two second feature points are selected on the fine-alignment mark point and the mark is asymmetrical, any rotation of the object to be aligned can be detected. The object can be finely pre-aligned by means of this mark point before step S180 is executed. In this embodiment, to further improve the validity and accuracy of the alignment, the coarse-alignment and fine-alignment mark points are provided on two opposite surfaces of the object to be aligned.
The machine vision alignment method above acquires images with one lower-resolution and one higher-resolution camera. Compared with the two high-resolution cameras in current use, this reduces cost and also effectively reduces the volume of data for subsequent image processing; together, the region-splitting image preprocessing and edge extraction algorithm adopted for the coarse and fine alignment simplify the data to be processed and effectively improve real-time performance.
In addition, this embodiment also provides a machine vision alignment device. Referring to Fig. 4, the machine vision alignment device 400 comprises a first camera 410, a second camera 420, a processor 430 and a control system 440. The first camera 410 and the second camera 420 are both connected to the processor 430, and the processor 430 is connected to the control system 440.
The first camera 410 has a lower pixel count than the second camera 420 and acquires the image of the coarse-alignment mark point of the object to be aligned in the method above; the second camera 420, with its higher pixel count, acquires the image of the fine-alignment mark point. The two cameras may be fixed on opposite sides of the object to be aligned.
The processor 430 computes, from the image of the coarse-alignment mark point and a predetermined coordinate transformation relationship, the coordinate deviation between the current and target physical position coordinates of the first feature point, and likewise computes, from the image of the fine-alignment mark point, the coordinate deviation between the current and target physical position coordinates of the second feature points. The predetermined coordinate transformation relationship is the one determined in step S120 of the method above.
The control system 440 moves the platform holding the object to be aligned according to the deviation values computed by the processor 430, realizing the coarse alignment and the fine alignment.
The machine vision alignment device 400 also comprises an initial alignment system 450, connected to the control system 440, which performs initial coarse alignment and initial fine alignment of the object according to the coarse-alignment and fine-alignment mark points provided on it.
Referring to Fig. 5, with a sheet of material as the concrete object to be aligned, the method proceeds as follows. The sheet first enters the platform holding the object to be aligned. The first camera 410 then photographs the sheet, acquiring the image of its coarse-alignment mark point; the processor 430 computes the coordinate deviation from that image, and the control system 440 moves the platform according to the deviation, realizing coarse alignment of the sheet position. Afterwards the second camera 420 photographs the sheet, acquiring the image of its fine-alignment mark point; the processor 430 computes the coordinate deviation from that image, and the control system 440 again moves the platform accordingly, realizing fine alignment of the sheet position.
The machine vision alignment device 400 aligns the object to be aligned according to the method above, and has the advantages of low cost and good real-time performance.
The embodiments above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, all of which fall within the protection scope of the present invention. The protection scope of this patent shall therefore be defined by the appended claims.

Claims (10)

1. a machine vision alignment methods, comprise the steps:
Fixedly first camera and resolution are higher than the second camera of described first camera;
Calibrating the first camera and the second camera, and determining a coordinate transformation relation between the camera coordinate system and the physical position coordinate system of the object to be aligned;
Selecting a coarse alignment mark point and a fine alignment mark point on the object to be aligned, selecting at least one first feature point on the coarse alignment mark point and at least two second feature points on the fine alignment mark point, and respectively specifying target physical position coordinates of the first feature point and the second feature points;
Using the first camera to acquire an image of the coarse alignment mark point of the object to be aligned;
According to the image of the coarse alignment mark point, computing the current physical position coordinate of the first feature point by region-image segmentation pre-processing and a boundary extraction algorithm on the basis of the coordinate transformation relation, and then computing the coordinate deviation between the current physical position coordinate of the first feature point and the target physical position coordinate of the first feature point;
Performing coarse alignment on the object to be aligned according to the coordinate deviation between the current physical position coordinate of the first feature point and the target physical position coordinate of the first feature point;
Using the second camera to acquire an image of the fine alignment mark point of the object to be aligned;
According to the image of the fine alignment mark point, computing the current physical position coordinate of the fine alignment mark point by region-image segmentation pre-processing and a boundary extraction algorithm on the basis of the coordinate transformation relation, and then computing, from the current physical position coordinate of the fine alignment mark point, the coordinate deviation between the current physical position coordinates of the second feature points and the target physical position coordinates of the second feature points;
Performing fine alignment on the object to be aligned according to the coordinate deviation between the current physical position coordinates of the second feature points and the target physical position coordinates of the second feature points.
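The two deviation computations in claim 1 can be sketched as follows. This is an illustrative reading, not the patent's own implementation: one first feature point yields a pure translation deviation, while two or more second feature points additionally yield a rotation deviation, here recovered with a rigid least-squares (Kabsch) fit. All function and variable names are ours, and the points are assumed to already be in physical coordinates.

```python
import numpy as np

def coarse_deviation(current_pt, target_pt):
    """Translation-only deviation from a single first feature point."""
    return np.asarray(target_pt, float) - np.asarray(current_pt, float)

def fine_deviation(current_pts, target_pts):
    """Translation and rotation deviation from two or more second feature
    points, via a rigid best-fit (Kabsch method). Returns (shift, angle)
    such that rotating each current point by `angle` about the origin and
    adding `shift` maps it onto its target."""
    cur = np.asarray(current_pts, float)
    tgt = np.asarray(target_pts, float)
    cur_c, tgt_c = cur - cur.mean(0), tgt - tgt.mean(0)
    # Best-fit rotation from the 2x2 cross-covariance of centred points.
    u, _, vt = np.linalg.svd(cur_c.T @ tgt_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    rot = vt.T @ np.diag([1.0, d]) @ u.T
    angle = np.arctan2(rot[1, 0], rot[0, 0])
    shift = tgt.mean(0) - rot @ cur.mean(0)
    return shift, angle
```

The control system would then command the platform to move by the recovered translation (and, for fine alignment, rotation) to drive the deviation to zero.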
2. The machine vision alignment method according to claim 1, characterized in that the step of calibrating the first camera and the second camera and determining the coordinate transformation relation between the camera coordinate system and the physical position coordinate system of the object to be aligned comprises:
Placing a calibration board in the plane where the object to be aligned is located, and capturing one calibration board image with each of the first camera and the second camera;
Extracting feature points from the calibration board images, and determining the image coordinates of all extracted feature points;
Determining the physical position coordinates of the feature points according to the physical position of the calibration board;
Determining the coordinate transformation relation between the camera coordinate system and the physical position coordinate system of the object to be aligned from the image coordinates and the physical position coordinates of the feature points.
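The final step of claim 2 amounts to fitting a mapping from image coordinates to physical coordinates using the calibration-board correspondences. The patent does not prescribe a particular camera model, so the least-squares affine fit below is an assumption (reasonable when the camera axis is roughly normal to the work plane); names are illustrative.

```python
import numpy as np

def fit_affine(image_pts, physical_pts):
    """Least-squares affine map from image to physical coordinates.

    Solves [x_img, y_img, 1] @ A = [x_phys, y_phys] for a 3x2 matrix A
    from the calibration-board feature-point correspondences.
    """
    img = np.asarray(image_pts, float)
    phys = np.asarray(physical_pts, float)
    design = np.hstack([img, np.ones((len(img), 1))])  # N x 3
    A, *_ = np.linalg.lstsq(design, phys, rcond=None)
    return A  # 3 x 2

def image_to_physical(A, pts):
    """Apply the fitted transformation to pixel coordinates."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A
```

With significant lens distortion or perspective, a homography or a full camera calibration would replace the affine model; the overall procedure (board image, feature extraction, known physical grid, fit) is the same.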
3. The machine vision alignment method according to claim 1, characterized in that the step of using the first camera to acquire the image of the coarse alignment mark point of the object to be aligned comprises:
Performing initial coarse alignment on the object to be aligned according to the coarse alignment mark point arranged on the object to be aligned.
4. The machine vision alignment method according to claim 3, characterized in that the step of using the second camera to acquire the image of the fine alignment mark point of the object to be aligned comprises:
Performing initial fine alignment on the object to be aligned according to the fine alignment mark point arranged on the object to be aligned.
5. The machine vision alignment method according to claim 4, characterized in that the fine alignment mark point adopts an asymmetrical graphic.
6. The machine vision alignment method according to claim 4, characterized in that the coarse alignment mark point and the fine alignment mark point are respectively arranged on two opposite surfaces of the object to be aligned.
7. The machine vision alignment method according to claim 5, characterized in that the region-image segmentation pre-processing adopts a quadtree image splitting method.
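Quadtree image splitting, named in claim 7, recursively divides an image region into four quadrants until each region is sufficiently homogeneous, which quickly isolates the small regions containing mark points. A minimal sketch; the intensity-range homogeneity criterion, the threshold values, and the names are our assumptions, not taken from the patent.

```python
import numpy as np

def quadtree_split(img, x0=0, y0=0, w=None, h=None, thresh=16, min_size=8):
    """Return leaf regions (x, y, w, h) of a quadtree decomposition.

    A region is split while its intensity range exceeds `thresh` and it
    is larger than `min_size` pixels on a side.
    """
    if w is None:
        h, w = img.shape
    block = img[y0:y0 + h, x0:x0 + w]
    if (max(w, h) <= min_size
            or int(block.max()) - int(block.min()) <= thresh):
        return [(x0, y0, w, h)]  # homogeneous (or minimal) leaf
    w2, h2 = w // 2, h // 2
    leaves = []
    # Recurse into the four quadrants (sizes handle odd w or h).
    for dx, dy, dw, dh in ((0, 0, w2, h2), (w2, 0, w - w2, h2),
                           (0, h2, w2, h - h2), (w2, h2, w - w2, h - h2)):
        leaves += quadtree_split(img, x0 + dx, y0 + dy, dw, dh,
                                 thresh, min_size)
    return leaves
```

Boundary extraction would then run only inside the inhomogeneous leaves, rather than over the full frame.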
8. A machine vision alignment device, characterized in that it comprises a first camera, a second camera with a resolution higher than that of the first camera, a processor and a control system, wherein the first camera and the second camera are both connected to the processor, and the processor is connected to the control system;
The first camera is configured to acquire an image of the coarse alignment mark point of the object to be aligned; the second camera is configured to acquire an image of the fine alignment mark point of the object to be aligned; the processor is configured to compute, from the image of the coarse alignment mark point and on the basis of a predetermined coordinate transformation relation, the coordinate deviation between the current physical position coordinate of the first feature point and the target physical position coordinate of the first feature point, and further to compute, from the image of the fine alignment mark point and on the basis of the predetermined coordinate transformation relation, the coordinate deviation between the current physical position coordinates of the second feature points and the target physical position coordinates of the second feature points; and the control system is configured to move the platform on which the object to be aligned is placed, thereby achieving the coarse alignment and the fine alignment.
9. The machine vision alignment device according to claim 8, characterized in that it further comprises an initial alignment system connected to the control system and configured to perform initial coarse alignment and initial fine alignment on the object to be aligned according to the coarse alignment mark point and the fine alignment mark point arranged on the object to be aligned, respectively.
10. The machine vision alignment device according to claim 9, characterized in that the coarse alignment mark point is cruciform or circular, the fine alignment mark point is an asymmetrical graphic, and the coarse alignment mark point and the fine alignment mark point are respectively arranged on two opposite surfaces of the object to be aligned.
CN201310464698.8A 2013-09-30 2013-09-30 Machine vision alignment methods and device thereof Active CN103486969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310464698.8A CN103486969B (en) 2013-09-30 2013-09-30 Machine vision alignment methods and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310464698.8A CN103486969B (en) 2013-09-30 2013-09-30 Machine vision alignment methods and device thereof

Publications (2)

Publication Number Publication Date
CN103486969A true CN103486969A (en) 2014-01-01
CN103486969B CN103486969B (en) 2016-02-24

Family

ID=49827386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310464698.8A Active CN103486969B (en) 2013-09-30 2013-09-30 Machine vision alignment methods and device thereof

Country Status (1)

Country Link
CN (1) CN103486969B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080007700A1 (en) * 2006-07-10 2008-01-10 Vanbaar Jeroen Method and system for aligning an array of rear-projectors
CN101127075A (en) * 2007-09-30 2008-02-20 西北工业大学 Multi-view angle three-dimensional human face scanning data automatic registration method
CN101216321A (en) * 2008-01-04 2008-07-09 南京航空航天大学 Rapid fine alignment method for SINS
US20120069018A1 (en) * 2010-09-22 2012-03-22 Casio Computer Co., Ltd. Ar process apparatus, ar process method and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Hongsheng et al.: "Research and Design of a High-Precision Machine Vision Alignment System", Optical Technique, vol. 30, no. 2, 31 March 2004 (2004-03-31) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104165598A (en) * 2014-08-05 2014-11-26 中国科学院长春光学精密机械与物理研究所 Automatic reflection light spot positioning method for large-caliber mirror interferometer vertical type detection
CN104165598B (en) * 2014-08-05 2017-01-25 中国科学院长春光学精密机械与物理研究所 Automatic reflection light spot positioning method for large-caliber mirror interferometer vertical type detection
CN105486288A (en) * 2015-11-30 2016-04-13 上海电机学院 Machine-vision-based vision servo alignment system
CN106248349A (en) * 2016-10-10 2016-12-21 长飞光纤光缆股份有限公司 A kind of test optical fiber automatic coupler
CN107014291A (en) * 2017-02-15 2017-08-04 南京航空航天大学 A kind of vision positioning method of the accurate transfer platform of material
CN107014291B (en) * 2017-02-15 2019-04-09 南京航空航天大学 A kind of vision positioning method of material precision transfer platform
CN107380671A (en) * 2017-08-09 2017-11-24 庄秀宝 A recyclable smart parcel box
CN107380671B (en) * 2017-08-09 2023-06-30 庄秀宝 A recyclable smart parcel box
CN108536151A (en) * 2018-05-06 2018-09-14 长春北方化工灌装设备股份有限公司 A kind of the closed loop execution system and visual guidance method of visual guidance
CN111061260A (en) * 2018-10-17 2020-04-24 长沙行深智能科技有限公司 Automatic container transfer control method based on automatic driving coarse alignment and two-dimensional image fine alignment
CN109981982B (en) * 2019-03-25 2021-02-19 联想(北京)有限公司 Control method, device and system
CN109981982A (en) * 2019-03-25 2019-07-05 联想(北京)有限公司 Control method, device and system
CN110341978A (en) * 2019-05-31 2019-10-18 北京航天飞腾装备技术有限责任公司 A kind of automatic bomb truck alignment methods and system
CN111498474A (en) * 2020-03-13 2020-08-07 广东九联科技股份有限公司 Control system and method for taking and placing module
WO2021226774A1 (en) * 2020-05-11 2021-11-18 深圳中科飞测科技有限公司 Method for acquiring conversion relationship, and detection device and detection method
CN113376181A (en) * 2021-06-09 2021-09-10 深圳中科飞测科技股份有限公司 Detection method and detection equipment
CN115628685A (en) * 2022-08-15 2023-01-20 魅杰光电科技(上海)有限公司 Method and equipment for measuring critical dimension and method for positioning critical dimension in grading manner
CN115628685B (en) * 2022-08-15 2024-03-26 魅杰光电科技(上海)有限公司 Method and equipment for measuring critical dimension and method for classifying and positioning critical dimension
CN117119115A (en) * 2023-10-23 2023-11-24 杭州百子尖科技股份有限公司 Calibration method and device based on machine vision, electronic equipment and storage medium
CN117119115B (en) * 2023-10-23 2024-02-06 杭州百子尖科技股份有限公司 Calibration method and device based on machine vision, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN103486969B (en) 2016-02-24

Similar Documents

Publication Publication Date Title
CN103486969B (en) Machine vision alignment methods and device thereof
CN101814185B (en) Line structured light vision sensor calibration method for micro-size measurement
CN104729406B (en) A kind of machine vision localization method of element pasted on surface
JP7212236B2 (en) Robot Visual Guidance Method and Apparatus by Integrating Overview Vision and Local Vision
WO2016055031A1 (en) Straight line detection and image processing method and relevant device
CN105865344A (en) Workpiece dimension measuring method and device based on machine vision
CN103993431B (en) A kind of vision correction methods for sewing and system
CN105217324A (en) A kind of novel de-stacking method and system
CN103676976B (en) The bearing calibration of three-dimensional working platform resetting error
CN107297399A (en) A kind of method of robot Automatic-searching bending position
CN106599760B (en) Method for calculating running area of inspection robot of transformer substation
CN102721364A (en) Positioning method and positioning device for workpiece
CN105307115A (en) Distributed vision positioning system and method based on action robot
CN105823504B (en) A kind of more zero point processing method of encoder
CN105783712B (en) A kind of method and device detecting tool marks
CN105307116A (en) Distributed vision positioning system and method based on mobile robot
CN102289810B (en) Quick rectangle detection method of images high resolution and high order of magnitude
CN104715487A (en) Method for sub-pixel edge detection based on pseudo Zernike moments
CN102915043B (en) Method for increasing location accuracy of cloud platform
CN103600353B (en) A kind of method that terminal-collecting machine detects group material edge
CN102930247B (en) A kind of cane stalk recognition method based on computer vision
CN102479004B (en) Touch point positioning method and device and touch screen
CN103644894A (en) Method for object identification and three-dimensional pose measurement of complex surface
CN103949054A (en) Infrared light gun positioning method and system
CN104290102A (en) Rapid positioning compensation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant