CN104915957B - Matching correction method for improving the 3D vision recognition accuracy of an industrial robot - Google Patents

Matching correction method for improving the 3D vision recognition accuracy of an industrial robot

Info

Publication number
CN104915957B
CN104915957B
Authority
CN
China
Prior art keywords
image
robot
edge
matching
characteristic point
Prior art date
Application number
CN201510291834.7A
Other languages
Chinese (zh)
Other versions
CN104915957A (en)
Inventor
何再兴
来建良
徐春伟
Original Assignee
何再兴
来建良
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 何再兴, 来建良
Priority to CN201510291834.7A priority Critical patent/CN104915957B/en
Publication of CN104915957A publication Critical patent/CN104915957A/en
Application granted granted Critical
Publication of CN104915957B publication Critical patent/CN104915957B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a matching correction method for improving the 3D vision recognition accuracy of an industrial robot, comprising the following steps: (a) building the vision-guided robot system; (b) detecting and adjusting the part position; (c) inspecting and replacing parts; (d) acquiring and converting images; (e) handling camera distortion; (f) camera calibration; (g) converting the image to grayscale; (h) image preprocessing; (i) edge feature extraction; (j) edge feature matching and correction. The invention matches feature points between images: a triangulation graph of the reference image is built from the topological relations of its feature points, the same feature points in the other image are connected according to that triangulation, and abnormal edges are determined. From the ratio of abnormal to total edges at each feature point, unmatched point pairs are detected and eliminated and mismatched point pairs are corrected, improving the accuracy of feature point matching and thus the precision of three-dimensional modeling.

Description

Matching correction method for improving the 3D vision recognition accuracy of an industrial robot
Technical field
The invention belongs to the technical field of robot three-dimensional vision systems, and in particular relates to a matching correction method for improving the 3D vision recognition accuracy of an industrial robot.
Background technology
Industrial robots are flexible automation equipment with strong adaptability to varied working conditions and production environments, and are particularly suited to flexible, multi-variety, variable-batch production. They play an important role in stabilizing and improving product quality, raising production efficiency, and improving working conditions. Because a robot is flexible automation equipment that can adapt rapidly to product changes, its application markedly shortens the changeover cycle for new products and thereby improves the market competitiveness of the product. In the present technological revolution, industrial production is moving steadily toward flexible automation; industrial robot technology has become an important component of the modern industrial revolution, and many countries have included robotics in their high-technology development plans. The continued development of industrial robot technology will have a far-reaching influence on social and economic development and productivity.
Industrial robots are a chief component of FMS machining cells; their flexibility makes them essential equipment in automated logistics systems, mainly for the handling, sorting, and storage of materials and parts. Millions of industrial robots of various types are currently used worldwide in machine building, part machining, assembly, and transport, but these applications all run after precise prior teaching and in a fixed working environment, which is why the robot can successfully grasp objects. In many cases, however, and particularly on production lines, the pose of a part is not fixed: the actual pose of the target object always deviates from the ideal target pose, and even a small deviation can cause the robot's task to fail. This inability to complete tasks under environmental change greatly limits the practical range of robot applications. With advances in modern manufacturing technology, the demand for more flexible production lines is increasingly urgent, and the requirements on the application range, flexibility, and autonomy of industrial robot systems keep rising. A precondition for a robot to possess any autonomy is some understanding of its environment, which drives the addition of sensors to improve the robot's perception. Vision, proximity sensing, touch, and force sensing all play a great role here, and robot vision is considered the robot's most important perceptual capability. Vision is the principal means by which humans observe and understand the world: statistics show that more than 80% of the information humans obtain from the outside world comes through vision, which demonstrates both the importance of the human visual function and the high utilization rate humans make of visual information. Endowing industrial robots with visual function has long been a goal; robot vision is the embodiment of simulated human vision in a robot.
High-end industrial robot vision today is developing toward 3D techniques: by recognizing three-dimensional information about a product, more complete spatial information can be obtained and recognition accuracy improved. The basis of this 3D vision technology is building a three-dimensional model of the product, and binocular vision is one of the common three-dimensional modeling methods. It mainly uses two calibrated cameras to capture two-dimensional images of the product from different angles; feature point matching then pairs the imaging points of the same product point in the two images, from which a three-dimensional point cloud and finally a three-dimensional model are built. The precision of this modeling method depends heavily on part position, standardization of part shape, and feature point matching precision, but existing feature point matching methods are not accurate enough and often produce mismatches, severely degrading the precision of subsequent three-dimensional modeling.
The content of the invention
The invention provides a matching correction method for improving the 3D vision recognition accuracy of an industrial robot. The part is accurately adjusted to the scanning position and its shape contour is strictly controlled; feature points are matched one by one between two images of the same part taken from different angles; unmatched point pairs are detected and eliminated and mismatched point pairs are corrected, improving the accuracy of feature point matching and the precision of three-dimensional modeling.
The present invention is adopted the following technical scheme that:
A matching correction method for improving the 3D vision recognition accuracy of an industrial robot, characterized by comprising the following steps:
(a) Building the vision-guided robot system:
A two-layer independent structure is used. The upper-layer structure acquires and processes external information, makes decisions, and sends control information to the lower-layer structure; the upper layer uses a robot controller. The lower-layer structure receives the control information from the upper layer and controls the operation of the motors, which drive the robot body. The end-effector is fixed to the robot's mechanical arm, a sensor is mounted on the end-effector in rectangular coordinates, and the part is placed on the worktable;
(b) Detecting and adjusting the part position:
A detection module detects and records the position of the part placed on the worktable. Based on the path of the scanning system, an analysis module compares the part position against a standard image and computes the positional deviation; a position correction system then adjusts the part's position on the worktable so that the part lies within the scanning plane of the scanning system;
(c) Inspecting and replacing parts:
Once the part is within the scanning plane, a contour recognition system inspects the parts on the worktable against the contour information of a qualified part and obtains similarity information for each part; unqualified parts on the worktable are replaced with new ones;
(d) Acquiring and converting images:
The scanning system advances at a constant velocity parallel to the scanning plane and sweeps across the part surface; its angle is then adjusted and it sweeps the surface again, so that a signal acquisition module obtains two images of the same object from different angles. A conversion module divides each acquired image into horizontal lines of adjacent pixels and represents the brightness of each pixel by an integer value, so that the image is represented as an integer matrix and converted into a digitized image;
(e) Handling camera distortion:
A visual distortion processing module computes the actual and ideal pixel coordinates under camera distortion, derives the lens distortion coefficient vector from the actual and ideal pixel coordinates, and identifies the distortion type from that vector, preparing for the subsequent camera calibration;
(f) Camera calibration:
A mapping is established between the coordinates of feature points in the external three-dimensional scene and the coordinates of their corresponding image points on the image plane, converting three-dimensional measurement coordinates in the camera coordinate system into three-dimensional coordinates in the coordinate system of the robot processing module;
(g) Converting the image to grayscale;
(h) Image preprocessing:
Image preprocessing removes noise signals from the image by image enhancement and image smoothing, improving image clarity and ensuring that the subsequent edge feature extraction proceeds smoothly;
(i) Edge feature extraction:
For the two preprocessed images of the same object from different angles, a SURF algorithm module divides a 20*20 pixel region into 4*4 subregions; each subregion is described by the sums of the horizontal and vertical Haar wavelet responses, taken relative to the principal direction, together with the sums of their absolute values, ultimately forming a 64-dimensional feature description vector. Feature points are extracted from the two images of the same object at different angles and then matched one by one by the feature point matching method;
(j) Edge feature matching and correction:
One image serves as the reference image and is triangulated according to its feature points to obtain its triangulation graph. Following the reference triangulation, the corresponding feature points in the other image are connected and abnormal edges are determined. For each feature point, the ratio of the number of abnormal edges connected to it to the total number of edges connected to it is computed; if this ratio exceeds a certain threshold, the corresponding feature point pair is judged mismatched and deleted, suppressing the abnormal edges. Detecting and eliminating mismatched point pairs on the basis of the detected abnormal edges completes edge feature matching and correction.
By adopting the above technical solution, the invention obtains the following beneficial effects:
The analysis module first compares the part position against a standard image and computes the positional deviation, and the position correction system accurately adjusts the part to the scanning position; the contour recognition system, taking the contour information of a qualified part as reference, strictly controls the shape contour of the part. Feature points are then matched one by one between the two images of the same object from different angles: the triangulation graph of the reference image is obtained from the topological relations of its feature points, the same feature points in the other image are connected according to that triangulation, and abnormal edges are determined. From the ratio of abnormal to total edges at each feature point, unmatched point pairs are detected and eliminated and mismatched point pairs are corrected, improving the accuracy of feature point matching and the precision of three-dimensional modeling. Before the industrial robot works, its 3D vision acquires the position information of the part and processes the relevant image information in real time, so the part is accurately recognized and located and its position and orientation determined; the robot can therefore grasp the part accurately, understand changes in its working environment through 3D vision, adjust its actions accordingly, and ensure that tasks are completed normally, improving the flexibility and automation of production.
Brief description of the drawings
The invention will be further described below in conjunction with the accompanying drawings:
Fig. 1 is a structural block diagram of the matching correction method for improving the 3D vision recognition accuracy of an industrial robot according to the invention;
Fig. 2 is a structural schematic diagram of the vision-guided robot system built in the invention;
Fig. 3 is the triangulation graph of the reference image in the invention;
Fig. 4 is the triangulation graph of a normal image in the invention;
Fig. 5 is the triangulation graph of an abnormal image in the invention.
In the figures: 1, robot controller; 2, motor; 3, robot body; 4, mechanical arm; 5, end-effector; 6, sensor; 7, part; 8, worktable; 9, position correction system; 10, feature point; 11, edge.
Embodiment
On an industrial production line the pose of a part is often not fixed: the actual pose of the target object always deviates from the ideal target pose, and even a small deviation can cause the robot's task to fail. This inability to complete tasks under environmental change greatly limits the practical range of robot applications. With advances in modern manufacturing technology, the demand for more flexible production lines is increasingly urgent, the requirements on the application range, flexibility, and autonomy of industrial robot systems keep rising, and a precondition for robot autonomy is some understanding of the environment, which drives the addition of sensors to improve the robot's perception. The invention therefore provides a matching correction method for improving the 3D vision recognition accuracy of an industrial robot, as shown in Fig. 1. It improves the accuracy of feature point matching and the precision of three-dimensional modeling; through 3D vision the industrial robot understands changes in its working environment and adjusts its actions accordingly, ensuring that tasks are completed normally and improving the flexibility and automation of production.
The method comprises the following steps:
(a) Building the vision-guided robot system:
As shown in Fig. 2, a two-layer independent structure is employed. The upper-layer structure mainly acquires and processes external information, makes decisions, and sends control information to the lower-layer structure; it uses a robot controller 1, and its platform is designed to industrial grade for stable operation. The lower-layer structure receives the control information from the upper layer and controls the motors 2 so that the robot body 3 produces the required motion. Motion control uses a two-wheel differential drive: the motors 2 drive the robot body 3, with independent motors driving the left and right wheels forward and backward so that the body moves in straight lines or curves, and a caster wheel provides auxiliary support. The end-effector 5 of the robot is fixed to the mechanical arm 4, and a sensor 6 is mounted on the end-effector 5 in rectangular coordinates; the part 7 is placed on the worktable 8 with the scanning system of sensor 6 facing the part 7. The visual information from sensor 6 is used in later stages to obtain three-dimensional information, and further sensors can be configured later as specific applications require, giving good expandability. In the vision-guided robot system, the structural performance parameters of the robot body 3 are given in Table 1:
Table 1
(b) Detecting and adjusting the part position:
A detection module detects and records the position of the part 7 placed on the worktable. Based on the path of the scanning system, an analysis module compares the part position against a standard image and computes the positional deviation; the position correction system 9 then shifts the part 7 slightly left or right to adjust its position on the worktable 8 so that the part 7 lies within the scanning plane of the scanning system. The scanning system can thus accurately capture the information of parts on the industrial production line, providing reliable image resources for the later matching correction of the 3D vision recognition.
(c) Inspecting and replacing parts:
Once the part 7 is within the scanning plane, the contour recognition system inspects the parts 7 on the worktable 8 against the contour information of a qualified part and obtains similarity information for each part 7. If the contour recognition system shows the contour in green, the part is qualified; if it shows the contour in red, the part is unqualified. Technicians should promptly replace unqualified parts on the worktable with new ones according to the contour recognition system. Two contour recognition systems are typically installed on an industrial production line, spaced 6-15 m apart, and each generally requires 2-3 technicians to operate.
(d) Acquiring and converting images:
A three-dimensional vision system demands both accuracy and speed, so the image acquisition module must provide accurate, timely, clear images for the image processing module to produce correct results in a short time; the performance of the image acquisition stage therefore directly affects the performance of the whole 3D vision system. In the vision-guided robot system, the scanning system advances at a constant velocity parallel to the scanning plane and sweeps across the part surface; its angle is then adjusted and it sweeps the surface again, so that the signal acquisition module obtains two images of the same object from different angles. In their natural form these images cannot be analyzed directly by the processing module, which handles numbers rather than pictures, so the two acquired images must be converted to digital form before processing. The image is first sampled, dividing it into pixel regions; the most common scheme is a square sampling grid, which splits the image into horizontal lines of adjacent pixels. The sampled image is still not a digital image, because the gray value of each pixel remains a continuous quantity and must be quantized: quantization represents the brightness of each pixel by an integer value, discretizing the pixel gray levels. After this conversion the image is represented as an integer matrix, and the image obtained by the acquisition module has been converted into a digitized image.
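The sampling-and-quantization conversion described above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation; the `brightness` function is a hypothetical placeholder for the scanner's continuous signal.

```python
def digitize(brightness, rows, cols, levels=256):
    """Sample brightness(u, v) on a rows x cols square grid and quantize
    each sample (assumed in [0.0, 1.0]) to an integer gray level, so the
    image becomes an integer matrix."""
    image = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Sampling: evaluate the continuous signal at the grid point.
            value = min(max(brightness(r / rows, c / cols), 0.0), 1.0)
            # Quantization: discretize the brightness to an integer level.
            row.append(min(int(value * levels), levels - 1))
        image.append(row)
    return image

# A horizontal gradient standing in for the scanner's continuous signal.
gradient = digitize(lambda u, v: v, rows=2, cols=4)
```

After this step every pixel is an integer, so the image can be stored and processed as a matrix by the later modules.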
(e) Handling camera distortion:
Owing to manufacturing, assembly, and process factors, camera lenses exhibit various distortions; to improve calibration precision, lens distortion must be taken into account during camera calibration. The visual distortion processing module computes the actual and ideal pixel coordinates under camera distortion, derives the lens distortion coefficient vector from the actual and ideal pixel coordinates, and identifies the distortion type from that vector, preparing for the subsequent camera calibration. The main distortion types are:
1. Radial distortion: radial curvature variation of the optical lens is the main cause of radial deformation. It moves image points radially; the farther a point is from the center, the larger its displacement. Positive radial deformation moves points away from the image center and increases the scale factor; negative radial deformation moves points toward the image center and decreases it.
2. Decentering distortion: because of assembly errors, the optical axes of the multiple lenses making up the optical system cannot be perfectly collinear, causing a deformation composed of both radial and tangential components.
3. Thin prism distortion: image deformation caused by lens manufacturing errors and errors in the imaging sensor array, likewise composed of radial and tangential components.
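A common way to combine the radial and tangential components above is a Brown-Conrady style distortion model; the sketch below is an assumed illustration (coefficients k1, k2 for the radial terms, p1, p2 for the tangential/decentering terms), not the patent's own formulation.

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential/decentering (p1, p2) distortion
    to ideal normalized image coordinates (x, y)."""
    r2 = x * x + y * y                       # squared distance from the center
    radial = 1.0 + k1 * r2 + k2 * r2 * r2    # radial scale factor
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Consistent with the description above, a positive k1 moves points away from the image center and a negative k1 moves them toward it, with the displacement growing with distance from the center.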
(f) Camera calibration:
Camera calibration must be performed before the digitized image is preprocessed. After fully accounting for lens distortion, the mapping between the coordinates of feature points in the external three-dimensional scene and the coordinates of their corresponding image points on the image plane is established, yielding the camera model. The data measured by the sensor in the vision-guided robot system are expressed in the sensor's own coordinate system, so measurement data must be converted into the robot coordinate system. After the camera acquires data, the pixel position of the center of the image spot is found; the pixel position n and the coordinates P_camera(x, y, z) of the target object in camera coordinates satisfy:
x = L·sin(θ); y = L·cos(θ); z = constant
where x, y, z are the three-dimensional coordinates of the measured part 7 in the camera coordinate system, L is the distance from the scanning mirror center to the intersection of the object and the laser, and θ is the deflection angle of the laser beam. The relation between the position of part 7 in the coordinate system of the robot processing module and its position in the camera coordinate system is:
P_robot(x', y', z') = A0 · A1 · P_camera(x, y, z)
where A0 is the transformation matrix of the end-effector 5 relative to the robot coordinate origin and A1 is the installation matrix of the sensor 6 on the end-effector 5. Camera calibration thus converts three-dimensional measurement coordinates in the camera coordinate system into three-dimensional coordinates in the coordinate system of the robot processing module, establishing the mapping between feature point coordinates in the external three-dimensional scene and their corresponding image point coordinates on the image plane.
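Under the formulas above, the chain P_robot = A0 · A1 · P_camera can be sketched with homogeneous 4x4 transforms. The matrix values below are illustrative assumptions, not calibration results from the patent.

```python
import math

def camera_point(L, theta, z_const):
    """Homogeneous camera-frame point from x = L*sin(theta),
    y = L*cos(theta), z = constant."""
    return [L * math.sin(theta), L * math.cos(theta), z_const, 1.0]

def mat_vec(A, v):
    """Multiply a 4x4 matrix by a homogeneous 4-vector."""
    return [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]

def to_robot(A0, A1, p_camera):
    """P_robot = A0 * A1 * P_camera."""
    return mat_vec(A0, mat_vec(A1, p_camera))

IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
# Assumed example: sensor mounted with no offset (A1 = identity) and an
# end-effector translated by (1, 2, 3) relative to the robot origin.
A0 = [[1.0, 0.0, 0.0, 1.0],
      [0.0, 1.0, 0.0, 2.0],
      [0.0, 0.0, 1.0, 3.0],
      [0.0, 0.0, 0.0, 1.0]]
p_robot = to_robot(A0, IDENTITY, camera_point(L=2.0, theta=0.0, z_const=5.0))
```

In practice A0 and A1 would also carry rotation blocks; the homogeneous form lets both rotations and translations compose by plain matrix multiplication.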
(g) Converting the image to grayscale:
The image obtained by the acquisition module is a color image. Color images are rich in color and carry a large amount of information, which slows image processing; given that the algorithm does not use color and the recognition system must run in real time, grayscale conversion of the color image is necessary. Grayscale conversion is the process of making the R, G, and B component values of each pixel equal: in a grayscale image the R, G, B values of every pixel are identical, whereas in a color image they differ, producing the various colors; a grayscale image has no such variety, differing only in brightness. The main grayscale conversion methods are the maximum-value method, the mean-value method, and the weighted-value method.
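The three conversion methods named above can be sketched per pixel as follows. The weighted-value coefficients shown are the commonly used luminance-style weights, stated here as an assumption rather than the patent's specific choice.

```python
def gray_max(r, g, b):
    """Maximum-value method: the brightest component becomes the gray level."""
    return max(r, g, b)

def gray_mean(r, g, b):
    """Mean-value method: arithmetic mean of the three components."""
    return (r + g + b) // 3

def gray_weighted(r, g, b):
    """Weighted-value method, using common luminance-style weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def to_grayscale(rgb_image, method=gray_weighted):
    """Apply one conversion method to every (R, G, B) pixel."""
    return [[method(*pixel) for pixel in row] for row in rgb_image]
```

After conversion every pixel carries a single gray value, so the R = G = B property described above holds trivially and later stages work on one channel.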
(h) Image preprocessing:
Images obtained by acquisition often contain considerable noise, so preprocessing is required to remove the large amount of noise signal and make the image clearer. Image preprocessing handles the noise in the image by image enhancement and image smoothing; its main purposes are to raise the signal-to-noise ratio of the image data and suppress the background, reducing the data processing load on subsequent stages so as to meet the real-time requirements of the image processing system. The quality of preprocessing directly affects the subsequent feature extraction and the edge feature matching and correction.
Image enhancement improves the visual effect of the image and also benefits further automatic processing. It may mean reducing noise in the image, or emphasizing or suppressing certain details. The main enhancement methods are point-processing methods, spatial-domain methods, and frequency-domain methods.
The main purpose of image smoothing is to eliminate noise. Noise is not limited to distortion and deformation visible to the human eye; some noise is only discovered during image processing. Image noise is often interleaved with the signal, and improper smoothing blurs the details of the image itself. Image smoothing mainly includes the neighborhood averaging method, Wiener filtering, and median filtering.
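Of the smoothing methods listed, median filtering is the simplest to sketch; the version below replicates edge pixels at the border and is an illustrative implementation, not the patent's.

```python
def median_filter(image, size=3):
    """Median smoothing with edge replication; image is a list of rows of
    gray values, size is the odd window side length."""
    h, w, k = len(image), len(image[0]), size // 2
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            # Gather the window, clamping indices at the image border.
            window = sorted(
                image[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                for dr in range(-k, k + 1) for dc in range(-k, k + 1)
            )
            row.append(window[len(window) // 2])
        out.append(row)
    return out
```

A single impulse-noise pixel is removed outright by the median, which is why median filtering suits salt-and-pepper acquisition noise while blurring genuine edges less than neighborhood averaging does.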
(i) Edge feature extraction:
For the two preprocessed images of the same object from different angles, the SURF algorithm module divides a 20*20 pixel region into 4*4 subregions; each subregion is described by the sums of the horizontal and vertical Haar wavelet responses, taken relative to the principal direction, together with the sums of their absolute values, ultimately forming a 64-dimensional feature description vector. Feature points 10 are extracted from the two images of the same object at different angles, and the feature points 10 of the two images are then matched one by one by the feature point matching method.
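The 64-dimensional layout described above (4x4 subregions, four sums each) can be sketched as follows. Plain pixel differences stand in for the Haar wavelet responses, and rotation to the principal direction is omitted, so this is only a structural illustration of the descriptor, not SURF itself.

```python
def surf_like_descriptor(patch):
    """Build a 64-dimensional vector from a 20x20 patch: 4x4 subregions of
    5x5 pixels, each contributing (sum dx, sum dy, sum |dx|, sum |dy|)."""
    assert len(patch) == 20 and all(len(row) == 20 for row in patch)
    descriptor = []
    for block_row in range(4):
        for block_col in range(4):
            sdx = sdy = sadx = sady = 0
            for r in range(block_row * 5, block_row * 5 + 5):
                for c in range(block_col * 5, block_col * 5 + 5):
                    # Finite differences stand in for Haar responses.
                    dx = patch[r][min(c + 1, 19)] - patch[r][c]
                    dy = patch[min(r + 1, 19)][c] - patch[r][c]
                    sdx += dx
                    sdy += dy
                    sadx += abs(dx)
                    sady += abs(dy)
            descriptor += [sdx, sdy, sadx, sady]
    return descriptor
```

Real SURF additionally normalizes the descriptor to unit length for illumination invariance; matching then compares descriptors by Euclidean distance.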
(j) Edge feature matching and correction:
As shown in Fig. 3, one image serves as the reference image and is triangulated according to its feature points 10 to obtain its triangulation graph. Following the reference triangulation, the feature points 10 in the other image are connected to form edges 11. Fig. 4 shows the triangulation graph of a normal image: no edges cross, so there are no abnormal edges. Fig. 5 shows the triangulation graph of an abnormal image: edges cross, so abnormal edges exist. Once the abnormal edges are determined, for each feature point the ratio of the number of abnormal edges connected to it to the total number of edges connected to it is computed; if this ratio exceeds a certain threshold, the corresponding feature point pair is judged mismatched and deleted, suppressing the abnormal edges. Detecting and eliminating mismatched point pairs on the basis of the detected abnormal edges completes edge feature matching and correction.
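The abnormal-edge test and the ratio-based rejection described above can be sketched as follows. The triangulation itself is assumed given (its edges transferred from the reference image), crossing detection uses a standard proper-intersection test, and the 0.5 threshold is an assumed value since the patent does not fix one.

```python
def _cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _edges_cross(p1, p2, p3, p4):
    """Proper intersection of open segments; shared endpoints do not count."""
    if len({p1, p2, p3, p4}) < 4:
        return False
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def reject_mismatches(points, edges, ratio_threshold=0.5):
    """points: feature point coordinates in the second image; edges: index
    pairs transferred from the reference triangulation. An edge is abnormal
    if it crosses another transferred edge; a point whose abnormal/total
    edge ratio exceeds the threshold is rejected as a mismatch."""
    abnormal = set()
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            if _edges_cross(points[e[0]], points[e[1]],
                            points[f[0]], points[f[1]]):
                abnormal.update((e, f))
    rejected = []
    for p in range(len(points)):
        incident = [e for e in edges if p in e]
        if incident:
            ratio = sum(e in abnormal for e in incident) / len(incident)
            if ratio > ratio_threshold:
                rejected.append(p)
    return rejected
```

Because a correctly matched image preserves the reference topology, crossings only appear where a feature point landed in the wrong place, so the per-point ratio isolates the offending matches rather than discarding every endpoint of every crossing.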
In summary, the analysis module first compares the part position against a standard image and computes the positional deviation, and the position correction system accurately adjusts the part to the scanning position; the contour recognition system, taking the contour information of a qualified part as reference, strictly controls the shape contour of the part. Feature points are then matched one by one between the two images of the same object from different angles: the triangulation graph of the reference image is obtained from the topological relations of its feature points, the same feature points in the other image are connected according to that triangulation, and abnormal edges are determined. From the ratio of abnormal to total edges at each feature point, unmatched point pairs are detected and eliminated and mismatched point pairs are corrected, improving the accuracy of feature point matching and the precision of three-dimensional modeling. Before the industrial robot works, its 3D vision acquires the position information of the part and processes the relevant image information in real time, so the part is accurately recognized and located and its position and orientation determined; the robot can therefore grasp the part accurately, understand changes in its working environment through 3D vision, adjust its actions accordingly, and ensure that tasks are completed normally, improving the flexibility and automation of production.
The above are only specific embodiments of the present invention, but the technical features of the present invention are not limited thereto. Any simple change, equivalent replacement, or modification made on the basis of the present invention to solve essentially the same technical problem and achieve essentially the same technical effect falls within the protection scope of the present invention.

Claims (1)

1. A matching correction method for improving the three-dimensional vision recognition accuracy of an industrial robot, characterized by comprising the following steps:
(a) establishing a vision-guided robot system:
An independent two-layer structure is adopted. The upper-layer system acquires and processes external information, makes decisions, and sends control information to the lower-layer system; the upper-layer system uses a robot controller. The lower-layer system receives the control information from the upper-layer system and controls the operation of the motors, and the motors drive the robot body. An end effector is fixed onto the mechanical arm of the robot, a sensor is fixed on the end effector in rectangular coordinates, and the part is placed on the worktable;
(b) detecting and adjusting the part position:
A detection module detects and records the location of the part placed on the worktable. According to the path of the scanning system, the analysis module compares against the standard image to determine the position and the magnitude of misalignment of the part, and the position correction system adjusts the position of the part on the worktable so that the part lies within the scanning plane of the scanning system;
(c) inspecting and replacing parts:
After the part is confirmed to lie within the scanning plane, the contour recognition system, taking the contour information of a qualified part as reference, inspects the parts on the worktable and obtains their identification information; defective parts on the worktable are replaced with new parts;
(d) acquiring and converting images:
The scanning system advances at a constant speed in a direction parallel to the scanning plane and sweeps across the part surface; the scanning angle is then adjusted and the part surface is scanned again, so that the signal acquisition module obtains two images of the same object from different angles. The conversion module divides each acquired image into horizontal lines composed of adjacent pixels and represents the brightness of each pixel with an integer value, so that the image is represented as an integer matrix and converted into a digitized image;
(e) handling camera distortion:
The visual distortion processing module computes the actual pixel coordinates and the ideal pixel coordinates under camera distortion, measures and calculates the lens distortion coefficient vector from them, and determines the distortion type from that vector, preparing for the subsequent camera calibration;
(f) camera calibration:
A mapping is established between the coordinates of feature points in the external three-dimensional scene and their corresponding pixel coordinates on the image plane, and the measured three-dimensional coordinates in the camera coordinate system are converted into three-dimensional coordinates in the coordinate system of the robot processing module;
(g) grayscale processing of the image;
(h) image preprocessing:
Image preprocessing uses image enhancement and image smoothing to suppress the noise signals in the image and improve its clarity, ensuring that the subsequent edge feature extraction proceeds smoothly;
(i) edge feature extraction:
From the two preprocessed images of the same object at different angles, the SURF algorithm module divides a 20*20 pixel region into 4*4 subregions; in each subregion, the sums of the Haar wavelet responses in the horizontal and vertical directions relative to the principal direction, together with the sums of the absolute values of those responses, form the descriptor entries, ultimately yielding a 64-dimensional feature description vector. Feature points are thus extracted from the two images of the same object at different angles, and the feature points of the two images are then matched one by one by the feature point matching method;
(j) edge feature matching and correction:
One of the images is taken as the reference image and triangulated according to its feature points to obtain its triangulated graph; according to this triangulated graph, the feature points of the other image are connected, and abnormal edges are determined. For each feature point, the ratio of the number of abnormal edges connected to it to the total number of edges connected to it is computed; if the ratio exceeds a certain threshold, the match is judged erroneous and the corresponding feature point pair is deleted. On the basis of the detected abnormal edges, mismatched point pairs are thereby detected and eliminated, completing edge feature matching and correction.
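The 64-dimensional layout named in step (i) is simply 4*4 subregions times four statistics each. The sketch below illustrates only that layout; the "Haar responses" here are crude finite differences rather than the box-filtered, Gaussian-weighted responses of real SURF, and the function name is hypothetical.

```python
import numpy as np

def descriptor_64(region):
    """Assemble a SURF-style 64-dim vector from a 20x20 sample region.

    region: 20x20 array of grayscale samples, assumed already rotated so its
    axes align with the keypoint's principal direction.
    """
    region = np.asarray(region, dtype=float)
    assert region.shape == (20, 20)
    # stand-ins for horizontal/vertical Haar wavelet responses
    dx = np.diff(region, axis=1, append=region[:, -1:])
    dy = np.diff(region, axis=0, append=region[-1:, :])
    desc = []
    for r in range(4):          # 4x4 grid of 5x5 subregions
        for c in range(4):
            rows = slice(5 * r, 5 * r + 5)
            cols = slice(5 * c, 5 * c + 5)
            desc += [dx[rows, cols].sum(),          # sum of responses
                     dy[rows, cols].sum(),
                     np.abs(dx[rows, cols]).sum(),  # sum of |responses|
                     np.abs(dy[rows, cols]).sum()]
    v = np.array(desc)                              # 4*4*4 = 64 entries
    n = np.linalg.norm(v)
    return v / n if n > 0 else v  # unit length for illumination invariance
```

Matching "one by one" as in step (i) is then typically nearest-neighbour search on these vectors, e.g. Euclidean distance with a ratio test.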
CN201510291834.7A 2015-05-29 2015-05-29 A kind of matching antidote for improving industrial robot 3D vision accuracy of identification CN104915957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510291834.7A CN104915957B (en) 2015-05-29 2015-05-29 A kind of matching antidote for improving industrial robot 3D vision accuracy of identification

Publications (2)

Publication Number Publication Date
CN104915957A CN104915957A (en) 2015-09-16
CN104915957B true CN104915957B (en) 2017-10-27

Family

ID=54084995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510291834.7A CN104915957B (en) 2015-05-29 2015-05-29 A kind of matching antidote for improving industrial robot 3D vision accuracy of identification

Country Status (1)

Country Link
CN (1) CN104915957B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106625676B (en) * 2016-12-30 2018-05-29 易思维(天津)科技有限公司 A kind of 3D vision of automobile intelligent manufacture automatic charging accurately guides localization method
CN107300100B (en) * 2017-05-22 2019-05-14 浙江大学 A kind of tandem type mechanical arm vision guide approach method of Online CA D model-driven
CN108007451B (en) * 2017-11-10 2020-08-11 未来机器人(深圳)有限公司 Method and device for detecting position and posture of cargo carrying device, computer equipment and storage medium
CN108364306B (en) * 2018-02-05 2020-11-24 北京建筑大学 Visual real-time detection method for high-speed periodic motion
US10970586B2 (en) 2018-06-28 2021-04-06 General Electric Company Systems and methods of 3D scene segmentation and matching for robotic operations
CN109188902A (en) * 2018-08-08 2019-01-11 重庆两江微链智能科技有限公司 A kind of robotics learning method, control method, device, storage medium and main control device
CN109590699B (en) * 2018-11-13 2020-09-18 北京遥测技术研究所 Part surface design method for improving automatic assembly visual identification
CN109523594A (en) * 2018-11-15 2019-03-26 华南智能机器人创新研究院 A kind of vision tray characteristic point coordinate location method and system
CN109949362A (en) * 2019-03-01 2019-06-28 广东九联科技股份有限公司 A kind of material visible detection method
CN110006361A (en) * 2019-03-12 2019-07-12 精诚工科汽车系统有限公司 Part automated detection method and system based on industrial robot
CN110230994B (en) * 2019-04-30 2020-08-14 浙江大学 Phase measurement error correction method of image point tracing object grating image phase shift method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251373A (en) * 2008-03-28 2008-08-27 北京工业大学 Method for rapidly detecting micro-structure three-dimensional dimension stereoscopic picture
CN104318554A (en) * 2014-10-15 2015-01-28 北京理工大学 Triangulation optical matching based medical image rigid registration method
CN104647377A (en) * 2014-12-30 2015-05-27 杭州新松机器人自动化有限公司 Cognition-system-based industrial robot and control method of industrial robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Improved keypoint descriptors based on Delaunay triangulation for image matching;Xinyue Zhao et al;《optik》;20140630;第125卷(第13期);第3121-3123页 *
A triangulation method for planar discrete data from a three-dimensional laser imaging system;Zhao Long et al;《Missiles and Space Vehicles》;20130430(No. 4);pp. 48-51 *
Research on a vision-based target recognition and localization method for industrial robots;Wang Hongtao;《China Master's Theses Full-text Database, Information Science and Technology》;20070815(No. 2);pp. I140-162 *

Similar Documents

Publication Publication Date Title
CN104848851B (en) Intelligent Mobile Robot and its method based on Fusion composition
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
JP6280525B2 (en) System and method for runtime determination of camera miscalibration
US10288418B2 (en) Information processing apparatus, information processing method, and storage medium
CN104057453B (en) The manufacture method of robot device and machined object
JP5558585B2 (en) Work picking device
US8098928B2 (en) Apparatus for picking up objects
CN104331896B (en) A kind of system calibrating method based on depth information
JP2016103230A (en) Image processor, image processing method and program
US20150003678A1 (en) Information processing apparatus, information processing method, and storage medium
US9233469B2 (en) Robotic system with 3D box location functionality
JP5469216B2 (en) A device for picking up bulk items by robot
CN107292927B (en) Binocular vision-based symmetric motion platform pose measurement method
JP2012123781A (en) Information processing device, information processing system and information processing method
CN105740899B (en) A kind of detection of machine vision image characteristic point and match compound optimization method
JP6415066B2 (en) Information processing apparatus, information processing method, position and orientation estimation apparatus, robot system
EP2466543A2 (en) Position and orientation measurement device and position and orientation measurement method
CN109270534B (en) Intelligent vehicle laser sensor and camera online calibration method
CN101419055B (en) Space target position and pose measuring device and method based on vision
CN103678754B (en) Information processor and information processing method
CN106226325B (en) A kind of seat surface defect detecting system and its method based on machine vision
CN104567679B (en) A kind of system of turbo blade vision-based detection
KR102056664B1 (en) Method for work using the sensor and system for performing thereof
Ricolfe-Viala et al. Correcting non-linear lens distortion in cameras without using a model
US20180350056A1 (en) Augmented reality application for manufacturing

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171027

Termination date: 20180529