CN109781014A - Technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision - Google Patents

Technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision

Info

Publication number
CN109781014A
CN109781014A (application CN201910180429.6A)
Authority
CN
China
Prior art keywords
image
target
camera
multi-camera
strip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910180429.6A
Other languages
Chinese (zh)
Other versions
CN109781014B (en)
Inventor
刘宏申
刘天鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Technology AHUT
Original Assignee
Anhui University of Technology AHUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Technology AHUT filed Critical Anhui University of Technology AHUT
Priority to CN201910180429.6A priority Critical patent/CN109781014B/en
Publication of CN109781014A publication Critical patent/CN109781014A/en
Application granted granted Critical
Publication of CN109781014B publication Critical patent/CN109781014B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision, belonging to the technical field of machine vision. An image of the measured object is captured with imaging equipment such as an industrial camera, and image processing is then used to obtain the target's length information in the imaging-equipment coordinate system. This approach requires the full view of the measured object to be in the captured image; when the measured object is extra long, its full view cannot be captured with one camera, several cameras must photograph it cooperatively, and only cooperative processing can produce a result. When multiple cameras jointly photograph an oversized target, the photographs of the individual cameras cannot simply be stitched together to restore the target, and the per-image processing results are not additive. For length measurement of extra-long strip-shaped targets, the present invention overcomes this non-additivity of results under multi-camera cooperation while greatly reducing the amount of data the system processes.

Description

Technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision
Technical field
The present invention relates to the technical field of machine vision, and in particular to a technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision.
Background technique
The production of a complex industrial product today generally requires several processing stages; at each stage, the work in progress that is not yet a finished product is called an intermediate piece. The dimensional information of the intermediate piece may need to be known promptly at these stages, which requires the dimensions of the intermediate pieces on each processing line to be measured on line without affecting production.
For on-line measurement, machine vision has advantages that other measurement methods lack: it is contact-free, can be kept far from the measured target, and eases later maintenance and equipment replacement, which makes it particularly suitable for targets that are at high temperature, moving, or unsuitable for close approach or contact. Machine-vision on-line measurement usually photographs the intermediate piece on line with an industrial camera, processes the image promptly to obtain its dimensional information, and finally feeds the dimensional information back to the production line. If the intermediate piece is strip-shaped and very long, a single industrial camera cannot capture its full view; one approach is to photograph it with several industrial cameras working together, seamlessly stitch the individual photographs into a panoramic image of the intermediate piece, and then process the images to obtain its length information. The present invention is a method of on-line measurement of strip-shaped target length information using machine vision under multi-camera cooperation.
A multi-camera cooperative length-measurement scheme must solve the following problems: first, the machine-vision problem of segmenting the measured object in each camera image and obtaining its length information; second, how the per-camera results are post-processed to obtain a total length that matches reality; third, the fact that increasing the number of cameras sharply increases the volume of data the length-measurement system must process and hence the system load, which raises the question of how to keep the on-line measurement timely. The content of the present invention focuses on solving the latter two problems faced in multi-camera cooperative measurement of strip-shaped target length. On this basis, the present invention devises a technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision to solve the above problems.
Summary of the invention
The purpose of the present invention is to provide a technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision, so as to solve the problems mentioned in the background art above: how the per-camera results are post-processed to obtain a total length that matches reality, and the sharp growth of the data volume, and hence the load, of the length-measurement system as the number of cameras increases.
To achieve the above object, the invention provides the following technical scheme: a technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision. Assume an image is f(x, y), the image-processing operation that extracts the attribute of interest is H, the transform from the measurement coordinate system to the world coordinate system is T, and the actual result of processing the image is R. The measurement process can then be described as
R = T(H(f(x, y)))    (1)
The core of this process is the definition and realization of H. In a vision-based measurement system, the result of H(f(x, y)) is essentially the distance between two measuring points, whose coordinates are given in the image coordinate system. Since distance is invariant under the coordinate transform, for a single-camera length-measurement system formula (1) becomes
R = H(f(x, y))    (2)
For a multi-camera cooperative length-measurement system, H is the same for every camera, while the measurement result R, the image under test f(x, y) and the coordinate transform matrix differ from camera to camera. The i-th camera can be expressed as
R_i = T_i(H(f_i(x, y)))    (3)
For the whole multi-camera cooperative system, a further operation is needed to combine the per-camera results into the final measurement result of the system. Assume this combining operation is T_c and the combined result is R; the measurement process under multi-camera cooperation can then be described as
R = T_c(T_i(H(f_i(x, y))))    (4)
Assume the whole measuring system consists of n cameras, the image-processing operation that extracts the attribute is H, and the system has n measurement coordinate systems, each with a corresponding transform T_i (i = 0, 1, 2, ..., n-1). The image photographed by camera i is f_i(x, y) (i = 0, 1, 2, ..., n-1), the actual result obtained by processing it is R_i (i = 0, 1, 2, ..., n-1), and the final measurement result is R; then
R_i = T_i(H(f_i(x, y)))    (5)
R = T_c(R_0, R_1, ..., R_{n-1})    (6)
Combining the two formulas above gives
R = T_c(T_0(H(f_0(x, y))), ..., T_i(H(f_i(x, y))), ..., T_{n-1}(H(f_{n-1}(x, y))))    (7)
According to the above problem model, the multi-camera cooperative measuring system needs to determine the two kinds of transform, T_c and T_i (i = 0, 1, 2, ..., n-1), and the image-processing operation H.
For the ideal case of multi-camera cooperation in which the per-camera results are additive, the T_c transform and the T_i transforms are set up as the following two transformation matrices.
In formula (8), S_xi and S_yi are the transform coefficients of coordinates x and y respectively, and X_0i, Y_0i are the world coordinates corresponding to the origin of the measurement coordinate system of image f_i(x, y), the measured coordinates themselves being produced by H; T_c is the n-dimensional identity matrix. Assuming the result of H(f_i(x, y)) is (X_i, Y_i), formula (7) becomes
The measured length is
As stated above, this scheme only holds under certain conditions, namely that the horizontal directions of all cameras are consistent and the pictures do not overlap, which is difficult to achieve in practice. Moreover, the scheme must process n images for every measurement: only after the lengths in all n pictures are obtained are the results added up, so however many cameras participate in the cooperation, that many images must be processed, and there is no advantage of a lighter load. For practical use, how to define the T_c transform, the T_i transforms and H needs to be studied, and when designing and establishing the conversion scheme, the practical operability of the scheme and the running speed of the algorithm serve as the two assessment criteria.
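As an illustrative, non-limiting sketch of the ideal additive scheme described above, the following Python fragment assumes that each T_i takes the affine form x_world = S_x·x - X_0 implied by formulas (19)/(20), that T_c acts by simply adding the per-camera segment lengths, and that the H results (the per-camera endpoint pixel coordinates) are already available; all class and function names are assumptions of this illustration rather than part of the disclosed method:

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class CameraCalib:
    """Per-camera transform T_i (affine form assumed): pixel -> world coordinates."""
    s_x: float
    s_y: float
    x_0: float
    y_0: float

    def to_world(self, p: Point) -> Point:
        x, y = p
        return self.s_x * x - self.x_0, self.s_y * y - self.y_0

def ideal_additive_length(endpoints: List[Tuple[Point, Point]],
                          calibs: List[CameraCalib]) -> float:
    """Ideal scheme of formula (7): endpoints[i] is the H(f_i) result, i.e. the two
    strip endpoints visible to camera i in pixel coordinates; each result is
    transformed by T_i and T_c adds the per-camera segment lengths."""
    total = 0.0
    for (p_a, p_b), calib in zip(endpoints, calibs):
        (xa, ya), (xb, yb) = calib.to_world(p_a), calib.to_world(p_b)  # T_i(H(f_i))
        total += math.hypot(xb - xa, yb - ya)                          # additive combination T_c
    return total
```

The drawback noted above is visible in this sketch: all n images must be segmented before the sum can be formed, which is what the redefined operation G below avoids.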
Preferably, an operation G on the image is redefined as the product of the original image operation N and a newly added operation D, i.e.
G = ND or DN    (13)
The original image operation N is the product of the image operation H and the coordinate transform T_i; that is, after the image is processed, the endpoint coordinates of the strip-shaped target are obtained and then transformed from the image coordinate system to the world coordinate system, i.e.
N = T_i H    (14)
The newly added operation D is defined as
c_i = D(f_i(x, y))    (15)
c_i is a constant whose value is assigned as defined below. Formula (7) can then be further written as
R = T_c(T_0((c_0)H(f_0(x, y))), ..., T_i((c_i)H(f_i(x, y))), ..., T_{n-1}((c_{n-1})H(f_{n-1}(x, y))))    (16)
Since c_i is a constant with respect to T_i, it can be moved in front of the T_i transform, i.e.
R = T_c(c_0 T_0(H(f_0(x, y))), ..., c_i T_i(H(f_i(x, y))), ..., c_{n-1} T_{n-1}(H(f_{n-1}(x, y))))    (17)
For any one strip-shaped target, only two cameras in the industrial-camera array have c_i different from 0; for all the others c_i is 0. If the two ends of the strip-shaped target lie in the i-th and j-th cameras respectively, the above formula becomes
R = T_c(-(c_i)T_i(H(f_i(x, y))), (c_j)T_j(H(f_j(x, y))))    (18)
Under the new coordinate transform, formulas (10) and (11) become
R_x = (S_xj X_j - X_0j) - (S_xi X_i - X_0i)    (19)
R_y = (S_yj Y_j - Y_0j) - (S_yi Y_i - Y_0i)    (20)
After the image operation G is redefined, a D operation is added for every image. This appears to increase the processing complexity of the system but in fact greatly reduces the amount of data processed: although every camera image undergoes a D operation, the T_i and H operations do not need to be carried out on every image, and each measurement only has to process two images. The D operation on each image only needs to check the two edges of the image to reach a decision and does not need to examine the whole image, which greatly speeds up the algorithm.
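As a sketch of this combination step (reusing the illustrative CameraCalib class and Point type above; the inputs are assumed to be the H results of the two images flagged by D), the length follows formulas (18) to (20):

```python
def combine_two_cameras(c_i: int, p_i: Point, calib_i: CameraCalib,
                        c_j: int, p_j: Point, calib_j: CameraCalib) -> Tuple[float, float, float]:
    """Formulas (18)-(20): p_i, p_j are the endpoint pixel coordinates H(f_i), H(f_j)
    found in the two images whose D constants c_i, c_j are non-zero."""
    assert c_i != 0 and c_j != 0 and c_i == -c_j, "one left-end and one right-end image expected"
    x_i, y_i = calib_i.to_world(p_i)       # T_i(H(f_i))
    x_j, y_j = calib_j.to_world(p_j)       # T_j(H(f_j))
    r_x = x_j - x_i                        # (19): (S_xj*X_j - X_0j) - (S_xi*X_i - X_0i)
    r_y = y_j - y_i                        # (20)
    return r_x, r_y, math.hypot(r_x, r_y)  # measured length

# Hypothetical usage with assumed calibration values (millimetres):
# combine_two_cameras(-1, (612.0, 240.0), CameraCalib(0.5, 0.5, 0.0, 0.0),
#                     +1, (380.0, 244.0), CameraCalib(0.5, 0.5, -1000.0, 0.0))
```

Only the two flagged images ever reach this step; the remaining n - 2 images are dismissed by the D operation alone.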
Preferably, the redefined image operation G is the composition of several operations, including the D operation on the image, the H operation and the T_i transform. Realizing G is mainly a matter of realizing the D operation and the H operation. For a single image, the D operation answers whether the left end or the right end of the strip-shaped target lies in the image, i.e. it produces the constant c_i, while the H operation obtains the coordinates of the endpoint position of the strip-shaped target. Taken together, these two requirements amount to determining whether either end of the strip-shaped target is in the image and, if so, further determining whether it is the left end or the right end and its specific coordinate position; this coordinate position must moreover be a value in the world coordinate system.
Under multi-camera cooperation, the way a strip-shaped target appears in the camera images can be divided into three classes: in the first class the left end of the strip-shaped target is in the image; in the second class no endpoint is in the image and only a middle section of the strip-shaped target appears; in the third class the right end of the strip-shaped target is in the image. For any one strip-shaped target under the multi-camera cooperative mode there is exactly one image of the first class and one of the third class, while there may be several images of the second class, or none.
1) Setting the inspection region of each camera and realizing the H operation
For on-line measurement at an industrial site, the position where the measured object appears in each camera image is not arbitrary but falls within a fixed area. The target area differs between cameras, but for a given camera it is fixed. To make full use of this feature when realizing the image operation G, the concept of a region of interest (ROI), or inspection region, is introduced: the inspection region is the fixed area of the image in which the target to be measured appears. Setting an inspection region greatly reduces the amount of data to be processed and avoids dealing with the complex background, so the algorithm design is simplified.
The principles for setting the inspection region are, first, to reduce the amount of data to be processed as much as possible and, second, to keep the processing algorithm as simple as possible. The shape and size of the inspection region depend on the application and on the shape of the target to be segmented. For the strip-target length-measurement task, the inspection region is set over the area the strip-shaped target passes through; it is itself strip-shaped and runs through the image from its left side to its right side as far as possible. Once such an inspection region is set, it divides into two parts, points inside the strip-shaped target and points outside it, whose attributes differ; their boundary is exactly the endpoint of the strip-shaped target. Each search then only has to scan the inspection region, the range to inspect is greatly reduced, and the influence of the complex background on the algorithm can almost be ignored. The H operation can be carried out within the inspection region only; thresholding or other segmentation means can be used to determine the boundary between target and background, and at this point the coordinates are still based on the image coordinate system.
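A minimal sketch of the H operation restricted to the inspection region is given below; it assumes a grayscale ROI in which target pixels are brighter than a fixed threshold (the threshold value, the function name and the brightness convention are assumptions of this illustration, not requirements of the method):

```python
from typing import Tuple

import numpy as np

def h_operation(roi: np.ndarray, c: int, threshold: int = 128) -> Tuple[float, float]:
    """Return the strip-target endpoint (x, y) in ROI pixel coordinates.
    c is the D constant: +1 means the endpoint is the rightmost target column
    of the ROI, -1 means it is the leftmost target column."""
    mask = roi > threshold                             # threshold segmentation inside the ROI only
    cols = np.flatnonzero(mask.any(axis=0))            # columns that contain target pixels
    col = int(cols.max()) if c > 0 else int(cols.min())
    row = float(np.flatnonzero(mask[:, col]).mean())   # endpoint row within that column
    return float(col), row
```

The coordinates returned here are still based on the image (ROI) coordinate system; the ROI offset within the full image would be added before the T_i transform and the calibration described below are applied.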
2) Realizing the D operation and the complete image-processing algorithm
The H operation returns the endpoint coordinates of the strip-shaped target, and the D operation obtains the constant c_i, i.e. whether the image contains an endpoint of the strip-shaped target and whether it is the left endpoint or the right endpoint. The D operation only has to check whether the pixels of the two columns at the left and right ends of the inspection region, next to the image edges, are target pixels to obtain the answer, so it can be very fast.
In the complete algorithm combining the H operation and the D operation, the constant c_i is set according to the state of the two columns at the left and right edges of the ROI, with the following steps (a code sketch implementing these rules is given after the list):
1. When both columns contain target pixels, the constant c_i = 0;
2. When neither column contains target pixels, the constant c_i = 0;
3. When the left column contains target pixels and the right column does not, the constant c_i = 1;
4. When the right column contains target pixels and the left column does not, the constant c_i = -1.
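A sketch of the D operation implementing rules 1 to 4 above, under the same grayscale-ROI and threshold assumptions as the H sketch:

```python
import numpy as np

def d_operation(roi: np.ndarray, threshold: int = 128) -> int:
    """Return the constant c_i from the two outermost columns of the ROI."""
    left = bool((roi[:, 0] > threshold).any())    # leftmost ROI column contains target pixels?
    right = bool((roi[:, -1] > threshold).any())  # rightmost ROI column contains target pixels?
    if left == right:          # rules 1 and 2: both or neither column -> no endpoint in this image
        return 0
    return 1 if left else -1   # rule 3: only left column -> +1; rule 4: only right column -> -1
```

Because only two pixel columns per image are read, D can be run on every camera image at negligible cost, and only the at most two images with c_i different from 0 proceed to the H operation and the T_i transform.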
3) Image calibration and coordinate transform
After the D operation on an image, if the constant c_i ≠ 0, the H operation gives the endpoint coordinates of the strip-shaped target in the camera coordinate system. These endpoint coordinates must undergo a coordinate transform into the world coordinate system before they can become measured values. The coordinate transform requires S_xi, S_yi, X_0i and Y_0i to be determined; under multi-camera cooperation there are several camera coordinate systems, and how to determine these parameters also needs to be studied.
Under multi-camera cooperation, in order to convert each camera coordinate system to the world coordinate system, each camera must be calibrated separately to obtain its S_xi, S_yi, X_0i and Y_0i. The specific calibration depends on the shape of the measured object and on the physical quantity being measured; for the strip-target length-measurement application, calibration means calibrating the length in world coordinates point by point along the length direction of the target.
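Determining S_xi, S_yi, X_0i and Y_0i can be sketched as a per-axis least-squares fit over point-wise calibration pairs (pixel coordinate read from the image, known world coordinate along the strip direction); the affine form and the sample numbers below are assumptions for illustration only:

```python
from typing import Tuple

import numpy as np

def fit_axis(pixel: np.ndarray, world: np.ndarray) -> Tuple[float, float]:
    """Fit world = s * pixel - x0 for one axis of one camera by least squares."""
    a = np.column_stack([pixel, -np.ones_like(pixel)])   # unknowns: [s, x0]
    (s, x0), *_ = np.linalg.lstsq(a, world, rcond=None)
    return float(s), float(x0)

# Hypothetical example: marks placed every 100 mm along the target, read off at these pixel columns.
# s_x, x_0 = fit_axis(np.array([102.0, 355.0, 608.0]), np.array([0.0, 100.0, 200.0]))
```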
Preferably, the two image edges checked by the D operation are the two edges perpendicular to the strip-shaped target.
Preferably, in the image calibration and coordinate transform, the calibration procedure may first determine a calibration point in the image and then find the corresponding real point in the scene to determine its world-coordinate value, or it may start from a specified point in the real scene and find its corresponding image point; the calibration points are required to be distributed so that they cover the entire target range in the length direction.
Compared with the prior art, the beneficial effects of the present invention are: when multiple cameras jointly photograph an oversized target, the photographs of the individual cameras cannot simply be stitched together to restore the target, and the per-image processing results are not additive; for length measurement of extra-long strip-shaped targets, the present invention overcomes this non-additivity of results under multi-camera cooperation while greatly reducing the amount of data the system processes.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an example multi-camera cooperative structure according to the present invention.
Fig. 2 is a schematic diagram of a panorama stitched from multiple pictures taken by multiple cameras according to the present invention.
Fig. 3 is a schematic diagram of a practical multi-camera image-stitching example according to the present invention.
Fig. 4 is a schematic diagram of the three classes of distribution of strip-target endpoints in the images according to the present invention.
Fig. 5 is a schematic diagram of an example inspection-region setting according to the present invention.
Fig. 6 is a schematic diagram of the two columns inspected by the D operation according to the present invention.
Fig. 7 is a schematic flow diagram of the complete algorithm of the present invention.
Fig. 8 is a calibration schematic diagram of the present invention.
Fig. 9 is a schematic architecture diagram of an implementation of the system of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Figs. 1-9, the present invention provides a technical solution: a technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision. Assume an image is f(x, y), the image-processing operation that extracts the attribute of interest is H, the transform from the measurement coordinate system to the world coordinate system is T, and the actual result of processing the image is R. The measurement process can then be described as
R = T(H(f(x, y)))    (1)
The core of this process is the definition and realization of H. In a vision-based measurement system, the result of H(f(x, y)) is essentially the distance between two measuring points, whose coordinates are given in the image coordinate system. Since distance is invariant under the coordinate transform, for a single-camera length-measurement system formula (1) becomes
R = H(f(x, y))    (2)
For a multi-camera cooperative length-measurement system, H is the same for every camera, while the measurement result R, the image under test f(x, y) and the coordinate transform matrix differ from camera to camera. The i-th camera can be expressed as
R_i = T_i(H(f_i(x, y)))    (3)
For the whole multi-camera cooperative system, a further operation is needed to combine the per-camera results into the final measurement result of the system. Assume this combining operation is T_c and the combined result is R; the measurement process under multi-camera cooperation can then be described as
R = T_c(T_i(H(f_i(x, y))))    (4)
Assume the whole measuring system consists of n cameras, the image-processing operation that extracts the attribute is H, and the system has n measurement coordinate systems, each with a corresponding transform T_i (i = 0, 1, 2, ..., n-1). The image photographed by camera i is f_i(x, y) (i = 0, 1, 2, ..., n-1), the actual result obtained by processing it is R_i (i = 0, 1, 2, ..., n-1), and the final measurement result is R; then
R_i = T_i(H(f_i(x, y)))    (5)
R = T_c(R_0, R_1, ..., R_{n-1})    (6)
Combining the two formulas above gives
R = T_c(T_0(H(f_0(x, y))), ..., T_i(H(f_i(x, y))), ..., T_{n-1}(H(f_{n-1}(x, y))))    (7)
According to the above problem model, the multi-camera cooperative measuring system needs to determine the two kinds of transform, T_c and T_i (i = 0, 1, 2, ..., n-1), and the image-processing operation H.
For the ideal case of multi-camera cooperation in which the per-camera results are additive, the T_c transform and the T_i transforms are set up as the following two transformation matrices.
In formula (8), S_xi and S_yi are the transform coefficients of coordinates x and y respectively, and X_0i, Y_0i are the world coordinates corresponding to the origin of the measurement coordinate system of image f_i(x, y), the measured coordinates themselves being produced by H; T_c is the n-dimensional identity matrix. Assuming the result of H(f_i(x, y)) is (X_i, Y_i), formula (7) becomes the measured length.
As stated above, this scheme only holds under certain conditions, namely that the horizontal directions of all cameras are consistent and the pictures do not overlap, which is difficult to achieve in practice. Moreover, the scheme must process n images for every measurement: only after the lengths in all n pictures are obtained are the results added up, so however many cameras participate in the cooperation, that many images must be processed, and there is no advantage of a lighter load. For practical use, how to define the T_c transform, the T_i transforms and H needs to be studied, and when designing and establishing the conversion scheme, the practical operability of the scheme and the running speed of the algorithm serve as the two assessment criteria.
In this scheme, an operation G on the image is redefined as the product of the original image operation N and a newly added operation D, i.e.
G = ND or DN    (13)
The original image operation N is the product of the image operation H and the coordinate transform T_i; that is, after the image is processed, the endpoint coordinates of the strip-shaped target are obtained and then transformed from the image coordinate system to the world coordinate system, i.e.
N = T_i H    (14)
The newly added operation D is defined as
c_i = D(f_i(x, y))    (15)
c_i is a constant whose value is assigned as defined below. Formula (7) can then be further written as
R = T_c(T_0((c_0)H(f_0(x, y))), ..., T_i((c_i)H(f_i(x, y))), ..., T_{n-1}((c_{n-1})H(f_{n-1}(x, y))))    (16)
Since c_i is a constant with respect to T_i, it can be moved in front of the T_i transform, i.e.
R = T_c(c_0 T_0(H(f_0(x, y))), ..., c_i T_i(H(f_i(x, y))), ..., c_{n-1} T_{n-1}(H(f_{n-1}(x, y))))    (17)
For any one strip-shaped target, only two cameras in the industrial-camera array have c_i different from 0; for all the others c_i is 0. If the two ends of the strip-shaped target lie in the i-th and j-th cameras respectively, the above formula becomes
R = T_c(-(c_i)T_i(H(f_i(x, y))), (c_j)T_j(H(f_j(x, y))))    (18)
Under the new coordinate transform, formulas (10) and (11) become
R_x = (S_xj X_j - X_0j) - (S_xi X_i - X_0i)    (19)
R_y = (S_yj Y_j - Y_0j) - (S_yi Y_i - Y_0i)    (20)
After the image operation G is redefined, a D operation is added for every image. This appears to increase the processing complexity of the system but in fact greatly reduces the amount of data processed: although every camera image undergoes a D operation, the T_i and H operations do not need to be carried out on every image, and each measurement only has to process two images. The D operation on each image only needs to check the two edges of the image to reach a decision and does not need to examine the whole image, which greatly speeds up the algorithm.
The redefined image operation G is the composition of several operations, including the D operation on the image, the H operation and the T_i transform. Realizing G is mainly a matter of realizing the D operation and the H operation. For a single image, the D operation answers whether the left end or the right end of the strip-shaped target lies in the image, i.e. it produces the constant c_i, while the H operation obtains the coordinates of the endpoint position of the strip-shaped target. Taken together, these two requirements amount to determining whether either end of the strip-shaped target is in the image and, if so, further determining whether it is the left end or the right end and its specific coordinate position; this coordinate position must moreover be a value in the world coordinate system.
Under multi-camera cooperation, the way a strip-shaped target appears in the camera images can be divided into three classes: in the first class the left end of the strip-shaped target is in the image; in the second class no endpoint is in the image and only a middle section of the strip-shaped target appears; in the third class the right end of the strip-shaped target is in the image. For any one strip-shaped target under the multi-camera cooperative mode there is exactly one image of the first class and one of the third class, while there may be several images of the second class, or none.
1) Setting the inspection region of each camera and realizing the H operation
For on-line measurement at an industrial site, the position where the measured object appears in each camera image is not arbitrary but falls within a fixed area. The target area differs between cameras, but for a given camera it is fixed. To make full use of this feature when realizing the image operation G, the concept of a region of interest (ROI), or inspection region, is introduced: the inspection region is the fixed area of the image in which the target to be measured appears. Setting an inspection region greatly reduces the amount of data to be processed and avoids dealing with the complex background, so the algorithm design is simplified.
The principles for setting the inspection region are, first, to reduce the amount of data to be processed as much as possible and, second, to keep the processing algorithm as simple as possible. The shape and size of the inspection region depend on the application and on the shape of the target to be segmented. For the strip-target length-measurement task, the inspection region is set over the area the strip-shaped target passes through; it is itself strip-shaped and runs through the image from its left side to its right side as far as possible. Once such an inspection region is set, it divides into two parts, points inside the strip-shaped target and points outside it, whose attributes differ; their boundary is exactly the endpoint of the strip-shaped target. Each search then only has to scan the inspection region, the range to inspect is greatly reduced, and the influence of the complex background on the algorithm can almost be ignored. The H operation can be carried out within the inspection region only; thresholding or other segmentation means can be used to determine the boundary between target and background, and at this point the coordinates are still based on the image coordinate system.
2) Realizing the D operation and the complete image-processing algorithm
The H operation returns the endpoint coordinates of the strip-shaped target, and the D operation obtains the constant c_i, i.e. whether the image contains an endpoint of the strip-shaped target and whether it is the left endpoint or the right endpoint. The D operation only has to check whether the pixels of the two columns at the left and right ends of the inspection region, next to the image edges, are target pixels to obtain the answer, so it can be very fast.
In the complete algorithm combining the H operation and the D operation, the constant c_i is set according to the state of the two columns at the left and right edges of the ROI, as follows:
1. When both columns contain target pixels, the constant c_i = 0;
2. When neither column contains target pixels, the constant c_i = 0;
3. When the left column contains target pixels and the right column does not, the constant c_i = 1;
4. When the right column contains target pixels and the left column does not, the constant c_i = -1.
3) Image calibration and coordinate transform
After the D operation on an image, if the constant c_i ≠ 0, the H operation gives the endpoint coordinates of the strip-shaped target in the camera coordinate system. These endpoint coordinates must undergo a coordinate transform into the world coordinate system before they can become measured values. The coordinate transform requires S_xi, S_yi, X_0i and Y_0i to be determined; under multi-camera cooperation there are several camera coordinate systems, and how to determine these parameters also needs to be studied.
Under multi-camera cooperation, in order to convert each camera coordinate system to the world coordinate system, each camera must be calibrated separately to obtain its S_xi, S_yi, X_0i and Y_0i. The specific calibration depends on the shape of the measured object and on the physical quantity being measured; for the strip-target length-measurement application, calibration means calibrating the length in world coordinates point by point along the length direction of the target.
The two image edges checked by the D operation are the two edges perpendicular to the strip-shaped target.
In the image calibration and coordinate transform, the calibration procedure may first determine a calibration point in the image and then find the corresponding real point in the scene to determine its world-coordinate value, or it may start from a specified point in the real scene and find its corresponding image point; the calibration points are required to be distributed so that they cover the entire target range in the length direction.
One concrete application of this embodiment is as follows: an image of the measured object is captured with imaging equipment such as an industrial camera, and image processing is then used to obtain the target's length information in the imaging-equipment coordinate system. This approach requires the full view of the measured object to be in the captured image; when the measured object is extra long, its full view cannot be captured with one camera, several cameras must photograph it cooperatively, and only cooperative processing can produce a result.
The procedure first inspects the two left and right edge columns of the ROI and sets the constant c_i according to their state, then judges whether c_i equals 0. When c_i equals 0, the procedure terminates directly; when c_i is not 0, the interior of the ROI is inspected to obtain the target endpoint coordinates, i.e. the H operation is performed, and the procedure then terminates.
First, several industrial cameras are needed; their resolution must meet the measurement accuracy requirement, and they must have GigE interfaces. Second, a computer is needed to run the system of the present invention. Finally, some networking equipment is needed. Each camera is connected by network cable to a network-card interface of the computer, so that each camera and the computer form an independent network. After the software based on the present invention has been installed on the computer and properly configured, the system can run length measurement. When multiple cameras jointly photograph an oversized target, the photographs of the individual cameras cannot simply be stitched together to restore the target, and the per-image processing results are not additive. For length measurement of extra-long strip-shaped targets, the present invention proposes a new method that overcomes the non-additivity of results under multi-camera cooperation while greatly reducing the amount of data the system processes.
In the description of this specification, reference to the terms "one embodiment", "example", "specific example" and the like means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic statements of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the present invention disclosed above are intended only to help illustrate the present invention. The preferred embodiments do not describe all the details exhaustively, nor do they limit the invention to the specific implementations described. Obviously, many modifications and variations can be made in light of the content of this specification. These embodiments were chosen and specifically described in order to better explain the principles and practical application of the present invention, so that those skilled in the art can better understand and use the present invention. The present invention is limited only by the claims and their full scope and equivalents.

Claims (5)

1. A technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision, characterized in that: assume an image is f(x, y), the image-processing operation that extracts the attribute of interest is H, the transform from the measurement coordinate system to the world coordinate system is T, and the actual result of processing the image is R; the measurement process can be described as
R = T(H(f(x, y)))    (1)
the core of this process is the definition and realization of H; in a vision-based measurement system, the result of H(f(x, y)) is essentially the distance between two measuring points, whose coordinates are given in the image coordinate system; since distance is invariant under the coordinate transform, for a single-camera length-measurement system formula (1) becomes
R = H(f(x, y))    (2)
for a multi-camera cooperative length-measurement system, H is the same for every camera, while the measurement result R, the image under test f(x, y) and the coordinate transform matrix differ; the i-th camera can be expressed as
R_i = T_i(H(f_i(x, y)))    (3)
for the whole multi-camera cooperative system, a further operation is needed to combine the per-camera results into the final measurement result of the system; assume this combining operation is T_c and the combined result is R; the measurement process under multi-camera cooperation can then be described as
R = T_c(T_i(H(f_i(x, y))))    (4)
assume the whole measuring system consists of n cameras, the image-processing operation that extracts the attribute is H, and the system has n measurement coordinate systems, each with a corresponding transform T_i (i = 0, 1, 2, ..., n-1); the image photographed by camera i is f_i(x, y) (i = 0, 1, 2, ..., n-1), the actual result obtained by processing it is R_i (i = 0, 1, 2, ..., n-1), and the final measurement result is R; then
R_i = T_i(H(f_i(x, y)))    (5)
R = T_c(R_0, R_1, ..., R_{n-1})    (6)
combining the two formulas above gives
R = T_c(T_0(H(f_0(x, y))), ..., T_i(H(f_i(x, y))), ..., T_{n-1}(H(f_{n-1}(x, y))))    (7)
according to the above problem model, the multi-camera cooperative measuring system needs to determine the two kinds of transform, T_c and T_i (i = 0, 1, 2, ..., n-1), and the image-processing operation H;
for the ideal case of multi-camera cooperation in which the per-camera results are additive, the T_c transform and the T_i transforms are set up as the following two transformation matrices;
in formula (8), S_xi and S_yi are the transform coefficients of coordinates x and y respectively, and X_0i, Y_0i are the world coordinates corresponding to the origin of the measurement coordinate system of image f_i(x, y), the measured coordinates themselves being produced by H; T_c is the n-dimensional identity matrix; assuming the result of H(f_i(x, y)) is (X_i, Y_i), formula (7) becomes the measured length;
as stated above, this scheme only holds under certain conditions, namely that the horizontal directions of all cameras are consistent and the pictures do not overlap, which is difficult to achieve in practice; moreover, the scheme must process n images for every measurement, and only after the lengths in all n pictures are obtained are the results added up, so however many cameras participate in the cooperation, that many images must be processed, with no advantage of a lighter load; for practical use, how to define the T_c transform, the T_i transforms and H needs to be studied, and when designing and establishing the conversion scheme, the practical operability of the scheme and the running speed of the algorithm serve as the two assessment criteria.
2. The technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision according to claim 1, characterized in that: an operation G on the image is redefined as the product of the original image operation N and a newly added operation D, i.e.
G = ND or DN    (13)
the original image operation N is the product of the image operation H and the coordinate transform T_i, i.e. after the image is processed the endpoint coordinates of the strip-shaped target are obtained and then transformed from the image coordinate system to the world coordinate system, i.e.
N = T_i H    (14)
the newly added operation D is defined as
c_i = D(f_i(x, y))    (15)
c_i is a constant whose value is assigned as defined below; formula (7) can then be further written as
R = T_c(T_0((c_0)H(f_0(x, y))), ..., T_i((c_i)H(f_i(x, y))), ..., T_{n-1}((c_{n-1})H(f_{n-1}(x, y))))    (16)
since c_i is a constant with respect to T_i, it can be moved in front of the T_i transform, i.e.
R = T_c(c_0 T_0(H(f_0(x, y))), ..., c_i T_i(H(f_i(x, y))), ..., c_{n-1} T_{n-1}(H(f_{n-1}(x, y))))    (17)
for any one strip-shaped target, only two cameras in the industrial-camera array have c_i different from 0 and all the others are 0; if the two ends of the strip-shaped target lie in the i-th and j-th cameras respectively, the above formula becomes
R = T_c(-(c_i)T_i(H(f_i(x, y))), (c_j)T_j(H(f_j(x, y))))    (18)
under the new coordinate transform, formulas (10) and (11) become
R_x = (S_xj X_j - X_0j) - (S_xi X_i - X_0i)    (19)
R_y = (S_yj Y_j - Y_0j) - (S_yi Y_i - Y_0i)    (20)
after the image operation G is redefined, a D operation is added for every image, which appears to increase the processing complexity of the system but in fact greatly reduces the amount of data processed: although every camera image undergoes a D operation, the T_i and H operations do not need to be carried out on every image and each measurement only has to process two images; the D operation on each image only needs to check the two edges of the image to reach a decision and does not need to examine the whole image, which greatly speeds up the algorithm.
3. The technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision according to claim 1, characterized in that: the redefined image operation G is the composition of several operations, including the D operation on the image, the H operation and the T_i transform; realizing G is mainly a matter of realizing the D operation and the H operation; for a single image, the D operation answers whether the left end or the right end of the strip-shaped target lies in the image, i.e. it produces the constant c_i, while the H operation obtains the coordinates of the endpoint position of the strip-shaped target; taken together, these two requirements amount to determining whether either end of the strip-shaped target is in the image and, if so, further determining whether it is the left end or the right end and its specific coordinate position, this coordinate position having to be a value in the world coordinate system;
under multi-camera cooperation, the way a strip-shaped target appears in the camera images can be divided into three classes: in the first class the left end of the strip-shaped target is in the image; in the second class no endpoint is in the image and only a middle section of the strip-shaped target appears; in the third class the right end of the strip-shaped target is in the image; for any one strip-shaped target under the multi-camera cooperative mode there is exactly one image of the first class and one of the third class, while there may be several images of the second class, or none;
1) setting the inspection region of each camera and realizing the H operation
for on-line measurement at an industrial site, the position where the measured object appears in each camera image is not arbitrary but falls within a fixed area; the target area differs between cameras but is fixed for a given camera; this feature is fully used when realizing the image operation G by introducing the concept of a region of interest (ROI), or inspection region, the inspection region being the fixed area of the image in which the target to be measured appears; setting an inspection region greatly reduces the amount of data to be processed and avoids dealing with the complex background, so the algorithm design is simplified;
the principles for setting the inspection region are, first, to reduce the amount of data to be processed as much as possible and, second, to keep the processing algorithm as simple as possible; the shape and size of the inspection region depend on the application and on the shape of the target to be segmented; for the strip-target length-measurement task, the inspection region is set over the area the strip-shaped target passes through, being itself strip-shaped and running through the image from its left side to its right side as far as possible; once such an inspection region is set it divides into two parts, points inside the strip-shaped target and points outside it, whose attributes differ, and their boundary is exactly the endpoint of the strip-shaped target; each search then only has to scan the inspection region, the range to inspect is greatly reduced, and the influence of the complex background on the algorithm can almost be ignored; the H operation can be carried out within the inspection region only, thresholding or other segmentation means can be used to determine the boundary between target and background, and at this point the coordinates are still based on the image coordinate system;
2) realizing the D operation and the complete image-processing algorithm
the H operation returns the endpoint coordinates of the strip-shaped target, and the D operation obtains the constant c_i, i.e. whether the image contains an endpoint of the strip-shaped target and whether it is the left endpoint or the right endpoint; the D operation only has to check whether the pixels of the two columns at the left and right ends of the inspection region, next to the image edges, are target pixels to obtain the answer, so it can be very fast;
in the complete algorithm combining the H operation and the D operation, the constant c_i is set according to the state of the two columns at the left and right edges of the ROI, as follows:
1. when both columns contain target pixels, the constant c_i = 0;
2. when neither column contains target pixels, the constant c_i = 0;
3. when the left column contains target pixels and the right column does not, the constant c_i = 1;
4. when the right column contains target pixels and the left column does not, the constant c_i = -1;
3) image calibration and coordinate transform
after the D operation on an image, if the constant c_i ≠ 0, the H operation gives the endpoint coordinates of the strip-shaped target in the camera coordinate system; these endpoint coordinates must undergo a coordinate transform into the world coordinate system before they can become measured values; the coordinate transform requires S_xi, S_yi, X_0i and Y_0i to be determined, and under multi-camera cooperation there are several camera coordinate systems, so how to determine these parameters also needs to be studied;
under multi-camera cooperation, in order to convert each camera coordinate system to the world coordinate system, each camera must be calibrated separately to obtain its S_xi, S_yi, X_0i and Y_0i; the specific calibration depends on the shape of the measured object and on the physical quantity being measured; for the strip-target length-measurement application, calibration means calibrating the length in world coordinates point by point along the length direction of the target.
4. The technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision according to claim 2, characterized in that: the two image edges checked by the D operation are the two edges perpendicular to the strip-shaped target.
5. The technology and method for multi-camera cooperative on-line measurement of strip-shaped target length under machine vision according to claim 3, characterized in that: in the image calibration and coordinate transform, the calibration procedure may first determine a calibration point in the image and then find the corresponding real point in the scene to determine its world-coordinate value, or it may start from a specified point in the real scene and find its corresponding image point; the calibration points are required to be distributed so that they cover the entire target range in the length direction.
CN201910180429.6A 2019-03-11 2019-03-11 Technology and method for online measuring length of strip-shaped target through cooperation of multiple cameras in machine vision mode Active CN109781014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910180429.6A CN109781014B (en) 2019-03-11 2019-03-11 Technology and method for online measuring length of strip-shaped target through cooperation of multiple cameras in machine vision mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910180429.6A CN109781014B (en) 2019-03-11 2019-03-11 Technology and method for online measuring length of strip-shaped target through cooperation of multiple cameras in machine vision mode

Publications (2)

Publication Number Publication Date
CN109781014A true CN109781014A (en) 2019-05-21
CN109781014B CN109781014B (en) 2020-10-16

Family

ID=66488816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910180429.6A Active CN109781014B (en) 2019-03-11 2019-03-11 Technology and method for online measuring length of strip-shaped target through cooperation of multiple cameras in machine vision mode

Country Status (1)

Country Link
CN (1) CN109781014B (en)



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120099119A1 (en) * 1999-04-05 2012-04-26 Faro Technologies Inc. Laser-based coordinate measuring device and laser-based method for measuring coordinates
EP1286134A1 (en) * 2001-08-18 2003-02-26 Trumpf Werkzeugmaschinen GmbH + Co. KG Apparatus and method for measuring the geometry of workpieces
CN2773676Y (en) * 2005-04-04 2006-04-19 中国印钞造币总公司 Composite field registration detection housing with multiple cameras
CN102042814A (en) * 2010-06-24 2011-05-04 中国人民解放军国防科学技术大学 Projection auxiliary photographing measurement method for three-dimensional topography of large storage yard
CN102519400A (en) * 2011-12-15 2012-06-27 东南大学 Large slenderness ratio shaft part straightness error detection method based on machine vision
CN104021540A (en) * 2013-02-28 2014-09-03 宝山钢铁股份有限公司 Static state calibration device and method for machine visual surface detection equipment
CN103971353A (en) * 2014-05-14 2014-08-06 大连理工大学 Splicing method for measuring image data with large forgings assisted by lasers
CN205449822U (en) * 2016-02-29 2016-08-10 鞍钢股份有限公司 A polyphaser image mosaicking device for belted steel surface is detected
CN105823416A (en) * 2016-03-04 2016-08-03 大族激光科技产业集团股份有限公司 Method for measuring object through multiple cameras and device thereof
CN105953730A (en) * 2016-06-22 2016-09-21 首航节能光热技术股份有限公司 Multi-camera solar heat collector steel structure support assembling quality detection system
CN106886979A (en) * 2017-03-30 2017-06-23 深圳市未来媒体技术研究院 A kind of image splicing device and image split-joint method
CN107091610A (en) * 2017-04-19 2017-08-25 清华大学 The Three-Dimensional Dynamic on-line measurement device and its measuring method of a kind of large scale structure
CN109099883A (en) * 2018-06-15 2018-12-28 哈尔滨工业大学 The big visual field machine vision metrology of high-precision and caliberating device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wan Yuandong: "Research on multi-camera calibration and image stitching methods for part inspection", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110278380A (en) * 2019-07-18 2019-09-24 成都甄识科技有限公司 A kind of restructural super more mesh cameras and its multiplexing method
CN112697065A (en) * 2021-01-25 2021-04-23 东南大学 Three-dimensional shape reconstruction method based on camera array
CN112697065B (en) * 2021-01-25 2022-07-15 东南大学 Three-dimensional shape reconstruction method based on camera array
CN112797900A (en) * 2021-04-07 2021-05-14 中科慧远视觉技术(北京)有限公司 Multi-camera plate size measuring method
CN112797900B (en) * 2021-04-07 2021-07-06 中科慧远视觉技术(北京)有限公司 Multi-camera plate size measuring method

Also Published As

Publication number Publication date
CN109781014B (en) 2020-10-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant