CN109211198A - Intelligent target detection and measurement system and method based on trinocular vision - Google Patents


Info

Publication number
CN109211198A
CN109211198A (application CN201810930141.1A)
Authority
CN
China
Prior art keywords
edge, point, target, target detection, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810930141.1A
Other languages
Chinese (zh)
Other versions
CN109211198B (en)
Inventor
李庆武
马云鹏
任飞翔
周亚琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN201810930141.1A priority Critical patent/CN109211198B/en
Publication of CN109211198A publication Critical patent/CN109211198A/en
Application granted granted Critical
Publication of CN109211198B publication Critical patent/CN109211198B/en
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02 - Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/028 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring lateral position of a boundary of the object
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, e.g. of sheet material
    • G01B 11/0608 - Height gauges

Abstract

The invention discloses an intelligent target detection and measurement system and method based on trinocular vision. The system comprises three cameras, three slide rails and a processing module: the three rails are joined at one end and spaced 120 degrees apart, each camera is mounted on a rail and can move along it, and the processing module is connected to the cameras to perform target detection and measurement. The method first performs target detection to determine the position of the target in the image; target measurement is then completed by each of three binocular vision subsystems; finally, the measurement results of the three subsystems are fused to obtain the final detection and measurement results. The invention can detect depth information from different angles, adapts the baseline to the target, and obtains the complete target by changing the camera positions, giving good generality and high accuracy in the detection and measurement of many types of targets.

Description

Intelligent target detection and measurement system and method based on trinocular vision
Technical field
The invention belongs to the field of digital image processing, and in particular relates to an intelligent target detection and measurement system and method based on trinocular vision.
Background technique
Imaging systems play an information-gathering role in aerial remote sensing and are widely applied in geodetic surveying, the military, energy, industry, agriculture, media and many other fields. Especially in complex environments, an imaging system can accurately complete contactless information collection from different viewing angles. With the development of image processing technology, imaging systems are gradually changing from information acquisition tools into information processing systems.
Imaging systems are used to handle measurement problems in many fields. A camera mounted on an unmanned aerial vehicle can complete earth mapping, measure the wind-erosion range of large areas of soil, and detect and analyse large-scale ground conditions through image processing, enabling large geodetic measurements to be carried out in a digital manner.
With the development of multi-view vision technology, measurement accuracy has also gradually improved. Trinocular cameras can be used for synchronous target tracking and measurement, locating and measuring uncalibrated targets; binocular vision can be used to detect external obstacles on power lines, enabling accurate measurement in power-line inspection systems. Multi-view vision can improve the accuracy of measuring systems, and fusing information captured from multiple angles provides a contactless way of perceiving multi-dimensional information.
Most existing multi-view vision technology focuses on stereo matching and disparity computation and neglects the model structure and target detection algorithms of multi-view systems. For detection systems, multi-view models and related algorithms are scarce, and fixed multi-view vision models greatly limit improvements in the accuracy and generality of measuring systems.
Summary of the invention
The object of the invention is to improve the accuracy of measuring systems. An intelligent target detection and measurement system and method based on trinocular vision is proposed, which realises target detection independent of target type and position and solves the technical problem that targets cannot be accurately detected and measured with a fixed multi-view vision model.
The present invention adopts the following technical scheme: an intelligent target detection and measurement system based on trinocular vision, comprising three cameras, three slide rails and a processing module. The three rails are joined at one end and spaced 120 degrees apart; each camera is mounted on a rail and moves along it; the processing module is connected to the cameras and performs target detection and measurement.
An intelligent target detection and measurement method based on trinocular vision comprises the following steps:
Target detection stage:
1) Detect salient edges: detect the edge information of the greyscale image with the Canny algorithm, compute the saliency of the edges with the difference-of-Gaussians (DoG) algorithm, and obtain the salient edges;
2) Match feature points;
3) Perform target detection by seed growing: with the salient edges as boundary conditions and the matched feature points as seed points, detect the target by region growing;
4) Perform edge matching and obtain the intermediate-point group: run edge detection on the image obtained by seed growing and divide each edge according to its angle; a pixel where the edge angle is less than 100 degrees is the end point of an edge, yielding local edges; within a local edge, a pixel whose edge angle is less than 160 degrees is an intermediate point. Perform edge matching, matching the sub-edge segments between two intermediate points according to edge position and pixel count; if a segment cannot be matched, the corresponding intermediate point is redundant and is deleted, yielding the intermediate-point group. Connect all matched sub-edges and take the result of all sub-edge matches as the matching result of the whole edge;
Target measurement stage:
5) Calibrate the cameras: complete the calibration of each camera in the trinocular vision system and correct the distorted images;
6) Move the cameras with the stepper motors and compute the baselines and disparities;
7) Measure the target size: compute the length, width and height of the target from the intermediate-point group obtained in step 4).
Beneficial effects achieved by the invention: the intelligent target detection and measurement system and method based on trinocular vision realise target detection independent of target type and position and solve the technical problem that targets cannot be accurately detected and measured with a fixed multi-view vision model. The invention changes the fixed model of traditional multi-view vision: the positions of the three cameras adapt automatically to the target; the three cameras form three binocular vision subsystems that detect depth information from different angles, and target detection is carried out in each binocular subsystem using the visual saliency of the vision system. The invention is not limited by target type or position and accurately realises target detection and measurement.
Detailed description of the invention
Fig. 1 is a mock-up of the system in an embodiment of the present invention;
Fig. 2 is a flow diagram of the target detection and measurement method in an embodiment of the present invention;
Fig. 3 shows feature match points based on spatial information in an embodiment of the present invention: (a) upper image, (b) right image, (c) right image, (d) left image;
Fig. 4 is a schematic diagram of image calibration in an embodiment of the present invention;
Fig. 5 is a schematic diagram of baseline computation in an embodiment of the present invention;
Fig. 6 is a schematic diagram of disparity computation in an embodiment of the present invention: (a) the disparity model without camera rotation coordinates, (b) the disparity model with camera rotation coordinates.
Specific embodiment
The technical solution of the present invention is further elaborated below with reference to the drawings and embodiments.
The present invention adopts the following technical scheme: an intelligent target detection and measurement system based on trinocular vision. Compared with a traditional binocular vision system, the trinocular vision system can integrate more spatial information from different angles and has good generality and high accuracy in the detection and measurement of many types of targets.
Embodiment 1:
The mock-up of the system is shown in Fig. 1 and comprises three cameras, three slide rails and a processing module. The three rails are joined at one end and spaced 120 degrees apart; each camera is mounted on a rail and moves along it; the processing module is connected to the cameras and performs target detection and measurement.
The cameras are three high-definition industrial colour cameras with a focal length of 6 mm.
Embodiment 2:
On the basis of embodiment 1, the system further includes stepper motors controlled by the processing module; they move the cameras and record the camera positions so that a complete image of the target is obtained. The baseline of the system adapts automatically to the target, removing the baseline limitation of traditional systems: a complete target can be obtained by changing the camera positions. The three cameras form three binocular vision subsystems that perform target detection and measurement and can detect depth information from different angles.
The processing module includes a target detection module and a target measurement module; the target detection module determines the position of the target in the image, and the target measurement module obtains the dimensions of the target.
The target detection module includes a salient edge detection module, a feature point matching module, a seed-growing target detection module and an edge matching module. The salient edge detection module detects the edge information of the greyscale image with the Canny algorithm, computes the saliency of the edges with the difference-of-Gaussians (DoG) algorithm and obtains the salient edges. The seed-growing target detection module takes the salient edges as boundary conditions and the matched feature points as seed points and detects the target by region growing. The edge matching module runs edge detection on the image obtained by seed growing and divides each edge according to its angle: a pixel where the edge angle is less than 100 degrees is the end point of an edge, yielding local edges; within a local edge, a pixel whose edge angle is less than 160 degrees is an intermediate point. Edge matching is then performed, matching the sub-edge segments between two intermediate points according to edge position and pixel count; if a segment cannot be matched, the corresponding intermediate point is redundant and is deleted, yielding the intermediate-point group. All matched sub-edges are connected, and the result of all sub-edge matches is taken as the matching result of the whole edge.
The target measurement module includes a camera calibration module, a baseline and disparity computation module and a target size measurement module. The camera calibration module completes the calibration of each camera in the trinocular vision system and corrects the distorted images; the baseline and disparity computation module moves the cameras with the stepper motors and computes the baselines and disparities; the target size measurement module obtains the length, width and height of the target from the intermediate-point group produced by the edge matching module.
The salient edge detection module performs salient edge detection as follows:
(11) Detect the edge information of the greyscale image with the Canny algorithm and extract the edges whose length exceeds a set threshold as effective edges;
(12) Compute the saliency of the effective edges with the DoG algorithm: the criterion of effective-edge saliency is the number of salient points on each effective edge, which is proportional to the edge's saliency;
(13) Extract the effective edges whose saliency exceeds a set threshold as the salient edges.
The feature point matching module matches feature points as follows:
(21) Detect feature points with the SURF algorithm;
(22) Compute the Euclidean distances between the feature points of the left and right images of each of the binocular vision subsystems formed by the three cameras in pairs; the feature points with minimum Euclidean distance form a preliminary match;
(23) Compute the frequency with which each connecting-line slope of the preliminary matches occurs and delete the preliminary matches whose slope frequency is below a set threshold, completing the feature point matching.
The seed-growing target detection module combines the salient edges, the matched feature points and the colour information to achieve rapid growth of the detection region, as follows:
(31) Connect the salient edges according to their direction and trend;
(32) Mark the matched feature points as seed points according to the feature matching result;
(33) Using colour-space information, compute the similarity between the seed-point region and the region being grown;
(34) Decide, according to the similarity, whether the region being grown is merged into the seed-point region.
The baseline and disparity computation module records the number of steps moved by the stepper motors and, from the cameras' distances to the rail centre, computes the baselines of the three binocular vision subsystems based on the Pythagorean theorem; the disparity is computed along the direction connecting the two image centres of each binocular subsystem.
The target size measurement module obtains the length, width and height of the target as follows:
(71) Check each point in the intermediate-point group obtained in step 4); the points within one connected domain form a subset;
(72) Compute the Euclidean distances between all intermediate points in the subset;
(73) Compute the length and spatial position of each edge from the Euclidean distances;
(74) Repeat the above steps in each of the three binocular vision subsystems;
(75) For each edge, retain the largest of the measured values from the three binocular vision subsystems;
(76) In each binocular vision subsystem, judge all edges one by one according to their spatial position and spatial angle to decide whether the current edge can represent the length, width or height of the target object. The criterion is whether the current edge is the longest edge in its direction; if the criterion is met, the spatial direction of the selected edge is examined, and the lengths of edges whose spatial angles are 90 degrees apart are chosen as the target size information.
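Steps (72) to (75) can be sketched as below. This is a hedged toy illustration, not the patent's implementation: real inputs would be 3-D intermediate points recovered by each subsystem, and the edge lengths used here are invented numbers.

```python
import math

def edge_length(p, q):
    """Euclidean length of an edge given two 3-D intermediate points (step 72/73)."""
    return math.dist(p, q)

def fuse_max(per_subsystem):
    """Step (75): keep, per edge, the largest measurement over the three subsystems."""
    return [max(vals) for vals in zip(*per_subsystem)]

# lengths of the same three edges as measured by each binocular subsystem (toy data)
sub1 = [edge_length((0, 0, 0), (2, 0, 0)), 1.0, 0.9]
sub2 = [1.9, 1.1, 1.0]
sub3 = [2.0, 0.8, 1.05]
length, width, height = fuse_max([sub1, sub2, sub3])
```

The max-fusion reflects the idea that an occluded or foreshortened edge in one subsystem underestimates the true dimension, so the largest of the three views is retained.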
Embodiment 3:
Target detection is the first step of image processing: target detection is carried out by the three binocular vision subsystems to determine the position of the target in the image; the three subsystems then complete target measurement separately; finally, the detection and measurement results of the three subsystems are fused, taking the maxima of the length, width and height obtained in the three subsystems as the final detection and measurement result. So that many types of targets can be detected completely in various environments, the present invention proposes an intelligent target detection and measurement method based on trinocular vision, i.e. the image processing part of Fig. 2, comprising the following steps.
Target detection stage: first, salient edge detection and feature point matching are carried out in turn, and seed point screening is completed by the feature matching; seed growing is then performed by combining the salient edges, the screened seed points and the colour-space information; finally edge matching is carried out. The specific steps are as follows.
1) Detect salient edges: detect the edge information of the greyscale image with the Canny algorithm, and compute the saliency of the edges with the difference-of-Gaussians (DoG) algorithm to obtain the salient edges.
The DoG algorithm computes a saliency value for each pixel, boosting the local centre area and suppressing the neighbouring area. The DoG operator is defined (reconstructed here as the standard centre-surround form consistent with the parameter definitions below) as:
DoG(x, y) = I(x, y) * [ (1/(2πσ1²)) e^(−(x²+y²)/(2σ1²)) − (1/(2πσ2²)) e^(−(x²+y²)/(2σ2²)) ]  (1)
σ1 = 0.6 and σ2 = 0.9 are the excitation bandwidth and the inhibition bandwidth respectively, x and y are the abscissa and ordinate of the pixel, I is the greyscale image, * denotes filtering of the greyscale image, and DoG(x, y) is the saliency value of the pixel. The pixel values are then thresholded to obtain the saliency map of the image:
R(x, y) = 1 if DoG(x, y) > T, otherwise 0, where T = sum(DoG > 0) / count(DoG > 0)  (2)
R(x, y) is the processed pixel value, T is the mean saliency value, count(DoG > 0) is the number of pixels in the image whose saliency DoG(x, y) is greater than 0, and sum(DoG > 0) is the sum of the saliency values of those pixels. Pixels whose processed value is 1 in formula (2) are salient points, i.e. pixels whose saliency exceeds the mean saliency are salient points.
The saliency map is similar to the Canny edge detection result, but it is not continuously distributed; combining the saliency map with the Canny result yields the salient edges, specifically:
(11) Detect the edges of the greyscale image with the Canny algorithm and select the effective edges from the result. The Canny result contains many short edges and reflections, which hinder the extraction of object edges; extracting the edges whose length exceeds a set threshold as effective edges helps eliminate false detections, refines the edge information and reduces computational complexity;
(12) Compute the saliency of the effective edges with the DoG algorithm: the criterion of effective-edge saliency is the number of salient points on each effective edge, which is proportional to the edge's saliency;
(13) Extract the salient edges: sort the effective edges by saliency in descending order; the effective edges whose saliency exceeds a set threshold are the salient edges.
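The DoG saliency computation and the mean-positive threshold of formula (2) can be sketched in NumPy as below. This is an assumption-laden illustration, not the patent's implementation: the kernel radius and the toy test image are arbitrary, and a plain loop is used in place of an optimised convolution.

```python
import numpy as np

def dog_saliency(gray, sigma1=0.6, sigma2=0.9, radius=3):
    """DoG filter: excitation Gaussian (sigma1) minus inhibition Gaussian (sigma2)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)

    def gauss(s):
        return np.exp(-(xx ** 2 + yy ** 2) / (2 * s ** 2)) / (2 * np.pi * s ** 2)

    kernel = gauss(sigma1) - gauss(sigma2)
    g = np.pad(gray.astype(float), radius)
    h, w = gray.shape
    out = np.empty((h, w))
    for i in range(h):                      # plain 'same'-size correlation
        for j in range(w):
            out[i, j] = np.sum(g[i:i + 2 * radius + 1, j:j + 2 * radius + 1] * kernel)
    return out

def saliency_map(dog):
    """Formula (2): threshold T is the mean of the positive saliency values."""
    pos = dog > 0
    T = dog[pos].sum() / max(int(pos.sum()), 1)
    return (dog > T).astype(np.uint8)

img = np.zeros((16, 16))
img[6:10, 6:10] = 255.0                     # a bright square on a dark background
sal = saliency_map(dog_saliency(img))
```

In the patent this binary map would then be intersected with the Canny effective edges to score edge saliency.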
2) Match feature points: detect feature points with the SURF (speeded-up robust features) algorithm, then complete feature point matching, i.e. sparse matching, according to the Euclidean distances and connecting-line slopes between the feature points of the left and right images of each binocular vision subsystem, specifically:
(21) Detect feature points.
To improve the accuracy of target detection, this embodiment optimises the feature-point matching strategy of the SURF algorithm, as follows:
First, image regions of different sizes are selected in the image;
Then, feature points are extracted using the Hessian matrix in the SURF algorithm;
The Hessian matrix H(x, σ) of a pixel in the image (reconstructed here in the standard SURF form) is:
H(x, σ) = [ Lxx(x, σ), Lxy(x, σ); Lxy(x, σ), Lyy(x, σ) ]  (3)
The parameter σ is the scale of the feature point, x and y are the abscissa and ordinate of the pixel, and the matrix entries Lxx(x, σ), Lxy(x, σ) and Lyy(x, σ) are defined as:
Lxx(x, σ) = R(x, y) * ∂²g(σ)/∂x²,  Lxy(x, σ) = R(x, y) * ∂²g(σ)/∂x∂y,  Lyy(x, σ) = R(x, y) * ∂²g(σ)/∂y²  (4)
where R(x, y) is computed as in formula (2) and g(σ) is the Gaussian function;
The current point is taken as a feature point, and a hollow block of 20σ × 20σ is built around the main direction of the current feature point; the block is divided into 16 sub-regions, and the Haar wavelet responses of each 5σ × 5σ sub-region are computed, giving the 64-dimensional vector of the feature point.
The positions of the feature points in the images are recorded as:
Pos1 = {(x1', y1'), (x2', y2'), ..., (xm', ym')},  Pos2 = {(x1, y1), (x2, y2), ..., (xn, yn)}  (5)
Pos1 holds the left-image feature points and Pos2 the right-image feature points; (x1', y1'), (x2', y2') and (xm', ym') are the coordinates of the first, second and m-th feature points of the left image, and (x1, y1), (x2, y2) and (xn, yn) are the coordinates of the first, second and n-th feature points of the right image.
(22) Compute the Euclidean distances between all feature points of the left and right images of each binocular vision subsystem; the feature points with the minimum Euclidean distance form a preliminary match. Then sort the match points by Euclidean distance and delete the unmatched points, i.e. the feature points outside the preliminary matches;
That is, compute the Euclidean distances between all feature points in Pos1 and Pos2; the pairs with minimum Euclidean distance are the preliminary matches. Sorting all preliminary matches by Euclidean distance in ascending order, the first K preliminary matches are:
Pos_K = {{(x1', y1'), (x1, y1)}, {(x2', y2'), (x2, y2)}, ..., {(xK', yK'), (xK, yK)}}  (6)
{(x1', y1'), (x1, y1)} is the coordinate pair of the first left-right match in the point set Pos_K, {(x2', y2'), (x2, y2)} the second, and {(xK', yK'), (xK, yK)} the K-th.
(23) Select the matches according to the slopes of their connecting lines: compute the frequency with which each connecting-line slope of the preliminary matches in Pos_K occurs, and delete the matches corresponding to slopes whose frequency of occurrence is below a set threshold, completing the feature point matching;
That is, count the frequencies of all slopes in Pos_K and delete the matches whose slope occurs with frequency below 0.5, obtaining the accurately matched point set Pos_K_new:
Pos_K_new = {{(xz1, yz1), (xy1, yy1)}, {(xz2, yz2), (xy2, yy2)}, ..., {(xzl, yzl), (xyl, yyl)}}  (7)
{(xz1, yz1), (xy1, yy1)} is the first left-right match in Pos_K_new, {(xz2, yz2), (xy2, yy2)} the second, and {(xzl, yzl), (xyl, yyl)} the l-th.
The match points in Pos_K_new are the seed points; as shown in Fig. 3, obtaining feature points that all lie on the target object reduces the probability of erroneous segmentation.
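The slope-frequency filtering of step (23) can be sketched as below. This is a hedged illustration under stated assumptions: the slope-rounding granularity, the frequency threshold and the toy coordinates are all invented, and nearest-neighbour matching is taken as already done.

```python
from collections import Counter

def filter_matches_by_slope(left_pts, right_pts, min_freq=0.5, ndigits=1):
    """Keep only matches whose connecting-line slope occurs frequently enough
    across all preliminary matches (rare slopes indicate mismatches)."""
    slopes = []
    for (xl, yl), (xr, yr) in zip(left_pts, right_pts):
        dx = xr - xl
        slopes.append(round((yr - yl) / dx, ndigits) if dx else float('inf'))
    counts = Counter(slopes)
    n = len(slopes)
    keep = [i for i, s in enumerate(slopes) if counts[s] / n >= min_freq]
    return [(left_pts[i], right_pts[i]) for i in keep]

# three consistent (horizontal-shift) matches plus one outlier with a deviant slope
left = [(0, 0), (0, 1), (0, 2), (0, 3)]
right = [(10, 0), (10, 1), (10, 2), (10, 9)]
good = filter_matches_by_slope(left, right)
```

For correctly matched points in a rectified pair, the connecting lines are nearly parallel, so the dominant slope survives and outliers are dropped.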
3) Perform target detection by seed growing: with the salient edges as boundary conditions and the matched feature points as seed points, detect the target.
The salient edges, matched feature points and HSI colour-space information are combined to achieve rapid growth of the detection region, specifically:
(31) Connect the salient edges according to their direction and trend; the connected edges serve as one of the stopping conditions for seed growth;
(32) Mark the match points as seed points according to the feature matching result, obtaining seed points located on the target object;
(33) Using the HSI colour-space information, compute the similarity between the seed-point region and the region being grown. The similarity between the region being grown and the seed-point region Rc is defined (reconstructed here from the parameter definitions that follow) as:
sim(Rc) = d·simH(Rc) + e·simS(Rc) + g·simI(Rc), with simH(Rc) = |Ht − (1/L) Σj hj|, simS(Rc) = |St − (1/L) Σj sj|, simI(Rc) = |It − (1/L) Σj ij|  (8)
simH(Rc), simS(Rc) and simI(Rc) are the hue, saturation and intensity similarities between the region being grown and the seed-point region; Ht, St and It are the average hue, saturation and intensity of the region being grown; hj, sj and ij are the hue, saturation and intensity of the j-th pixel of the grown seed-point region; L is the number of pixels in the grown seed-point region; and d, e and g are the component weights of hue, saturation and intensity. To obtain accurate results, d, e and g are taken as the standard deviations of hue, saturation and intensity in the region being grown;
(34) Decide, according to the similarity, whether the region being grown is merged into the seed-point region, thereby growing the region.
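Since the exact form of the similarity function is only described by its parameters in the text, the following sketch of steps (33) and (34) is an explicit assumption: channel similarities are absolute differences of H/S/I means, weighted by the candidate region's per-channel standard deviations (the d, e, g of the text), and the merge threshold is arbitrary toy data.

```python
import numpy as np

def hsi_similarity(region_hsi, seed_hsi, eps=1e-6):
    """Weighted H/S/I distance between a candidate region and the seed region.
    Weights d, e, g are the candidate region's channel standard deviations
    (an assumption matching the text's description of d, e, g)."""
    Ht, St, It = region_hsi.mean(axis=0)          # candidate-region channel means
    h, s, i = seed_hsi.mean(axis=0)               # seed-region channel means
    d, e, g = region_hsi.std(axis=0) + eps
    return d * abs(Ht - h) + e * abs(St - s) + g * abs(It - i)

def grow(seed_hsi, candidates, thresh):
    """Step (34): merge each candidate region whose distance score is below thresh."""
    return [c for c in candidates if hsi_similarity(c, seed_hsi) < thresh]

# each row is one pixel's (H, S, I); 'near' resembles the seed, 'far' does not
seed = np.array([[0.50, 0.40, 0.60], [0.52, 0.42, 0.58]])
near = np.array([[0.51, 0.41, 0.59], [0.49, 0.39, 0.61]])
far = np.array([[0.90, 0.10, 0.10], [0.95, 0.05, 0.15]])
kept = grow(seed, [near, far], thresh=0.01)
```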
4) Perform edge matching and obtain the intermediate-point group:
(41) Run edge detection on the image obtained by seed growing from four directions: top to bottom, bottom to top, left to right and right to left. Divide each edge according to its angle; a pixel where the edge angle is less than 100 degrees is the end point of an edge, yielding the local edges;
(42) Delete the local edges shorter than the average local-edge length; in the remaining local edges, a pixel whose edge angle is less than 160 degrees is an intermediate point;
(43) Perform edge matching: first match whole edges, then match the sub-edge segments between two intermediate points according to edge position and pixel count;
(44) If a sub-edge segment cannot be matched, the corresponding intermediate point is redundant; delete the redundant intermediate points;
(45) Connect all matched sub-edges and take the result of all sub-edge matches as the matching result of the whole current edge.
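The angle-based splitting of steps (41) and (42) can be sketched as below. This is a hedged illustration: the patent works on pixel chains from edge detection, whereas here a short hand-made polyline stands in, and the angle at a point is taken as the interior angle between its two incident segments.

```python
import math

def turn_angle(p, q, r):
    """Interior angle at q (in degrees) formed by segments p-q and q-r."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def split_edges(chain, end_thresh=100.0, mid_thresh=160.0):
    """Cut the chain into sub-edges at sharp corners (< end_thresh degrees) and
    flag remaining bends (< mid_thresh degrees) as intermediate points."""
    ends = [i for i in range(1, len(chain) - 1)
            if turn_angle(chain[i - 1], chain[i], chain[i + 1]) < end_thresh]
    cuts = [0] + ends + [len(chain) - 1]
    subs = [chain[cuts[k]:cuts[k + 1] + 1] for k in range(len(cuts) - 1)]
    mids = [i for i in range(1, len(chain) - 1) if i not in ends
            and turn_angle(chain[i - 1], chain[i], chain[i + 1]) < mid_thresh]
    return subs, mids

# a chain with one 90-degree corner (an end point) and two 135-degree bends
chain = [(0, 0), (1, 0), (2, 0), (2, 1), (3, 2), (3, 3)]
subs, mids = split_edges(chain)
```

The 90-degree corner splits the chain into two sub-edges, while the gentler 135-degree bends become the intermediate points between which sub-edge segments are later matched.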
In the target measurement stage, three cameras, which form three different binocular vision subsystems, can be improved measuring system Versatility, test method mainly include camera calibration, baseline and disparity computation and target size measurement:
5) it demarcates camera: completing each camera in the calibration of trinocular vision system, by the image flame detection of distortion;
Each camera is completed in the calibration of trinocular vision system using Zhang Shi standardization, as shown in figure 4, obtaining each take the photograph As the intrinsic parameter of head:
Use a plane chessboard as reference target, three cameras observe entire reference target simultaneously, the q of chessboard is taken to cover Image Pt(p), p=1,2 ..., q, every set image include left mesh, right mesh, three cameras of upper mesh shooting three width figures, will be whole Left and right, upper figure reformulate left set figure P respectivelyleft(p), right set figure Pright(p) and upper set figure Pup(p), p=1,2 ..., q According to left set figure Pleft(p), right set figure Pright(p) and upper set figure Pup(p), consolidating for each camera is obtained using Zhang Shi standardization There is parameter, intrinsic parameter includes focal length, the reference axis of principal point, spin matrix and translation matrix.The mathematical model of radial distortion are as follows:
(ud,vd) indicate that the pixel coordinate in fault image, (u, v) indicate the pixel coordinate of orthoscopic image, k1、 k2 Indicate tangential distortion parameter, (cx, cy) indicates that the principal point of image, (x, y) indicate the physical coordinates of distortionless image, Ke Yigen It is calculated according to iconic model, above formula conversion are as follows:
k1 and k2 are computed by the least squares method, after which the distorted image is corrected by the following formula:
In the subsequent steps, u is again denoted by the abscissa x and v by the ordinate y. After the distortion of each camera is corrected, the images Pleft(p), Pright(p), Pup(p) become P′left(p), P′right(p), P′up(p).
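The patent's distortion formulas were images and are not reproduced in this extract. The sketch below assumes the standard two-coefficient radial model suggested by the parameters named above, x_d = x(1 + k1·r² + k2·r⁴) with r² = x² + y² in normalized image coordinates, and inverts it by fixed-point iteration; the function name and iteration count are illustrative:

```python
def undistort_points(ud, vd, fx, fy, cx, cy, k1, k2, iters=10):
    """Recover undistorted pixel coordinates (u, v) from distorted (ud, vd)
    under the two-coefficient radial model, by fixed-point iteration:
        x_d = x * (1 + k1*r^2 + k2*r^4),  r^2 = x^2 + y^2
    where (x, y) are normalized image coordinates."""
    xd = (ud - cx) / fx
    yd = (vd - cy) / fy
    x, y = xd, yd                      # initial guess: the distorted coordinates
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x = xd / scale                 # refine toward the undistorted point
        y = yd / scale
    return cx + fx * x, cy + fy * y
```

With k1 = k2 = 0 the function is the identity; for the small distortions typical after Zhang calibration, the iteration converges in a few steps.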
6) Move the cameras with the stepper motors, and calculate the baselines and the disparity;
The number of steps moved is recorded by each stepper motor; from the step count, the distance of each camera from the slideway center is obtained, and the baselines of the three binocular vision subsystems are calculated. The disparity is computed along the direction connecting the photo centers of any two of the upper, left and right cameras.
The baseline b and the focal length f of a binocular vision subsystem are two important parameters. The three cameras have the same focal length f, and each camera can be moved automatically by its stepper motor, so each binocular vision subsystem has a variable baseline b. As shown in Figure 5, the distances L1, L2, L3 of the three cameras to the slideway center are calculated from the number of stepper-motor steps, and the baselines b1, b2, b3 of the binocular vision subsystems are:
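The formulas for b1, b2, b3 are likewise missing from the extract. Assuming the three rails meet at the slideway center at 120 degrees (as stated in claim 1), a plausible reading is the law of cosines applied to each camera pair:

```python
import math

def baselines(L1, L2, L3, rail_angle_deg=120.0):
    """Baseline of each camera pair from the cameras' distances to the
    slideway center, assuming the rails meet at `rail_angle_deg`. With
    120-degree rails, cos(120 deg) = -1/2, so each baseline reduces to
        b = sqrt(Li^2 + Lj^2 + Li*Lj)   (law of cosines)."""
    c = math.cos(math.radians(rail_angle_deg))
    pair = lambda li, lj: math.sqrt(li * li + lj * lj - 2.0 * li * lj * c)
    return pair(L1, L2), pair(L2, L3), pair(L3, L1)
```

For example, three cameras at equal distance L from the center give three equal baselines of sqrt(3)·L, an equilateral camera triangle.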
The traditional disparity calculation is shown in Figure 6a. In the trinocular vision system, the optical axes of the cameras are parallel, but the geometry of each binocular vision subsystem changes with the camera positions. In this case, the purely horizontal disparity of the conventional method no longer applies; the disparity must instead be calculated along the line connecting the photo centers, and it appears as the distance between the coordinates of the same point in the two images. As shown in Figure 6b, the disparity P and the spatial coordinates of a matched point are calculated as follows:
(xl, yl) and (xu, yu) respectively denote the coordinates of the current pixel in the original left image and upper image; (xtl, ytl) and (xtu, ytu) the coordinates of the current pixel in the left and upper images after rotation; a the image rotation angle; b the baseline, taking the value b1, b2 or b3; (xc, yc, zc) the spatial coordinates of the current pixel; (x, y) takes the value (xl, yl) or (xu, yu).
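A minimal triangulation sketch consistent with the quantities defined above: rotate both image points by the subsystem angle a so the baseline becomes horizontal, take the disparity as the difference of the rotated abscissas, and apply the pinhole relation z = b·f/d. The exact rotation and sign conventions are assumptions, since the patent's formulas are not reproduced in this extract:

```python
import math

def triangulate(xl, yl, xu, yu, a_deg, b, f, cx=0.0, cy=0.0):
    """Rotate the two image points so the line joining the photo centers is
    horizontal, take the disparity d along that direction, and triangulate
    with z = b*f/d, x = x_t*z/f, y = y_t*z/f (principal point at (cx, cy))."""
    a = math.radians(a_deg)
    def rot(x, y):
        return ((x - cx) * math.cos(a) + (y - cy) * math.sin(a),
                -(x - cx) * math.sin(a) + (y - cy) * math.cos(a))
    xtl, ytl = rot(xl, yl)
    xtu, ytu = rot(xu, yu)
    d = xtl - xtu                      # disparity along the baseline direction
    zc = b * f / d
    xc = xtl * zc / f
    yc = ytl * zc / f
    return xc, yc, zc
```

With a = 0 this reduces to the textbook horizontal-baseline case (Figure 6a).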
7) Measure the target size: calculate the length, width and height of the target based on the intermediate point groups obtained in step 4).
The spatial coordinates of the intermediate point groups from step 4) are obtained with the formulas of step 6); the dimension information is calculated in each binocular vision subsystem, and the final measurement, i.e. the length, width and height of the measured object, is obtained by fusing the dimension information of the three binocular vision subsystems. The specific steps are:
(71) examine each point in every intermediate point group; the points in one connected domain form a subset;
(72) calculate the Euclidean distances between all intermediate points in the subset;
(73) calculate the length and spatial position of the edge from the Euclidean distances;
(74) repeat the above steps in the three binocular vision subsystems;
(75) for each edge, retain the maximum measured value among the three binocular vision subsystems;
(76) in each binocular vision subsystem, examine all edges one by one: according to their spatial positions and angles, judge whether the current edge can represent the length, width or height of the target object; the criterion is whether the current edge is the longest edge in its current direction. If the criterion is met, examine the spatial direction of the selected edges, and choose the length information of the edges whose spatial angle is 90 degrees as the target size information.
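Steps (71)-(75) can be sketched as follows. Reading "calculate the length of the edge from the Euclidean distances" as taking the largest pairwise distance within a connected subset is one plausible interpretation, not stated explicitly in the patent:

```python
import math
from itertools import combinations

def edge_length(points):
    """Length of an edge estimated as the largest Euclidean distance between
    any two intermediate points of one connected subset (steps 72-73).
    `points` is a list of 3D coordinates obtained from triangulation."""
    return max(math.dist(p, q) for p, q in combinations(points, 2))

def fuse_measurements(per_subsystem):
    """Step (75): for each edge, keep the maximum measured value among the
    three binocular vision subsystems. `per_subsystem` is a list of three
    lists of edge lengths, aligned by edge index."""
    return [max(vals) for vals in zip(*per_subsystem)]
```

Taking the maximum across subsystems hedges against a subsystem whose viewpoint foreshortens the edge and therefore underestimates its length.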

Claims (10)

1. An intelligent target detection and measurement system based on trinocular vision, characterized by comprising three cameras, three slideways and a processing module; wherein the three slideways are connected at one end and distributed at 120 degrees to one another; the three cameras are respectively arranged on the slideways and move along them; and the processing module is connected with the cameras and performs target detection and target measurement.
2. The intelligent target detection and measurement system based on trinocular vision according to claim 1, characterized by further comprising stepper motors, which are controlled by the processing module to drive the cameras to move; the camera position information is recorded to obtain complete images of the object.
3. The intelligent target detection and measurement system based on trinocular vision according to claim 1, characterized in that the processing module comprises a target detection module and a target measurement module; the target detection module determines the target position in the image, and the target measurement module obtains the dimension information of the target.
4. The intelligent target detection and measurement system based on trinocular vision according to claim 3, characterized in that the target detection module comprises a salient edge detection module, a feature point matching module, a seed-growth-based target detection module and an edge matching module; wherein the salient edge detection module detects the edge information of the grayscale image with the Canny algorithm and computes the saliency of the edge information with the difference-of-Gaussians (DoG) algorithm to obtain the salient edges; the seed-growth-based target detection module takes the salient edges as boundary conditions and the matched feature points as seed points, and performs target detection with the seed growth method; the edge matching module performs edge detection on the image detected by the seed growth method and divides each edge according to its edge angle: a pixel whose edge angle is less than 100 degrees is an end point of an edge, yielding local edges; a pixel in a local edge whose edge angle is less than 160 degrees is an intermediate point; edge matching is then performed, matching the sub-edge between two intermediate points according to the position of the edge and the number of pixels; if a sub-edge cannot be matched, the corresponding intermediate point is a superfluous intermediate point and is deleted, yielding the intermediate point groups; all successfully matched sub-edges are connected, and the result of all sub-edge matchings is taken as the matching result of the current entire edge;
The target measurement module comprises a camera calibration module, a baseline and disparity computation module and a target size measurement module, wherein the camera calibration module completes the calibration of each camera in the trinocular vision system and rectifies the distorted images; the baseline and disparity computation module moves the cameras with the stepper motors and calculates the baselines and the disparity; and the target size measurement module obtains the length, width and height of the target from the intermediate point groups obtained by the edge matching module.
5. An intelligent target detection and measurement method based on trinocular vision, characterized by comprising the following steps:
Target detection stage:
1) detect salient edges: detect the edge information of the grayscale image with the Canny algorithm, compute the saliency of the edge information with the difference-of-Gaussians (DoG) algorithm, and obtain the salient edges;
2) match feature points;
3) perform target detection with the seed growth method: with the salient edges as boundary conditions and the matched feature points as seed points, perform target detection based on the seed growth method;
4) perform edge matching and obtain the intermediate point groups: perform edge detection on the image detected by the seed growth method, and divide each edge according to its edge angle; a pixel whose edge angle is less than 100 degrees is an end point of an edge, yielding local edges; a pixel in a local edge whose edge angle is less than 160 degrees is an intermediate point; perform edge matching, matching the sub-edge between two intermediate points according to the position of the edge and the number of pixels; if a sub-edge cannot be matched, the corresponding intermediate point is a superfluous intermediate point; delete the superfluous intermediate points to obtain the intermediate point groups; connect all successfully matched sub-edges, and take the result of all sub-edge matchings as the matching result of the current entire edge;
Target measurement stage:
5) calibrate the cameras: complete the calibration of each camera in the trinocular vision system and rectify the distorted images;
6) move the cameras with the stepper motors, and calculate the baselines and the disparity;
7) measure the target size: calculate the length, width and height of the target based on the intermediate point groups obtained in step 4).
6. The intelligent target detection and measurement method based on trinocular vision according to claim 5, characterized in that the specific steps of detecting salient edges in step 1) are:
(11) detect the edge information of the grayscale image with the Canny algorithm, and extract the edges whose length is greater than a set threshold as valid edges;
(12) compute the saliency of the valid edges with the difference-of-Gaussians (DoG) algorithm; the criterion of valid-edge saliency is the number of salient points of each valid edge, which is proportional to the saliency of the valid edge;
(13) extract the valid edges whose saliency is greater than a set threshold as salient edges.
7. The intelligent target detection and measurement method based on trinocular vision according to claim 5, characterized in that the specific steps of matching feature points in step 2) are:
(21) detect feature points with the SURF algorithm;
(22) calculate the Euclidean distances between the feature points of the left and right images of each binocular vision subsystem formed by the three cameras pairwise; the feature-point pairs with the minimum Euclidean distance are taken as preliminary matches;
(23) calculate the frequency of occurrence of the slopes of the lines joining all preliminary match pairs, delete the preliminary matches whose slope frequency is lower than a set threshold, and complete the feature point matching.
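Steps (22)-(23) — keeping only preliminary matches whose match-line slope occurs frequently — can be sketched as below; the slope binning width and the frequency threshold are illustrative assumptions not given in the claim:

```python
from collections import Counter

def filter_matches_by_slope(matches, bin_size=0.05, min_count=3):
    """Keep preliminary matches whose match-line slope occurs frequently;
    rare slopes are treated as mismatches (step 23). A match is a pair of
    points ((x1, y1), (x2, y2)) from the two images of one subsystem."""
    def slope_bin(m):
        (x1, y1), (x2, y2) = m
        dx = x2 - x1
        slope = (y2 - y1) / dx if dx else float('inf')
        return round(slope / bin_size)     # quantize slopes into bins
    freq = Counter(slope_bin(m) for m in matches)
    return [m for m in matches if freq[slope_bin(m)] >= min_count]
```

The rationale: for rectified or near-parallel cameras, correct match lines share a common slope, so an outlier slope signals a false preliminary match.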
8. The intelligent target detection and measurement method based on trinocular vision according to claim 5, characterized in that performing target detection with the seed growth method in step 3) combines the salient edges, the matched feature points and the colour information to achieve rapid growth of the target detection region, with the specific steps:
(31) connect the salient edges according to their direction and trend;
(32) label the matched feature points as seed points according to the feature point matching result;
(33) using colour space information, calculate the similarity between the seed point region and the region being grown;
(34) judge, according to the similarity, whether the grown region is merged into the seed point region.
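Steps (31)-(34) can be sketched as a colour-based region growing that never crosses salient-edge pixels. The similarity measure (Euclidean distance to the running region mean), the threshold and 4-connectivity are assumptions, not details from the claim:

```python
import numpy as np
from collections import deque

def grow_region(img, seeds, edge_mask, thresh=20.0):
    """Grow a region from matched feature points (seeds), never crossing
    salient-edge pixels (edge_mask True), merging a neighbour when its colour
    is within `thresh` of the running region mean (steps 33-34)."""
    h, w = img.shape[:2]
    region = np.zeros((h, w), dtype=bool)
    q = deque(seeds)                   # seeds given as (x, y) pixel coords
    mean = np.mean([img[y, x] for x, y in seeds], axis=0).astype(float)
    n = len(seeds)
    for x, y in seeds:
        region[y, x] = True
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not region[ny, nx] and not edge_mask[ny, nx]:
                if np.linalg.norm(img[ny, nx].astype(float) - mean) <= thresh:
                    region[ny, nx] = True
                    mean = (mean * n + img[ny, nx]) / (n + 1)  # update running mean
                    n += 1
                    q.append((nx, ny))
    return region
```

Using the salient edges as a hard boundary (the `edge_mask`) is what lets the growth stop at the object contour rather than at a colour threshold alone.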
9. The intelligent target detection and measurement method based on trinocular vision according to claim 5, characterized in that in step 6) the number of steps moved is recorded by the stepper motors; from the step count, i.e. the distance of each camera from the slideway center, the baselines of the three binocular vision subsystems are calculated by the Pythagorean theorem; and the disparity along the direction connecting the two picture centers in each binocular vision subsystem is calculated.
10. The intelligent target detection and measurement method based on trinocular vision according to claim 5, characterized in that the specific steps of measuring the target size in step 7) are:
(71) examine each point in the intermediate point groups obtained in step 4); the points in one connected domain form a subset;
(72) calculate the Euclidean distances between all intermediate points in the subset;
(73) calculate the length and spatial position of the edge from the Euclidean distances;
(74) repeat the above steps in the three binocular vision subsystems;
(75) for each edge, retain the maximum measured value among the three binocular vision subsystems;
(76) in each binocular vision subsystem, examine all edges one by one: according to their spatial positions and angles, judge whether the current edge can represent the length, width or height of the target object; the criterion is whether the current edge is the longest edge in its current direction; if the criterion is met, examine the spatial direction of the selected edges, and choose the length information of the edges whose spatial angle is 90 degrees as the target size information.
CN201810930141.1A 2018-08-15 2018-08-15 Intelligent target detection and measurement system and method based on trinocular vision Active CN109211198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810930141.1A CN109211198B (en) 2018-08-15 2018-08-15 Intelligent target detection and measurement system and method based on trinocular vision


Publications (2)

Publication Number Publication Date
CN109211198A true CN109211198A (en) 2019-01-15
CN109211198B CN109211198B (en) 2021-01-01

Family

ID=64988767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810930141.1A Active CN109211198B (en) 2018-08-15 2018-08-15 Intelligent target detection and measurement system and method based on trinocular vision

Country Status (1)

Country Link
CN (1) CN109211198B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109405737A (en) * 2018-10-10 2019-03-01 湖南科技大学 Camera system and measurement method towards large-scale metrology
CN110470216A (en) * 2019-07-10 2019-11-19 湖南交工智能技术有限公司 A kind of three-lens high-precision vision measurement method and device
CN110992291A (en) * 2019-12-09 2020-04-10 国网安徽省电力有限公司检修分公司 Distance measuring method, system and storage medium based on trinocular vision
CN111832542A (en) * 2020-08-15 2020-10-27 武汉易思达科技有限公司 Three-eye visual identification and positioning method and device
CN112734712A (en) * 2020-12-31 2021-04-30 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) Imaging detection method and system for health state of ship vibration equipment
CN112785647A (en) * 2021-01-27 2021-05-11 北京宇航时代科技发展有限公司 Three-eye stereo image detection method and system
CN113040909A (en) * 2021-02-26 2021-06-29 张志宏 Optical tracking system and method based on near-infrared three-eye stereo vision
CN113450350A (en) * 2021-07-28 2021-09-28 张晓寒 Detection method based on texture features of cloth area
WO2023115561A1 (en) * 2021-12-24 2023-06-29 深圳市大疆创新科技有限公司 Movement control method and apparatus for movable platform, and movable platform

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1917963A (en) * 2003-12-23 2007-02-21 奎斯有限公司 Method for automatically applying and controlling a structure applicable on a substrate and device for carrying out said method
CN106506927A (en) * 2016-12-09 2017-03-15 努比亚技术有限公司 A kind of terminal and the method shot using terminal
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 A kind of intelligent target detection and dimension measurement method based on human vision model
CN107966135A (en) * 2017-11-15 2018-04-27 北京化工大学 A kind of multi-vision visual measuring method based on dome structure
US20180176541A1 (en) * 2016-12-20 2018-06-21 Gopro, Inc. Compact array of imaging devices with supplemental imaging unit
CN108323238A (en) * 2018-01-23 2018-07-24 深圳前海达闼云端智能科技有限公司 More mesh camera systems, terminal device and robot


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Haopeng: "Research on Several Key Problems of Visual Inspection Systems", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
Li Qingwu et al.: "Salient Object Detection Method Based on Binocular Vision", Acta Optica Sinica *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109405737A (en) * 2018-10-10 2019-03-01 湖南科技大学 Camera system and measurement method towards large-scale metrology
CN110470216A (en) * 2019-07-10 2019-11-19 湖南交工智能技术有限公司 A kind of three-lens high-precision vision measurement method and device
CN110992291B (en) * 2019-12-09 2023-07-21 国网安徽省电力有限公司超高压分公司 Ranging method, system and storage medium based on three-eye vision
CN110992291A (en) * 2019-12-09 2020-04-10 国网安徽省电力有限公司检修分公司 Distance measuring method, system and storage medium based on trinocular vision
CN111832542A (en) * 2020-08-15 2020-10-27 武汉易思达科技有限公司 Three-eye visual identification and positioning method and device
CN111832542B (en) * 2020-08-15 2024-04-16 武汉易思达科技有限公司 Tri-vision identifying and positioning device
CN112734712A (en) * 2020-12-31 2021-04-30 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) Imaging detection method and system for health state of ship vibration equipment
CN112734712B (en) * 2020-12-31 2022-07-01 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) Imaging detection method and system for health state of ship vibration equipment
CN112785647A (en) * 2021-01-27 2021-05-11 北京宇航时代科技发展有限公司 Three-eye stereo image detection method and system
CN113040909A (en) * 2021-02-26 2021-06-29 张志宏 Optical tracking system and method based on near-infrared three-eye stereo vision
CN113450350B (en) * 2021-07-28 2022-08-30 张晓寒 Detection method based on texture features of cloth area
CN113450350A (en) * 2021-07-28 2021-09-28 张晓寒 Detection method based on texture features of cloth area
WO2023115561A1 (en) * 2021-12-24 2023-06-29 深圳市大疆创新科技有限公司 Movement control method and apparatus for movable platform, and movable platform

Also Published As

Publication number Publication date
CN109211198B (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN109211198A (en) A kind of intelligent Target detection and measuring system and method based on trinocular vision
CN110781827B (en) Road edge detection system and method based on laser radar and fan-shaped space division
CN110443836A (en) A kind of point cloud data autoegistration method and device based on plane characteristic
CN104537659B (en) The automatic calibration method and system of twin camera
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN107248159A (en) A kind of metal works defect inspection method based on binocular vision
CN109100741A (en) A kind of object detection method based on 3D laser radar and image data
CN109444911A (en) A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion
CN107346550B (en) It is a kind of for the three dimensional point cloud rapid registering method with colouring information
CN110297232A (en) Monocular distance measuring method, device and electronic equipment based on computer vision
CN106969706A (en) Workpiece sensing and three-dimension measuring system and detection method based on binocular stereo vision
CN103793708B (en) A kind of multiple dimensioned car plate precise positioning method based on motion correction
CN104517095B (en) A kind of number of people dividing method based on depth image
CN110334678A (en) A kind of pedestrian detection method of view-based access control model fusion
CN110211108A (en) A kind of novel abnormal cervical cells automatic identifying method based on Feulgen colouring method
CN106651882B (en) A kind of bird's nest impurities identification and detection method and device based on machine vision
CN108470338B (en) A kind of water level monitoring method
CN110378921B (en) Intelligent identification method for substrate layer boundary of channel based on floating mud rheological property and gray level co-occurrence matrix
CN104484680B (en) A kind of pedestrian detection method of multi-model multi thresholds combination
CN107016353A (en) A kind of method and system of variable resolution target detection and identification integration
CN102663723A (en) Image segmentation method based on color sample and electric field model
CN110110793A (en) Binocular image fast target detection method based on double-current convolutional neural networks
CN109886170A (en) A kind of identification of oncomelania intelligent measurement and statistical system
CN105809678B (en) A kind of line segment feature global registration method between two views under short base line condition
CN108320799B (en) Image analysis and recognition method for lateral flow paper strip disease diagnosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant