CN114255286B - Target size measuring method based on multi-view binocular vision perception - Google Patents


Info

Publication number
CN114255286B
Authority
CN
China
Prior art keywords
binocular
formula
groups
coordinate system
group
Prior art date
Legal status
Active
Application number
CN202210184835.1A
Other languages
Chinese (zh)
Other versions
CN114255286A (en)
Inventor
郑欣
彭靓
吴昊
李庆武
马云鹏
周亚琴
Current Assignee
Changzhou Robost Robot Co ltd
Original Assignee
Changzhou Robost Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Changzhou Robost Robot Co ltd
Priority to CN202210184835.1A
Publication of CN114255286A
Application granted
Publication of CN114255286B
Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Abstract

The invention relates to the technical field of image processing and discloses a target size measuring method based on multi-view binocular vision perception, which comprises the following steps: acquiring the parameters of two groups of binocular cameras based on the Zhang calibration method; shooting a target with the two groups of binocular cameras to obtain two groups of binocular images, and correcting the two groups of binocular images with an improved Bouguet algorithm so that they satisfy the epipolar constraint; performing stereo matching on the two groups of binocular images respectively to obtain their disparities; segmenting the binocular images to obtain the target regions and two groups of target three-dimensional point clouds; fusing the three-dimensional data points obtained in the two local coordinate systems into the same coordinate system; and determining the contour of the target region and measuring the contour length with the fused three-dimensional point cloud. The invention improves the precision of target contour dimension measurement and has great application value in industry.

Description

Target size measuring method based on multi-view binocular vision perception
Technical Field
The invention relates to the technical field of image processing, in particular to a target size measuring method based on multi-view binocular vision perception.
Background
Binocular vision imitates the mechanism of human stereoscopic vision; because the technique is efficient, requires simple equipment and has low cost, it is widely applied in many fields. In the industrial field, binocular vision enables non-contact detection and monitoring of products without disturbing the motion state of the target, so it is often used to perform three-dimensional reconstruction of the target and, on that basis, distance measurement, size measurement and similar tasks.
at present, a single group of binocular cameras are mostly adopted for binocular vision-based three-dimensional reconstruction, binocular images are corrected by using camera parameter values obtained through calibration, and three-dimensional point cloud is obtained through stereo matching. The stereo matching and the three-dimensional reconstruction based on the single group of binocular cameras have low precision at the sheltering and shadow positions, neglect the possibility that the target object has different shape characteristics at each angle, have limitations on the target three-dimensional reconstruction, and are difficult to ensure the accuracy of the subsequent contour dimension measurement. According to the method for measuring the contour dimension of the binocular vision target with multiple visual angles, the target image is divided and subjected to stereo matching respectively, two groups of three-dimensional point clouds can be obtained, the two groups of data points are unified, the problem that target information obtained by a single camera is incomplete is solved, the precision of target three-dimensional reconstruction can be effectively improved, the precision of target contour dimension measurement is improved, and the method has important research value and significance in industry.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a target size measuring method based on multi-view binocular vision perception, which can effectively improve the precision of target three-dimensional reconstruction.
In order to achieve the purpose, the invention provides the following technical scheme: a target size measuring method for multi-view binocular vision perception comprises the following steps:
1. Two groups of binocular camera parameters are obtained based on the Zhang calibration method, including the intrinsic parameter matrices of the four cameras and the rotation and translation matrices of the three camera pairs (the first binocular pair C1L and C1R, the second binocular pair C2L and C2R, and the cross-group pair C1L and C2L);
2. shooting a target by using two groups of binocular cameras to obtain two groups of binocular images, and correcting the two groups of binocular images by using an improved Bouguet algorithm to enable the two groups of binocular images to meet epipolar constraint;
3. performing stereo matching on the two groups of binocular images respectively to obtain the parallax of the two groups of binocular images;
4. dividing the binocular image to obtain a target area of the binocular image and obtain two groups of target three-dimensional point clouds;
5. carrying out three-dimensional data fusion on data points obtained from the two groups of local coordinate systems and unifying the data points to the same coordinate system;
6. determining the contour of the target area, and utilizing the fused three-dimensional point cloud to realize the length measurement of the contour.
The invention provides a target size measuring method based on multi-view binocular vision perception, which has the beneficial effects that:
1. In the design of the camera model, the invention exploits the efficiency, simple equipment and low cost of binocular vision while taking into account the limited shooting range of a single binocular pair: two groups of binocular cameras are arranged to photograph the target object from different viewpoints, so that comprehensive appearance information of the target object is obtained;
2. the invention solves the problem of cooperation of three-dimensional data obtained by a multi-view camera, unifies a plurality of groups of three-dimensional point clouds obtained by the multi-view camera into a world coordinate system to generate a point cloud picture with a complete target, improves the precision of three-dimensional reconstruction and further improves the precision of contour dimension measurement.
Drawings
FIG. 1 is a schematic view of a multi-view binocular vision target contour dimension measurement algorithm of the present invention;
FIG. 2 is a schematic diagram of two binocular camera models according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1, please refer to fig. 1-2, the present invention provides a technical solution: a target size measuring method for multi-view binocular vision perception comprises the following steps:
step 1, shooting a plurality of groups of calibration plate images from a plurality of angles by using an image acquisition device, and acquiring parameters of a camera based on a Zhang calibration method, wherein the method comprises the following specific steps:
Step 11, the image acquisition device uses four cameras of the same specification to form two groups of binocular cameras, the first group C1L and C1R and the second group C2L and C2R; the device acquires calibration plate images from multiple angles, ensuring that the calibration plate is clear and complete in the view of every camera;
Step 12, the four cameras are calibrated with the Zhang calibration method to obtain the intrinsic and extrinsic camera parameters:
(1a) each of the four cameras C1L, C1R, C2L and C2R is calibrated individually to obtain its intrinsic parameters;
(1b) the two camera groups (C1L, C1R) and (C2L, C2R) are calibrated to obtain the rotation and translation matrices of the two groups, defined as (R1, T1) and (R2, T2);
(1c) the pair (C1L, C2L) is calibrated to obtain the rotation matrix R12 and the translation matrix T12 between the two left cameras.
Step 21, the image acquisition device captures the target to be measured, giving two groups of binocular images: the binocular image taken by the first camera group (C1L, C1R) is denoted (I1L, I1R) and the binocular image taken by the second camera group (C2L, C2R) is denoted (I2L, I2R). The world coordinate system is defined as O_w; the coordinate system of the first-group left camera C1L is O_1L and coincides with O_w; the coordinate system of the second-group left camera C2L is O_2L; the coordinate system of the first-group right camera C1R is O_1R, and the coordinate system of the second-group right camera C2R is O_2R.
Step 22, with the Bouguet algorithm, the rotation matrix R1 of the first group of binocular images (I1L, I1R) is used to perform a first horizontal rectification of (I1L, I1R), with the following specific steps:
(2a) first, the rotation matrix R1 of (I1L, I1R) is split into composite matrices r_l and r_r for the left and right cameras, where r_l = R1^(1/2) and r_r = R1^(-1/2); the rotation matrix R1 of the first binocular group is thus divided into two rotations of opposite direction, which is equivalent to rotating the left camera by half of R1 in one direction and the right camera by half of R1 in the opposite direction, so that the image planes of the left and right cameras are brought into the same plane;
(2b) a rotation matrix R_rect is created from the direction of the translation vector between the two cameras so that the baseline becomes parallel to the imaging plane;
R_rect = [e1^T; e2^T; e3^T]   Formula (1)
where e1 = T / ||T|| is the epipole direction, i.e. the unit vector along the translation vector; e2 = [-T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2) is a vector in the image plane, with T_x and T_y the components of the translation vector along the x and y directions; and e3 = e1 × e2 is the vector perpendicular to the plane in which e1 and e2 lie;
(2c) the overall rotation matrices R_L and R_R of the left and right cameras are obtained according to Formula (2); the first group of left and right camera coordinate systems O_1L and O_1R are multiplied by their respective overall rotation matrices so that the main optical axes of the left and right cameras become parallel and the image plane becomes parallel to the baseline; after this rotation the coordinate systems of the first group of left and right cameras are O'_1L and O'_1R;
R_L = R_rect · r_l,  R_R = R_rect · r_r   Formula (2)
Step 23, O'_1L and O'_1R are rotated simultaneously about their respective optical centres to obtain the new coordinate systems O''_1L and O''_1R; at this point the axes of O''_1L and O''_1R are aligned with the world coordinate system O_w; after the rotation, the row-aligned images (I'_1L, I'_1R) are obtained.
Step 24, step 22 is repeated for the second group of binocular images (I2L, I2R) to perform their first rectification and obtain the overall rotation matrices of (I2L, I2R); the corrected coordinate systems of the second group of left and right cameras are O'_2L and O'_2R;
Step 25, step 23 is repeated: O'_2L and O'_2R are rotated simultaneously about their respective optical centres to obtain the new coordinate systems O''_2L and O''_2R, whose axes are then aligned; the row-aligned images (I'_2L, I'_2R) are obtained after the rotation.
Steps 24 and 25 correspond to steps 22 and 23 respectively; the operations are identical, the only difference being that steps 24 and 25 act on the second group of binocular images while steps 22 and 23 act on the first group.
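As an illustration of the rectification in steps 22 to 25, the sketch below builds the half-rotations of step (2a), the rotation R_rect of Formula (1) and the overall rotations of Formula (2) for one binocular pair; in practice cv2.stereoRectify performs an equivalent construction together with the re-mapping of the images. The function name and argument layout are assumptions of this sketch.

    import cv2
    import numpy as np

    def bouguet_rectify_rotations(R, T):
        # (2a) split the inter-camera rotation R into two half-rotations of opposite direction
        rvec, _ = cv2.Rodrigues(R)
        r_l, _ = cv2.Rodrigues(rvec / 2.0)     # left camera rotates by half of R
        r_r, _ = cv2.Rodrigues(-rvec / 2.0)    # right camera rotates by the opposite half
        # (2b) Formula (1): rectifying rotation built from the translation (baseline) direction
        t = np.asarray(T, dtype=float).reshape(3)
        e1 = t / np.linalg.norm(t)                                   # along the baseline
        e2 = np.array([-t[1], t[0], 0.0]) / np.hypot(t[0], t[1])     # in the image plane
        e3 = np.cross(e1, e2)                                        # perpendicular to e1 and e2
        R_rect = np.vstack([e1, e2, e3])
        # (2c) Formula (2): overall rotations applied to the left and right cameras
        return R_rect @ r_l, R_rect @ r_r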
Stereo matching is performed on the two groups of binocular images (I'_1L, I'_1R) and (I'_2L, I'_2R) separately to generate the disparity maps D1 and D2. An improved stereo matching algorithm based on AD-Census is adopted, divided into four steps: matching cost computation, cost aggregation, disparity computation and disparity optimization. Taking (I'_1L, I'_1R) as an example, the specific steps are as follows:
Step 31, the initial matching cost is calculated. The Census matching cost C_Census(p, d), defined in Formula (3), is a similarity measure between the Census transforms of pixel p in I'_1L and the pixel q in I'_1R that corresponds to p at disparity d;
C_Census(p, d) = Hamming(Cen_L(p) ⊕ Cen_R(q))   Formula (3)
where Cen_L(p) and Cen_R(q) are the Census-transformed codes of pixel p in the left image I'_1L and of pixel q in the right image I'_1R, and ⊕ denotes exclusive-or;
The AD cost C_AD(p, d) is defined as shown in Formula (4):
C_AD(p, d) = (1/3) · Σ_{i ∈ {R,G,B}} | I_L^i(p) − I_R^i(q) |   Formula (4)
where I_L^i(p) and I_R^i(q) are the gray values of pixel p in the left image I'_1L and of pixel q in the right image I'_1R mapped in RGB space; the final matching cost C(p, d) is shown in Formula (5);
C(p, d) = ρ(C_Census(p, d), λ_Census) + ρ(C_AD(p, d), λ_AD), with ρ(c, λ) = 1 − exp(−c / λ)   Formula (5)
where λ_Census and λ_AD are control parameters for the Census matching cost and the AD matching cost respectively; the two robust terms ρ(C_Census(p, d), λ_Census) and ρ(C_AD(p, d), λ_AD) are the contributions of the Census cost and the AD cost, and Formula (5) is equivalent to adding them; when λ_Census and λ_AD are both positive, C(p, d) of Formula (5) is confined to the range [0, 2];
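To make Formulas (3) to (5) concrete, the NumPy sketch below evaluates the combined AD-Census cost of every pixel for one candidate disparity d; the 3x3 Census window, the λ values and the image-border handling (wrap-around via np.roll) are assumptions of this sketch rather than parameters fixed by the invention.

    import numpy as np

    def census_transform(gray):
        # 3x3 Census transform: one bit per neighbour, set when the neighbour is darker than the centre
        bits = np.zeros(gray.shape, dtype=np.uint16)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                neighbour = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
                bits = (bits << 1) | (neighbour < gray).astype(np.uint16)
        return bits

    def ad_census_cost(left_rgb, right_rgb, left_gray, right_gray, d,
                       lam_census=30.0, lam_ad=10.0):
        # Formula (3): Hamming distance between the Census codes of p and q = p - d
        xor = census_transform(left_gray) ^ np.roll(census_transform(right_gray), d, axis=1)
        c_census = np.zeros(xor.shape, dtype=np.float64)
        for _ in range(16):                      # popcount of the 16-bit XOR word
            c_census += xor & 1
            xor = xor >> 1
        # Formula (4): mean absolute colour difference in RGB space
        c_ad = np.abs(left_rgb.astype(np.float64)
                      - np.roll(right_rgb, d, axis=1).astype(np.float64)).mean(axis=2)
        # Formula (5): robust combination; each term lies in [0, 1), so the sum lies in [0, 2)
        rho = lambda c, lam: 1.0 - np.exp(-c / lam)
        return rho(c_census, lam_census) + rho(c_ad, lam_ad)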
Step 32, the matching cost is smoothed by guided filtering, with the guided filter kernel used as the adaptive aggregation weight; the kernel function is defined as shown in Formula (6);
W_{p,q}(I) = (1 / |ω|²) · Σ_{k: (p,q) ∈ ω_k} ( 1 + (I_p − μ_k)(I_q − μ_k) / (σ_k² + ε) )   Formula (6)
where |ω| is the size of the window ω_k, μ_k and σ_k² are the mean and variance of the gray values of the pixels inside the window ω_k, and ε is an adjustment parameter. The aggregated matching cost is C'(p, d), where p is the pixel at the centre of the window, q is a pixel inside the window (a terminating pixel point), and I_p and I_q are the gray values of the two pixels, as shown in Formula (7);
C'(p, d) = Σ_{q ∈ N_p} W_{p,q}(I) · C(q, d)   Formula (7)
The cost aggregation is based on a cross window: N_p is the selected cross window and q ∈ N_p means that the terminating pixel q lies inside the selected cross window;
Step 33, for each pixel the candidate disparity with the lowest matching cost is selected as the disparity value d_p of that pixel, giving the initial disparity map D1_0 corresponding to the first group of binocular images, as shown in Formula (8);
d_p = argmin_{d ∈ [0, d_max]} C'(p, d)   Formula (8)
where d_max denotes the maximum disparity search range;
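The sketch below shows one way the aggregation of Formulas (6) and (7) and the winner-take-all selection of Formula (8) could be realised with the guided filter shipped in opencv-contrib; the cross-window constraint described above is not reproduced here, and the radius and eps values are illustrative assumptions.

    import numpy as np
    import cv2  # cv2.ximgproc.guidedFilter requires the opencv-contrib-python package

    def aggregate_and_select(cost_volume, guide_gray, radius=9, eps=1e-3):
        # Formulas (6)-(7): smooth each disparity slice of the cost volume with a guided filter,
        # using the left image as the guide so the kernel acts as the adaptive weight
        guide = guide_gray.astype(np.float32)
        aggregated = np.stack([
            cv2.ximgproc.guidedFilter(guide, cost_volume[d].astype(np.float32), radius, eps)
            for d in range(cost_volume.shape[0])])
        # Formula (8): winner-take-all selection of the lowest aggregated cost per pixel
        return np.argmin(aggregated, axis=0)

    # cost_volume[d] holds C(p, d) from Formula (5) for every pixel p and candidate disparity d.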
Step 34, occluded and non-occluded points are distinguished by a left-right consistency check, as shown in Formula (9);
| d_L(p) − d_R(p − d_L(p)) | ≤ 1   Formula (9)
where d_L(p) is the disparity value of pixel p in the left disparity map and d_R(p − d_L(p)) is the disparity value of the corresponding pixel in the right disparity map; when the difference between the two is greater than 1 pixel, the pixel is an occluded point. After the occluded and non-occluded points are obtained, the color-segmented regions are classified: a region is a reliable region when the following condition is satisfied and an unreliable region otherwise, as shown in Formula (10);
n ≥ τ · N and N ≥ c   Formula (10)
where N is the total number of pixels of the region, n is the number of non-occluded points in the region, c is a constant and τ is a proportionality coefficient. To obtain a better fitting effect, plane fitting is applied only to the reliable regions; the disparity plane equation is defined as shown in Formula (11);
d = a · u + b · v + c0   Formula (11)
where (u, v) are the pixel coordinates, d is the disparity value, and (a, b, c0) are the disparity plane parameters, which can be obtained by the weighted least squares method, as shown in Formula (12);
[a, b, c0]^T = (A^T W A)^(−1) A^T W d_m   Formula (12)
where the matrix A, the weight matrix W and the disparity vector d_m are assembled from the coordinates and disparity values of the correct matching points of the region, as shown in Formula (13);
A = [u_i  v_i  1], d_m = [d_i], W = diag(w_1, …, w_m), i = 1, …, m   Formula (13)
where m is the number of correct matching points in the region. The plane parameters (a, b, c0) obtained from Formula (12) and Formula (13) are substituted into Formula (11) to obtain the disparity value of every pixel in the reliable region, and finally the disparity map D1 corresponding to the first group of binocular images is obtained.
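A NumPy sketch of the left-right consistency check of Formula (9) and of the weighted least-squares plane fit of Formulas (11) to (13) follows; the rounding of d_L, the clipping at the image border and the choice of per-point weights are assumptions of this sketch.

    import numpy as np

    def lrc_occlusion_mask(disp_left, disp_right, thresh=1.0):
        # Formula (9): a pixel is occluded when |d_L(p) - d_R(p - d_L(p))| exceeds 1 pixel
        h, w = disp_left.shape
        u = np.tile(np.arange(w), (h, 1))
        u_right = np.clip(u - np.round(disp_left).astype(int), 0, w - 1)
        d_right = disp_right[np.arange(h)[:, None], u_right]
        return np.abs(disp_left - d_right) > thresh          # True marks an occluded point

    def fit_disparity_plane(us, vs, ds, weights):
        # Formulas (11)-(13): weighted least-squares fit of d = a*u + b*v + c0
        # over the m correct matching points of one reliable region
        A = np.stack([us, vs, np.ones_like(us)], axis=1).astype(np.float64)
        W = np.diag(weights.astype(np.float64))
        abc = np.linalg.solve(A.T @ W @ A, A.T @ W @ ds.astype(np.float64))
        return abc   # substitute (a, b, c0) into Formula (11) to refill the region's disparities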
Step 35, steps 31 to 34 are repeated to obtain the disparity map D2 corresponding to the second group of binocular images.
The binocular images (I'_1L, I'_1R) and (I'_2L, I'_2R) are segmented to obtain the target regions of the binocular images and the two groups of target three-dimensional point clouds, with the following specific steps:
Step 41, the target regions in the two groups of left and right images are finely segmented with the GrabCut algorithm combined with a manually drawn selection box, giving the image region segmentation results (S1L, S1R) and (S2L, S2R); the specific steps are as follows:
(4a) the image I'_1L taken by the first-group left camera is input; the user marks the target with a rectangular selection F to initialize the foreground: the area inside F is the foreground region T_F and the area outside F is the background region T_B; for each pixel z_i in I'_1L, if z_i ∈ T_F the pixel z_i is assigned the label α_i = 1, and if z_i ∈ T_B it is assigned the label α_i = 0;
(4b) the pixels of the foreground region T_F and of the background region T_B are each clustered into K classes with the K-means clustering algorithm;
(4c) the two label sets {α_i = 1} and {α_i = 0} are used to initialize the GMM parameters of the foreground and of the background respectively; every pixel of the foreground region T_F is substituted into the two GMMs to obtain the probabilities that the pixel belongs to the foreground region and to the background region, and the negative logarithm of these probabilities gives the region term;
(4d) the boundary term is computed from the Euclidean distances between all pairs of adjacent pixels of the foreground region T_F, the minimum of the energy is obtained with the max-flow/min-cut algorithm, and the label set of the pixels of the foreground region T_F is reassigned according to the computed result;
(4e) steps (4b) to (4d) are repeated until convergence, and the segmentation result S1L of the image I'_1L is output;
(4f) steps (4a) to (4e) are repeated to segment the target region in the remaining binocular images, finally giving the segmentation results (S1L, S1R) and (S2L, S2R) of the two groups of binocular images.
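Steps (4a) to (4e) are the iterative GrabCut loop, for which OpenCV provides cv2.grabCut; the sketch below shows how one image of the group could be segmented from a manually drawn rectangle. The function name, rectangle format and number of iterations are assumptions of this sketch.

    import cv2
    import numpy as np

    def segment_target(image_bgr, rect, iterations=5):
        # Steps (4a)-(4e): rectangle-initialised GrabCut segmentation of one image;
        # rect = (x, y, w, h) is the manually drawn bounding box around the target
        mask = np.zeros(image_bgr.shape[:2], np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)   # background GMM parameters
        fgd_model = np.zeros((1, 65), np.float64)   # foreground GMM parameters
        cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                    iterations, cv2.GC_INIT_WITH_RECT)
        # Pixels marked definite or probable foreground form the target region S
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)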
Step 42, based on the camera parameters obtained by the Zhang calibration method, let the focal length of the four cameras be f, the baseline distance of the first camera group (C1L, C1R) be B1, and the principal point coordinates of the first-group left image I'_1L be (u0, v0). For a pixel p = (u, v) in S1L, its disparity value taken from the disparity map D1 is d. As shown in Formula (14), the three-dimensional coordinates (X, Y, Z) of p in the first-group left camera coordinate system O_1L are obtained from the principle of triangular parallax. The three-dimensional coordinates in O_1L of all pixels in S1L are computed, giving all three-dimensional points of the target visible from the viewpoint of the first camera group (C1L, C1R), recorded as the first group of three-dimensional point clouds P1, with N1 denoting the number of points in the first group of three-dimensional point clouds;
Z = f · B1 / d,  X = (u − u0) · Z / f,  Y = (v − v0) · Z / f   Formula (14)
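A minimal sketch of Formula (14) follows: it converts the disparities of the segmented region S1L (or S2L) into a camera-frame point cloud, assuming f, B and (u0, v0) come from the calibration of step 1; the function name and argument layout are illustrative.

    import numpy as np

    def region_to_point_cloud(disparity, region_mask, f, B, u0, v0):
        # Formula (14): triangulate every segmented pixel with positive disparity
        # into (X, Y, Z) expressed in the left camera coordinate system
        v, u = np.nonzero((region_mask > 0) & (disparity > 0))
        d = disparity[v, u].astype(np.float64)
        Z = f * B / d
        X = (u - u0) * Z / f
        Y = (v - v0) * Z / f
        return np.stack([X, Y, Z], axis=1)      # N x 3 point cloud of the visible target surface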
Step 43, let the baseline distance of the second camera group (C2L, C2R) be B2 and the principal point coordinates of the second-group left image I'_2L be (u0', v0'); in the same way as step 42, the three-dimensional coordinates in the coordinate system O_2L of all pixels in S2L are computed, giving all three-dimensional points of the target visible from the viewpoint of the second camera group (C2L, C2R), recorded as the second group of three-dimensional point clouds P2, with N2 denoting the number of points in the second group of three-dimensional point clouds.
The three-dimensional point clouds P1 and P2 obtained in the two local coordinate systems are fused and unified into the world coordinate system O_w, giving one group of target collaborative three-dimensional point clouds, with the following steps:
Step 51, because the first-group left camera coordinate system O_1L coincides with the predefined world coordinate system O_w, the rotation matrix between O_1L and O_w is the identity matrix E and the translation matrix is 0; according to Formula (15), all three-dimensional data in the first group of three-dimensional point clouds P1 are rotated and translated into the world coordinate system O_w, giving the three-dimensional point set P1_w in the world coordinate system;
P1_w = E · P1 + 0   Formula (15)
Step 52, the rotation matrix and the translation matrix between the second-group left camera coordinate system O_2L and the first-group left camera coordinate system O_1L are R12 and T12 respectively; according to Formula (16), all three-dimensional data in the second group of three-dimensional point clouds P2 are rotated and translated into the world coordinate system O_w, giving the three-dimensional point set P2_w in the world coordinate system;
P2_w = R12 · P2 + T12   Formula (16)
Step 53, the three-dimensional point clouds in all local coordinate systems are unified into the world coordinate system, giving the complete three-dimensional point cloud P_w of the target.
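The fusion of steps 51 to 53 reduces to one rigid transform per cloud; a sketch under the assumption that P1 and P2 are N x 3 arrays and that (R12, T12) come from step (1c) is given below.

    import numpy as np

    def fuse_point_clouds(P1, P2, R12, T12):
        # Formula (15): O_1L coincides with the world frame, so P1 is used unchanged (R = E, T = 0)
        P1_w = P1
        # Formula (16): rotate and translate P2 from O_2L into the world frame
        P2_w = (np.asarray(R12) @ P2.T).T + np.asarray(T12).reshape(1, 3)
        return np.vstack([P1_w, P2_w])           # complete collaborative point cloud P_w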
The contour of the target region is determined, and the dimension measurement of the contour is realized with the fused three-dimensional point cloud.
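The description does not spell out how the contour length is computed from the fused cloud, so the sketch below is only one plausible realisation, not the patented procedure: the 2D contour of the segmented region is traced (assuming OpenCV 4.x) and the 3D distances between consecutive contour points, triangulated as in Formula (14), are accumulated; every name and parameter here is an assumption of the sketch.

    import cv2
    import numpy as np

    def contour_length_3d(region_mask, disparity, f, B, u0, v0):
        # Trace the outer 2D contour of the target region, triangulate each contour pixel
        # with Formula (14), and sum the 3D distances between consecutive contour points
        contours, _ = cv2.findContours(region_mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        pts = max(contours, key=cv2.contourArea).reshape(-1, 2)      # (u, v) along the contour
        d = disparity[pts[:, 1], pts[:, 0]].astype(np.float64)
        pts, d = pts[d > 0], d[d > 0]
        Z = f * B / d
        X = (pts[:, 0] - u0) * Z / f
        Y = (pts[:, 1] - v0) * Z / f
        xyz = np.stack([X, Y, Z], axis=1)
        return float(np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum())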
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A target size measuring method of multi-view binocular vision perception is characterized by comprising the following steps:
the method comprises the following steps of (1) acquiring two groups of binocular camera parameters based on a Zhang calibration method;
step (2), shooting a target by using two groups of binocular cameras to obtain two groups of binocular images, and correcting the two groups of binocular images by using an improved Bouguet algorithm to enable the two groups of binocular images to meet epipolar constraint;
step (3), performing stereo matching on the two groups of binocular images respectively to obtain the parallax of the two groups of binocular images;
wherein the step (3) comprises the following steps:
step 31, the initial matching cost is calculated; the Census matching cost C_Census(p, d), defined in Formula (3), is a similarity measure between the Census transforms of pixel p in the left image I'_1L and the pixel q in the right image I'_1R that corresponds to p at disparity d;
C_Census(p, d) = Hamming(Cen_L(p) ⊕ Cen_R(q))   Formula (3)
where Cen_L(p) and Cen_R(q) are the Census-transformed codes of pixel p in the left image I'_1L and of pixel q in the right image I'_1R, and ⊕ denotes exclusive-or;
the AD cost C_AD(p, d) is defined as shown in Formula (4):
C_AD(p, d) = (1/3) · Σ_{i ∈ {R,G,B}} | I_L^i(p) − I_R^i(q) |   Formula (4)
where I_L^i(p) and I_R^i(q) are the gray values of pixel p in the left image I'_1L and of pixel q in the right image I'_1R mapped in RGB space; the final matching cost C(p, d) is shown in Formula (5);
C(p, d) = ρ(C_Census(p, d), λ_Census) + ρ(C_AD(p, d), λ_AD), with ρ(c, λ) = 1 − exp(−c / λ)   Formula (5)
where λ_Census and λ_AD are control parameters for the Census matching cost and the AD matching cost respectively; the two robust terms are the contributions of the Census matching cost and the AD matching cost, and Formula (5) is equivalent to adding the two terms;
step 32, the matching cost is smoothed by guided filtering, with the guided filter kernel used as the adaptive aggregation weight; the kernel function is defined as shown in Formula (6);
W_{p,q}(I) = (1 / |ω|²) · Σ_{k: (p,q) ∈ ω_k} ( 1 + (I_p − μ_k)(I_q − μ_k) / (σ_k² + ε) )   Formula (6)
where |ω| is the size of the window ω_k, μ_k and σ_k² are the mean and variance of the gray values of the pixels inside the window ω_k, and ε is an adjustment parameter; the aggregated matching cost is C'(p, d), where p is the pixel at the centre of the window, q is a pixel inside the window (a terminating pixel point), and I_p and I_q are the gray values of the two pixels, as shown in Formula (7);
C'(p, d) = Σ_{q ∈ N_p} W_{p,q}(I) · C(q, d)   Formula (7)
the cost aggregation is based on a cross window: N_p is the selected cross window and q ∈ N_p means that the terminating pixel q lies inside the selected cross window;
step 33, for each pixel the candidate disparity with the lowest matching cost is selected as the disparity value d_p of that pixel, giving the initial disparity map D1_0 corresponding to the first group of binocular images, as shown in Formula (8);
d_p = argmin_{d ∈ [0, d_max]} C'(p, d)   Formula (8)
where d_max denotes the maximum disparity search range;
step 34, occluded and non-occluded points are distinguished by a left-right consistency check, as shown in Formula (9);
| d_L(p) − d_R(p − d_L(p)) | ≤ 1   Formula (9)
where d_L(p) is the disparity value of pixel p in the left disparity map and d_R(p − d_L(p)) is the disparity value of the corresponding pixel in the right disparity map; when the difference between the two is greater than 1 pixel, the pixel is an occluded point; after the occluded and non-occluded points are obtained, the color-segmented regions are classified: a region is a reliable region when the following condition is satisfied and an unreliable region otherwise, as shown in Formula (10);
n ≥ τ · N and N ≥ c   Formula (10)
where N is the total number of pixels of the region, n is the number of non-occluded points in the region, c is a constant and τ is a proportionality coefficient; to obtain a better fitting effect, plane fitting is applied only to the reliable regions, and the disparity plane equation is defined as shown in Formula (11);
d = a · u + b · v + c0   Formula (11)
where (u, v) are the pixel coordinates, d is the disparity value, and (a, b, c0) are the disparity plane parameters, which can be obtained by the weighted least squares method, as shown in Formula (12);
[a, b, c0]^T = (A^T W A)^(−1) A^T W d_m   Formula (12)
where the matrix A, the weight matrix W and the disparity vector d_m are assembled from the coordinates and disparity values of the correct matching points of the region, as shown in Formula (13);
A = [u_i  v_i  1], d_m = [d_i], W = diag(w_1, …, w_m), i = 1, …, m   Formula (13)
where m is the number of correct matching points in the region; the plane parameters (a, b, c0) obtained from Formula (12) and Formula (13) are substituted into Formula (11) to obtain the disparity value of every pixel in the reliable region, and finally the disparity map D1 corresponding to the first group of binocular images is obtained;
step 35, steps 31 to 34 are repeated to obtain the disparity map D2 corresponding to the second group of binocular images;
step (4), segmenting the binocular images to obtain the target regions of the binocular images and two groups of target three-dimensional point clouds;
step 5, carrying out three-dimensional data fusion on data points obtained from the two groups of local coordinate systems and unifying the data points to the same coordinate system;
and (6) determining the contour of the target area, and measuring the length of the contour by using the fused three-dimensional point cloud.
2. The method for measuring the size of the target based on the multi-view binocular vision perception according to claim 1, wherein the method comprises the following steps: in the step (1), two groups of binocular camera parameters are obtained based on a Zhang calibration method, and the method comprises the following steps:
step 11, the image acquisition device uses four cameras of the same specification to form two groups of binocular cameras, the first group C1L and C1R and the second group C2L and C2R; the device acquires calibration plate images from multiple angles, ensuring that the calibration plate is clear and complete in the view of every camera;
step 12, the four cameras are calibrated with the Zhang calibration method to obtain the intrinsic and extrinsic camera parameters;
(1a) each of the four cameras C1L, C1R, C2L and C2R is calibrated individually to obtain its intrinsic parameters;
(1b) the two camera groups (C1L, C1R) and (C2L, C2R) are calibrated to obtain the rotation and translation matrices of the two groups, defined as (R1, T1) and (R2, T2);
(1c) the pair (C1L, C2L) is calibrated to obtain the rotation matrix R12 and the translation matrix T12 between the two left cameras.
3. The method for measuring the size of the target for the multi-view binocular visual perception according to claim 2, wherein the method comprises the following steps: in the step (2), shooting a target by using two groups of binocular cameras to obtain two groups of binocular images, and correcting the two groups of binocular images by using an improved Bouguet algorithm respectively to enable the two groups of binocular images to meet epipolar constraint, wherein the method comprises the following steps:
step 21, the image acquisition device captures the target to be measured, giving two groups of binocular images: the binocular image taken by the first camera group (C1L, C1R) is denoted (I1L, I1R) and the binocular image taken by the second camera group (C2L, C2R) is denoted (I2L, I2R); the world coordinate system is defined as O_w; the coordinate system of the first-group left camera C1L is O_1L and coincides with O_w; the coordinate system of the second-group left camera C2L is O_2L; the coordinate system of the first-group right camera C1R is O_1R, and the coordinate system of the second-group right camera C2R is O_2R;
step 22, with the Bouguet algorithm, the rotation matrix R1 of the first group of binocular images (I1L, I1R) is used to perform a first horizontal rectification of (I1L, I1R), with the following specific steps:
(2a) first, the rotation matrix R1 of (I1L, I1R) is split into composite matrices r_l and r_r for the left and right cameras, where r_l = R1^(1/2) and r_r = R1^(-1/2);
(2b) a rotation matrix R_rect is created from the direction of the translation vector between the two cameras so that the baseline becomes parallel to the imaging plane;
R_rect = [e1^T; e2^T; e3^T]   Formula (1)
where e1 = T / ||T|| is the epipole direction, i.e. the unit vector along the translation vector; e2 = [-T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2) is a vector in the image plane, with T_x and T_y the components of the translation vector along the x and y directions; and e3 = e1 × e2 is the vector perpendicular to the plane in which e1 and e2 lie;
(2c) the overall rotation matrices R_L and R_R of the left and right cameras are obtained according to Formula (2); the first group of left and right camera coordinate systems O_1L and O_1R are multiplied by their respective overall rotation matrices so that the main optical axes of the left and right cameras become parallel and the image plane becomes parallel to the baseline; after this rotation the coordinate systems of the first group of left and right cameras are O'_1L and O'_1R;
R_L = R_rect · r_l,  R_R = R_rect · r_r   Formula (2)
step 23, O'_1L and O'_1R are rotated simultaneously about their respective optical centres to obtain the new coordinate systems O''_1L and O''_1R; at this point the axes of O''_1L and O''_1R are aligned with the world coordinate system O_w; after the rotation, the row-aligned images (I'_1L, I'_1R) are obtained;
step 24, step 22 is repeated for the second group of binocular images (I2L, I2R) to perform their first rectification and obtain the overall rotation matrices of (I2L, I2R); the corrected coordinate systems of the second group of left and right cameras are O'_2L and O'_2R;
step 25, step 23 is repeated: O'_2L and O'_2R are rotated simultaneously about their respective optical centres to obtain the new coordinate systems O''_2L and O''_2R, whose axes are then aligned; the row-aligned images (I'_2L, I'_2R) are obtained after the rotation.
4. The method for measuring the size of the target of the multi-view binocular vision perception according to claim 3, wherein the method comprises the following steps: in the step (4), the binocular image is segmented to obtain a target area of the binocular image, and two groups of target three-dimensional point clouds are obtained, which include:
step 41, the target regions in the two groups of left and right images are finely segmented with the GrabCut algorithm combined with a manually drawn selection box, giving the image region segmentation results (S1L, S1R) and (S2L, S2R); the specific steps are as follows:
(4a) the image I'_1L taken by the first-group left camera is input; the user marks the target with a rectangular selection F to initialize the foreground: the area inside F is the foreground region T_F and the area outside F is the background region T_B; for each pixel z_i in I'_1L, if z_i ∈ T_F the pixel z_i is assigned the label α_i = 1, and if z_i ∈ T_B it is assigned the label α_i = 0;
(4b) the pixels of the foreground region T_F and of the background region T_B are each clustered into K classes with the K-means clustering algorithm;
(4c) the two label sets {α_i = 1} and {α_i = 0} are used to initialize the GMM parameters of the foreground and of the background respectively; every pixel of the foreground region T_F is substituted into the two GMMs to obtain the probabilities that the pixel belongs to the foreground region and to the background region, and the negative logarithm of these probabilities gives the region term;
(4d) the boundary term is computed from the Euclidean distances between all pairs of adjacent pixels of the foreground region T_F, the minimum of the energy is obtained with the max-flow/min-cut algorithm, and the label set of the pixels of the foreground region T_F is reassigned according to the computed result;
(4e) steps (4b) to (4d) are repeated until convergence, and the segmentation result S1L of the image I'_1L is output;
(4f) steps (4a) to (4e) are repeated to segment the target region in the remaining binocular images, finally giving the segmentation results (S1L, S1R) and (S2L, S2R) of the two groups of binocular images;
step 42, based on the camera parameters obtained by the Zhang calibration method, let the focal length of the four cameras be f, the baseline distance of the first camera group (C1L, C1R) be B1, and the principal point coordinates of the first-group left image I'_1L be (u0, v0); for a pixel p = (u, v) in S1L, its disparity value taken from the disparity map D1 is d; as shown in Formula (14), the three-dimensional coordinates (X, Y, Z) of p in the first-group left camera coordinate system O_1L are obtained from the principle of triangular parallax; the three-dimensional coordinates in O_1L of all pixels in S1L are computed, giving all three-dimensional points of the target visible from the viewpoint of the first camera group (C1L, C1R), recorded as the first group of three-dimensional point clouds P1, with N1 denoting the number of points in the first group of three-dimensional point clouds;
Z = f · B1 / d,  X = (u − u0) · Z / f,  Y = (v − v0) · Z / f   Formula (14)
step 43, let the baseline distance of the second camera group (C2L, C2R) be B2 and the principal point coordinates of the second-group left image I'_2L be (u0', v0'); in the same way as step 42, the three-dimensional coordinates in the coordinate system O_2L of all pixels in S2L are computed, giving all three-dimensional points of the target visible from the viewpoint of the second camera group (C2L, C2R), recorded as the second group of three-dimensional point clouds P2, with N2 denoting the number of points in the second group of three-dimensional point clouds.
5. The method of claim 4, wherein the method comprises: in the step (5), three-dimensional data fusion of data points obtained from the two sets of local coordinate systems is unified to the same coordinate system, which includes:
step 51, because the first-group left camera coordinate system O_1L coincides with the defined world coordinate system O_w, the rotation matrix between O_1L and O_w is the identity matrix E and the translation matrix is 0; according to Formula (15), all three-dimensional data in the first group of three-dimensional point clouds P1 are rotated and translated into the world coordinate system O_w, giving the three-dimensional point set P1_w in the world coordinate system;
P1_w = E · P1 + 0   Formula (15)
step 52, the rotation matrix and the translation matrix between the second-group left camera coordinate system O_2L and the first-group left camera coordinate system O_1L are R12 and T12 respectively; according to Formula (16), all three-dimensional data in the second group of three-dimensional point clouds P2 are rotated and translated into the world coordinate system O_w, giving the three-dimensional point set P2_w in the world coordinate system;
P2_w = R12 · P2 + T12   Formula (16)
step 53, the three-dimensional point clouds in all local coordinate systems are unified into the world coordinate system, giving the complete three-dimensional point cloud P_w of the target.
6. The method of claim 5, wherein the method comprises: determining the outline of the target area, and realizing the dimension measurement of the outline by utilizing the fused three-dimensional point cloud.
CN202210184835.1A 2022-02-28 2022-02-28 Target size measuring method based on multi-view binocular vision perception Active CN114255286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210184835.1A CN114255286B (en) 2022-02-28 2022-02-28 Target size measuring method based on multi-view binocular vision perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210184835.1A CN114255286B (en) 2022-02-28 2022-02-28 Target size measuring method based on multi-view binocular vision perception

Publications (2)

Publication Number Publication Date
CN114255286A CN114255286A (en) 2022-03-29
CN114255286B true CN114255286B (en) 2022-05-13

Family

ID=80800014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210184835.1A Active CN114255286B (en) 2022-02-28 2022-02-28 Target size measuring method based on multi-view binocular vision perception

Country Status (1)

Country Link
CN (1) CN114255286B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112161997B (en) * 2020-09-28 2022-09-27 南京工程学院 Online precise visual measurement method and system for three-dimensional geometric dimension of semiconductor chip pin
CN114842091B (en) * 2022-04-29 2023-05-23 广东工业大学 Binocular egg size assembly line measuring method
CN114998532B (en) * 2022-08-05 2022-11-01 中通服建设有限公司 Three-dimensional image visual transmission optimization method based on digital image reconstruction
CN115731303B (en) * 2022-11-23 2023-10-27 江苏濠汉信息技术有限公司 Large-span transmission conductor sag three-dimensional reconstruction method based on bidirectional binocular vision
CN116129037B (en) * 2022-12-13 2023-10-31 珠海视熙科技有限公司 Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN116188558B (en) * 2023-04-27 2023-07-11 华北理工大学 Stereo photogrammetry method based on binocular vision
CN116758026B (en) * 2023-06-13 2024-03-08 河海大学 Dam seepage area measurement method based on binocular remote sensing image significance analysis
CN117190866B (en) * 2023-11-08 2024-01-26 广东工业大学 Polarity discrimination detection method, device and equipment for multiple stacked electronic components

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10205929B1 (en) * 2015-07-08 2019-02-12 Vuu Technologies LLC Methods and systems for creating real-time three-dimensional (3D) objects from two-dimensional (2D) images
CN111563921B (en) * 2020-04-17 2022-03-15 西北工业大学 Underwater point cloud acquisition method based on binocular camera
CN111750806B (en) * 2020-07-20 2021-10-08 西安交通大学 Multi-view three-dimensional measurement system and method
CN112991369B (en) * 2021-03-25 2023-11-17 湖北工业大学 Method for detecting outline size of running vehicle based on binocular vision

Also Published As

Publication number Publication date
CN114255286A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN114255286B (en) Target size measuring method based on multi-view binocular vision perception
CN109949899B (en) Image three-dimensional measurement method, electronic device, storage medium, and program product
CN101697233B (en) Structured light-based three-dimensional object surface reconstruction method
US11521311B1 (en) Collaborative disparity decomposition
CN107392947B (en) 2D-3D image registration method based on contour coplanar four-point set
CN111192293B (en) Moving target pose tracking method and device
CN108335350A (en) The three-dimensional rebuilding method of binocular stereo vision
CN108694741B (en) Three-dimensional reconstruction method and device
CN106920276B (en) A kind of three-dimensional rebuilding method and system
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN110838164B (en) Monocular image three-dimensional reconstruction method, system and device based on object point depth
CN111062131A (en) Power transmission line sag calculation method and related device
CN109741382A (en) A kind of real-time three-dimensional method for reconstructing and system based on Kinect V2
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN112489193A (en) Three-dimensional reconstruction method based on structured light
CN109766896B (en) Similarity measurement method, device, equipment and storage medium
CN110909571B (en) High-precision face recognition space positioning method
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN114998532B (en) Three-dimensional image visual transmission optimization method based on digital image reconstruction
CN116402904A (en) Combined calibration method based on laser radar inter-camera and monocular camera
CN115409897A (en) Laser radar and camera combined calibration method based on background point cloud refinement processing
CN114018214A (en) Marker binocular sub-pixel distance measurement method based on hardware acceleration system
CN106980601A (en) The high-precision method for solving of basis matrix based on three mesh epipolar-line constraints
CN113781573A (en) Visual odometer method based on binocular catadioptric panoramic camera
CN109934879B (en) Method for calibrating parabolic catadioptric camera by utilizing ball and public autocolar triangle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant