CN115187612A - Plane area measuring method, device and system based on machine vision - Google Patents

Plane area measuring method, device and system based on machine vision

Info

Publication number
CN115187612A
Authority
CN
China
Prior art keywords
image
area
camera
detected
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210800450.3A
Other languages
Chinese (zh)
Inventor
周庆森
赵来定
张更新
洪涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Royal Communication Information Technology Co ltd
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Royal Communication Information Technology Co ltd
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Royal Communication Information Technology Co ltd, Nanjing University of Posts and Telecommunications filed Critical Nanjing Royal Communication Information Technology Co ltd
Priority to CN202210800450.3A
Publication of CN115187612A
Pending legal-status Critical Current



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Abstract

The invention discloses a machine-vision-based planar area measurement method, device, and system. The method comprises: acquiring an image of the object to be measured captured under given camera parameters; performing distortion-correction preprocessing on the image and then cropping a rectangular region image that contains the complete object to be measured; converting the pixel coordinates of the rectangular region image into world coordinates according to the conversion relationship between the pixel coordinate system and the world coordinate system; calculating the area of the rectangular region image from the world coordinates; applying a perspective transformation to the rectangular region image to obtain a top-view image of the object to be measured; performing color-based K-means clustering on the top-view image to divide it into the object region, the background region, and the blank region produced by the perspective transformation; calculating the proportion of the area occupied by the object to be measured; and calculating the area of the object to be measured from this proportion and the area of the rectangular region image. Non-contact automatic measurement of planar objects is thus realized.

Description

Plane area measuring method, device and system based on machine vision
Technical Field
The invention belongs to the technical field of machine vision image measurement, and relates to a plane area measurement method, device and system based on machine vision.
Background
Machine vision readily acquires large amounts of information and can be fused with a product's digital design data and the feedback from process-control systems to form closed-loop control during production, so it is widely applied in modern manufacturing. With the development of machine vision, how to apply the on-line, non-contact measurement enabled by vision measurement technology has attracted wide attention.
Disclosure of Invention
Purpose: in order to overcome the defects of the prior art, the invention provides a machine-vision-based planar area measurement method, device, and system for realizing non-contact area measurement of planar objects.
The invention involves machine vision and image processing techniques, mainly camera calibration, image distortion removal, perspective transformation, and image segmentation. For three-dimensional reconstruction and restoration of the world coordinate system, the accuracy of the camera calibration and the soundness of the conversion formula from the pixel coordinate system to the world coordinate system are particularly important.
First, the measurement system is built and the camera is adjusted to a suitable position; checkerboard images are then collected to calibrate the monocular camera and obtain the camera parameters, and the planar object image is preprocessed (distortion correction, etc.); the pixel coordinates are restored to world coordinates according to the transformation between the pixel coordinate system and the world coordinate system, and the plane area is calculated in the world coordinate system; finally, the image is projected into a top-view image by perspective transformation and segmented, the proportion of the object to be measured is calculated, and the area measurement of the planar object is completed.
Technical scheme: in order to solve the above technical problems, the invention adopts the following technical scheme.
In a first aspect, a machine-vision-based planar area measurement method is provided, comprising:
acquiring camera parameters and an image of the object to be measured captured under those parameters, wherein the image is shot by a monocular camera, and the camera parameters, obtained by calibrating the monocular camera, comprise the intrinsic parameters and distortion parameters of the camera;
performing distortion-correction preprocessing on the image of the object to be measured;
cropping, from the preprocessed image, a rectangular region image that contains the complete object to be measured;
converting the pixel coordinates of the rectangular region image into world coordinates according to the conversion relationship between the pixel coordinate system and the world coordinate system, combined with the camera parameters, and calculating the area of the rectangular region image from the world coordinates;
applying a perspective transformation to the rectangular region image to obtain a top-view image of the object to be measured;
performing color-based K-means clustering on the top-view image to divide it into the object region, the background region, and the blank region produced by the perspective transformation, and calculating the proportion of the area occupied by the object to be measured;
calculating the area of the object to be measured from this proportion and the area of the rectangular region image.
In some embodiments, acquiring the camera parameters comprises: installing the measurement system, adjusting the focal length of the monocular camera so that it images the object to be measured sharply, then acquiring images of the calibration plate and transmitting them to a computer for monocular camera calibration, obtaining the camera parameters.
Further, the monocular camera calibration method comprises the following steps:
Step 1.1: fix the monocular camera, adjust its focal length, and place a checkerboard on the workbench; change the placement of the checkerboard and shoot multiple times with the monocular camera to obtain multiple images;
Step 1.2: camera calibration in MATLAB: detect the corner coordinates of the checkerboard in each image with the function detectCheckerboardPoints(), storing the corner coordinates of all images in the variable i_Points; generate the world coordinates of the checkerboard corners in a checkerboard-based coordinate system whose upper-left corner point is (0, 0) using the function generateCheckerboardPoints(), likewise storing the world coordinates for all images in the variable w_Points; finally, input i_Points and w_Points into the calibration function estimateCameraParameters() to obtain the monocular camera parameter variable camera_Params.
In some embodiments, the distortion-correction preprocessing comprises:
applying the undistortImage() function in MATLAB; the camera intrinsic and distortion parameters are input, the edges of the initial image are corrected, and the distortion-corrected image is output.
In some embodiments, applying a perspective transformation to the rectangular region image to obtain a top-view image of the object to be measured comprises:
taking 4 basic coordinate pairs between the top-view image and the rectangular region image of the object to be measured and solving the mapping matrix; the projective transformation relation between a pair of pixel coordinates in the top-view image and in the rectangular region image is:

$$
\begin{bmatrix} M'_0 \\ N'_0 \\ 1 \end{bmatrix} \sim
\begin{bmatrix} a_0 & a_1 & a_2 \\ a_3 & a_4 & a_5 \\ a_6 & a_7 & 1 \end{bmatrix}
\begin{bmatrix} M_0 \\ N_0 \\ 1 \end{bmatrix},
\qquad
M'_0 = \frac{a_0 M_0 + a_1 N_0 + a_2}{a_6 M_0 + a_7 N_0 + 1},\quad
N'_0 = \frac{a_3 M_0 + a_4 N_0 + a_5}{a_6 M_0 + a_7 N_0 + 1}
$$

where the projective transformation matrix contains 8 unknowns a_0, a_1, ..., a_7; (M_0, N_0) is a pixel coordinate point in the rectangular region image and (M'_0, N'_0) is the corresponding pixel coordinate point in the top-view image. The following system of equations is constructed from the 4 coordinate pairs:

$$
\begin{bmatrix}
M_0 & N_0 & 1 & 0 & 0 & 0 & -M_0 M'_0 & -N_0 M'_0 \\
0 & 0 & 0 & M_0 & N_0 & 1 & -M_0 N'_0 & -N_0 N'_0 \\
M_1 & N_1 & 1 & 0 & 0 & 0 & -M_1 M'_1 & -N_1 M'_1 \\
0 & 0 & 0 & M_1 & N_1 & 1 & -M_1 N'_1 & -N_1 N'_1 \\
M_2 & N_2 & 1 & 0 & 0 & 0 & -M_2 M'_2 & -N_2 M'_2 \\
0 & 0 & 0 & M_2 & N_2 & 1 & -M_2 N'_2 & -N_2 N'_2 \\
M_3 & N_3 & 1 & 0 & 0 & 0 & -M_3 M'_3 & -N_3 M'_3 \\
0 & 0 & 0 & M_3 & N_3 & 1 & -M_3 N'_3 & -N_3 N'_3
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}
=
\begin{bmatrix} M'_0 \\ N'_0 \\ M'_1 \\ N'_1 \\ M'_2 \\ N'_2 \\ M'_3 \\ N'_3 \end{bmatrix}
$$

where (M_0, N_0), (M_1, N_1), (M_2, N_2), (M_3, N_3) are the 4 pixel coordinate points in the rectangular region image, and (M'_0, N'_0), (M'_1, N'_1), (M'_2, N'_2), (M'_3, N'_3) are the corresponding 4 pixel coordinate points in the top-view image.
In some embodiments, converting the pixel coordinates of the rectangular region image into world coordinates according to the conversion relationship between the pixel coordinate system and the world coordinate system, combined with the camera parameters, comprises:
the conversion formula of a point from the pixel coordinate system to the world coordinate system is:

$$
Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T \right)
\quad\Longrightarrow\quad
\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} = Z_C\, R^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - R^{-1} T
$$

where (X_W, Y_W, Z_W) is the point in the world coordinate system, (u, v) is the point in the pixel coordinate system, Z_C is the projection of the target point along the Z axis of the camera coordinate system, K is the camera intrinsic matrix, R is the rotation matrix, and T is the translation vector.
In some embodiments, calculating the area of the rectangular region image from the world coordinates comprises:

$$
S_C = \frac{1}{2}\,\bigl| X_1(Y_2 - Y_4) + X_2(Y_3 - Y_1) + X_3(Y_4 - Y_2) + X_4(Y_1 - Y_3) \bigr|
$$

where S_C is the area of the rectangular region image, and the world coordinates of its 4 vertices, taken in order, are denoted (X_1, Y_1), (X_2, Y_2), (X_3, Y_3), (X_4, Y_4).
In some embodiments, performing color-based K-means clustering on the top-view image comprises: establishing a spatial rectangular coordinate system whose x, y, z axes are the R, G, B channels of the color image, and clustering pixels of the same color in the image into one class, thereby realizing color-based image segmentation.
In some embodiments, calculating the area of the object to be measured from the proportion of the object area and the area of the rectangular region image comprises:
S = R * S_C
where S is the area of the object to be measured, R is the proportion of the area occupied by the object to be measured, and S_C is the area of the rectangular region image.
In a second aspect, the present invention provides a planar area measuring device based on machine vision, including a processor and a storage medium;
the storage medium is used to store instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to the first aspect.
In a third aspect, the invention provides a machine vision-based plane area measuring system, which comprises a monocular camera and the machine vision-based plane area measuring device of the second aspect;
the monocular camera is configured to: and acquiring an image, and uploading the image to the plane area measuring device based on the machine vision.
Beneficial effects: the machine-vision-based planar area measurement method, device, and system provided by the invention have the following advantages:
The invention fully considers the problems of existing planar area measurement equipment based on infrared scanning, such as insufficient portability and higher cost, and provides a machine-vision-based planar area measurement system and method that measure the area of the object to be measured accurately and efficiently through monocular camera calibration, image distortion removal, perspective transformation, image segmentation, and related methods. The invention effectively reduces cost, has a high degree of automation, and has high market application value.
Drawings
FIG. 1 is a schematic view of a measurement system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a measurement method according to an embodiment of the present invention;
FIG. 3 is a checkerboard calibration plate in an embodiment of the present invention;
FIG. 4 is a diagram of coordinate systems according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples. The following examples are only intended to illustrate the technical solutions of the invention more clearly and do not limit the protection scope of the invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it. Where "first" and "second" are used only to distinguish technical features, they are not to be understood as indicating or implying relative importance, the number of technical features indicated, or their precedence.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Example 1
A machine-vision-based planar area measurement method comprises the following steps:
acquiring camera parameters and an image of the object to be measured captured under those parameters, wherein the image is shot by a monocular camera, and the camera parameters, obtained by calibrating the monocular camera, comprise the intrinsic parameters and distortion parameters of the camera;
performing distortion-correction preprocessing on the image of the object to be measured;
cropping, from the preprocessed image, a rectangular region image that contains the complete object to be measured;
converting the pixel coordinates of the rectangular region image into world coordinates according to the conversion relationship between the pixel coordinate system and the world coordinate system, combined with the camera parameters, and calculating the area of the rectangular region image from the world coordinates;
applying a perspective transformation to the rectangular region image to obtain a top-view image of the object to be measured;
performing color-based K-means clustering on the top-view image to divide it into the object region, the background region, and the blank region produced by the perspective transformation, and calculating the proportion of the area occupied by the object to be measured;
calculating the area of the object to be measured from this proportion and the area of the rectangular region image.
In some embodiments, the machine-vision-based planar area measurement system is built according to the schematic diagram shown in FIG. 1. The measurement system comprises a fixed-focus monocular camera, an industrial computer running a Windows operating system, and a camera support; the camera support is fixed on the operating platform; the monocular camera is clamped onto the support and connected to the computer through a USB 2.0 interface; the computer comprises a processor and a storage medium;
the storage medium is used to store instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to Embodiment 1.
Referring to FIG. 2, a flow chart of the machine-vision-based planar area measurement system and method according to an embodiment of the present invention includes the following steps:
Step 1: first, install the measurement system and adjust the focal length of the monocular camera so that it images the object to be measured sharply; then acquire images of the calibration plate and transmit them to the computer for monocular camera calibration, obtaining the camera parameters;
the monocular camera calibration specifically comprises the following steps:
step 1.1: fixing a monocular camera, and placing a checkerboard on a workbench, wherein the specification of the checkerboard is 12 x 9, and the side length of each checkerboard is 15mm; the layout position of the checkerboard is changed, and shooting is carried out for multiple times by using a monocular camera, wherein 12 images are shot.
Step 1.2: perform camera calibration in MATLAB. Detect the corner coordinates of the checkerboard in each image with the function detectCheckerboardPoints(); the corner coordinates of all images are stored in the variable i_Points. Generate the world coordinates of the checkerboard corners in a checkerboard-based coordinate system, with the upper-left corner point at (0, 0), using the function generateCheckerboardPoints(), and likewise store the world coordinates for all images in the variable w_Points. Finally, input i_Points and w_Points into the calibration function estimateCameraParameters() to obtain the monocular camera parameter variable camera_Params, which contains the intrinsic parameters and distortion parameters of the camera.
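A minimal MATLAB sketch of this calibration step is given below; the image file names and the returned variable handling are assumptions for illustration, while detectCheckerboardPoints(), generateCheckerboardPoints(), and estimateCameraParameters() are the standard Computer Vision Toolbox functions named above.

```matlab
% Calibration sketch (assumed file names; 12 checkerboard shots with 15 mm squares as in this embodiment).
imageFileNames = "checkerboard_" + (1:12) + ".jpg";               % assumed names of the calibration images
[i_Points, boardSize] = detectCheckerboardPoints(imageFileNames); % corner pixel coordinates per image
squareSizeMM = 15;                                                 % side length of one checkerboard square
w_Points = generateCheckerboardPoints(boardSize, squareSizeMM);    % board-based world coordinates, origin (0,0)
camera_Params = estimateCameraParameters(i_Points, w_Points, ...
    'WorldUnits', 'millimeters');                                   % intrinsic + distortion parameters
K = camera_Params.IntrinsicMatrix';   % MATLAB stores the intrinsics transposed relative to K in the formulas
```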
Step 2: with the monocular camera fixed in the same position as in step 1, acquire the image of the object to be measured and perform distortion correction on it; the algorithm and process are implemented in MATLAB.
In particular, the undistortImage() function in MATLAB is used; the camera intrinsic and distortion parameters are input, the edges of the initial image are corrected, and the distortion-corrected image is output.
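A possible use of this step, assuming the file name of the raw shot, is:

```matlab
% Distortion-correction sketch; the file name of the raw image is an assumption.
I = imread("object_to_measure.jpg");     % raw image of the object to be measured
J = undistortImage(I, camera_Params);    % apply the intrinsic and distortion parameters from step 1
imshowpair(I, J, 'montage');             % visual check of the corrected edges
```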
Step 3: crop a rectangular region image from the image of the object to be measured; this image must contain the complete object to be measured. Then apply a perspective transformation to it to obtain a top-view image of the object to be measured.
Specifically, in this embodiment, obtaining the top-view image of the object requires finding 4 basic coordinate pairs between the original image and the desired top-view image and solving the mapping matrix; the projective transformation relation between a pair of pixel coordinates in the top-view image and in the rectangular region image is:
$$
\begin{bmatrix} M'_0 \\ N'_0 \\ 1 \end{bmatrix} \sim
\begin{bmatrix} a_0 & a_1 & a_2 \\ a_3 & a_4 & a_5 \\ a_6 & a_7 & 1 \end{bmatrix}
\begin{bmatrix} M_0 \\ N_0 \\ 1 \end{bmatrix}
$$

where the 3 x 3 matrix is the projective transformation matrix and contains 8 unknowns a_0, a_1, ..., a_7. The following system of equations is constructed from the 4 coordinate pairs:

$$
\begin{bmatrix}
M_0 & N_0 & 1 & 0 & 0 & 0 & -M_0 M'_0 & -N_0 M'_0 \\
0 & 0 & 0 & M_0 & N_0 & 1 & -M_0 N'_0 & -N_0 N'_0 \\
M_1 & N_1 & 1 & 0 & 0 & 0 & -M_1 M'_1 & -N_1 M'_1 \\
0 & 0 & 0 & M_1 & N_1 & 1 & -M_1 N'_1 & -N_1 N'_1 \\
M_2 & N_2 & 1 & 0 & 0 & 0 & -M_2 M'_2 & -N_2 M'_2 \\
0 & 0 & 0 & M_2 & N_2 & 1 & -M_2 N'_2 & -N_2 N'_2 \\
M_3 & N_3 & 1 & 0 & 0 & 0 & -M_3 M'_3 & -N_3 M'_3 \\
0 & 0 & 0 & M_3 & N_3 & 1 & -M_3 N'_3 & -N_3 N'_3
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}
=
\begin{bmatrix} M'_0 \\ N'_0 \\ M'_1 \\ N'_1 \\ M'_2 \\ N'_2 \\ M'_3 \\ N'_3 \end{bmatrix}
$$

where (M_0, N_0), (M_1, N_1), (M_2, N_2), (M_3, N_3) are the pixel coordinate points in the rectangular region image, and (M'_0, N'_0), (M'_1, N'_1), (M'_2, N'_2), (M'_3, N'_3) are the corresponding pixel coordinate points in the top-view image.
In particular, in this embodiment, after the 3 x 3 projective transformation matrix is solved, the cropped rectangular region image and the projective transformation matrix are passed to the imwarp() function in MATLAB, and the 'FillValues' parameter is specified so that the output image contains all of the initial image pixels; the output image is the top-view image.
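The sketch below illustrates this step; the crop rectangle and the 4 corner coordinate pairs are assumed example values, and fitgeotrans() is one standard MATLAB way to solve the 3 x 3 mapping matrix before passing it to imwarp().

```matlab
% Perspective-transformation sketch; the crop rectangle and corner coordinates are assumed values.
rectRegion   = imcrop(J, [120 280 600 450]);      % rectangular region containing the whole object (assumed)
movingPoints = [15 20; 580 25; 590 430; 10 440];  % assumed corners of that region, in pixel coordinates
fixedPoints  = [0 0; 600 0; 600 400; 0 400];      % corresponding corners in the top-view image (assumed size)
tform   = fitgeotrans(movingPoints, fixedPoints, 'projective');  % solves the 3x3 projective mapping matrix
topView = imwarp(rectRegion, tform, 'FillValues', 255);          % top-view image of the object to be measured
```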
Step 4: according to the conversion relationship between the pixel coordinate system and the world coordinate system, substitute the camera parameters into the conversion formula to restore the pixel coordinates to world coordinates, and calculate the area of the rectangular region image.
Specifically, in this embodiment, the pixel coordinates are converted into world coordinates as follows: the point in the world coordinate system is (X_W, Y_W, Z_W), the point in the pixel coordinate system is (u, v), Z_C is the projection of the target point along the Z axis of the camera coordinate system, K is the camera intrinsic matrix, R is the rotation matrix, and T is the translation vector;
the conversion formula of a point from the pixel coordinate system to the world coordinate system is:

$$
Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T \right)
\quad\Longrightarrow\quad
\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} = Z_C\, R^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - R^{-1} T
$$
let R -1 ×K -1 ×[u,v,1] -1 =U 1 ,R -1 ×T=U 2 Due to [ u, v,1 ]] T The third row in the vector is 1, and Z can be derived C =(Z W +U 2 [3,1])/U 1 [3,1]And the Z-axis direction of the plane of the checkerboard in the world coordinate system is 0, namely Z W =0, then Z C =U 2 [3,1]/U 1 [3,1]。
In particular, in this embodiment, the world coordinates of the 4 vertices of the rectangular region image, taken in order, are denoted (X_1, Y_1), (X_2, Y_2), (X_3, Y_3), (X_4, Y_4); the area of the rectangular region image is calculated from the world coordinates and denoted S_C:

$$
S_C = \frac{1}{2}\,\bigl| X_1(Y_2 - Y_4) + X_2(Y_3 - Y_1) + X_3(Y_4 - Y_2) + X_4(Y_1 - Y_3) \bigr|
$$
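In MATLAB, this vertex-based area can be computed with polyarea(); the vertex values below are illustrative assumptions only:

```matlab
% Area of the rectangular region from its 4 world-coordinate vertices (assumed example values, in mm).
Xv = [0 210 210 0];       % vertex X world coordinates, taken in order around the rectangle
Yv = [0 0 148 148];       % vertex Y world coordinates
Sc = polyarea(Xv, Yv);    % S_C, the area of the rectangular region image (here 210*148 = 31080 mm^2)
```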
and 5: and performing color-based K-means clustering on the overlooking image, and dividing the overlooking image into an object area to be detected, a background area and a blank area after perspective transformation.
In particular, in this embodiment, a spatial rectangular coordinate system is established with the three RGB channels of the color image as the x, y, z axes, and each pixel of the image is mapped one-to-one into this coordinate system. According to the number of colors in the image, 3 points are taken in the spatial rectangular coordinate system as the centers of 3 clusters. The distances from all pixels to the 3 cluster centers are calculated, and each pixel is assigned to the cluster whose center is nearest. Through multiple iterations, points within a cluster are drawn as close together as possible and the distance between clusters is made as large as possible, so that parts of the image with different colors are distinguished and image segmentation is realized.
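A compact sketch of this clustering, together with the area calculation of steps 5 and 6, is shown below using kmeans() from the Statistics and Machine Learning Toolbox; the index of the object cluster is an assumption and would in practice be identified from its mean color.

```matlab
% Color-based K-means segmentation sketch and area calculation (object cluster index is assumed).
pixels   = double(reshape(topView, [], 3));     % every pixel as a point (R, G, B) in the x-y-z space
labels   = kmeans(pixels, 3, 'Replicates', 3);  % 3 clusters: object, background, blank region
labelImg = reshape(labels, size(topView, 1), size(topView, 2));
objLabel = 1;                                   % assumed index of the object cluster
R_ratio  = nnz(labelImg == objLabel) / numel(labelImg);  % proportion R of the object region
S        = R_ratio * Sc;                        % area of the object to be measured, S = R * S_C
```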
Step 6: obtain the proportion of the object to be measured from the sizes of the regions produced by image segmentation, and calculate the area of the object to be measured from the known area of the rectangular region.
In particular, in this embodiment, the proportion of the object to be measured is denoted R and the area of the object to be measured is denoted S; the calculation formula is:
S = R * S_C
the main innovation points of the method are as follows:
1) The field of machine vision measurement discloses a system and a method for measuring the area of a plane object;
2) Aiming at the problem that a real overlook image is difficult to shoot when a plane object is shot, due to the fact that the imaging is close to large and far from small, a large error exists in the direct calculation of the area ratio, and the overlook image is obtained by using projection transformation to obtain the accurate area ratio;
3) In the aspect of three-dimensional reconstruction and reduction of world coordinate points, Z is paired C The value solving provides a novel solving mode, and Z corresponding to each pixel point can be obtained C The corresponding world coordinate is accurately solved;
4) The intelligent integration function of camera calibration, image distortion removal, image projection transformation and image segmentation is realized, the stability is good, and the low-cost and high-precision target can be achieved on the premise of meeting the measurement accuracy of most plane areas.
Example 2
In a second aspect, the present embodiment provides a planar area measuring device based on machine vision, including a processor and a storage medium;
the storage medium is to store instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method of embodiment 1.
Example 3
In a third aspect, the present embodiment provides a planar area measuring system based on machine vision, including a monocular camera and the planar area measuring device based on machine vision according to the second aspect;
the monocular camera is configured to: and acquiring an image and uploading the image to the plane area measuring device based on the machine vision.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (10)

1. A machine-vision-based planar area measurement method, characterized by comprising the following steps:
acquiring camera parameters and an image of the object to be measured captured under those parameters, wherein the image is shot by a monocular camera, and the camera parameters, obtained by calibrating the monocular camera, comprise the intrinsic parameters and distortion parameters of the camera;
performing distortion-correction preprocessing on the image of the object to be measured;
cropping, from the preprocessed image, a rectangular region image that contains the complete object to be measured;
converting the pixel coordinates of the rectangular region image into world coordinates according to the conversion relationship between the pixel coordinate system and the world coordinate system, combined with the camera parameters, and calculating the area of the rectangular region image from the world coordinates;
applying a perspective transformation to the rectangular region image to obtain a top-view image of the object to be measured;
performing color-based K-means clustering on the top-view image to divide it into the object region, the background region, and the blank region produced by the perspective transformation, and calculating the proportion of the area occupied by the object to be measured;
calculating the area of the object to be measured from this proportion and the area of the rectangular region image.
2. The machine-vision-based planar area measurement method of claim 1, wherein acquiring the camera parameters comprises: installing the measurement system, adjusting the focal length of the monocular camera so that it images the object to be measured sharply, then acquiring images of the calibration plate and transmitting them to a computer for monocular camera calibration, obtaining the camera parameters.
3. The machine-vision-based planar area measurement method of claim 1 or 2, wherein the monocular camera calibration method comprises:
Step 1.1: fixing the monocular camera, adjusting its focal length, and placing a checkerboard on the workbench; changing the placement of the checkerboard and shooting multiple times with the monocular camera to obtain multiple images;
Step 1.2: camera calibration in MATLAB: detecting the corner coordinates of the checkerboard in each image with the function detectCheckerboardPoints(), the corner coordinates of all images being stored in the variable i_Points; generating the world coordinates of the checkerboard corners in a checkerboard-based coordinate system whose upper-left corner point is (0, 0) using the function generateCheckerboardPoints(), the world coordinates of all images likewise being stored in the variable w_Points; finally, inputting i_Points and w_Points into the calibration function estimateCameraParameters() to obtain the monocular camera parameter variable camera_Params.
4. The machine-vision-based planar area measurement method of claim 1, wherein the distortion-correction preprocessing comprises:
applying the undistortImage() function in MATLAB; the camera intrinsic and distortion parameters are input, the edges of the initial image are corrected, and the distortion-corrected image is output.
5. The machine-vision-based planar area measurement method of claim 1, wherein applying a perspective transformation to the rectangular region image to obtain a top-view image of the object to be measured comprises:
taking 4 basic coordinate pairs between the top-view image and the rectangular region image of the object to be measured and solving the mapping matrix, wherein the projective transformation relation between a pair of pixel coordinates in the top-view image and in the rectangular region image is:

$$
\begin{bmatrix} M'_0 \\ N'_0 \\ 1 \end{bmatrix} \sim
\begin{bmatrix} a_0 & a_1 & a_2 \\ a_3 & a_4 & a_5 \\ a_6 & a_7 & 1 \end{bmatrix}
\begin{bmatrix} M_0 \\ N_0 \\ 1 \end{bmatrix}
$$

wherein the projective transformation matrix contains 8 unknowns a_0, a_1, ..., a_7; (M_0, N_0) is a pixel coordinate point in the rectangular region image and (M'_0, N'_0) is the corresponding pixel coordinate point in the top-view image; the following system of equations is constructed from the 4 coordinate pairs:

$$
\begin{bmatrix}
M_0 & N_0 & 1 & 0 & 0 & 0 & -M_0 M'_0 & -N_0 M'_0 \\
0 & 0 & 0 & M_0 & N_0 & 1 & -M_0 N'_0 & -N_0 N'_0 \\
M_1 & N_1 & 1 & 0 & 0 & 0 & -M_1 M'_1 & -N_1 M'_1 \\
0 & 0 & 0 & M_1 & N_1 & 1 & -M_1 N'_1 & -N_1 N'_1 \\
M_2 & N_2 & 1 & 0 & 0 & 0 & -M_2 M'_2 & -N_2 M'_2 \\
0 & 0 & 0 & M_2 & N_2 & 1 & -M_2 N'_2 & -N_2 N'_2 \\
M_3 & N_3 & 1 & 0 & 0 & 0 & -M_3 M'_3 & -N_3 M'_3 \\
0 & 0 & 0 & M_3 & N_3 & 1 & -M_3 N'_3 & -N_3 N'_3
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}
=
\begin{bmatrix} M'_0 \\ N'_0 \\ M'_1 \\ N'_1 \\ M'_2 \\ N'_2 \\ M'_3 \\ N'_3 \end{bmatrix}
$$

wherein (M_0, N_0), (M_1, N_1), (M_2, N_2), (M_3, N_3) are the 4 pixel coordinate points in the rectangular region image, and (M'_0, N'_0), (M'_1, N'_1), (M'_2, N'_2), (M'_3, N'_3) are the corresponding 4 pixel coordinate points in the top-view image.
6. The machine-vision-based planar area measurement method of claim 1, wherein converting the pixel coordinates of the rectangular region image into world coordinates according to the conversion relationship between the pixel coordinate system and the world coordinate system, combined with the camera parameters, comprises:
the conversion formula of a point from the pixel coordinate system to the world coordinate system being:

$$
Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T \right)
\quad\Longrightarrow\quad
\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} = Z_C\, R^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - R^{-1} T
$$

wherein (X_W, Y_W, Z_W) is the point in the world coordinate system, (u, v) is the point in the pixel coordinate system, Z_C is the projection of the target point along the Z axis of the camera coordinate system, K is the camera intrinsic matrix, R is the rotation matrix, and T is the translation vector.
7. The machine-vision-based planar area measurement method of claim 1, wherein calculating the area of the rectangular region image from the world coordinates comprises:

$$
S_C = \frac{1}{2}\,\bigl| X_1(Y_2 - Y_4) + X_2(Y_3 - Y_1) + X_3(Y_4 - Y_2) + X_4(Y_1 - Y_3) \bigr|
$$

wherein S_C is the area of the rectangular region image, and the world coordinates of its 4 vertices, taken in order, are denoted (X_1, Y_1), (X_2, Y_2), (X_3, Y_3), (X_4, Y_4).
8. The machine-vision-based planar area measurement method of claim 1, wherein
performing color-based K-means clustering on the top-view image comprises: establishing a spatial rectangular coordinate system whose x, y, z axes are the R, G, B channels of the color image, and clustering pixels of the same color in the image into one class, thereby realizing color-based image segmentation;
and/or calculating the area of the object to be measured from the proportion of the object area and the area of the rectangular region image comprises:
S = R * S_C
wherein S is the area of the object to be measured, R is the proportion of the area occupied by the object to be measured, and S_C is the area of the rectangular region image.
9. A plane area measuring device based on machine vision is characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 8.
10. A machine vision-based planar area measuring system, comprising a monocular camera and the machine vision-based planar area measuring device of claim 9;
the monocular camera is configured to: and acquiring an image and uploading the image to the plane area measuring device based on the machine vision.
CN202210800450.3A 2022-07-08 2022-07-08 Plane area measuring method, device and system based on machine vision Pending CN115187612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210800450.3A CN115187612A (en) 2022-07-08 2022-07-08 Plane area measuring method, device and system based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210800450.3A CN115187612A (en) 2022-07-08 2022-07-08 Plane area measuring method, device and system based on machine vision

Publications (1)

Publication Number Publication Date
CN115187612A true CN115187612A (en) 2022-10-14

Family

ID=83517256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210800450.3A Pending CN115187612A (en) 2022-07-08 2022-07-08 Plane area measuring method, device and system based on machine vision

Country Status (1)

Country Link
CN (1) CN115187612A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563292A (en) * 2023-07-11 2023-08-08 聚时科技(深圳)有限公司 Measurement method, detection device, detection system, and storage medium
CN116563292B (en) * 2023-07-11 2023-09-26 聚时科技(深圳)有限公司 Measurement method, detection device, detection system, and storage medium
CN117437304A (en) * 2023-12-18 2024-01-23 科大讯飞(苏州)科技有限公司 Security check machine calibration method, related method, device, equipment and storage medium
CN117437304B (en) * 2023-12-18 2024-04-16 科大讯飞(苏州)科技有限公司 Security check machine calibration method, related method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN110689579B (en) Rapid monocular vision pose measurement method and measurement system based on cooperative target
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN109297436B (en) Binocular line laser stereo measurement reference calibration method
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN109493389B (en) Camera calibration method and system based on deep learning
JP2004127239A (en) Method and system for calibrating multiple cameras using calibration object
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN109544643A (en) A kind of camera review bearing calibration and device
CN108362205B (en) Space distance measuring method based on fringe projection
CN114283203B (en) Calibration method and system of multi-camera system
CN113205603A (en) Three-dimensional point cloud splicing reconstruction method based on rotating platform
CN113920206A (en) Calibration method of perspective tilt-shift camera
JP2005509877A (en) Computer vision system calibration method and system
CN114792345B (en) Calibration method based on monocular structured light system
CN114299156A (en) Method for calibrating and unifying coordinates of multiple cameras in non-overlapping area
CN110827360B (en) Photometric stereo measurement system and method for calibrating light source direction thereof
CN114001651B (en) Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data
CN110838146A (en) Homonymy point matching method, system, device and medium for coplanar cross-ratio constraint
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
CN116402904A (en) Combined calibration method based on laser radar inter-camera and monocular camera
CN116147477A (en) Joint calibration method, hole site detection method, electronic device and storage medium
CN114897990A (en) Camera distortion calibration method and system based on neural network and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination