CN117830418A - Defect position determining method, device, equipment and medium based on camera calibration - Google Patents
Defect position determining method, device, equipment and medium based on camera calibration
- Publication number
- CN117830418A CN117830418A CN202211184979.3A CN202211184979A CN117830418A CN 117830418 A CN117830418 A CN 117830418A CN 202211184979 A CN202211184979 A CN 202211184979A CN 117830418 A CN117830418 A CN 117830418A
- Authority
- CN
- China
- Prior art keywords
- camera
- coordinate system
- defect
- calibration
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Abstract
An embodiment of the present disclosure provides a defect position determining method, device, equipment and medium based on camera calibration. The method comprises the following steps: calibrating each camera to obtain camera intrinsic parameters and camera extrinsic parameters, the extrinsic parameters being calibrated in a non-overlapping field-of-view manner; when the metallurgical product to be detected moves into the fields of view of the cameras, determining first defect positions in the images of the metallurgical product acquired by the cameras, and obtaining, via the position mapping relation, a second defect position in a reference spatial coordinate system for each first defect position; and obtaining the start-point position of the metallurgical product in the reference spatial coordinate system from the frame difference observed as the product enters a camera field of view and the position coordinate conversion relation, and calculating the offset of each second defect position relative to the start-point position. The method and device thereby achieve comprehensive, efficient and accurate localization of defects on the surface of the metallurgical product.
Description
Technical Field
The disclosure relates to the technical field of industrial detection, in particular to a defect position determining method, device, equipment and medium based on camera calibration.
Background
Because current smelting technology is limited, surface defects often occur on round continuous casting billets during production. If such defects are not found in time, large quantities of scrap are produced and the economic benefit of the enterprise is seriously affected, so the surface defects of the round continuous casting billet must be located. Manual observation, however, suffers from low efficiency, high cost and similar problems.
Machine vision can in principle automate the monitoring of the surface quality of a round continuous casting billet, detecting surface defect positions efficiently, accurately, automatically and at low cost. However, defect analysis under monocular or binocular vision is generally unsuitable for large round continuous casting billets, and it is difficult to perform comprehensive defect detection over their entire surface.
Disclosure of Invention
In view of the above-described drawbacks of the related art, an object of the present disclosure is to provide a method, apparatus, device, and medium for determining a defect position based on camera calibration, which solve the problems in the related art.
A first aspect of the present disclosure provides a method for determining a defect position based on camera calibration, applied to an image acquisition system comprising a plurality of cameras and used for determining the position of a surface defect of a metallurgical product. The method comprises the following steps: calibrating each camera to obtain camera intrinsic parameters and camera extrinsic parameters, wherein the extrinsic parameters are calibrated in a non-overlapping field-of-view manner, one camera serves as a reference camera, and the extrinsic parameters of every other camera describe the coordinate conversion from its own camera coordinate system to the reference spatial coordinate system of the reference camera; the intrinsic and extrinsic parameters together form a position mapping relation from the pixel coordinate system of every other camera to the reference spatial coordinate system; when the metallurgical product to be detected moves into the fields of view of the cameras, determining first defect positions in the images of the metallurgical product acquired by the cameras, and obtaining, via the position mapping relation, a second defect position in the reference spatial coordinate system for each first defect position; and obtaining the start-point position of the metallurgical product in the reference spatial coordinate system from the frame difference observed as the product enters a camera field of view and the position coordinate conversion relation, and calculating the offset of each second defect position relative to the start-point position.
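The per-camera portion of these steps reduces to a pinhole back-projection followed by a rigid transform into the reference camera's frame, after which the offset is a simple vector difference. The sketch below illustrates the idea in Python with NumPy; the intrinsic matrix, extrinsic pose, object distance and start point are hypothetical values chosen for illustration, not parameters from the disclosure:

```python
import numpy as np

def pixel_to_reference(u, v, depth, K, R, t):
    """Back-project pixel (u, v) at a known object distance `depth` into the
    camera frame, then transform into the reference camera's coordinate
    system via the extrinsic (R, t): X_ref = R @ X_cam + t."""
    # Normalized ray through the pixel (pinhole model, no distortion here).
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    return R @ p_cam + t

# Hypothetical intrinsics/extrinsics for illustration only.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
R = np.eye(3)                  # this camera happens to be axis-aligned with the reference
t = np.array([0.5, 0.0, 0.0])  # mounted 0.5 m further along the product axis

# Defect detected at the principal point, product surface 2 m from the lens.
defect_ref = pixel_to_reference(640, 480, 2.0, K, R, t)
start_ref = np.array([0.0, 0.0, 2.0])  # start point, e.g. from frame differencing
offset = defect_ref - start_ref        # defect position along the billet
```

In practice the depth would come from the known geometry of the billet and the camera mounting rather than being assumed, and the distortion model would be applied before back-projection.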
In an embodiment of the first aspect, the plurality of cameras are arranged at intervals along a straight line.
In an embodiment of the first aspect, the plurality of cameras have no common field of view with respect to each other.
In an embodiment of the first aspect, calibrating each camera to obtain the camera intrinsic and extrinsic parameters comprises: sequentially moving a calibration plate exhibiting a characteristic pattern from the reference camera through the field of view of each camera along the linear direction; when the characteristic pattern appears in the field of view of a camera, calibrating that camera's intrinsic parameters from the images it acquires of the pattern; and when the characteristic pattern appears simultaneously in the fields of view of a previous camera and a subsequent camera, calibrating the extrinsic parameters of the subsequent camera in a non-overlapping field-of-view manner from a first image and a second image of the pattern acquired by the previous and subsequent cameras respectively.
In an embodiment of the first aspect, calibrating the extrinsic parameters of the subsequent camera in a non-overlapping field-of-view manner from the first and second images comprises: determining a first extrinsic matrix for the coordinate conversion from the second feature pattern part to the first feature pattern part according to the known physical positional relation between the first feature pattern part in the first image and the second feature pattern part in the second image; calculating a second extrinsic matrix for the coordinate conversion from the spatial coordinate system of the first feature pattern part to the camera coordinate system of the previous camera, based on the pixel coordinates of a plurality of feature points in the first feature pattern part, their corresponding coordinates in the world coordinate system, and the intrinsic parameters of the previous camera; calculating a third extrinsic matrix for the coordinate conversion from the camera coordinate system of the subsequent camera to the spatial coordinate system of the second feature pattern part, based on the pixel coordinates of a plurality of feature points in the second feature pattern part, their corresponding coordinates in the world coordinate system, and the intrinsic parameters of the subsequent camera; calculating the local extrinsic parameters for the coordinate conversion from the camera coordinate system of the subsequent camera to that of the previous camera from the first, second and third extrinsic matrices; and calculating the camera extrinsic parameters of the subsequent camera from the local extrinsic parameters of each camera pair between the subsequent camera and the reference camera.
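The three extrinsic matrices compose into the local extrinsic by matrix multiplication, and the local extrinsics then chain along the camera line down to the reference camera. A minimal NumPy sketch in 4x4 homogeneous form, with hypothetical, purely translational transforms chosen so the arithmetic is easy to follow:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical transforms (pure translations for illustration only):
T2 = make_T(np.eye(3), [0.0, 0.0, 2.0])   # first pattern part  -> previous camera
T1 = make_T(np.eye(3), [0.4, 0.0, 0.0])   # second pattern part -> first pattern part
T3 = make_T(np.eye(3), [0.0, 0.0, -2.0])  # subsequent camera   -> second pattern part

# Local extrinsic: subsequent camera -> previous camera, composed right-to-left.
T_21 = T2 @ T1 @ T3

# Extrinsic to the reference camera chains the local extrinsics, e.g.
# T(cam3 -> cam1) = T(cam2 -> cam1) @ T(cam3 -> cam2).
T_32 = make_T(np.eye(3), [0.4, 0.0, 0.0])  # hypothetical next local extrinsic
T_31 = T_21 @ T_32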
In an embodiment of the first aspect, the method for determining a defect position based on camera calibration further comprises: establishing an optimization objective function based on the reprojection error of the feature points computed with each set of local extrinsic parameters; and, based on that objective function, optimizing the camera extrinsic matrices among all cameras with the Levenberg-Marquardt algorithm.
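As a sketch of this refinement step, the toy example below runs a hand-rolled Levenberg-Marquardt loop that recovers a translation from reprojection residuals using a numeric Jacobian. The intrinsics and points are made up for illustration; a production implementation would optimize the full rotation-plus-translation extrinsics of all cameras jointly:

```python
import numpy as np

def project(pts, K):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    p = (K @ pts.T).T
    return p[:, :2] / p[:, 2:3]

def lm_refine(t0, pts, observed, K, iters=20, lam=1e-3):
    """Minimal Levenberg-Marquardt refinement of a translation so the
    projected points match the observed pixels (numeric Jacobian)."""
    t = t0.astype(float).copy()
    eps = 1e-6
    for _ in range(iters):
        r = (project(pts + t, K) - observed).ravel()  # reprojection residuals
        J = np.zeros((r.size, 3))
        for j in range(3):
            dt = np.zeros(3)
            dt[j] = eps
            J[:, j] = ((project(pts + t + dt, K) - observed).ravel() - r) / eps
        # Damped normal equations: (J^T J + lam*I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        t += delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return t

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 2.0], [0.2, 0.1, 2.0], [-0.1, 0.2, 2.5]])
t_true = np.array([0.05, -0.02, 0.1])
observed = project(pts + t_true, K)        # synthetic "measured" feature points
t_est = lm_refine(np.zeros(3), pts, observed, K)
```

A fixed damping factor is used here for brevity; the standard algorithm adapts it between iterations depending on whether the residual decreased.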
In an embodiment of the first aspect, obtaining the start-point position of the metallurgical product in the reference spatial coordinate system from the frame difference as the product enters a camera field of view and the position coordinate conversion relation comprises: obtaining a difference image from the gray-value difference between consecutive frames as the metallurgical product enters the camera field of view; applying thresholding and connectivity analysis to the difference image to obtain the pixel coordinates of the start point of the metallurgical product in the field of view of the target camera; and converting those pixel coordinates into the reference spatial coordinate system using the calibrated intrinsic and extrinsic parameters of the target camera to obtain the start-point position.
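A minimal frame-differencing sketch, assuming (hypothetically) that the product enters the field of view from the left of the image; real connectivity analysis would label connected components and reject small noise blobs rather than take the simple changed-pixel extent used here:

```python
import numpy as np

def start_point_pixel(prev_frame, cur_frame, thresh=30):
    """Difference two consecutive grayscale frames, threshold the result,
    and take the leading edge of the changed region as the product's
    start point in pixel coordinates (u, v)."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None  # nothing entered the field of view yet
    # Leading edge: first changed column (product enters from the left in
    # this toy setup); vertical center of the changed region.
    return int(xs.min()), int(ys.mean())

# Synthetic frames: the billet edge appears in columns 5..11, rows 3..6.
prev = np.zeros((10, 20), dtype=np.uint8)
cur = prev.copy()
cur[3:7, 5:12] = 200
u, v = start_point_pixel(prev, cur)
```

The returned pixel coordinates would then be mapped through the target camera's intrinsic and extrinsic parameters exactly as any defect pixel is.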
A second aspect of the present disclosure provides a defect position determining apparatus based on camera calibration, applied to an image acquisition system comprising a plurality of cameras and used for determining the position of a surface defect of a metallurgical product. The apparatus comprises: a calibration module for calibrating each camera to obtain camera intrinsic parameters and camera extrinsic parameters, wherein the extrinsic parameters are calibrated in a non-overlapping field-of-view manner, one camera serves as a reference camera, the extrinsic parameters of every other camera describe the coordinate conversion from its own camera coordinate system to the reference spatial coordinate system of the reference camera, and the intrinsic and extrinsic parameters together form a position mapping relation from the pixel coordinate system of every other camera to the reference spatial coordinate system; a defect position acquisition module for determining, when the metallurgical product to be detected moves into the fields of view of the cameras, first defect positions in the images of the metallurgical product acquired by the cameras, and obtaining, via the position mapping relation, a second defect position in the reference spatial coordinate system for each first defect position; and a defect positioning module for obtaining the start-point position of the metallurgical product in the reference spatial coordinate system from the frame difference as the product enters a camera field of view and the position coordinate conversion relation, and calculating the offset of each second defect position relative to the start-point position.
A third aspect of the present disclosure provides a computer device comprising: a memory and a processor; the memory stores program instructions for executing the program instructions to perform the camera calibration based defect location determination method of any of the first aspects.
A fourth aspect of the present disclosure provides a computer-readable storage medium storing program instructions that, when executed, perform the defect position determining method based on camera calibration of any one of the first aspects.
As described above, embodiments of the present disclosure provide a method, apparatus, device and medium for determining a defect position based on camera calibration, where the method comprises: calibrating each camera to obtain camera intrinsic parameters and camera extrinsic parameters, the extrinsic parameters being calibrated in a non-overlapping field-of-view manner; when the metallurgical product to be detected moves into the fields of view of the cameras, determining first defect positions in the images of the metallurgical product acquired by the cameras, and obtaining, via the position mapping relation, a second defect position in a reference spatial coordinate system for each first defect position; and obtaining the start-point position of the metallurgical product in the reference spatial coordinate system from the frame difference as the product enters a camera field of view and the position coordinate conversion relation, and calculating the offset of each second defect position relative to the start-point position. The method and apparatus thereby achieve comprehensive, efficient and accurate localization of defects on the surface of the metallurgical product.
Drawings
Fig. 1 shows a schematic structural diagram of an image acquisition system in an embodiment of the present disclosure.
FIG. 2 shows a flow chart of a method for determining a location of a defect based on camera calibration in an embodiment of the disclosure.
Fig. 3 shows a schematic diagram of the calibration of the camera intrinsic and extrinsic parameters of each camera by a calibration plate in an embodiment of the present disclosure.
FIG. 4 shows a flow diagram of a non-overlapping field of view calibration scheme in an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of the characteristic pattern divided between the fields of view of camera 1 and camera 2 in an embodiment of the present disclosure.
FIG. 6 shows a schematic diagram of a metallurgical product in an embodiment of the disclosure.
FIG. 7 shows a schematic diagram of defect detection results in an embodiment of the disclosure.
FIG. 8 shows a block diagram of a defect position determining apparatus based on camera calibration in an embodiment of the present disclosure.
Fig. 9 shows a schematic structural diagram of a computer device in an embodiment of the present disclosure.
Detailed Description
Other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the following detailed description of the embodiments of the disclosure given by way of specific examples. The disclosure may be embodied or applied in other specific forms and details, and various modifications and alterations may be made to the details of the disclosure in various respects, all without departing from the spirit of the disclosure. It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other.
The embodiments of the present disclosure will be described in detail below with reference to the attached drawings so that those skilled in the art to which the present disclosure pertains can easily implement the same. The present disclosure may be embodied in many different forms and is not limited to the embodiments described herein.
In the description of the present disclosure, references to the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or a group of embodiments or examples. Furthermore, various embodiments or examples, as well as features of various embodiments or examples, presented in this disclosure may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the descriptions of the present disclosure, "a plurality" means two or more, unless specifically defined otherwise.
For the purpose of clarity of the present disclosure, components that are not related to the description are omitted, and the same or similar components are given the same reference numerals throughout the specification.
Throughout the specification, when a device is said to be "connected" to another device, this includes not only the case of "direct connection" but also the case of "indirect connection" with other elements interposed therebetween. In addition, when a certain component is said to be "included" in a certain device, unless otherwise stated, other components are not excluded, but it means that other components may be included.
Although the terms first, second, etc. may be used herein to describe various elements in some examples, the elements should not be limited by these terms. These terms are only used to distinguish one element from another, for example a first interface from a second interface. Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, steps, operations, elements, modules, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, modules, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the language clearly indicates the contrary. The meaning of "comprising" in the specification is to specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of other features, regions, integers, steps, operations, elements, and/or components.
Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Terms defined in commonly used dictionaries are to be interpreted as having meanings consistent with the relevant technical literature and the present context, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the metallurgical industry, metallurgical products such as round billets are prone to surface defects including, but not limited to, surface cracks, scabs, pinholes, slag inclusions, scratches, slag marks, ovality, shrinkage cavities and abnormal steel composition. However, existing detection still relies on manual inspection, which suffers from low efficiency, subjectivity, low accuracy and missed defects. Common monocular or binocular machine vision can help improve detection efficiency, but metallurgical products such as round continuous casting billets are often large, making comprehensive inspection of their surfaces difficult.
In view of the above problems, in the embodiments of the present disclosure, a method for defect localization based on camera calibration is provided, which effectively solves the above problems. The method for defect localization based on camera calibration is applied to an image acquisition system comprising a plurality of cameras for determining the location of surface defects of metallurgical products.
As shown in fig. 1, a schematic structural diagram of an image acquisition system in an embodiment of the present disclosure is shown.
The image acquisition system is provided with a plurality of cameras and is used for acquiring images of the surface of the metallurgical product through different fields of view, so as to obtain an image set covering all parts of that surface. Illustratively, the cameras may be arranged at linear intervals in a configuration adapted to the shape of the metallurgical product. For example, the metallurgical product in the figure may be a round continuous casting billet whose length direction is a straight line, so each camera in the figure is arranged along a straight line consistent with that length direction.
In the camera imaging model, each camera has a field of view and intrinsic parameters, and a camera coordinate system can be established centered on each camera. The intrinsic parameters of each camera comprise an intrinsic matrix and distortion parameters, and are used for the coordinate conversion from the image and pixel coordinate systems of that camera to its camera coordinate system; the extrinsic parameters, comprising rotation and translation matrices, are used for the coordinate conversion between camera coordinate systems. Both the intrinsic and extrinsic parameters can be determined through calibration.
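The imaging model described here (intrinsic matrix plus distortion) can be sketched as follows. The focal lengths, principal point and distortion coefficients are hypothetical, and only the first two radial terms (k1, k2) of the standard distortion model are included:

```python
import numpy as np

def project_with_distortion(P, K, dist):
    """Project a 3D camera-frame point to pixel coordinates, applying
    radial distortion (k1, k2) to the normalized image coordinates."""
    x, y = P[0] / P[2], P[1] / P[2]        # normalized image coordinates
    r2 = x * x + y * y
    k1, k2 = dist
    scale = 1 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    xd, yd = x * scale, y * scale
    # Intrinsic matrix maps distorted normalized coords to pixels.
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return u, v

# Hypothetical intrinsics: fx = fy = 1000 px, principal point (640, 480).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
u, v = project_with_distortion(np.array([0.2, 0.0, 1.0]), K, (0.1, 0.0))
```

Without distortion the same point would land at u = 840; the positive k1 pushes it outward to 840.8, which is the effect the calibrated distortion parameters later undo.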
In some embodiments, the plurality of cameras may have no common field of view yet have adjoining fields of view; that is, the captured images do not overlap but can be stitched together. The extrinsic matrices for conversion between the camera coordinate systems are then obtained during calibration through the non-overlapping field-of-view manner.
In some embodiments, the round billet in fig. 1 rolls, for example along the depth direction of the figure, so that it may roll into or out of the field of view of each camera.
It will be appreciated that the number of cameras corresponds to the size of the metallurgical product, provided the combination of acquired images substantially covers its entire surface. The figure shows, by way of example, 14 cameras (cameras 1 to 14), each with its own camera coordinate system. Since the camera coordinate systems of cameras 1 to 14 are mutually independent, coordinates measured in them are not unified; the coordinate conversion relations between the camera coordinate systems must therefore be solved so that all coordinates can be expressed in a common reference spatial coordinate system. One camera can thus be selected as the reference camera, its camera coordinate system serving as the reference spatial coordinate system. For example, with camera 1 as the reference camera, the extrinsic parameters of the other cameras (cameras 2 to 14) describe the coordinate conversion from their respective camera coordinate systems to the camera coordinate system of camera 1. Of course, in other embodiments the reference spatial coordinate system may instead be a world coordinate system; the disclosure is not limited in this respect.
The camera intrinsic and extrinsic parameters together form a position mapping relation from the pixel coordinate system of each other camera to the reference spatial coordinate system.
As shown in fig. 2, a flow chart of a defect location determination method based on camera calibration in an embodiment of the present disclosure is shown.
In fig. 2, the defect position determining method based on camera calibration includes:
step S201: calibrating each camera to obtain a camera internal parameter and a camera external parameter.
In some embodiments, a calibration plate exhibiting a characteristic pattern may be moved sequentially along the linear direction from the reference camera through the field of view of each camera. When the characteristic pattern appears in the field of view of a camera, that camera's intrinsic parameters are calibrated from the images it acquires of the pattern. When the characteristic pattern appears simultaneously in the fields of view of a previous camera and a subsequent camera, the extrinsic parameters of the subsequent camera are calibrated in a non-overlapping field-of-view manner from a first image and a second image of the pattern acquired by the previous and subsequent cameras respectively.
The details will be described with reference to fig. 1 and 3 together. Fig. 3 shows a schematic diagram of the calibration of the camera intrinsic and extrinsic parameters of each camera by a calibration plate in an embodiment of the present disclosure.
The calibration plate is moved from camera 1 to camera 14. Monocular calibration is performed as the plate enters the field of view of each of cameras 1 to 14, yielding the intrinsic parameters and distortion coefficients of each camera, and the camera extrinsic parameters are calibrated during the same movement, yielding the extrinsic parameters of cameras 2 to 14 with respect to the reference spatial coordinate system (i.e. the camera coordinate system of camera 1). Because adjacent cameras may share no common field of view, the extrinsic parameters must be calibrated in the non-overlapping field-of-view manner. For example, when the calibration plate enters the field of view of camera 1 (the previous camera), monocular calibration of camera 1's intrinsic parameters begins, and enough images are collected to complete it. The plate is then moved toward the field of view of camera 2 (the subsequent camera) until part of the plate appears in the field of view of camera 1 and another part in the field of view of camera 2, at which point the non-overlapping field-of-view extrinsic calibration between cameras 1 and 2 is performed. After the monocular calibration of cameras 1 and 2 and the extrinsic calibration of camera 2 to the reference spatial coordinate system are finished, the same operations are applied to cameras 3 to 14, finally completing the monocular calibration of cameras 1 to 14 and the non-overlapping field-of-view extrinsic calibration of cameras 2 to 14 to the reference spatial coordinate system.
In some embodiments, the feature pattern of the calibration plate includes a plurality of feature points, and each feature point may carry unique distinguishing feature information. For example, the feature pattern may consist of sub-patterns arranged in an array; the sub-patterns may share the same shape and size, with their differing content serving as the distinguishing information. Each sub-pattern may, for example, be centrally symmetric (e.g., square or circular), with the gray level of its center point clearly distinguished from the surrounding points so that the center serves as the feature point. An image of the feature pattern acquired by a camera therefore contains a plurality of feature points whose coordinates in the reference space coordinate system, for example the world coordinate system, are known. The pixel coordinates of each feature point and the corresponding reference space coordinates form a coordinate pair; in the camera imaging model, the position mapping between the pixel coordinates and the reference space coordinates in each pair is determined by the camera intrinsics and extrinsics, so a system of equations can be constructed from a plurality of coordinate pairs to solve for the intrinsic matrix.
It can be understood that, when collecting calibration plate images for extrinsic calibration, enough feature points of the feature pattern should be captured to form a feature-point surface, ensuring the accuracy of extrinsic resolution in the non-overlapping field-of-view mode, and enough images should be collected to ensure that the extrinsic result of the non-overlapping field-of-view mode can be solved. In addition, by the time the extrinsics are calibrated, the camera intrinsics of camera 2 have already been obtained by calibration and can be used in the non-overlapping field-of-view extrinsic calibration.
The calculation principle of the camera intrinsics is explained below. Substituting the coordinate pair formed by the pixel coordinates of each feature point and the corresponding reference space coordinates into the camera model gives:
p_ui = s · M · [R|t] · P_wi,  i = 1, 2, 3, …, n   (1)
where p_ui is the pixel coordinate (or, with higher precision, the sub-pixel coordinate) of the i-th feature point, P_wi is the world coordinate of the i-th feature point, n is the number of feature points in the feature pattern, s is a scale factor, and R and t are the rotation matrix and translation vector of the target from the world coordinate system to the camera coordinate system, referred to as the camera extrinsic parameters, or extrinsics for short. M is the intrinsic matrix of the camera, denoted M_i, such as M_1 to M_14 of cameras 1 to 14 in fig. 3.
The intrinsic matrix takes the form

M = [ f_x  γ    u_0 ;
      0    f_y  v_0 ;
      0    0    1   ]

where (u_0, v_0) is the principal point coordinate of the image, f_x and f_y are the scale factors of the u axis and the v axis in the image pixel coordinate system, respectively, and γ is the non-perpendicularity factor of the u and v axes.
Both radial and tangential distortions of the camera lens are considered. Let the coordinates of the ideal imaging point p_i of point P_wi in the image coordinate system be (x_i, y_i), and the coordinates of the actual imaging point p_i' in the image coordinate system be (x_d, y_d). Image distortion is corrected according to the following distortion model:

x_d = x_i (1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + 2 p_1 x_i y_i + p_2 (r² + 2 x_i²)
y_d = y_i (1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + p_1 (r² + 2 y_i²) + 2 p_2 x_i y_i

where r² = x_i² + y_i², k_1, k_2, k_3 are the radial distortion coefficients of the lens, and p_1, p_2 are the tangential distortion coefficients of the lens.

Based on the imaging model and the distortion model, the calibration plate images of different poses are processed to obtain the corresponding point pairs; substituting these into the model and performing a preliminary solution yields the camera intrinsics of a single camera, namely the intrinsic matrix M_i and the distortion coefficients k_1, k_2, k_3, p_1, p_2.
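The intrinsic projection and the distortion model above can be sketched numerically as follows; the matrix entries and coefficient values are illustrative assumptions, not calibrated values:

```python
import numpy as np

def distort(x_i, y_i, k1, k2, k3, p1, p2):
    # Radial + tangential distortion: maps ideal normalized coordinates
    # (x_i, y_i) to distorted coordinates (x_d, y_d).
    r2 = x_i * x_i + y_i * y_i
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x_i * radial + 2 * p1 * x_i * y_i + p2 * (r2 + 2 * x_i * x_i)
    y_d = y_i * radial + p1 * (r2 + 2 * y_i * y_i) + 2 * p2 * x_i * y_i
    return x_d, y_d

def project(M, x_d, y_d):
    # Pixel coordinates from distorted normalized coordinates via the
    # intrinsic matrix M (the [R|t] part of equation (1) already applied).
    u, v, w = M @ np.array([x_d, y_d, 1.0])
    return u / w, v / w

M = np.array([[1200.0,    0.0, 640.0],   # f_x, gamma, u_0 (assumed values)
              [   0.0, 1200.0, 512.0],   # f_y, v_0
              [   0.0,    0.0,   1.0]])
x_d, y_d = distort(0.1, -0.05, -0.28, 0.12, 0.0, 1e-4, -2e-4)
u, v = project(M, x_d, y_d)
```

A point on the optical axis maps to the principal point, which gives a quick sanity check of both functions.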
When the feature pattern appears in the fields of view of two adjacent cameras, the camera external parameters of the coordinate conversion between the camera coordinate systems of the adjacent cameras can be calculated according to the physical positional relationship between the plurality of feature points in the feature pattern photographed by each, the pixel coordinates of the feature points, and the calculated camera internal parameters of the cameras.
As shown in fig. 4, a flow chart of a non-overlapping field of view calibration method in an embodiment of the disclosure is shown.
The previous camera and the subsequent camera respectively acquire a first image and a second image of the feature pattern, the first image containing a first feature pattern portion of the feature pattern and the second image containing a second feature pattern portion. The fields of view of the previous and subsequent cameras may be configured so that they cover, respectively, the left and right portions of the feature pattern, as shown, for example, in fig. 5.
The flow in fig. 4 includes:
step S401: a first extrinsic matrix of the coordinate conversion from the spatial coordinate system of the second feature pattern portion to the spatial coordinate system of the first feature pattern portion is determined based on a physical positional relationship between the first feature pattern portion in the first image and the second feature pattern portion in the second image.
The camera external parameter calibration from the camera 2 coordinate system to the camera 1 coordinate system is taken as an example for the following description.
Specifically, as shown in fig. 3, camera 1 and camera 2 have no common field of view, so the extrinsic matrix from the camera 2 coordinate system to the camera 1 coordinate system cannot be obtained directly; it must be calibrated in the non-overlapping field-of-view mode. A calibration plate is therefore placed between cameras 1 and 2 such that its feature pattern is divided by the fields of view of cameras 1 and 2 into two mutually associated regions, denoted t_1 and t_2. The extrinsic matrix H_t2,t1 from t_2 to t_1, i.e., the first extrinsic matrix, is calibrated using the physical connection between the two parts: for example, a certain feature point in t_1 reaches t_2 after a rotation of x degrees and a translation of y millimetres in three-dimensional space, which can be obtained from actual measurements on the calibration plate.
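The construction of the first extrinsic matrix from the measured plate geometry can be sketched as follows; the in-plane (Z-axis) rotation and the numeric offsets are assumptions for illustration, since on a real plate they come from actual measurement:

```python
import numpy as np

def plate_link(angle_deg, translation_mm):
    # Builds a 4x4 homogeneous transform (here standing in for H_t2,t1)
    # from a measured in-plane rotation (degrees) and translation (mm).
    a = np.deg2rad(angle_deg)
    H = np.eye(4)
    H[0, 0], H[0, 1] = np.cos(a), -np.sin(a)
    H[1, 0], H[1, 1] = np.sin(a), np.cos(a)
    H[:3, 3] = translation_mm
    return H

# e.g. t_2 offset 250 mm along X from t_1 with no rotation (assumed values):
H_t2_t1 = plate_link(0.0, [250.0, 0.0, 0.0])
```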
Step S402: a second extrinsic matrix of coordinate conversion of a spatial coordinate system of the first feature pattern portion to a camera coordinate system of the previous camera is calculated based on pixel coordinates of a plurality of feature points in the first feature pattern portion and coordinates corresponding thereto in a world coordinate system, and camera intrinsic parameters of the previous camera.
Step S403: a third extrinsic matrix of the coordinate conversion from the spatial coordinate system of the second feature pattern portion to the camera coordinate system of the subsequent camera is calculated based on the pixel coordinates of a plurality of feature points in the second feature pattern portion and their corresponding coordinates in the world coordinate system, together with the camera intrinsics of the subsequent camera.
Continuing the example of step S401, camera 1 performs monocular calibration on the t_1 part of the calibration plate located within its field of view, obtaining the extrinsic matrix H_t1,c1 from t_1 to the camera 1 coordinate system, i.e., the second extrinsic matrix; at the same time, camera 2 performs monocular calibration on the t_2 part of the calibration plate within its field of view, obtaining the extrinsic matrix H_t2,c2 from t_2 to the camera 2 coordinate system, i.e., the third extrinsic matrix.
Specifically, camera 1 and camera 2 photograph the calibration plate at the same time; the plate is divided by their fields of view into parts t_1 and t_2, with n_1 feature points in t_1 and n_2 feature points in t_2. The pixel coordinates of all concentric-circle center feature points in t_1 and t_2 can be accurately extracted by the method of step S111, denoted p_i^(c1) (i = 1, …, n_1) and p_j^(c2) (j = 1, …, n_2), with corresponding world coordinates (Z_w = 0) denoted P_i^(t1) and P_j^(t2). The correspondences are:

p_i^(c1) = s · M_c1 · H_t1,c1 · P_i^(t1),  i = 1, 2, …, n_1   (9)
p_j^(c2) = s · M_c2 · H_t2,c2 · P_j^(t2),  j = 1, 2, …, n_2   (10)
Let the homography matrices be H_1 = s·M_c1·H_t1,c1 and H_2 = s·M_c2·H_t2,c2. Solving the linear equations of the corresponding point pairs in (9) and (10) yields the two homography matrices; performing singular value decomposition on each then yields the extrinsic matrix H_t1,c1 from the calibration plate part t_1 to camera 1 and the extrinsic matrix H_t2,c2 from the calibration plate part t_2 to camera 2.
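The linear solve over the corresponding point pairs amounts to a direct linear transform (DLT); a minimal sketch follows, leaving out the SVD-based extraction of H_t1,c1 and H_t2,c2 from the recovered homography:

```python
import numpy as np

def dlt_homography(src, dst):
    # Direct linear transform: stacks two linear equations per point
    # correspondence (dst ~ H @ src, up to the scale factor s) and takes
    # the null-space vector of the stacked system via SVD.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]           # normalize away the scale factor
```

With exact correspondences the known homography is recovered up to numerical precision; at least four non-degenerate points are required.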
Step S404: and calculating local external parameters of coordinate conversion from the camera coordinate system of the rear camera to the camera coordinate system of the front camera based on the first external parameter matrix, the second external parameter matrix and the third external parameter matrix.
Let the homogeneous coordinates of the feature point P_t2 on t_2 be P_(t2) in the t_2 coordinate system, P_(t1) in the t_1 coordinate system, P_(c2) in the camera 2 coordinate system, and P_(c1) in the camera 1 coordinate system. The transformation relationships between these homogeneous coordinates are:

P_(t1) = H_t2,t1 · P_(t2)
P_(c2) = H_t2,c2 · P_(t2)
P_(c1) = H_t1,c1 · P_(t1)
H_t2,t1 is obtained from the physical connection between the two parts of the calibration plate, and H_t1,c1 and H_t2,c2 are obtained from the monocular calibration process, so the local extrinsics from the camera 2 coordinate system to the camera 1 coordinate system are finally obtained as H_c2,c1 = H_t1,c1 · H_t2,t1 · H_t2,c2⁻¹. The calibration of the extrinsic matrix between the camera 1 and camera 2 coordinate systems is thus completed. Similarly, by moving the calibration plate, the non-overlapping field-of-view extrinsic calibration method can sequentially obtain the extrinsic matrices from each of the camera 2 to camera 14 coordinate systems to the camera 1 coordinate system.
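The composition in step S404 can be sketched as follows, with all transforms taken as 4×4 homogeneous matrices and the numeric values below as illustrative placeholders rather than calibration data:

```python
import numpy as np

def local_extrinsic(H_t1_c1, H_t2_t1, H_t2_c2):
    # H_c2,c1 = H_t1,c1 . H_t2,t1 . inv(H_t2,c2): maps a point expressed
    # in the camera 2 frame into the camera 1 frame via the plate halves.
    return H_t1_c1 @ H_t2_t1 @ np.linalg.inv(H_t2_c2)

# Illustrative pure-translation transforms (assumed values):
H_t2_t1 = np.eye(4); H_t2_t1[:3, 3] = [250.0, 0.0, 0.0]
H_t1_c1 = np.eye(4); H_t1_c1[:3, 3] = [0.0, 0.0, 500.0]
H_t2_c2 = np.eye(4); H_t2_c2[:3, 3] = [0.0, 0.0, 480.0]
H_c2_c1 = local_extrinsic(H_t1_c1, H_t2_t1, H_t2_c2)
```

A consistency check: routing a plate point through camera 2 and then through H_c2,c1 must agree with routing it directly through t_1 into camera 1.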
Step S405: and calculating the camera external parameters of the latter camera based on the local external parameters of each camera between the latter camera and the reference camera.
For example, as in equation (16), the product of the local extrinsics along the chain from the subsequent camera to the reference camera constitutes the camera extrinsics of the coordinate conversion from the camera coordinate system of the subsequent camera to that of the reference camera.
In some embodiments, the above linear solution of the camera extrinsics does not take the orthogonality constraint of the rotation matrix into account, and the result is susceptible to noise. The linearly solved extrinsics are therefore used as initial values, and an optimization objective function is established from the re-projection error. The extrinsic matrices between all cameras are then optimized using the Levenberg-Marquardt algorithm. From expressions (9), (10), and (11), the following can be obtained:
where p̂_i denotes the pixel coordinates obtained by re-projecting the feature points of the calibration plate coordinate system through the camera intrinsic matrix and the camera extrinsic matrix; the optimization objective function is then obtained as:

min Σ_i ‖p_i − p̂_i‖²
The optimized extrinsic matrix H_c2,c1 is obtained with the Levenberg-Marquardt optimization algorithm; the rotation matrices R and translation vectors t obtained from the calibration of cameras 3 to 14 are optimized in the same way, giving the final calibration result.
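The Levenberg-Marquardt refinement step can be sketched as follows; a 3-parameter planar pose stands in for the full extrinsic matrix, the synthetic data are assumptions, and SciPy is assumed available:

```python
import numpy as np
from scipy.optimize import least_squares

# Feature-point layout on the plate (assumed values).
src = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 1.], [1., 3.]])

def apply_pose(params, pts):
    # In-plane rotation (radians) + translation, a stand-in for [R|t].
    a, tx, ty = params
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return pts @ R.T + np.array([tx, ty])

true_pose = np.array([0.3, 0.5, -0.2])
observed = apply_pose(true_pose, src)        # noiseless "observed" projections

def reprojection_residual(params):
    # Residual of the objective min sum ||p_i - p_hat_i||^2.
    return (apply_pose(params, src) - observed).ravel()

# The (here deliberately crude) linear solution plays the role of the
# initial value; LM then drives the re-projection error to zero.
refined = least_squares(reprojection_residual, x0=[0.0, 0.0, 0.0], method="lm")
```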
Following step S201, step S202 is performed: determining first defect positions in the images of the metallurgical product acquired by the cameras, based on the images acquired as the metallurgical product to be detected moves into the fields of view of the cameras, and obtaining a second defect position of each first defect position in the reference space coordinate system based on the position mapping relation.

Specifically, taking fig. 1 as an example, the 14 cameras acquire in real time first images of the parts of the metallurgical product entering their fields of view, and defect detection is performed on each first image, i.e., on each part of the metallurgical product. The defect detection may be achieved by a deep learning model (e.g., CNN, R-CNN, or a variant thereof) pre-trained on defect image data, or by other image analysis algorithms. If the input image contains a defect, the model predicts the pixel coordinate of the defect position, namely the first defect position. Suppose the defect position lies in the field of view of camera i (1 ≤ i ≤ 14); after processing by the neural network, the pixel coordinate of the defect position is obtained and denoted p_i(u_i, v_i), and the camera coordinate corresponding to this pixel coordinate in the camera i coordinate system is P_ci(X_ci, Y_ci, Z_ci). Combining the intrinsic matrix M_i of camera i obtained by the monocular calibration described above, the coordinate conversion from pixel coordinates to camera coordinates is:

P_ci = Z_ci · M_i⁻¹ · [u_i, v_i, 1]ᵀ   (15)

The camera coordinate P_ci(X_ci, Y_ci, Z_ci) of the defect position in the camera i coordinate system is thus obtained. Using the extrinsic matrices from the coordinate systems of cameras 2 to 14 to the camera 1 coordinate system (i.e., the reference space coordinate system) obtained in step S12, the defect-position coordinates in any camera coordinate system can be transformed to the second defect position in the reference space coordinate system according to the formula:

P_c1 = H_c2,c1 · H_c3,c2 · … · H_ci,c(i-1) · P_ci  (2 ≤ i ≤ 14)   (16)
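Equations (15) and (16) can be sketched numerically as follows; the intrinsic matrix, the depth Z_ci, and the extrinsic offset are illustrative assumptions:

```python
import numpy as np

def pixel_to_camera(M, u, v, Z_ci):
    # Equation (15): back-project pixel (u, v) at known depth Z_ci
    # into the camera-i frame: P_ci = Z_ci * inv(M) @ [u, v, 1]^T.
    return Z_ci * (np.linalg.inv(M) @ np.array([u, v, 1.0]))

def to_reference(H_chain, P_ci):
    # Equation (16): P_c1 = H_c2,c1 . H_c3,c2 . ... . H_ci,c(i-1) . P_ci,
    # with H_chain = [H_c2,c1, ..., H_ci,c(i-1)] as 4x4 matrices applied
    # right to left.
    P = np.append(P_ci, 1.0)
    for H in reversed(H_chain):
        P = H @ P
    return P[:3]

M2 = np.array([[1000.0,    0.0, 640.0],    # assumed intrinsics of camera 2
               [   0.0, 1000.0, 512.0],
               [   0.0,    0.0,   1.0]])
H_c2_c1 = np.eye(4); H_c2_c1[:3, 3] = [300.0, 0.0, 0.0]   # assumed extrinsics
P_c2 = pixel_to_camera(M2, 740.0, 512.0, 2000.0)
P_c1 = to_reference([H_c2_c1], P_c2)
```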
step S203: based on the front-back frame difference and the position coordinate conversion relation when the metallurgical product enters the camera view field, an initial point position of an initial point of the metallurgical product under a reference space coordinate system is obtained, and the offset of the second defect position relative to the initial point position is calculated.
In some embodiments, when the metallurgical product enters the field of view of a camera, the large proportion of high-gray-level pixels on the product surface causes a large difference in gray value between consecutive frames acquired by that camera, from which a difference image can be obtained, i.e., an image formed by subtracting the pixel values at the same pixel positions of the two frames. The difference image is then thresholded (e.g., binary thresholding) and subjected to connectivity analysis (to extract the starting point of interest), giving the pixel coordinates of the starting point of the metallurgical product in the field of view of a target camera; these are further converted into the reference space coordinate system, based on the camera intrinsics and extrinsics of the target camera calibrated in the previous steps, to obtain the starting-point position.
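The frame-difference start-point extraction can be sketched as follows; the threshold values and the left-to-right entry direction are assumptions, and a real implementation would add the connectivity analysis described above (e.g., keeping only the largest connected component):

```python
import numpy as np

def starting_point(prev_frame, curr_frame, thresh=60, min_pixels=50):
    # Difference the frames, binarize, and take the leading edge of the
    # changed region as the product's starting point in pixel coordinates.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    if mask.sum() < min_pixels:
        return None                        # product not yet in view
    vs, us = np.nonzero(mask)
    u0 = int(us.min())                     # leading-edge column
    v0 = int(round(vs[us == u0].mean()))   # row center at that column
    return (u0, v0)
```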
For example, let the camera coordinates of the starting point in the reference space coordinate system be P_0(X_0, Y_0, Z_0), and suppose the metallurgical product has a plurality of defect positions, the camera coordinates of the k-th defect position in the reference space coordinate system being P_k(X_k, Y_k, Z_k). From the coordinate distance formula, the offset l_k of the defect position relative to the starting point is:

l_k = √((X_k − X_0)² + (Y_k − Y_0)² + (Z_k − Z_0)²)
the offset represents the actual physical length of the defect location to the starting point.
In order to more clearly illustrate the above principle, several examples of practical experiments will be described below.
Embodiment case 1:

Metallurgical products are to be detected, as shown in fig. 6; one metallurgical product of full length 9 m has a defect, which lies in the field of view of camera 2. A defect position determining device is arranged at a preset height above the metallurgical product and is used to automatically locate the defect position on the metallurgical product. The round continuous casting blank lies on a sloped track and is manually pried by a worker so that it rolls slowly into the field of view of the automatic defect position determining device, in which 14 cameras with non-overlapping fields of view are arranged. Pictures of the metallurgical product within the field of view of the device are acquired simultaneously, 14 pictures are predicted and output at one time by the defect detection model, and the coordinates of the central pixel point of the first defect position in the field of view of camera 2 are obtained, as shown in fig. 7.
The intrinsic matrices of cameras 1 to 14, including the intrinsic matrix M_1 of camera 1 and M_2 of camera 2, are obtained through monocular calibration. From equation (15), the camera coordinates of the defect position in the camera 2 coordinate system can be calculated. Through the extrinsic calibration in the non-overlapping field-of-view mode, the extrinsic matrix H_c2,c1 from the camera 2 coordinate system to the camera 1 coordinate system is obtained, and from equation (16) the camera coordinates of the defect position in the reference space coordinate system can be calculated. The pixel coordinates of the starting point of the metallurgical product are detected by step S203 and converted to obtain the camera coordinates of the starting point in the reference space coordinate system, from which the offset of the defect position relative to the starting point is calculated as l_k = 1248.6 mm, completing the automatic localization of the defect position.
As shown in fig. 8, a defect position determining apparatus based on camera calibration in an embodiment of the present disclosure is shown. Since each functional module or sub-module of the apparatus follows the same principles as the corresponding steps or sub-steps of the defect position determining method based on camera calibration in the exemplary method embodiments above, its specific implementation may refer to the preceding content, and the same technical content is not repeated.
The defect position determining apparatus 800 includes:
a calibration module 801, configured to calibrate each camera to obtain camera intrinsic parameters and camera extrinsic parameters; the calibration mode of the camera extrinsics is a non-overlapping field-of-view calibration mode, one camera serves as a reference camera, and the camera extrinsics of each other camera correspond to the coordinate conversion from its own camera coordinate system to the reference space coordinate system of the reference camera; the camera intrinsics and extrinsics form a position mapping relation from the pixel coordinate system of each other camera to the reference space coordinate system.
A defect position obtaining module 802, configured to determine first defect positions in images of metallurgical products collected by each camera based on collected images of metallurgical products to be detected under the view field of each camera, and obtain second defect positions of each first defect position under a reference space coordinate system based on the position mapping relationship.
And the defect positioning module 803 is configured to obtain an initial point position of an initial point of the metallurgical product in a reference space coordinate system based on a front-to-back frame difference when the metallurgical product enters the camera field of view and the position coordinate conversion relationship, and calculate an offset of the second defect position relative to the initial point position.
It should be noted that, in the embodiment of fig. 8, each functional module may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a program instruction product. The program instruction product comprises one or a set of program instructions. When the program instructions are loaded and executed on a computer, the processes or functions in accordance with the present disclosure are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The program instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
Moreover, the apparatus disclosed in the embodiment of fig. 8 may be implemented with other module divisions. The apparatus embodiments described above are merely illustrative; the division into modules, for example, is merely a logical division of functionality, and other divisions may be used in practice: modules may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or modules, and may be electrical or take other forms.
In addition, each functional module and sub-module in the embodiment of fig. 8 may be integrated in one processing component, or each module may exist alone physically, or two or more modules may be integrated in one component. The integrated components may be implemented in hardware or as software functional modules. If implemented as software functional modules and sold or used as stand-alone products, the integrated components may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
It should be particularly noted that the flows or methods represented in the flow charts of the above embodiments of the present disclosure can be understood as representing modules, segments, or portions of code that include one or more sets of executable instructions for implementing specific logical functions or steps of a process. The scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed substantially simultaneously or in an order different from that shown or discussed, depending on the functions involved.
For example, the order of the steps in the method embodiment of fig. 2 may be changed in a specific scenario, and is not limited to the above description.
As shown in fig. 9, a schematic circuit diagram of a computer device according to an embodiment of the present disclosure is shown.
The computer device 900 includes a bus 901, a processor 902, and a memory 903. The processor 902 and the memory 903 may communicate with each other via a bus 901. The memory 903 may have stored therein program instructions (e.g., system or application software). The processor 902 implements the steps in the defect location determining method in the embodiments of the present disclosure by executing program instructions in the memory 903, thereby implementing the functions of the defect location determining device in the previous embodiments.
Bus 901 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. A bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
In some embodiments, the processor 902 may be implemented as a Central Processing Unit (CPU), a micro control unit (MCU), a System on Chip (SoC), a field programmable gate array (FPGA), or the like. The memory 903 may include volatile memory for temporary data while a program runs, such as Random Access Memory (RAM).
The memory 903 may also include non-volatile memory for data storage, such as Read-Only Memory (ROM), flash memory, a Hard Disk Drive (HDD), or a Solid State Disk (SSD).
In some embodiments, the computer device 900 may also include a communicator 904 configured to communicate with the outside. In particular examples, the communicator 904 may include one or a set of wired and/or wireless communication circuit modules, for example one or more of a wired network card, a USB module, a serial interface module, and the like. The wireless communication protocols followed by the wireless communication module may include one or more of: Near Field Communication (NFC) technology, Infrared (IR) technology, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Bluetooth (BT), Global Navigation Satellite System (GNSS), etc.
Embodiments of the present disclosure may also provide a computer-readable storage medium storing program instructions that, when executed, perform the steps of the foregoing method embodiments (e.g., fig. 2). That is, the steps of the methods in the above embodiments may be implemented as software or computer code storable on a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored on a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored on a local recording medium, so that the methods represented herein may be processed by software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware (such as an ASIC or FPGA).
In summary, embodiments of the present disclosure provide a defect position determining method, apparatus, device, and medium based on camera calibration. The method includes: acquiring first images of all parts of a metallurgical product to be detected, the combination of the first images covering the whole surface of the metallurgical product; inputting each first image to at least one defect detection model to obtain and record a defect detection result, the defect detection result including defect type information of each part and global position information of the corresponding defect on the metallurgical product, and the defect detection model being a trained deep neural network model; and displaying at least the defect type information of the defect detection result in real time through a human-machine interaction interface. The solutions of the embodiments of the present disclosure achieve complete detection of the metallurgical product surface through the combination of first images, and accurate defect identification through selection of a defect detection model based on a deep neural network.
The above embodiments merely illustrate the principles of the present disclosure and its effects, and are not intended to limit the disclosure. Modifications and variations may be made to the above embodiments by those of ordinary skill in the art without departing from the spirit and scope of the present disclosure. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical ideas of the present disclosure are intended to be covered by the claims of the present disclosure.
Claims (10)
1. A defect position determining method based on camera calibration, which is characterized by being applied to an image acquisition system comprising a plurality of cameras for determining the position of a surface defect of a metallurgical product; the method comprises the following steps:
calibrating each camera to obtain a camera internal parameter and a camera external parameter; the calibration mode of the camera external parameters is a non-overlapping view field calibration mode, one camera is used as a reference camera, and the camera external parameters of each other camera correspond to coordinate conversion from a respective camera coordinate system to a reference space coordinate system of the reference camera; the camera internal parameters and the camera external parameters form a position mapping relation from a pixel coordinate system of each other camera to the reference space coordinate system;
Determining first defect positions in images of metallurgical products acquired by cameras based on acquired images of metallurgical products to be detected under the condition that the metallurgical products move into the view field of the cameras, and obtaining second defect positions of each first defect position under a reference space coordinate system based on the position mapping relation;
based on the front-back frame difference and the position coordinate conversion relation when the metallurgical product enters the camera view field, an initial point position of an initial point of the metallurgical product under a reference space coordinate system is obtained, and the offset of the second defect position relative to the initial point position is calculated.
2. The method of claim 1, wherein the plurality of cameras are spaced apart along the linear direction.
3. The method of claim 1, wherein the plurality of cameras have no common field of view with respect to each other.
4. The method of claim 2, wherein calibrating each camera to obtain a camera intrinsic parameter and a camera extrinsic parameter comprises:
moving a calibration plate exhibiting a characteristic pattern, starting from the reference camera, sequentially through the field of view of each camera along the linear direction;
Calibrating a camera internal reference of a camera according to an image acquired by the camera for the characteristic pattern when the characteristic pattern appears in a field of view of the camera;
when the characteristic patterns appear in the fields of view of the previous camera and the next camera, calibrating camera external parameters of the next camera in a non-overlapping field of view calibration mode according to a first image and a second image which are obtained by respectively acquiring the characteristic patterns by the previous camera and the next camera.
5. The method for determining a defect position based on camera calibration according to claim 4, wherein calibrating the camera extrinsic parameters of the succeeding camera in the non-overlapping field-of-view calibration mode from the first image and the second image acquired of the feature patterns by the preceding camera and the succeeding camera comprises:
determining a first extrinsic matrix for the coordinate conversion from the spatial coordinate system of the second feature pattern part to that of the first feature pattern part, according to the physical positional relation between the first feature pattern part in the first image and the second feature pattern part in the second image;
calculating a second extrinsic matrix for the coordinate conversion from the spatial coordinate system of the first feature pattern part to the camera coordinate system of the preceding camera, based on the pixel coordinates of a plurality of feature points in the first feature pattern part, the coordinates corresponding to those pixel coordinates in the world coordinate system, and the camera intrinsic parameters of the preceding camera;
calculating a third extrinsic matrix for the coordinate conversion from the camera coordinate system of the succeeding camera to the spatial coordinate system of the second feature pattern part, based on the pixel coordinates of a plurality of feature points in the second feature pattern part, the coordinates corresponding to those pixel coordinates in the world coordinate system, and the camera intrinsic parameters of the succeeding camera;
calculating local extrinsic parameters for the coordinate conversion from the camera coordinate system of the succeeding camera to the camera coordinate system of the preceding camera, based on the first extrinsic matrix, the second extrinsic matrix and the third extrinsic matrix; and
calculating the camera extrinsic parameters of the succeeding camera based on the local extrinsic parameters of each camera pair between the succeeding camera and the reference camera.
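The three matrices in claim 5 chain naturally: a point in the succeeding camera's frame goes to the second pattern part (third matrix), then to the first pattern part (first matrix), then to the preceding camera (second matrix). A sketch of that composition with 4x4 homogeneous transforms, using pure-translation toy values and illustrative names not from the patent:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def local_extrinsics(T_pat1_to_prev, T_pat2_to_pat1, T_next_to_pat2):
    """Coordinate conversion: succeeding camera frame -> preceding camera frame."""
    return T_pat1_to_prev @ T_pat2_to_pat1 @ T_next_to_pat2

def chain_to_reference(local_list):
    """Chain local extrinsics of each adjacent pair back to the reference camera.
    local_list[i] maps camera i+1 into camera i; camera 0 is the reference."""
    T = np.eye(4)
    for T_local in local_list:
        T = T @ T_local
    return T

# Toy values: both cameras 1 m above their pattern parts, pattern parts
# separated by 0.3 m on the calibration plate, no rotation anywhere.
T_pat1_to_prev = make_T(np.eye(3), [0.0, 0.0, 1.0])
T_pat2_to_pat1 = make_T(np.eye(3), [0.3, 0.0, 0.0])
T_next_to_pat2 = make_T(np.eye(3), [0.0, 0.0, -1.0])
T_local = local_extrinsics(T_pat1_to_prev, T_pat2_to_pat1, T_next_to_pat2)
```

With these values the height terms cancel and the local extrinsics reduce to the 0.3 m baseline between the two cameras, which is the quantity the non-overlapping calibration recovers.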
6. The method for determining a defect position based on camera calibration according to claim 5, further comprising:
establishing an optimization objective function based on the reprojection errors of the feature points calculated with each set of local extrinsic parameters; and
optimizing the camera extrinsic matrices among all cameras by a Levenberg-Marquardt algorithm based on the optimization objective function.
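Claim 6 refines the chained extrinsics by minimizing the feature points' reprojection error with Levenberg-Marquardt. A toy sketch of the LM update, optimizing only a 2-parameter translation under a pinhole model (NumPy only; a real implementation would optimize full 6-DoF poses, typically via a solver such as SciPy's `least_squares` or Ceres):

```python
import numpy as np

def reproject(points3d, K, tx, ty):
    """Pinhole projection after shifting the points by the estimated (tx, ty)."""
    P = points3d + np.array([tx, ty, 0.0])
    uv = (K @ P.T).T
    return uv[:, :2] / uv[:, 2:3]

def lm_refine(points3d, observed_uv, K, x0, iters=50, lam=1e-3):
    """Levenberg-Marquardt on the stacked reprojection residuals."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = (reproject(points3d, K, *x) - observed_uv).ravel()
        # Numerical Jacobian of the residuals w.r.t. (tx, ty).
        J = np.empty((r.size, 2))
        eps = 1e-6
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = eps
            r2 = (reproject(points3d, K, *(x + dx)) - observed_uv).ravel()
            J[:, j] = (r2 - r) / eps
        # Damped normal equations: (J^T J + lam I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        x_new = x + step
        r_new = (reproject(points3d, K, *x_new) - observed_uv).ravel()
        if r_new @ r_new < r @ r:
            x, lam = x_new, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 10.0                 # reject step, increase damping
        if np.linalg.norm(step) < 1e-10:
            break
    return x

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 2.5], [-0.3, 0.4, 3.0], [0.1, -0.2, 2.2]])
observed = reproject(pts, K, 0.05, -0.02)         # synthetic ground truth
est = lm_refine(pts, observed, K, x0=[0.0, 0.0])  # recovers the offset
```

The accept/reject damping schedule is the defining feature of LM: it interpolates between Gauss-Newton (small `lam`) and gradient descent (large `lam`).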
7. The method for determining a defect position based on camera calibration according to claim 1, wherein obtaining the start-point position of the start point of the metallurgical product in the reference spatial coordinate system based on the difference between consecutive frames when the metallurgical product enters the camera field of view and on the position coordinate conversion relation comprises:
obtaining a difference image from the gray-value difference between consecutive frames when the metallurgical product enters the camera field of view;
performing thresholding and connectivity analysis on the difference image to obtain the pixel coordinates of the start point of the metallurgical product in the field of view of a target camera; and
converting the pixel coordinates into the reference spatial coordinate system based on the calibrated camera intrinsic parameters and camera extrinsic parameters of the target camera, to obtain the start-point position.
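Claim 7's start-point detection reduces to frame differencing, thresholding, and connected-component analysis; in practice this would typically use OpenCV (`cv2.absdiff`, `cv2.threshold`, `cv2.connectedComponents`). A dependency-free sketch on synthetic frames, under the assumption (not stated in the patent) that the product enters from the left and the start point is the leftmost pixel of the largest changed region:

```python
import numpy as np
from collections import deque

def start_point_pixel(prev_frame, next_frame, thresh=30):
    """Locate the entering product's start point from two consecutive gray frames."""
    diff = np.abs(next_frame.astype(int) - prev_frame.astype(int))
    mask = diff > thresh
    # Connected-component labelling by BFS flood fill (4-connectivity).
    labels = np.zeros(mask.shape, dtype=int)
    best, best_size, current = None, 0, 0
    for sr, sc in zip(*np.nonzero(mask)):
        if labels[sr, sc]:
            continue
        current += 1
        labels[sr, sc] = current
        queue, comp = deque([(sr, sc)]), []
        while queue:
            r, c = queue.popleft()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    queue.append((rr, cc))
        if len(comp) > best_size:
            best, best_size = comp, len(comp)
    # Start point: leftmost pixel of the largest changed region.
    return min(best, key=lambda rc: rc[1]) if best else None

prev = np.zeros((8, 12), dtype=np.uint8)
nxt = prev.copy()
nxt[3:6, 0:4] = 200  # product edge appears at the left of the frame
sp = start_point_pixel(prev, nxt)
```

The resulting pixel coordinates would then be fed through the intrinsic/extrinsic mapping of the target camera, as the claim's final step describes.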
8. A defect position determining device based on camera calibration, applied to an image acquisition system comprising a plurality of cameras and used for determining the position of a surface defect of a metallurgical product, the device comprising:
a calibration module configured to calibrate each camera to obtain camera intrinsic parameters and camera extrinsic parameters, wherein the camera extrinsic parameters are calibrated in a non-overlapping field-of-view calibration mode: one camera serves as a reference camera, and the camera extrinsic parameters of each other camera correspond to the coordinate conversion from that camera's coordinate system to a reference spatial coordinate system of the reference camera; the camera intrinsic parameters and camera extrinsic parameters together form a position mapping relation from the pixel coordinate system of each other camera to the reference spatial coordinate system;
a defect position acquisition module configured to determine, based on images of a metallurgical product to be detected acquired in the fields of view of the cameras, first defect positions in the images acquired by the cameras, and to obtain, based on the position mapping relation, a second defect position in the reference spatial coordinate system for each first defect position; and
a defect positioning module configured to obtain a start-point position of the start point of the metallurgical product in the reference spatial coordinate system based on the difference between consecutive frames when the metallurgical product enters a camera's field of view and on the position coordinate conversion relation, and to calculate the offset of the second defect position relative to the start-point position.
9. A computer device, comprising a memory and a processor, wherein the memory stores program instructions and the processor executes the program instructions to perform the method for determining a defect position based on camera calibration according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon program instructions which, when executed, perform the method for determining a defect position based on camera calibration according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211184979.3A CN117830418A (en) | 2022-09-27 | 2022-09-27 | Defect position determining method, device, equipment and medium based on camera calibration |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117830418A true CN117830418A (en) | 2024-04-05 |
Family
ID=90517847
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||