CN115797354A - Method for detecting appearance defects of laser welding seam - Google Patents

Method for detecting appearance defects of laser welding seam

Info

Publication number: CN115797354A (application CN202310083936.4A)
Authority: CN (China)
Prior art keywords: image, plane, point, defect, points
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN115797354B (en)
Inventors: 林福赐, 李佐霖
Current assignee: Xiamen Weiya Intelligent Technology Co ltd
Original assignee: Xiamen Weiya Intelligence Technology Co ltd
Application CN202310083936.4A filed by Xiamen Weiya Intelligence Technology Co ltd; published as CN115797354A, granted as CN115797354B

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02E: Reduction of greenhouse gas [GHG] emissions, related to energy generation, transmission or distribution
    • Y02E 60/00: Enabling technologies; technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E 60/10: Energy storage using batteries
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to the technical field of appearance defect detection for weld seams in laser welding, and in particular to a method for detecting appearance defects of a laser welding seam, comprising the following steps: acquiring a depth image and a 2D image of a welded product; converting the depth image into a 3D point cloud image, processing the 3D point cloud image, and identifying the processed 3D point cloud image to obtain first defect data; processing the 2D image and identifying the processed 2D image to obtain second defect data; and outputting a weld-seam appearance defect detection result by jointly analyzing the first defect data and the second defect data. The method is based on computer vision and combines a point cloud algorithm with a deep learning algorithm, so that appearance defects of the laser welding seam can be reliably identified.

Description

Method for detecting appearance defects of laser welding seam
Technical Field
The invention relates to the technical field of appearance defect detection for weld seams in laser welding, and in particular to a method for detecting appearance defects of a weld seam in laser welding.
Background
Lithium battery sealing nails are generally laser-welded. During laser welding, abnormal energy or impurities in the weld seam sometimes produce abnormal concave-convex defects after welding, and such defects seriously affect battery quality; see, for example, U.S. Patent Application Publication No. 2008/0253410 A1, which discloses a laser device and a method of manufacturing a battery. The traditional detection method generally relies on manual visual inspection to identify defects, which has the following drawbacks: the ramp-up period is long, since a visual inspector needs a long, labor-intensive training period before starting work; defect judgment is subjective, and judgment standards differ between inspectors; some defects are too small to be observed with the naked eye; and inspectors cannot precisely control the judgment time per product, so the production-line takt becomes inconsistent. These drawbacks seriously affect overall battery quality and production efficiency, so the manual visual inspection mode of defect identification is no longer suited to present-stage production requirements.
In recent years, more and more industries have introduced computer vision technology to identify defects. Compared with manual visual inspection, the ramp-up period is short, the same detection method can be reused for the same product, the judgment standard is objective, and the detection time is relatively fixed; this improves detection efficiency, ensures consistency of defect detection results, and improves product yield. However, how to introduce computer vision technology into the detection of weld-seam appearance defects in laser welding, so as to improve the consistency and yield of product defect detection results, remains an urgent problem to be solved.
Disclosure of Invention
In order to solve the technical problem, the invention provides a method for detecting appearance defects of a laser welding seam, which comprises the following steps:
s100: acquiring a depth image and a 2D image of a welded product;
s200: converting the depth image into a 3D point cloud image, processing the 3D point cloud image, and identifying the processed 3D point cloud image to obtain first defect data;
s300: processing the 2D image, and identifying the processed 2D image to obtain second defect data;
s400: and outputting a welding seam appearance defect detection result by performing combined analysis on the first defect data and the second defect data.
Optionally, in step S100, the depth image is acquired by a 3D sensor, and the 2D image is captured by a CCD camera.
Optionally, in step S200, converting the depth image into a 3D point cloud image includes:
determining the conversion coefficients of the depth image: x_res denotes the actual distance corresponding to one pixel spacing along a row, y_res denotes the actual distance corresponding to one pixel spacing along a column, and z_res denotes the conversion ratio between pixel value and actual height on the depth image;
performing the spatial point cloud conversion: traversing the pixel points of the depth image and converting each pixel point into a spatial point of the 3D point cloud image through the conversion coefficients, where the conversion formula is: X = i * x_res, Y = j * y_res, Z = value * z_res; here X, Y and Z denote the coordinates of the spatial point in the 3D point cloud image, i denotes the row and j the column of the depth image, and value denotes the pixel value of pixel point (i, j) on the depth image.
Optionally, in step S200, after the depth image is converted into a 3D point cloud image, downsampling is performed, i.e. the number of points is reduced by a voxel grid downsampling method, as follows:
let the point cloud set before downsampling be A and the point cloud set after downsampling be B; the voxel grid downsampling process is: divide the space into a number of cubes of equal size, and if one or more points of point cloud set A fall within a cube, assign a single point at the centre of the corresponding cube to point cloud set B, thereby achieving downsampling.
Optionally, in the step S200, processing the 3D point cloud image includes:
s210: removing outliers in the 3D point cloud image;
s220: identifying convex points and concave points in the 3D point cloud image, specifically: first, fitting a reference plane; second, traversing all points in the 3D point cloud image and calculating the distance from each point to the reference plane; finally, if the distance is greater than a set distance threshold, the point is a convex point or a concave point;
s230: the method comprises the following steps of carrying out quantization processing on convex points or concave points, specifically: clustering the convex points or the concave points into a set by adopting a clustering algorithm to obtain a point set image; and calculating a first characteristic size of the point set image, and taking the first characteristic size as first defect data.
Optionally, in step S220, fitting a reference plane by using a random sampling consistency fitting plane algorithm, where a process of fitting the reference plane is as follows:
s221: randomly selecting 3 non-collinear target points in the 3D point cloud image, and solving a plane equation of a plane formed by the 3 target points to obtain a plane model;
s222: selecting other points in the 3D point cloud image, calculating the point-surface distance from the selected point to the plane model determined by the plane equation, comparing the point-surface distance with a preset minimum distance threshold, if the point-surface distance is smaller than the minimum distance threshold, the selected point is an inner point, otherwise, the selected point is an outer point, and recording the number of all inner points under the plane model;
s223: repeating the steps S221 and S222, and recording the current plane model if the number of the interior points of the current plane model exceeds the number of the interior points of the previous plane model;
s224: and repeating the steps S221-S223 to iterate until the iteration is finished, and obtaining the plane model with the most interior points, namely the reference plane.
Optionally, in step S300, the processing the 2D image includes:
s310: grabbing plane defects through an object detection technology, comprising:
firstly, collecting a certain number of plane defect pictures, and then labeling the positions of the plane defects to obtain a training set; secondly, training the training set by adopting a YOLO target detection network to obtain a detection model; finally, the detection model is used for detecting the plane defects of the 2D image, and the undercut plane defects and the specific positions of the plane defects on the 2D image are obtained;
s320: the method for quantizing the plane defects comprises the following steps:
carrying out confidence coefficient analysis on the plane defect to obtain a confidence coefficient; performing minimum bounding box algorithm analysis on the plane defects, and screening out the plane defects with the second characteristic size larger than a set second size threshold; and taking the second characteristic size of the screened plane defect and the confidence coefficient thereof as second defect data.
Optionally, in step S400, the combined analysis proceeds as follows:
s410: screening out a point set image with a first characteristic size larger than a first size threshold value from the 3D point cloud image; taking a convex area or a concave area corresponding to the screened point set image as a first appearance defect;
s420: taking the area corresponding to the plane defect with the confidence coefficient larger than the confidence coefficient threshold value as a second appearance defect;
s430: for the plane defect with the confidence coefficient not greater than the confidence coefficient threshold value, searching whether a convex or concave point set image exists at the corresponding position in the 3D point cloud image or not according to the position of the plane defect on the 2D image; if not, rejecting the plane defect; and if the point set image at the corresponding position in the 3D point cloud image exists and is larger than a third size threshold, point location area combination is carried out on the plane defect and the point set image at the corresponding position, and the point location area after combination is used as a third appearance defect.
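The combined-analysis logic of steps S410 to S430 can be sketched as follows. This is a minimal illustration under assumptions not fixed by the patent: the function name and data structures are hypothetical, `point_sets` maps a position key to a point-set's first characteristic size, `planar_defects` is a list of (position, confidence) pairs, and 2D positions are assumed to be already registered to point cloud positions.

```python
def combine_defects(point_sets, planar_defects,
                    size1_thresh, conf_thresh, size3_thresh):
    """Combine first (3D) and second (2D) defect data per S410-S430."""
    defects = []
    # S410: point-set images whose first characteristic size exceeds
    # the first size threshold are appearance defects outright.
    for pos, size in point_sets.items():
        if size > size1_thresh:
            defects.append(("3d", pos))
    # S420: high-confidence planar defects are accepted directly.
    # S430: low-confidence planar defects need 3D support at the
    # corresponding position, otherwise they are rejected.
    for pos, conf in planar_defects:
        if conf > conf_thresh:
            defects.append(("2d", pos))
        elif pos in point_sets and point_sets[pos] > size3_thresh:
            defects.append(("merged", pos))
    return defects
```

A low-confidence 2D detection is thus kept only when a sufficiently large convex or concave point set exists at the same location, mirroring the point-location area combination of S430.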
Optionally, in step S320, the confidence level analysis includes:
determining a confidence interval of the plane defect;
analyzing the probability that the plane defect grabbing reliability falls into a confidence interval by adopting normal distribution;
and determining the confidence coefficient of the plane defect according to the probability that the grabbing reliability of the plane defect falls into the confidence interval.
Optionally, in step S300, the processing the 2D image further includes image preprocessing, specifically:
performing graying processing on the 2D image to obtain a corresponding grayed 2D image: for each pixel of the 2D image, the gray value of the pixel is obtained by adding the product of the R channel pixel value and the R channel weight, the product of the G channel pixel value and the G channel weight, and the product of the B channel pixel value and the B channel weight;
smoothing the grayed 2D image: a Gaussian filtering algorithm combines the inherent variation of an image window and the total variation of the image window over the pixel points of the grayed 2D image to form a structure-and-texture decomposition regularizer, yielding a smoothed 2D image;
performing image enhancement processing on the 2D image after the smoothing processing, namely performing supervised model training on the 2D image after the smoothing processing by adopting an image enhancement model to obtain the 2D image after the image enhancement;
and using the image enhanced 2D image for plane defect grabbing.
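The graying step above (weighted sum of the R, G and B channels) might be sketched as below. The patent does not fix the channel weights; the ITU-R BT.601 coefficients used here are a common default and are an assumption, as is the function name.

```python
import numpy as np

def to_gray(image, weights=(0.299, 0.587, 0.114)):
    """Gray value per pixel = R*w_R + G*w_G + B*w_B.
    `image` is an H x W x 3 array in R, G, B channel order."""
    w_r, w_g, w_b = weights
    return image[..., 0] * w_r + image[..., 1] * w_g + image[..., 2] * w_b
```

The subsequent smoothing and enhancement stages are not shown; they depend on the specific regularizer and enhancement model, which the text leaves open.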
The method for detecting appearance defects of laser welding seams acquires a depth image and a 2D image of a welded product, processes the depth image and the 2D image separately to obtain first defect data and second defect data respectively, performs a combined analysis of the first defect data and the second defect data on that basis, and outputs a weld-seam appearance defect detection result according to the analysis result. The method is based on computer vision technology and combines a point cloud algorithm with a deep learning algorithm, so that appearance defects of the laser welding seam can be reliably identified.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method for detecting cosmetic defects in a laser weld joint in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart of converting a depth image into a 3D point cloud image as employed in an embodiment of the method for detecting apparent defects of a laser welding seam of the present invention;
FIG. 3 is a flow chart of processing a 3D point cloud image in an embodiment of the method for detecting appearance defects of a laser welding seam according to the invention;
FIG. 4 is a flow chart of fitting a datum plane in an embodiment of a method for detecting cosmetic defects in a laser weld of the present invention;
FIG. 5 is a flow chart of processing a 2D image in an embodiment of a method for detecting cosmetic defects in a laser weld of the present invention;
FIG. 6 is a flow chart of a bond analysis approach employed in an embodiment of a method for detecting apparent defects in a laser weld of the present invention;
FIG. 7 is a flowchart of the method for detecting the apparent defect of the laser welding seam according to the present invention, applied to the detection of the apparent defect of the welding seam in the laser welding of the lithium battery.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
As shown in fig. 1, an embodiment of the present invention provides a method for detecting an appearance defect of a laser welding seam, including:
s100: acquiring a depth image and a 2D image of a welded product;
s200: converting the depth image into a 3D point cloud image, processing the 3D point cloud image, and identifying the processed 3D point cloud image to obtain first defect data;
s300: processing the 2D image, and identifying the processed 2D image to obtain second defect data;
s400: and outputting a welding seam appearance defect detection result by combining and analyzing the first defect data and the second defect data.
The working principle and beneficial effects of this technical solution are as follows: the scheme collects a depth image and a 2D image of a welded product, processes them separately to obtain first defect data and second defect data respectively, performs a combined analysis of the first and second defect data on that basis, and outputs a weld-seam appearance defect detection result according to the analysis result. The method is based on computer vision technology and combines a point cloud algorithm with a deep learning algorithm, so that appearance defects of the laser welding seam can be reliably identified; compared with manual visual inspection, the detection efficiency for weld-seam appearance defects is greatly improved, and in terms of detection results the product yield can be improved. For the lithium battery industry, for example, the quality consistency and yield of lithium battery products can be improved.
In one embodiment, in step S100, a depth image is acquired by a 3D sensor; the 2D image is obtained by shooting with a CCD camera.
The working principle and the beneficial effects of the technical scheme are as follows: the 3D sensor in the scheme can adopt a 3D camera, and three-dimensional associated data can be implied in the acquired depth image, so that the subsequent conversion and processing of the 3D point cloud image are facilitated; the 2D image is shot by a CCD camera, so that finer surface conditions can be reflected in the 2D image; the scheme lays a better foundation for subsequent image analysis and is beneficial to improving the precision of an analysis result.
In one embodiment, the converting the depth image into the 3D point cloud image in the S200 step includes:
determining the conversion coefficients of the depth image: x_res denotes the actual distance corresponding to one pixel spacing along a row, y_res denotes the actual distance corresponding to one pixel spacing along a column, and z_res denotes the conversion ratio between pixel value and actual height on the depth image;
implementing the spatial point cloud conversion: the pixel points of the depth image are traversed, and each pixel point is converted into a spatial point of the 3D point cloud image through the conversion coefficients, where the conversion formula is: X = i * x_res, Y = j * y_res, Z = value * z_res; here X, Y and Z denote the coordinates of the spatial point in the 3D point cloud image, i denotes the row and j the column of the depth image, and value denotes the pixel value of pixel point (i, j) on the depth image.
The working principle and beneficial effects of this technical solution are as follows: a depth image can in fact be regarded as one representation of a 3D point cloud, so a fixed conversion coefficient exists between the depth image and the 3D point cloud; the three conversion parameters x_res, y_res and z_res are fixed parameters of the 3D camera configuration and can be obtained from the 3D camera. Given a depth image and its conversion coefficients, the conversion to a 3D point cloud image proceeds, for example, as follows: obtain the row and column sizes of the depth image, with i denoting the row and j the column; initialize i = 0, j = 0; read the pixel value of pixel (i, j) on the depth image and denote it value; the spatial point corresponding to pixel (i, j) is then X = i * x_res, Y = j * y_res, Z = value * z_res, and this spatial point is added to the point cloud set. When j > cols, where cols is the total number of columns of the depth image, proceed to the next step, otherwise set j = j + 1 and convert the next pixel in the same row; when i > rows, where rows is the total number of rows of the depth image, the conversion ends and the point cloud set is output, otherwise set i = i + 1 and convert the pixel points of the next row. The calculation flow of the conversion into the 3D point cloud image is shown in fig. 2.
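The per-pixel loop above can be vectorized. The following is a minimal NumPy sketch of the conversion formula X = i * x_res, Y = j * y_res, Z = value * z_res; the function name and the flat N x 3 output layout are illustrative assumptions, not from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, x_res, y_res, z_res):
    """Convert a depth image (rows x cols) into an N x 3 point cloud
    using the fixed conversion coefficients of the 3D camera."""
    rows, cols = depth.shape
    # Row index i and column index j for every pixel
    i, j = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Apply X = i*x_res, Y = j*y_res, Z = value*z_res per pixel
    points = np.stack([i * x_res, j * y_res, depth * z_res], axis=-1)
    return points.reshape(-1, 3)
```

Each pixel (i, j) with value `depth[i, j]` maps to one spatial point, so the output has rows * cols points.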
In one embodiment, in step S200, after the depth image is converted into a 3D point cloud image, downsampling is performed, that is, the number of point clouds is reduced by a voxel grid downsampling method, which specifically includes:
and (2) setting a point cloud set before downsampling as A and a point cloud set after downsampling as B, and adopting a voxel grid downsampling process as follows: and (3) dividing the space into a plurality of cubes with equal sizes, and if one or more points exist in the cubes in the point cloud set A, assigning a point to the center of the cube at the corresponding position in the point cloud set B, so as to achieve the purpose of downsampling.
The working principle and beneficial effects of this technical solution are as follows: after the depth map is converted into a point cloud, the amount of point cloud data is large and detection takes too long; the original data are redundant, and reducing the number of points does not affect the detection result, so the number of points is reduced by the voxel grid downsampling method. With fewer points after downsampling, the detection speed is improved; the cube size in this scheme determines the number of points remaining after downsampling.
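The voxel grid downsampling described above can be sketched as follows. Replacing each occupied cube by its centre follows the text; the function name and the use of `np.unique` over integer cube indices are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Partition space into equal cubes of edge `voxel_size`; for each
    cube containing at least one point of set A, emit one point at the
    cube centre, giving the downsampled set B."""
    # Integer cube index of each point
    idx = np.floor(points / voxel_size).astype(np.int64)
    occupied = np.unique(idx, axis=0)      # one row per occupied cube
    # Centre of each occupied cube
    return (occupied + 0.5) * voxel_size
```

Choosing a larger `voxel_size` merges more points per cube and yields a sparser cloud, which is exactly how the cube size controls the post-downsampling point count.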
In one embodiment, as shown in fig. 3, the processing the 3D point cloud image in S200 includes:
s210: removing outliers in the 3D point cloud image;
s220: identifying convex points and concave points in the 3D point cloud image, specifically: first, fitting a reference plane; second, traversing all points in the 3D point cloud image and calculating the distance from each point to the reference plane; finally, if the distance is greater than a set distance threshold, the point is a convex point or a concave point;
s230: the method comprises the following steps of carrying out quantization processing on convex points or concave points, specifically: clustering the convex points or the concave points into a set by adopting a clustering algorithm to obtain a point set image; and calculating a first characteristic size of the point set image, and taking the first characteristic size as first defect data.
The working principle and beneficial effects of this technical solution are as follows: noise points inevitably arise during image acquisition, and abnormal noise points easily disturb algorithmic detection and make the detection result inaccurate, so noise points must be identified and removed from the point cloud set. Noise points generally appear in the point cloud set as discrete, isolated outliers and can be identified and removed by an outlier removal method: traverse all points in the point cloud set; set a detection radius r and, taking each point as the sphere centre, count the number of points inside the sphere of radius r; compute the average of these counts over all points in the set; and when the count for some point's sphere is smaller than the average and smaller than a certain threshold (the average point count threshold), delete that point from the point cloud set, thereby removing outliers. Bulges and depressions on the sealing nail weld affect the sealing quality, so such defects must be identified. Under normal conditions the appearance of a laser weld seam is a plane, which serves as the reference plane; if a bulge or depression exists on the weld seam, its defect points do not lie on that plane and their distances to the plane are comparatively large. Therefore a reference plane is fitted first, the points in the set are traversed, and the distance from each point to the reference plane is calculated; if the distance is greater than a certain value, the point is a convex point or a concave point.
A single point has no length, width or height characteristics and cannot be quantized, so the points must be clustered into sets, and sets containing more than a certain number of points are processed. First, a clustering algorithm clusters closely spaced points into independent point sets; then features such as length and width are calculated for each point set and quantized. Clustering uses the Euclidean distance between a point-cloud centre point and a point: when this Euclidean distance is smaller than a set threshold, the points are regarded as one class. The defect points are traversed and clustered into defect point sets according to Euclidean distance, and then the length, width and height defect features of each defect point set are calculated; the length, width and height of a defect point set are its first characteristic size. This yields the first defect data, i.e. the convex and concave points solved by the 3D algorithm.
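The clustering and quantization just described can be sketched as below. This is a simple greedy single-pass variant under assumptions: the function names are hypothetical, a point joins the first cluster containing any point within the threshold, and the first characteristic size is taken as the axis-aligned extent (length, width, height) of each cluster.

```python
import numpy as np

def euclidean_cluster(points, threshold):
    """Greedily cluster points whose Euclidean distance to some point
    already in a cluster is within `threshold`."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.asarray(c) - p, axis=1).min() <= threshold:
                c.append(p)
                break
        else:                       # no nearby cluster: start a new one
            clusters.append([p])
    return clusters

def cluster_sizes(clusters):
    """First characteristic size of each defect point set: its extent
    (length, width, height) along the coordinate axes."""
    return [np.ptp(np.asarray(c), axis=0) for c in clusters]
```

A production version would merge clusters that become connected later; the greedy pass is enough to show the threshold-based grouping.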
In one embodiment, in step S230, by establishing a three-dimensional coordinate system, the clustering algorithm calculates the euclidean distance using the following formula:
d(i, j) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2)
In the above formula, d(i, j) represents the Euclidean distance between the i-th and the j-th convex or concave point; (x_i, y_i, z_i) represents the three-dimensional coordinate value of the i-th convex or concave point; (x_j, y_j, z_j) represents the three-dimensional coordinate value of the j-th convex or concave point.
if the Euclidean distance of the two convex points or the concave points is not larger than a preset Euclidean distance threshold value, the two convex points or the concave points are clustered into the same set, and all the convex points or the concave points contained in the same set form a point set image.
The working principle and beneficial effects of this technical solution are as follows: the above formula calculates the Euclidean distance between two convex or concave points in the 3D point cloud image, and a preset Euclidean distance threshold determines whether the two points are clustered together. The Euclidean distance algorithm is simple, effective and easy to implement; the computation amount is small and the calculation fast, which improves the detection efficiency for appearance defects.
In one embodiment, as shown in fig. 4, in step S220, the reference plane is fitted by using a random sampling consistency fitting plane algorithm, and the process of fitting the reference plane is as follows:
s221: randomly selecting 3 non-collinear target points in the 3D point cloud image, and solving a plane equation of a plane formed by the 3 target points to obtain a plane model;
s222: selecting other points in the 3D point cloud image, calculating the point-surface distance from the selected point to the plane model determined by the plane equation, comparing the point-surface distance with a preset minimum distance threshold, if the point-surface distance is smaller than the minimum distance threshold, the selected point is an inner point, otherwise, the selected point is an outer point, and recording the number of all inner points under the plane model;
s223: repeating the steps S221 and S222, and recording the current plane model if the number of the interior points of the current plane model exceeds the number of the interior points of the previous plane model;
s224: and repeating the steps S221-S223 to iterate until the iteration is finished, and obtaining the plane model with the most interior points, namely the reference plane.
The working principle and beneficial effects of this technical scheme are as follows: the reference plane of the scheme is a plane in space, expressed for example as ax + by + cz + d = 0. The plane is fitted with the random sample consensus (RANSAC) plane-fitting algorithm: the computer carries out repeated random trials, records the result of each trial under iteration and precision control, exits once the stopping condition is met, and thereby finds the optimal plane in the data; the algorithm resists abnormal interference in the data and obtains the plane stably. First, 3 non-collinear points are randomly selected and the plane equation is solved, giving the plane model determined by that equation. Second, the distance from each remaining point to the plane (the point-plane distance) is computed and compared with the set minimum distance precision (minimum distance threshold); if the distance is smaller than the minimum distance precision, the point is taken as an interior point, otherwise as an exterior point, and the number of all interior points is recorded in the model parameters. Third, the previous two steps are repeated; if the number of interior points of the current model (plane model) exceeds that of the previous model, the current plane model is better and is recorded as the current optimal model. Finally, the previous three steps are repeated until the iterations end, and the plane model with the most interior points is found; this plane model is the optimal reference plane. After the model parameters are obtained, all points in the weld set are traversed and the distance from each point to the reference plane is computed: letting the point be (x0, y0, z0) and the reference-plane equation be ax + by + cz + d = 0, the distance from the point to the reference plane is dis = (a·x0 + b·y0 + c·z0 + d) / √(a² + b² + c²). The absolute value of dis is then computed; if it is larger than a certain threshold (the distance threshold), the point is proved to belong to the convex points or the concave points.
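The distance test described above maps directly to a few lines of numpy; `protrusion_pit_points` and its default distance threshold are illustrative assumptions:

```python
import numpy as np

def protrusion_pit_points(points, plane, dist_threshold=0.1):
    """Split weld points into convex/concave sets relative to the reference plane.

    plane = (a, b, c, d) for ax+by+cz+d=0.
    dis = (a*x0 + b*y0 + c*z0 + d) / sqrt(a^2 + b^2 + c^2); points whose |dis|
    exceeds dist_threshold are flagged as convex (dis > 0) or concave (dis < 0).
    """
    a, b, c, d = plane
    dis = (points @ np.array([a, b, c]) + d) / np.sqrt(a * a + b * b + c * c)
    convex = points[dis > dist_threshold]
    concave = points[dis < -dist_threshold]
    return convex, concave
```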
In one embodiment, as shown in fig. 5, processing the 2D image in step S300 includes:
s310: grabbing plane defects through an object detection technology, comprising:
firstly, collecting a certain number of plane-defect pictures and labeling the positions of the plane defects to obtain a training set; secondly, training a YOLO target detection network on the training set to obtain a detection model; finally, using the detection model to detect plane defects on the 2D image, obtaining the undercut plane defects and their specific positions on the 2D image;
s320: the method for quantizing the plane defects comprises the following steps:
carrying out confidence coefficient analysis on the plane defect to obtain a confidence coefficient; performing minimum bounding box algorithm analysis on the plane defects, and screening out the plane defects with the second characteristic size larger than a set second size threshold; and taking the second characteristic size of the screened plane defect and the confidence coefficient thereof as second defect data.
The working principle and beneficial effects of this technical scheme are as follows: the scheme adopts the 2D image to acquire the color information of the object surface more accurately. The appearance of a laser weld can show undercut black holes on the inner or outer side of the seam, and the shape of this defect is obvious on a 2D image; therefore, by detecting the 2D image with a detection model trained on a YOLO target detection network, the undercut black-hole defect (plane defect) can be detected and its specific position on the 2D image obtained. After the plane defect is identified on the 2D image, the confidence of the plane-defect result and the minimum bounding box of the plane defect are obtained; the confidence is the probability that the result really is a plane defect, and the closer it is to 1, the more reliable the result. The length, width and pixel area of the minimum bounding box are then calculated; these constitute its second characteristic size. When the length, width and pixel area are all larger than a certain threshold (namely the second size threshold), the result is classified as a screened plane defect; otherwise no defect is considered to exist. In this way the second characteristic size of the screened plane defect is extracted and the second defect data obtained.
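The quantification of step S320 can be sketched as below. For simplicity the axis-aligned bounding box stands in for the minimum bounding box (a rotated minimum-area rectangle, e.g. OpenCV's `minAreaRect`, would be the closer match); `quantify_plane_defect` and the default thresholds are assumptions:

```python
import numpy as np

def quantify_plane_defect(mask, confidence, size_threshold=(5, 5, 25)):
    """Second-feature-size screening for one detected plane defect.

    mask: non-empty binary image of the defect region; confidence: YOLO score.
    Returns (length, width, pixel_area, confidence) when every dimension
    exceeds the second size threshold, else None (no defect).
    """
    ys, xs = np.nonzero(mask)
    length = int(ys.max() - ys.min() + 1)   # bounding-box height in pixels
    width = int(xs.max() - xs.min() + 1)    # bounding-box width in pixels
    area = int(mask.sum())                  # pixel area of the defect
    min_len, min_w, min_area = size_threshold
    if length > min_len and width > min_w and area > min_area:
        return (length, width, area, confidence)
    return None
```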
In one embodiment, as shown in fig. 6, in step S400, the combined analysis is performed as follows:
s410: screening out a point set image with a first characteristic size larger than a first size threshold value from the 3D point cloud image; taking a convex region or a concave region corresponding to the screened point set image as a first appearance defect;
s420: taking the area corresponding to the plane defect with the confidence coefficient larger than the confidence coefficient threshold value as a second appearance defect;
s430: for the plane defect with the confidence coefficient not greater than the confidence coefficient threshold value, searching whether a convex or concave point set image exists at the corresponding position in the 3D point cloud image according to the position of the plane defect on the 2D image; if not, rejecting the plane defect; and if the point set image at the corresponding position in the 3D point cloud image exists and is larger than a third size threshold, point location area combination is carried out on the plane defect and the point set image at the corresponding position, and the point location area after combination is used as a third appearance defect.
The working principle and beneficial effects of this technical scheme are as follows: concave-convex defects have obvious features on the depth image but show no features on the 2D image, so a concave-convex defect detected on the 3D image belongs to the defect point set as long as its length, width and height are all larger than a certain threshold (namely the first size threshold), and can be directly classified as a final output defect. For the undercut black-hole defects on the inner or outer side of the weld detected on the 2D image, there is interference from dark spots on the weld: these spots are only abnormal in color, do not affect product quality, and are similar in color and area to the undercut black-hole defects, so they easily cause misjudgment. However, the edges of such dark spots transition slowly from black to the normal color, whereas the edge of an undercut black-hole defect changes sharply, so the 2D detection confidence for a dark spot is low; moreover, an undercut black-hole defect is accompanied by slight concavity and convexity in 3D, while a dark spot is not. Plane defects should therefore be distinguished and judged: when a defect is detected on the 2D image and its confidence is greater than a certain value, it is classified as a final output result; otherwise the 2D and 3D images are judged in combination, the length, width and height of the 3D point cloud at the corresponding position are calculated, and when all three values are larger than a certain value (the third size threshold) the defect is classified as a final output defect, otherwise it is filtered out. Through this combined analysis of the 3D and 2D images, the final output defects are obtained.
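The decision logic of S410-S430 can be summarized in plain Python; the data layout (dicts with `size`, `conf`, `size3d` keys) and the function name are assumptions for illustration:

```python
def combine_results(point_sets, plane_defects, first_thr, conf_thr, third_thr):
    """Joint 2D/3D screening following steps S410-S430.

    point_sets: 3D-branch results, each {"size": (length, width, height)}.
    plane_defects: 2D-branch results, each {"conf": float, "size3d": tuple or
        None}, where size3d is the 3D point set found at the defect's position.
    first_thr / third_thr are (length, width, height) triples.
    """
    def exceeds(size, thr):
        return all(v > t for v, t in zip(size, thr))

    out = []
    # S410: 3D bumps/pits above the first size threshold are output directly
    out += [("3d", p) for p in point_sets if exceeds(p["size"], first_thr)]
    for d in plane_defects:
        if d["conf"] > conf_thr:
            out.append(("2d", d))          # S420: confident 2D defect
        elif d["size3d"] and exceeds(d["size3d"], third_thr):
            out.append(("2d+3d", d))       # S430: merged 2D/3D point region
        # otherwise the low-confidence 2D hit is rejected as a harmless spot
    return out
```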
In one embodiment, in step S320, the confidence analysis includes:
determining a confidence interval of the plane defect;
analyzing the probability that the plane defect grabbing reliability falls into a confidence interval by adopting normal distribution;
and determining the confidence coefficient of the plane defect according to the probability that the grabbing reliability of the plane defect falls into the confidence interval.
The working principle and beneficial effects of this technical scheme are as follows: the scheme determines the confidence interval of the plane defect, introduces normal distribution theory, examines the probability that the grabbing reliability of the plane defect falls into the confidence interval, and obtains the confidence of the plane defect from this result. Because the confidence analysis uses probability evaluation to determine the confidence of the plane defect, the misjudgment rate of plane defects is reduced and the accuracy of plane-defect detection is improved.
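One minimal reading of this confidence analysis, assuming the grabbing reliability is modeled as a normal variate, scores how much probability mass falls inside the confidence interval via the normal CDF; the function names and parameters below are illustrative assumptions:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def interval_confidence(lo, hi, mu, sigma):
    """Probability that a normally distributed grabbing-reliability score
    falls inside the confidence interval [lo, hi]."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
```

For a standard normal score, the familiar interval [-1.96, 1.96] yields roughly 0.95.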
In one embodiment, in step S300, the processing the 2D image further includes image preprocessing, specifically:
carrying out graying processing on the 2D image to obtain a corresponding 2D image subjected to graying processing;
for each pixel point in the grayed 2D image, the gray value of the pixel point is obtained by adding the product of the R channel pixel value and the R channel weight, the product of the G channel pixel value and the G channel weight, and the product of the B channel pixel value and the B channel weight; a Gaussian filter algorithm then combines the inherent variation of an image window and the total variation of the image window over the pixel points of the grayed 2D image to form a structure and texture decomposition regularizer, yielding the smoothed 2D image;
performing image enhancement processing on the 2D image after the smoothing processing, namely performing supervised model training on the 2D image after the smoothing processing by adopting an image enhancement model to obtain the 2D image after the image enhancement;
and using the image enhanced 2D image for plane defect grabbing.
The working principle and beneficial effects of this technical scheme are as follows: the scheme preprocesses the 2D image with graying, smoothing and image enhancement. Graying reduces the data volume of 2D image processing and improves processing efficiency. Smoothing fuses the meaningful structures and texture units in the 2D image together so that the image texture is clear. Supervised model training with an image enhancement model improves the quality and identifiability of the 2D image through image enhancement, thereby improving the accuracy of appearance-defect detection. Here, the inherent variation refers to the fact that the dominant structures in one image window produce gradients with more similar directions than the complex texture contained in another image window; the total variation of the image window is a parameter reflecting the quality of the 2D image.
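The weighted-sum graying step can be sketched as follows; the BT.601 weights are a common default and an assumption here, since the text only specifies fixed per-channel weights:

```python
import numpy as np

def to_gray(image_rgb, weights=(0.299, 0.587, 0.114)):
    """Weighted-sum graying: gray = wR*R + wG*G + wB*B per pixel.

    image_rgb: (H, W, 3) array. The ITU-R BT.601 weights sum to 1, so a
    white pixel maps to the full-scale gray value.
    """
    wr, wg, wb = weights
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    return wr * r + wg * g + wb * b
```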
In one embodiment, the image preprocessing uses a structure and texture decomposition regularizer, and the total variation of the image window is represented by the following total variation model:

Q = Σ_k [ (S_k − I_k)² + λ · Σ_{q∈R(k)} g_{k,q} · ( |∂_x S_q| + |∂_y S_q| ) ]

in the above equation, Q represents the total variation model of the image window; P represents the output structure image; S_k represents the gray value of pixel point k in the output structure image; I_k represents the gray value of pixel point k in the input image; k represents a pixel point; q represents the index of a pixel point in the square region centered on pixel point k; R(k) represents the index set of all pixel points in the square region centered on pixel point k; x and y represent the horizontal and vertical pixel coordinates of the image, respectively; λ represents a correction factor; ∂_x and ∂_y denote the partial derivatives along x and y; g_{k,q} represents the Gaussian kernel function, given by

g_{k,q} = exp( −((x_k − x_q)² + (y_k − y_q)²) / (2σ²) )

in the above formula, x_k represents the horizontal pixel coordinate of pixel point k; x_q represents the horizontal pixel coordinate of pixel point index q; y_k represents the vertical pixel coordinate of pixel point k; y_q represents the vertical pixel coordinate of pixel point index q; and σ represents the Gaussian spatial scale.
The working principle and beneficial effects of this technical scheme are as follows: when the 2D image is smoothed, the total variation of the image window is represented by the above algorithm model. The model depends on the local data of the 2D image and does not require the local gradient of the 2D image to be isotropic; it only requires that gradients in opposite directions within the local window of the 2D image offset each other, so the edge-sharpening effect is achieved whether the gradient pattern is isotropic or anisotropic. The structure image obtained through this algorithm model makes edges easy to extract during edge detection, improves the identifiability of the 2D image, and improves the grabbing precision of plane defects.
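The window-level quantities this scheme relies on, a Gaussian-weighted total variation and its signed counterpart (the inherent variation, in which opposing gradients inside the window cancel), can be illustrated as below; the function name, window radius, and forward-difference gradient are assumptions:

```python
import numpy as np

def window_variations(S, k, radius=2, sigma=1.0):
    """Total and inherent variation of the window R(k) in image S (x-direction).

    D = sum_q g_{k,q} * |(dS)_q|     (windowed total variation)
    L = |sum_q g_{k,q} * (dS)_q|     (windowed inherent variation)
    with g_{k,q} = exp(-((x_k-x_q)^2 + (y_k-y_q)^2) / (2*sigma^2)).
    In textured regions signed gradients cancel, so L is small while D stays
    large; at a dominant edge both are large.
    """
    ki, kj = k
    # forward difference along x (columns); last column padded with zero diff
    gx = np.diff(S, axis=1, append=S[:, -1:])
    D = 0.0
    L_acc = 0.0
    n_rows, n_cols = S.shape
    for qi in range(max(0, ki - radius), min(n_rows, ki + radius + 1)):
        for qj in range(max(0, kj - radius), min(n_cols, kj + radius + 1)):
            g = np.exp(-((ki - qi) ** 2 + (kj - qj) ** 2) / (2 * sigma ** 2))
            D += g * abs(gx[qi, qj])
            L_acc += g * gx[qi, qj]
    return D, abs(L_acc)
```

On a single step edge D equals L (all gradients share one sign); on alternating texture L collapses while D does not, which is exactly the cancellation the text describes.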
With the vigorous development of China's manufacturing industry, improving production efficiency and tightening control of product quality have made the introduction of artificial intelligence technology an important link in the transformation of manufacturing.
For lithium-battery laser welding, the invention is used to detect weld appearance defects after welding, as shown in fig. 7. First, a 3D camera captures a depth image of the laser-welded lithium battery, and a CCD camera captures a 2D image of it. Second, the depth image undergoes, in order: conversion to a point cloud, downsampling, outlier removal, extraction of abnormal convex and concave points, and 3D defect quantification; meanwhile the 2D image undergoes, in order: defect grabbing through object detection, then 2D defect quantification. Finally, the processing results of the depth image and the 2D image are analyzed in combination to obtain the weld appearance-defect detection result of the lithium battery, and the final result is output. Based on computer vision technology and combining a point cloud algorithm with a deep learning algorithm, the method successfully identifies appearance defects of the lithium-battery laser weld.
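The first two stages of the depth-image branch (point-cloud conversion with X = i·x_res, Y = j·y_res, Z = value·z_res, then voxel-grid downsampling that assigns one point to the centre of each occupied cube) can be sketched as below; the function names are illustrative:

```python
import numpy as np

def depth_to_cloud(depth, x_res, y_res, z_res):
    """Convert a depth image to an (N, 3) point cloud using the conversion
    coefficients: X = i*x_res, Y = j*y_res, Z = value*z_res for pixel (i, j)."""
    i, j = np.indices(depth.shape)
    return np.column_stack([(i * x_res).ravel(),
                            (j * y_res).ravel(),
                            (depth * z_res).ravel()])

def voxel_downsample(cloud, voxel):
    """Voxel-grid downsampling: divide space into equal cubes of side `voxel`
    and emit one point at the centre of every occupied cube."""
    cells = np.unique(np.floor(cloud / voxel).astype(int), axis=0)
    return (cells + 0.5) * voxel
```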
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for detecting cosmetic defects of a laser weld, comprising:
s100: acquiring a depth image and a 2D image of a welded product;
s200: converting the depth image into a 3D point cloud image, processing the 3D point cloud image, and identifying the processed 3D point cloud image to obtain first defect data;
s300: processing the 2D image, and identifying the processed 2D image to obtain second defect data;
s400: and outputting a welding seam appearance defect detection result by combining and analyzing the first defect data and the second defect data.
2. The method for detecting the visual defect of the laser welding seam according to the claim 1, characterized in that, in the step S100, the depth image is acquired by a 3D sensor; the 2D image is obtained by shooting with a CCD camera.
3. The method for detecting the visual defect of the laser welding seam according to claim 1, wherein in the step S200, converting the depth image into the 3D point cloud image comprises:
determining conversion coefficients of the depth image: x_res represents the actual distance corresponding to one pixel of spacing in the same row, y_res represents the actual distance corresponding to one pixel of spacing in the same column, and z_res represents the conversion ratio between pixel value and actual height on the depth image;
and (3) implementing space point cloud conversion: traversing the pixel points on the depth image and converting them into space points in the 3D point cloud image through the conversion coefficients, the conversion formula being: X = i × x_res, Y = j × y_res, Z = value × z_res; wherein X, Y and Z represent the coordinates of a spatial point in the 3D point cloud image, i represents the row of the depth image, j represents the column of the depth image, and value represents the pixel value of the pixel point (i, j) on the depth image.
4. The method for detecting the appearance defects of the laser welding seam according to claim 1, wherein in the step S200, the depth image is converted into a 3D point cloud image, and then downsampling is performed, that is, the number of point clouds is reduced by a voxel grid downsampling method, specifically, the method comprises the following steps:
and (2) setting a point cloud set before downsampling as A and a point cloud set after downsampling as B, and adopting a voxel grid downsampling process as follows: and (3) dividing the space into a plurality of cubes with equal sizes, and if one or more points exist in the cubes in the point cloud set A, assigning a point to the center of the cube at the corresponding position in the point cloud set B, so as to achieve the purpose of downsampling.
5. The method for detecting the appearance defect of the laser welding seam according to claim 1, wherein in the step S200, the processing of the 3D point cloud image comprises:
s210: removing outliers in the 3D point cloud image;
s220: identifying salient points and sunken points in the 3D point cloud image, specifically: firstly, fitting a reference plane; secondly, traversing all points in the 3D point cloud image, calculating the distance from each point to a reference plane, and finally, if the distance is greater than a set distance threshold value, indicating that the point belongs to a convex point or a concave point;
s230: the method comprises the following steps of carrying out quantization processing on convex points or concave points, specifically: clustering the convex points or the concave points into a set by adopting a clustering algorithm to obtain a point set image; and calculating a first characteristic size of the point set image, and taking the first characteristic size as first defect data.
6. The method for detecting the visual defects of the laser welding seam according to claim 5, wherein in the step S220, the reference plane is fitted by using a random sampling consistency fitting plane algorithm, and the process of fitting the reference plane is as follows:
s221: randomly selecting 3 non-collinear target points in the 3D point cloud image, and solving a plane equation of a plane formed by the 3 target points to obtain a plane model;
s222: selecting other points in the 3D point cloud image, calculating the point-surface distance from the selected point to the plane model determined by the plane equation, comparing the point-surface distance with a preset minimum distance threshold, if the point-surface distance is smaller than the minimum distance threshold, the selected point is an inner point, otherwise, the selected point is an outer point, and recording the number of all inner points under the plane model;
s223: repeating the steps S221 and S222, and if the number of the inner points of the current plane model exceeds the number of the inner points of the previous plane model, recording the current plane model;
s224: and repeating the steps S221-S223 to perform iteration until the iteration is finished, and obtaining the plane model with the most interior points, namely the reference plane.
7. The method for detecting the visual defect of the laser welding seam according to the claim 5, wherein in the step S300, the processing the 2D image comprises:
s310: grabbing plane defects through an object detection technology, comprising:
firstly, collecting a certain number of plane-defect pictures and labeling the positions of the plane defects to obtain a training set; secondly, training a YOLO target detection network on the training set to obtain a detection model; finally, using the detection model to detect plane defects on the 2D image, obtaining the undercut plane defects and their specific positions on the 2D image;
s320: the method for quantizing the plane defects comprises the following steps:
carrying out confidence coefficient analysis on the plane defect to obtain a confidence coefficient; performing minimum bounding box algorithm analysis on the plane defects, and screening out the plane defects with the second characteristic size larger than a set second size threshold; and taking the second characteristic size of the screened plane defect and the confidence coefficient thereof as second defect data.
8. The method for detecting the visual defect of the laser welding seam according to claim 7, wherein in the step S400, the combination analysis mode is as follows:
s410: screening out a point set image of which the first characteristic size is larger than a first size threshold value from the 3D point cloud image; taking a convex region or a concave region corresponding to the screened point set image as a first appearance defect;
s420: taking the area corresponding to the plane defect with the confidence coefficient larger than the confidence coefficient threshold value as a second appearance defect;
s430: for the plane defect with the confidence coefficient not greater than the confidence coefficient threshold value, searching whether a convex or concave point set image exists at the corresponding position in the 3D point cloud image according to the position of the plane defect on the 2D image; if not, rejecting the plane defect; and if the point set image at the corresponding position in the 3D point cloud image exists and is larger than a third size threshold, point location area combination is carried out on the plane defect and the point set image at the corresponding position, and the point location area after combination is used as a third appearance defect.
9. The method for detecting the visual defect of the laser welding seam according to the claim 7, wherein in the step S320, the confidence analysis comprises:
determining a confidence interval of the plane defect;
analyzing the probability that the plane defect grabbing reliability falls into a confidence interval by adopting normal distribution;
and determining the confidence coefficient of the plane defect according to the probability that the plane defect grabbing reliability falls into the confidence interval.
10. The method for detecting the visual defects of the laser welding seam as claimed in claim 1, wherein in the step S300, the processing of the 2D image further comprises image preprocessing, specifically:
carrying out graying processing on the 2D image to obtain a corresponding 2D image subjected to graying processing;
based on each pixel point in the grayed 2D image, the gray value of the pixel point is obtained by adding the product of the R channel pixel value and the R channel weight, the product of the G channel pixel value and the G channel weight, and the product of the B channel pixel value and the B channel weight; a Gaussian filter algorithm is adopted to combine the inherent variation of an image window and the total variation of the image window over the pixel points of the grayed 2D image to form a structure and texture decomposition regularizer, and smoothing is performed to obtain the smoothed 2D image;
performing image enhancement processing on the 2D image after the smoothing processing, namely performing supervised model training on the 2D image after the smoothing processing by adopting an image enhancement model to obtain an image-enhanced 2D image;
and using the image enhanced 2D image for plane defect grabbing.
CN202310083936.4A 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam Active CN115797354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310083936.4A CN115797354B (en) 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310083936.4A CN115797354B (en) 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam

Publications (2)

Publication Number Publication Date
CN115797354A true CN115797354A (en) 2023-03-14
CN115797354B CN115797354B (en) 2023-05-30

Family

ID=85430482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310083936.4A Active CN115797354B (en) 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam

Country Status (1)

Country Link
CN (1) CN115797354B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967243A (en) * 2021-02-26 2021-06-15 清华大学深圳国际研究生院 Deep learning chip packaging crack defect detection method based on YOLO
CN113516660A (en) * 2021-09-15 2021-10-19 江苏中车数字科技有限公司 Visual positioning and defect detection method and device suitable for train
US20220198647A1 (en) * 2021-02-09 2022-06-23 Nanjing University Of Aeronautics And Astronautics Method for detecting and recognizing surface defects of automated fiber placement composite based on image converted from point cloud
CN115009794A (en) * 2022-06-30 2022-09-06 佛山豪德数控机械有限公司 Full-automatic plate conveying production line and production control system thereof
CN115147370A (en) * 2022-06-30 2022-10-04 章鱼博士智能技术(上海)有限公司 Battery top cover welding defect detection method and device, medium and electronic equipment
CN115496746A (en) * 2022-10-20 2022-12-20 复旦大学 Method and system for detecting surface defects of plate based on fusion of image and point cloud data
CN115601359A (en) * 2022-12-12 2023-01-13 广州超音速自动化科技股份有限公司(Cn) Welding seam detection method and device
CN115619738A (en) * 2022-10-18 2023-01-17 宁德思客琦智能装备有限公司 Detection method for module side seam welding after welding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Jie et al., Beijing: China Machine Press, page 10 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309576A (en) * 2023-05-19 2023-06-23 厦门微亚智能科技有限公司 Lithium battery weld defect detection method, system and storage medium
CN116309576B (en) * 2023-05-19 2023-09-08 厦门微亚智能科技股份有限公司 Lithium battery weld defect detection method, system and storage medium
CN116818780A (en) * 2023-05-26 2023-09-29 深圳市大德激光技术有限公司 Visual 2D and 3D detection system for button cell shell after laser welding
CN116818780B (en) * 2023-05-26 2024-03-26 深圳市大德激光技术有限公司 Visual 2D and 3D detection system for button cell shell after laser welding
CN116703914A (en) * 2023-08-07 2023-09-05 浪潮云洲工业互联网有限公司 Welding defect detection method, equipment and medium based on generation type artificial intelligence
CN116703914B (en) * 2023-08-07 2023-12-22 浪潮云洲工业互联网有限公司 Welding defect detection method, equipment and medium based on generation type artificial intelligence
CN117078665A (en) * 2023-10-13 2023-11-17 东声(苏州)智能科技有限公司 Product surface defect detection method and device, storage medium and electronic equipment
CN117078666A (en) * 2023-10-13 2023-11-17 东声(苏州)智能科技有限公司 Two-dimensional and three-dimensional combined defect detection method, device, medium and equipment
CN117078666B (en) * 2023-10-13 2024-04-09 东声(苏州)智能科技有限公司 Two-dimensional and three-dimensional combined defect detection method, device, medium and equipment
CN117078665B (en) * 2023-10-13 2024-04-09 东声(苏州)智能科技有限公司 Product surface defect detection method and device, storage medium and electronic equipment
CN118275450A (en) * 2024-05-30 2024-07-02 菲特(天津)检测技术有限公司 Weld joint detection method and device
CN118275450B (en) * 2024-05-30 2024-09-10 菲特(天津)检测技术有限公司 Weld joint detection method and device

Also Published As

Publication number Publication date
CN115797354B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN115797354B (en) Method for detecting appearance defects of laser welding seam
CN113469177B (en) Deep learning-based drainage pipeline defect detection method and system
CN103499585B (en) Based on noncontinuity lithium battery film defect inspection method and the device thereof of machine vision
CN112967243A (en) Deep learning chip packaging crack defect detection method based on YOLO
CN107742099A (en) A kind of crowd density estimation based on full convolutional network, the method for demographics
CN104268505A (en) Automatic cloth defect point detection and recognition device and method based on machine vision
CN109840483B (en) Landslide crack detection and identification method and device
CN112330593A (en) Building surface crack detection method based on deep learning network
CN110598613B (en) Expressway agglomerate fog monitoring method
CN104992429A (en) Mountain crack detection method based on image local reinforcement
CN116485717B (en) Concrete dam surface crack detection method based on pixel-level deep learning
CN113435460A (en) Method for identifying brilliant particle limestone image
CN113469097B (en) Multi-camera real-time detection method for water surface floaters based on SSD network
CN115597494B (en) Precision detection method and system for prefabricated part preformed hole based on point cloud
CN107610119A (en) The accurate detection method of steel strip surface defect decomposed based on histogram
CN116563262A (en) Building crack detection algorithm based on multiple modes
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN115018790A (en) Workpiece surface defect detection method based on anomaly detection
CN115656182A (en) Sheet material point cloud defect detection method based on tensor voting principal component analysis
CN110751687B (en) Apple size grading method based on computer vision minimum and maximum circle
CN116740036A (en) Method and system for detecting cutting point position of steel pipe end arc striking and extinguishing plate
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
CN116148880A (en) Method for automatically detecting power transmission line and dangerous object based on unmanned aerial vehicle laser radar point cloud data
CN115761606A (en) Box electric energy meter identification method and device based on image processing
CN111507423B (en) Engineering quantity measuring method for cleaning transmission line channel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 361000 room 201a, Jinfeng Building, information photoelectric Park, Xiamen Torch hi tech Zone, Xiamen City, Fujian Province

Patentee after: Xiamen Weiya Intelligent Technology Co.,Ltd.

Address before: 361000 room 201a, Jinfeng Building, information photoelectric Park, Xiamen Torch hi tech Zone, Xiamen City, Fujian Province

Patentee before: XIAMEN WEIYA INTELLIGENCE TECHNOLOGY Co.,Ltd.
