CN118015000B - Surface defect detection method for guide rail based on image processing - Google Patents

Surface defect detection method for guide rail based on image processing Download PDF

Info

Publication number
CN118015000B
CN118015000B CN202410420855.3A CN202410420855A CN118015000B CN 118015000 B CN118015000 B CN 118015000B CN 202410420855 A CN202410420855 A CN 202410420855A CN 118015000 B CN118015000 B CN 118015000B
Authority
CN
China
Prior art keywords
value
pixel point
mask
target
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410420855.3A
Other languages
Chinese (zh)
Other versions
CN118015000A (en
Inventor
吴建新
元强强
赵梓彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Sinote Precision Technology Co ltd
Original Assignee
Shaanxi Sinote Precision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Sinote Precision Technology Co ltd filed Critical Shaanxi Sinote Precision Technology Co ltd
Priority to CN202410420855.3A priority Critical patent/CN118015000B/en
Publication of CN118015000A publication Critical patent/CN118015000A/en
Application granted granted Critical
Publication of CN118015000B publication Critical patent/CN118015000B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a surface defect detection method for a guide rail based on image processing; the method comprises the following steps: acquiring a guide rail gray level image; analyzing the target pixel point in the guide rail gray level image to obtain an edge detection response value of the target pixel point; the target pixel point is any pixel point in the guide rail gray level image; and responding to the situation that the edge detection response value of the target pixel point is larger than the set threshold value, taking the target pixel point as a concave defect edge point, and further obtaining the concave defect edge of the guide rail. Namely, the scheme of the invention can accurately detect the recess defect.

Description

Surface defect detection method for guide rail based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to a surface defect detection method for a guide rail based on image processing.
Background
The rail transit occupies important positions in the fields of freight transportation and passenger transportation by virtue of the advantages of large and rapid transportation capacity, high safety and reliability and the like. In recent years, the rail transit industry in China has been rapidly developed.
The track supports the train and guides the running direction of the train, and the safety state of the track directly influences the safe running of the train. However, with the great improvement of the running speed, the running mileage and the departure density, and the occasional improper operation of a train driver, such as sudden braking, the rail can be damaged to different degrees, so that the defects generated on the surface of the rail are more and more, and especially the rail is partially recessed. Therefore, if the defects on the surface of the rail cannot be treated in time, the transportation safety is affected, and even derailment and rollover accidents occur or the vehicle is damaged in serious cases.
Therefore, the defects on the surface of the guide rail can be timely found and repaired, the abrasion and damage of the guide rail can be reduced, the service life of the guide rail is prolonged, and the cost of maintaining and replacing the guide rail is reduced. So the detection of the surface defects of the guide rail is important to ensure the safe running of the train. The defect detection system can monitor and control the quality of the guide rail in real time, and timely detect the defect problems existing in the manufacturing, mounting and using processes of the guide rail, so that the quality and reliability of products are improved.
At present, the detection of the defects on the surface of the guide rail is usually carried out by collecting images of the guide rail and analyzing the images, but due to scratches generated by the use of the guide rail, the defects possibly affect the detection of the recessed defects, so that the defect detection error is larger, and the problem of inaccurate recessed defect detection exists.
Therefore, it is particularly important how the rail surface pit defect detection can be accurately performed.
Disclosure of Invention
The invention aims to provide a surface defect detection method for a guide rail based on image processing, which is used for solving the problem that the existing detection of the surface concave defect of the guide rail is inaccurate.
In order to solve the technical problems, the invention provides a surface defect detection method for a guide rail based on image processing, which comprises the following steps:
Acquiring a guide rail gray level image;
Analyzing the target pixel point in the guide rail gray level image to obtain an edge detection response value of the target pixel point; the target pixel point is any pixel point in the guide rail gray level image;
Responding to the situation that the edge detection response value of the target pixel point is larger than a set threshold value, taking the target pixel point as a concave defect edge point, and further obtaining a concave defect edge of the guide rail;
The edge detection response value obtaining process comprises the following steps:
Performing convolution operation on the target pixel point by adopting eight templates in different directions in a Kirsch operator to obtain mask values in all directions;
Calculating the variance of the mask values in all directions; taking any mask value in the target pixel points as a target mask value, and acquiring a variance change value of the target mask value; taking the absolute value of the difference between the variance and the variance change value as a mask influence value of the target mask value;
Obtaining the protruding degree of the target mask value according to the mask influence value, the obtained direction change amplitude in the direction of the target mask value and the direction change amplitude in the direction perpendicular to the direction of the target mask value;
Respectively calculating difference absolute values of the salience degree of the target mask value and any other mask value, sequencing the difference absolute values in order to obtain a difference sequence, and obtaining direction included angles corresponding to the difference absolute values in the difference sequence one by one; calculating the product of the deviation degree of any two adjacent difference absolute values in the difference absolute value sequence and the difference degree of the included angles of the two corresponding directions, and taking the reciprocal of the sum of the products as the direction optimal parameter value of the target mask value;
And obtaining an edge detection response value of the target pixel point according to the mask influence value and the direction preference parameter value of the target mask value.
In one embodiment, the direction preference parameter values are:
Wherein, Direction preference parameter value representing jth mask value in ith pixel point,/>Representing the included angle in the direction corresponding to the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Representing the included angle of the direction corresponding to the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Representing the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Representing the absolute value of the p-th difference in the sequence of differences of the j-th mask value in the i-th pixel.
In one embodiment, the method further comprises the step of normalizing the orientation preference parameter values prior to obtaining the edge detection response values.
In one embodiment, the edge detection response value is:
Wherein, Representing the edge detection response value of the ith pixel point,/>Direction preference parameter value representing jth mask value in ith pixel point,/>The mask influence value representing the jth mask value in the ith pixel point, the norm () function is a normalization function.
In one embodiment, the variance change value is a variance of the calculated remaining mask values after the target mask value among the mask values of all directions of the target pixel point is removed.
In one embodiment, the magnitude of the directional change in the direction in which the obtained target mask value is located is:
Wherein, Expressed as the magnitude of the change in direction of the jth mask value in the corresponding direction,/>Representing the gradient amplitude of the ith pixel,/>Representing the gradient amplitude of the ith pixel point and the ith neighborhood pixel point in the direction corresponding to the jth mask value,/>Representing the gray value of the ith pixel,/>The gray value of the ith pixel point in the ith neighborhood pixel point in the direction corresponding to the jth mask value is represented, and avg () is an averaging function.
In one embodiment, the degree of protrusion is:
Wherein, Represent the degree of protrusion of the jth mask value of the ith pixel point,/>Representing the direction change amplitude of the ith pixel point in the direction corresponding to the jth mask value,/>The direction change amplitude of the i-th pixel point in the vertical direction of the direction corresponding to the j-th mask value is represented.
In one embodiment, the rail gray scale image is obtained by graying the acquired rail image.
The beneficial effects of the invention are as follows:
According to the scheme, the mask value of each pixel point in the guide rail gray level image is determined by combining a Kirsch operator, and the direction optimal parameter value of the pixel point is determined by combining the influence of the mask value, so that the edge detection response value of each pixel point is obtained; compared with the existing Kirsch algorithm, the edge detection effect that the maximum value is directly selected as the response value is more accurate, the influence that the detection of the concave defect cannot reach the optimal effect due to the scratch problem of the guide rail can be eliminated, the edges of the scratch and the concave defect are distinguished, and an accurate concave defect detection result is obtained.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 schematically shows a flowchart of the steps of a method for detecting surface defects for a guide rail based on image processing according to the present invention;
FIG. 2 is a flow chart schematically showing the steps of acquiring an edge detection response value in a surface defect detection method for a guide rail based on image processing according to the present invention;
FIG. 3 schematically shows a schematic of eight different directions in a Kirsch operator;
Fig. 4 schematically shows a 3 x 3 window diagram of a target pixel of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of the specific implementation, structure, features and effects of the technical solution according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, a different one or another embodiment is not necessarily the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
The invention aims at detecting the concave defects of the surface for the guide rail, and the problem that the concave defects are detected inaccurately due to the fact that scratches possibly exist on the surface for the guide rail is solved, so that the surface defect detection method for the guide rail based on image processing is provided, and the concave defects can be accurately detected.
Specifically, as shown in fig. 1, the surface defect detection method for the guide rail based on image processing of the invention comprises the following steps:
At step S1, a rail gray scale image is acquired. In this embodiment, a camera is used to capture the surface of the guide rail, so as to obtain a guide rail image, and then the guide rail image is subjected to graying treatment, so as to obtain a guide rail gray image.
At step S2, analyzing a target pixel point in the guide rail gray level image to obtain an edge detection response value of the target pixel point; the target pixel point is any pixel point in the guide rail gray level image.
As shown in fig. 2, the process of acquiring the edge detection response value in this embodiment is:
At step S21, convolution operation is performed on the target pixel point by using templates in eight different directions in the Kirsch operator, so as to obtain mask values in all directions.
In this embodiment, a Kirsch operator is adopted, and eight templates in different directions are adopted to respectively perform mask convolution in corresponding directions on target pixel points in the guide rail gray scale image, so as to obtain mask values in each direction of each pixel point in the guide rail gray scale image. The target pixel point is any pixel point in the guide rail gray level image. In this embodiment, the size of the eight templates in different directions is 3×3. Of course, as other embodiments, the template size may be set according to practical situations, such as 5×5.
In one embodiment, as shown in fig. 3, the Kirsch operator has eight directions of 0, 1, 2, 3, 4, 5, 6, and 7, and the angle between any two directions is 45 °. Since Kirsch operators are prior art, they are not described in detail here.
Illustratively, taking the ith pixel point (target pixel point) as an example, the mask value of the ith pixel point in the guide rail gray scale image isAnd further obtaining a mask sequence of 8 mask values, wherein j is the number of the mask values. Note that j also represents the number of directions, that is, the number of directions is equal to the number of mask values, and one direction corresponds to one mask value.
At step S22, the variances of the mask values in all directions are calculated; taking any mask value in the target pixel points as a target mask value, and acquiring a variance change value of the target mask value; the absolute value of the difference between the variance and the variance change value is used as the mask influence value of the target mask value. In this embodiment, taking the jth mask value in the ith pixel as an example, the variances of all the mask values of the ith pixel are calculated by using the jth mask value as the target mask value
The variance change value of the jth mask value in this embodiment is to reject the jth mask value from the mask sequence and calculate the variance of the remaining mask values.
Exemplary, the 1 st mask value is taken as the target mask valueMask sequence/>Target mask value/>Removing to obtain a sequence/>At this time, calculate the sequence/>Variance/>The variance/>As target mask value/>Variance change value of (a). The variance change value of each mask value in the mask sequence is obtained, and is recorded as/>
Further, the mask influence value of the target mask value is:
Wherein, The mask influence value of the j-th mask value of the i-th pixel point is represented, so that the influence intensity of the j-th mask value on the i-th pixel point compared with the mask values in other directions is represented, namely, the larger the value is, the more the j-th mask value is more prominent compared with the mask values in other directions.
It should be noted that, when no defect exists, the mask values in all directions of the target pixel point are smaller, and the volatility is smaller, that is, the variance is smaller; if there is a defect, the volatility will be larger, so the variance change value of each mask value is determined to determine the mask value affecting the target pixel point, so as to provide support for the subsequent analysis of the target pixel point.
At step S23, the protrusion degree of the target mask value is obtained according to the mask influence value, the obtained direction change amplitude in the direction of the target mask value, and the direction change amplitude in the direction perpendicular to the direction of the target mask value.
The degree of protrusion of the target mask value in this embodiment is:
Wherein, Represent the degree of protrusion of the jth mask value of the ith pixel point,/>Representing the direction change amplitude of the ith pixel point in the direction corresponding to the jth mask value,/>Representing the direction change amplitude of the ith pixel point in the vertical direction of the direction corresponding to the jth mask value; /(I)The larger the value of (c) is, the greater the probability that the pixel corresponding to the mask value is a concave edge.
The magnitude of the change in direction in the direction corresponding to the j-th mask value is:
Wherein, Expressed as the magnitude of the change in direction of the jth mask value in the corresponding direction,/>Representing the gradient amplitude of the ith pixel,/>Gradient amplitude of the ith pixel point in the ith neighborhood pixel point in the direction corresponding to the jth mask value. By/>The gradient difference of (a) represents the amplitude of the change of the pixel point edge characteristic in the direction corresponding to the jth mask value, and avg () is an averaging function.
Representing the gray value of the ith pixel,/>Representing the gray value of the ith pixel point in the ith neighborhood pixel point in the direction corresponding to the jth mask value, passing through/>The gradation difference of (2) represents the gradation floating of the jth mask value in the direction, that is, the gradation change prominence, and the larger the value is, the larger the gradation change of the value in the corresponding direction is.
Since the region where the scratch is located is also considered in the present embodiment, it is also necessary to determine the pixel variation difference in the vertical direction of the direction where the jth mask value is located, that is, calculate the vertical variation amplitude of the jth mask value in the corresponding directionThe calculation mode is the same as the calculation method of the direction change amplitude of the jth mask value in the corresponding direction, namely, the neighborhood pixel point in the vertical direction of the jth mask value in the direction is obtained, the gray level difference and the gradient difference of the target pixel point and the neighborhood pixel point in the vertical direction are calculated, and the direction change amplitude is further obtained; /(I)Is used for characterizing the salient amplitude of the influence of the jth mask value on the ith pixel point.
In the embodiment, the gradient change values of the pixels at the concave defect positions in all directions are similar, the gradient values of the pixels at the scratch positions in a single direction are more prominent, and the gradient changes in the protruding directions are unbalanced. Therefore, the influence of the surrounding adjacent pixel points on the target pixel point is determined by comparing the gradient difference and the gray scale difference of the target pixel point and the surrounding adjacent pixel points.
In one embodiment, when determining the neighbor pixel, a corresponding window size is further required to be set, and the neighbor pixel is determined in the window, and by taking the ith pixel as a center, the window size is determined to be 3×3, that is, two neighbor pixels exist in the direction corresponding to the jth mask value. As shown in fig. 4, based on the mask value of the direction 2, two neighboring pixel points of the target pixel point P A are P 1 and P 5; it should be noted that, the neighborhood pixels in the direction 2 and the direction 6 are the same, the neighborhood pixels in the direction 0 and the direction 4 are the same, the neighborhood pixels in the direction 1 and the direction 5 are the same, and the neighborhood pixels in the direction 3 and the direction 7 are the same.
At step S24, calculating the difference absolute values of the salience degree of the target mask value and any other mask value, and sorting the magnitudes of the difference absolute values to obtain a difference sequence, and obtaining direction included angles corresponding to the difference absolute values in the difference sequence one by one; and calculating the product of the deviation degree of any two adjacent difference absolute values in the difference absolute value sequence and the difference degree corresponding to the included angle between the two directions, and taking the reciprocal of the sum of the products as the direction optimal parameter value of the target mask value.
The included angle of the directions ranges from 0 degrees to 180 degrees, and the included angle can be calculated according to the included angle between two adjacent directions. Illustratively, as shown in FIG. 3, direction 1 is at 45 to direction 2, direction 1 is at 135 to direction 6, and direction 1 is at 180 to direction 5. The direction preference parameter values in this embodiment are:
Wherein, Direction preference parameter value representing jth mask value in ith pixel point,/>Representing the included angle in the direction corresponding to the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Representing the included angle of the direction corresponding to the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Representing the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Representing the absolute value of the p-th difference in the sequence of differences of the j-th mask value in the i-th pixel.
Wherein,The influence of the mask value representing the direction on the pixel point is larger, and the probability that the pixel point corresponding to the mask is a concave edge point is larger. By/>The deviation degree of the absolute value of the difference representing the degree of protrusion of the two mask values is larger the value thereof is. By/>The position relationship of the two mask values is characterized. By passing throughThe degree of deviation of the absolute values of two adjacent differences in the difference sequence is characterized because the deviation of the masks in each direction of the pixel point of the dent defect is smaller compared with the pixel point of the scratch position, so that the smaller the value is, the greater the possibility that the pixel point is the pixel point of the dent position is.
The difference value sequence is obtained by determining the difference absolute value of the salience degree of the target mask value and any other mask value according to the salience degree corresponding to each mask value of the target pixel point, and sequencing the difference absolute values to obtain the difference value sequence. In this embodiment, the sorting of the absolute values of the differences may be from small to large, or from large to small.
Illustratively, taking the jth mask value as an example, sorting the absolute values of the differences between the jth mask value and the protruding degree of any other mask value from small to large, so as to obtain the jth difference sequence.
When the absolute value of the difference is calculated, the two mask values in different directions are calculated, so that each absolute value of the difference corresponds to an included angle between the two mask values.
So far, the direction preference parameter value of each mask value in all the pixel points is obtained.
It should be noted that, the pixel points at the concave defect position have a relatively similar change relationship in all directions, and only have a change in a certain symmetry direction at the scratch position, so that the characteristic of concave or scratch can be represented by calculating the absolute value of the difference value of the degree of protrusion.
In step S25, an edge detection response value of the target pixel point is obtained according to the mask influence value and the direction preference parameter value of the target mask value.
In this embodiment, according to the direction preference parameter values of all mask values of the target pixel point and the mask influence values corresponding thereto, an edge detection response value can be obtained, and it should be noted that, before the edge detection response value is obtained, the direction preference parameter values of the mask values need to be normalized, and then the mask influence values of the mask values are weighted by using the direction preference parameter values, so as to obtain the edge detection response value of the pixel point, which specifically includes:
Wherein, Representing the edge detection response value of the ith pixel point,/>Direction preference parameter value representing jth mask value in ith pixel point,/>The mask influence value representing the jth mask value in the ith pixel point, the norm () function is a normalization function.
At step S3, in response to the edge detection response value of the target pixel point being greater than the set threshold, the target pixel point is taken as a recessed defect edge point, thereby obtaining a rail recessed defect edge. Specifically, in this embodiment, the edge detection response value of each pixel point in the guide rail gray level image is analyzed to determine the pixel point belonging to the edge point of the recess defect, thereby obtaining the edge of the recess defect of the guide rail, and realizing the detection of the recess defect of the guide rail surface.
In this embodiment, when analyzing the edge detection response value of each pixel point in the guide rail gray scale image, the threshold value is set and compared with the edge detection response value of the target pixel point, and when the edge detection response value of the target pixel point is greater than the set threshold value, the target pixel point is regarded as the concave defect edge point.
The set threshold value can be determined by statistics history data, for example, calculating edge detection response values of a plurality of known concave edges, and taking the average value as the set threshold value; of course, as other embodiments, the value of the oxford threshold may be used as the set threshold, where the oxford threshold is obtained by dividing the edge detection response values of all the obtained pixel points.
According to the scheme, the existing Kirsch operator is utilized to acquire the mask values of the target pixel points in different directions, the fluctuation analysis of the mask values in different directions is carried out, and the edge analysis is carried out on the target pixel points by combining the influence of the neighborhood pixel points in the corresponding directions of the mask values, so that whether the target pixel points are concave edge points or not can be accurately determined, and the accurate detection of the concave defects of the surface for the guide rail is realized.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.

Claims (6)

1. The surface defect detection method for the guide rail based on image processing is characterized by comprising the following steps of:
Acquiring a guide rail gray level image;
Analyzing the target pixel point in the guide rail gray level image to obtain an edge detection response value of the target pixel point; the target pixel point is any pixel point in the guide rail gray level image;
Responding to the situation that the edge detection response value of the target pixel point is larger than a set threshold value, taking the target pixel point as a concave defect edge point, and further obtaining a concave defect edge of the guide rail;
The edge detection response value obtaining process comprises the following steps:
Performing convolution operation on the target pixel point by adopting eight templates in different directions in a Kirsch operator to obtain mask values in all directions;
Calculating the variance of the mask values in all directions; taking any mask value in the target pixel points as a target mask value, and acquiring a variance change value of the target mask value; taking the absolute value of the difference between the variance and the variance change value as a mask influence value of the target mask value;
Obtaining the protruding degree of the target mask value according to the mask influence value, the obtained direction change amplitude in the direction of the target mask value and the direction change amplitude in the direction perpendicular to the direction of the target mask value;
Respectively calculating difference absolute values of the salience degree of the target mask value and any other mask value, sequencing the difference absolute values in order to obtain a difference sequence, and obtaining direction included angles corresponding to the difference absolute values in the difference sequence one by one; calculating the product of the deviation degree of any two adjacent difference absolute values in the difference absolute value sequence and the difference degree of the included angles of the two corresponding directions, and taking the reciprocal of the sum of the products as the direction optimal parameter value of the target mask value;
obtaining an edge detection response value of the target pixel point according to the mask influence value and the direction optimization parameter value of the target mask value;
the direction change amplitude value in the direction of the obtained target mask value is as follows:
Wherein, Representing the direction change amplitude of the ith pixel point in the direction corresponding to the jth mask value,/>Representing the gradient amplitude of the ith pixel,/>Representing the gradient amplitude of the ith pixel point and the ith neighborhood pixel point in the direction corresponding to the jth mask value,/>Representing the gray value of the ith pixel,/>Representing the gray value of the ith pixel point in the ith neighborhood pixel point in the direction corresponding to the jth mask value, wherein avg () is an average function;
The protruding degree is:
Wherein, Represent the degree of protrusion of the jth mask value of the ith pixel point,/>The direction change amplitude of the i-th pixel point in the vertical direction of the direction corresponding to the j-th mask value is represented.
2. The method for detecting surface defects for guide rails based on image processing according to claim 1, wherein the direction preference parameter values are:
Wherein, Direction preference parameter value representing jth mask value in ith pixel point,/>Representing the included angle in the direction corresponding to the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>The p+1st difference absolute value in the difference sequence of the jth mask value in the ith pixel point corresponds to the direction included angle,/>Representing the absolute value of the p-th difference in the difference sequence of the j-th mask value in the i-th pixel point,/>Represents the p+1st absolute value of the difference in the difference sequence of the jth mask value in the ith pixel point.
3. The method for detecting surface defects for guide rails based on image processing according to claim 1, further comprising the step of normalizing the orientation preference parameter values before acquiring the edge detection response values.
4. A surface defect detection method for a guide rail based on image processing according to claim 3, wherein the edge detection response value is:
Wherein, Representing the edge detection response value of the ith pixel point,/>Direction preference parameter value representing jth mask value in ith pixel point,/>The mask influence value representing the jth mask value in the ith pixel point, the norm () function is a normalization function.
5. The method according to claim 1, wherein the variance change value is a variance of the calculated remaining mask values after removing the target mask value from the mask values in all directions of the target pixel.
6. The method for detecting surface defects for a rail according to claim 1, wherein the rail gray scale image is obtained by subjecting the collected rail image to gray scale processing.
CN202410420855.3A 2024-04-09 2024-04-09 Surface defect detection method for guide rail based on image processing Active CN118015000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410420855.3A CN118015000B (en) 2024-04-09 2024-04-09 Surface defect detection method for guide rail based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410420855.3A CN118015000B (en) 2024-04-09 2024-04-09 Surface defect detection method for guide rail based on image processing

Publications (2)

Publication Number Publication Date
CN118015000A CN118015000A (en) 2024-05-10
CN118015000B true CN118015000B (en) 2024-06-21

Family

ID=90954299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410420855.3A Active CN118015000B (en) 2024-04-09 2024-04-09 Surface defect detection method for guide rail based on image processing

Country Status (1)

Country Link
CN (1) CN118015000B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069807A (en) * 2015-08-28 2015-11-18 西安工程大学 Punched workpiece defect detection method based on image processing
CN116152262A (en) * 2023-04-24 2023-05-23 东莞市群安塑胶实业有限公司 Method for detecting appearance defects of ionic intermediate film

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101175737B1 (en) * 2010-12-14 2012-08-21 엠텍비젼 주식회사 Defect pixel processing method and device
CN112819775B (en) * 2021-01-28 2022-07-19 中国空气动力研究与发展中心超高速空气动力研究所 Segmentation and reinforcement method for damage detection image of aerospace composite material
CN116309292A (en) * 2022-12-30 2023-06-23 沈阳派得林科技有限责任公司 Intelligent weld defect identification method based on visual conversion layer and instance segmentation
CN116757990A (en) * 2023-01-09 2023-09-15 河南省科学院应用物理研究所有限公司 Railway fastener defect online detection and identification method based on machine vision
CN117011297B (en) * 2023-10-07 2024-02-02 惠州市凯默金属制品有限公司 Aluminum alloy automobile accessory die defect detection method based on image processing
CN117197140B (en) * 2023-11-07 2024-02-20 东莞市恒兴隆实业有限公司 Irregular metal buckle forming detection method based on machine vision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069807A (en) * 2015-08-28 2015-11-18 西安工程大学 Punched workpiece defect detection method based on image processing
CN116152262A (en) * 2023-04-24 2023-05-23 东莞市群安塑胶实业有限公司 Method for detecting appearance defects of ionic intermediate film

Also Published As

Publication number Publication date
CN118015000A (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN115272334B (en) Method for detecting tiny defects on surface of steel rail under complex background
CN111079747A (en) Railway wagon bogie side frame fracture fault image identification method
CN115984284B (en) X-ray ship body weld image detection method for ship maintenance
CN115601364A (en) Golden finger circuit board detection method based on image analysis
US20180253836A1 (en) Method for automated detection of defects in cast wheel products
CN107273802B (en) Method and device for detecting fault of brake shoe drill rod ring of railway train
CN111260629A (en) Pantograph structure abnormity detection algorithm based on image processing
CN110047070B (en) Method and system for identifying rail wear degree
CN106372667A (en) Method for detecting adverse state of inclined sleeve part screws of high-speed train overhead line system
CN114758322B (en) Road quality detection system based on machine identification
CN111080614A (en) Method for identifying damage to rim and tread of railway wagon wheel
CN113112501A (en) Vehicle-mounted track inspection device and method based on deep learning
CN117094916B (en) Visual inspection method for municipal bridge support
CN112288717A (en) Method for detecting foreign matters on side part of motor train unit train
CN117079219A (en) Vehicle running condition monitoring method and device applied to trailer service
CN116993804B (en) Stirrup size detection method and system based on LSM algorithm
CN118015000B (en) Surface defect detection method for guide rail based on image processing
Hashmi et al. Computer-vision based visual inspection and crack detection of railroad tracks
CN117314921A (en) RFID-based starting point detection and treatment method for track inspection equipment
CN109978879B (en) Box corner in-groove state detection method based on railway wagon loading video monitoring
CN112945976B (en) Method and device for detecting contact fatigue crack of steel rail
CN112699794B (en) Method for identifying dislocation fault images of middle rubber and upper and lower floor plates of wagon axle box rubber pad
CN112801110A (en) Target detection method and device for image distortion correction of linear array camera of rail train
CN112150434A (en) Tire defect detection method, device, equipment and storage medium
CN117291914B (en) Automobile part defect detection method, system, computer and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant