CN112329783A - Image processing-based coupler yoke break identification method - Google Patents

Image processing-based coupler yoke break identification method

Info

Publication number
CN112329783A
Authority
CN
China
Prior art keywords
image
suspected
target image
features
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011231729.1A
Other languages
Chinese (zh)
Other versions
CN112329783B (en)
Inventor
汤岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd filed Critical Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202011231729.1A priority Critical patent/CN112329783B/en
Publication of CN112329783A publication Critical patent/CN112329783A/en
Application granted granted Critical
Publication of CN112329783B publication Critical patent/CN112329783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A method for identifying coupler yoke breakage based on image processing, belonging to the technical field of image detection. The invention aims to solve the low accuracy of existing methods for detecting coupler yoke breakage. First, the target image is enhanced, the boundary region between the supporting plate and the background is extracted with a locally adaptive threshold, and the segmentation result is counted row by row in the horizontal direction to search for the upper and lower boundaries, yielding a precisely positioned cropped image. The pixel variance of this crop is then computed column by column to obtain a variance curve; the curve is traversed to find the start and end positions of its jump, the start position is expanded outward by L pixels in the precisely positioned crop to give the starting coordinate of the suspected fault region, and the suspected fault region sub-image is cropped from these coordinates. Finally, features are extracted from the suspected fault region sub-image and the fault is identified with an SVM classifier. The method is mainly used for identifying coupler yoke breakage.

Description

Image processing-based coupler yoke break identification method
Technical Field
The invention belongs to the technical field of image detection, and particularly relates to a method for identifying coupler yoke breakage.
Background
To ensure the safe operation of railway trains, railway inspection technicians need to check the various components of a train. In the traditional approach, a detection device photographs the train or its individual parts, and fault points are then determined by manual inspection of the images. This allows faults to be detected while the vehicle is moving, without stopping. However, manual image inspection suffers from low efficiency, high labor cost, susceptibility to fatigue, high work intensity and the need for training.
For faults with obvious characteristics and fixed criteria, image processing can be used for identification, which greatly saves labor, shortens detection time, and makes accuracy less susceptible to fatigue and similar factors; identifying coupler yoke supporting plate faults with image processing technology is therefore a better detection method. However, when existing image processing techniques are applied directly to coupler yoke supporting plate fault identification, detection accuracy remains low, while neural-network-based recognition suffers from long model training times and long detection times.
Disclosure of Invention
The invention aims to solve the problem of low accuracy of the existing detection method for the breakage of the coupler yoke.
The coupler yoke breaking identification method based on image processing comprises the following steps:
S1, acquiring a target image containing the component to be detected, and enhancing it to obtain an enhanced target image;
S2, extracting the boundary region between the supporting plate and the background using locally adaptive threshold segmentation; counting the segmentation result row by row in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image;
positioning and cropping the coupler yoke from the enhanced target image according to the position of the upper or lower boundary to obtain a precisely positioned cropped image; computing the pixel variance of the precisely positioned cropped image column by column to obtain a variance curve; traversing the variance curve to find the start and end positions of its jump; expanding the start position outward by L pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region, and cropping the suspected fault region sub-image from the obtained starting coordinate;
S3, extracting features from the suspected fault region sub-image;
S4, identifying the coupler yoke fault with an SVM classifier based on the extracted features of the suspected fault region sub-image.
Further, the process of enhancing the target image comprises the following steps:
normalizing the target image; dividing the gray values of the normalized image, which lie between 0 and 1, into 10 intervals with a step of 0.1 and counting the number of pixel values in each interval; selecting the upper bound of the interval with the largest pixel count as the center of the enhancement curve; and enhancing the target image according to the following formula:
[Enhancement formula, presented only as an image in the original publication (not reproduced here); it defines img_dst in terms of img_normal, img_val, F_off, exp and the enhancement power y.]
where img_val is the pixel-count statistic over the 0.1-step intervals of the normalized image, F_off is the upper bound of the interval with the largest pixel count, exp denotes the exponential with base e, y is the enhancement power, img_normal is the normalized pixel value of the target image before enhancement, and img_dst is the normalized pixel value of the enhanced target image;
the enhanced target image is then recovered from the enhanced normalized image.
Further, the process in S2 of counting the segmentation result in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image comprises the following steps:
searching upward and downward from the vertical center of the enhanced target image; when a row is found in which the number of white pixels exceeds half the width of the enhanced target image, recording its position as A1 and continuing the search; if 5 consecutive rows satisfy this condition, selecting A1 as a boundary and stopping the search; otherwise, continuing the search;
if qualifying rows are found at both the upper and lower ends, both the upper and lower boundaries are found; if only one boundary is found, the other boundary is completed from the found upper or lower boundary.
Further, in S2, the start position is expanded outward by 20 pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region.
Further, in S3, before extracting the features of the suspected fault region sub-image, the sub-image is adjusted to a fixed size, and its features are then extracted.
Further, before extracting the suspected fault area sub-graph features, the suspected fault area sub-graph needs to be adjusted to 128 × 128.
Further, the features extracted from the suspected-faulty-area sub-image in S3 include gray scale, texture, and LBP features of the suspected-faulty-area sub-image.
Further, the grayscale feature is the statistical distribution of gray values, and the texture feature is the edge gradient statistic obtained by applying a Sobel operator to the image.
Further, the process in S4 of identifying the fault with the SVM classifier based on the extracted features of the suspected fault region sub-image comprises the following steps:
normalizing the extracted features of the suspected fault region sub-image, and then inputting the normalized features into the SVM classifier to recognize the fault.
Further, after the features of the suspected fault region sub-image are extracted and normalized, and before they are input into the SVM classifier, the normalized features are processed with the 3σ property to obtain the processed feature cls:
[Formula for cls, presented only as an image in the original publication (not reproduced here); it defines the processed feature cls in terms of f_xi, the mean u and the standard deviation v using the 3σ property.]
where cls is the processed feature and f_xi is the i-th normalized feature belonging to class x;
inputting the feature cls subjected to the 3-sigma property processing into an SVM classifier to identify the fault;
the 3σ property is determined during training of the SVM classifier: u is the mean of the sample data in training and v is the standard deviation of the samples in training; when training the SVM classifier, features of the suspected fault region sub-images are extracted from the training-set images and normalized, the normalized features of each suspected fault region sub-image are taken as sample data, and the mean u and standard deviation v of the sample data are computed for each feature.
Beneficial effects:
1. The invention uses image processing for fault identification and can automatically detect coupler yoke breakage, which greatly saves labor, reduces workload, and improves the discovery rate and accuracy of fault identification.
2. The improved adaptive image enhancement method enhances image details, which ensures detection accuracy.
3. In the invention, the mean of the extracted features is taken as the center of the corresponding target features: the closer a feature is to the mean, the higher the similarity, and the farther away, the larger the difference. The data are processed with the 3σ property: the processed feature cls increases the weight of values near the center and reduces the weight of values far from it, which reduces the influence of features that deviate strongly at prediction time on the classification.
4. Recognizing images with SVM machine learning within the image-processing pipeline keeps the detection speed high and the processing and training times short.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying a coupler yoke break based on image processing according to a first embodiment;
FIG. 2a is an image before enhancement and FIG. 2b is an image after enhancement;
FIG. 3 is a partial adaptive segmentation result;
FIG. 4 shows the statistical results in the horizontal direction;
FIG. 5 is a flow chart of an embodiment;
fig. 6 is an enhancement curve.
Detailed Description
It should be noted that, in the present invention, the embodiments disclosed in the present application may be combined with each other without conflict.
The first embodiment is as follows: this embodiment is described with reference to Fig. 1.
This embodiment is a method for identifying coupler yoke breakage based on image processing, comprising the following steps:
S1, acquiring a target image containing the component to be detected, and enhancing it to obtain an enhanced target image;
S2, extracting the boundary region between the supporting plate and the background using locally adaptive threshold segmentation; counting the segmentation result row by row in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image;
positioning and cropping the coupler yoke from the enhanced target image according to the position of the upper or lower boundary to obtain a precisely positioned cropped image; computing the pixel variance of the precisely positioned cropped image column by column to obtain a variance curve; traversing the variance curve to find the start and end positions of its jump; expanding the start position outward by L pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region, and cropping the suspected fault region sub-image from the obtained starting coordinate;
S3, extracting features from the suspected fault region sub-image;
S4, identifying the coupler yoke fault with an SVM classifier based on the extracted features of the suspected fault region sub-image.
The second embodiment is as follows:
the embodiment is a coupler yoke breaking identification method based on image processing, and the process of enhancing a target image in the embodiment comprises the following steps:
normalizing the target image; dividing the gray values of the normalized image, which lie between 0 and 1, into 10 intervals with a step of 0.1 and counting the number of pixel values in each interval; selecting the upper bound of the interval with the largest pixel count as the center of the enhancement curve; and enhancing the target image according to the following formula:
[Enhancement formula, presented only as an image in the original publication (not reproduced here); it defines img_dst in terms of img_normal, img_val, F_off, exp and the enhancement power y.]
where img_val is the pixel-count statistic over the 0.1-step intervals of the normalized image, F_off is the upper bound of the interval with the largest pixel count, exp denotes the exponential with base e, y is the enhancement power, img_normal is the normalized pixel value of the target image before enhancement, and img_dst is the normalized pixel value of the enhanced target image;
the enhanced target image is then recovered from the enhanced normalized image.
Other steps and parameters are the same as in the first embodiment.
The third concrete implementation mode:
This embodiment is a method for identifying coupler yoke breakage based on image processing; the process in S2 of counting the segmentation result in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image comprises the following steps:
searching upward and downward from the vertical center of the enhanced target image; when a row is found in which the number of white pixels exceeds half the width of the enhanced target image, recording its position as A1 and continuing the search; if 5 consecutive rows satisfy this condition, selecting A1 as a boundary and stopping the search; otherwise, continuing the search;
if qualifying rows are found at both the upper and lower ends, both the upper and lower boundaries are found; if only one boundary is found, the other boundary is completed from the found upper or lower boundary; if neither boundary can be located, this indicates an image quality problem and an alarm is raised.
Other steps and parameters are the same as in the first or second embodiment.
The fourth concrete implementation mode:
This embodiment is a method for identifying coupler yoke breakage based on image processing; in S2 of this embodiment, the start position is expanded outward by 20 pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region.
Other steps and parameters are the same as in one of the first to third embodiments.
The fifth concrete implementation mode:
This embodiment is a method for identifying coupler yoke breakage based on image processing; in S3, before extracting the features of the suspected fault region sub-image, the sub-image is adjusted to a fixed size, and its features are then extracted.
Other steps and parameters are the same as in one of the first to fourth embodiments.
The sixth specific implementation mode:
in the embodiment, the suspected-fault-region sub-graph needs to be adjusted to 128 × 128 before the suspected-fault-region sub-graph features are extracted.
Other steps and parameters are the same as in one of the first to fifth embodiments.
The seventh embodiment:
This embodiment is a method for identifying coupler yoke breakage based on image processing; the features extracted from the suspected fault region sub-image in S3 include its grayscale, texture and LBP features.
Other steps and parameters are the same as in one of the first to sixth embodiments.
The eighth specific implementation mode:
This embodiment is a method for identifying coupler yoke breakage based on image processing, in which the grayscale feature is the statistical distribution of gray values and the texture feature is the edge gradient statistic obtained by applying a Sobel operator to the image.
The other steps and parameters are the same as in the seventh embodiment.
The ninth specific implementation mode:
This embodiment is a method for identifying coupler yoke breakage based on image processing; the process in S4 of identifying the fault with an SVM classifier based on the extracted features of the suspected fault region sub-image comprises the following steps:
normalizing the extracted features of the suspected fault region sub-image, and then inputting the normalized features into the SVM classifier to recognize the fault.
Other steps and parameters are the same as in one of the first to eighth embodiments.
The tenth specific implementation mode:
This embodiment is a method for identifying coupler yoke breakage based on image processing; in this embodiment, after the extracted features of the suspected fault region sub-image are normalized and before they are input into the SVM classifier, the normalized features are processed with the 3σ property to obtain the processed feature cls; the feature cls obtained from the 3σ processing is input into the SVM classifier to identify the fault;
the 3σ property is determined during training of the SVM classifier: u is the mean of the sample data in training and v is the standard deviation of the samples in training; when training the SVM classifier, features of the suspected fault region sub-images are extracted from the training-set images and normalized, the normalized features of each suspected fault region sub-image are taken as sample data, and the mean u and standard deviation v of the sample data are computed for each feature.
The training process of the SVM classifier comprises the following steps:
S401, acquiring target images containing the component to be detected and constructing a training set; enhancing the target images in the training set to obtain enhanced target images;
S402, extracting the boundary region between the supporting plate and the background with a locally adaptive threshold; counting the segmentation result row by row in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image;
positioning and cropping the coupler yoke from the enhanced target image according to the upper or lower boundary to obtain a precisely positioned cropped image; computing the pixel variance of the precisely positioned cropped image column by column to obtain a variance curve; traversing the variance curve to find the start and end positions of its jump; expanding the start position outward by L pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region, and cropping the suspected fault region sub-image from the obtained coordinate;
S403, extracting features from the suspected fault region sub-image;
S404, normalizing the extracted features, taking the normalized features of each suspected fault region sub-image as sample data, computing the mean u and standard deviation v of the sample data for each feature, and processing the normalized features according to the 3σ property to obtain the processed feature cls:
[Formula for cls, presented only as an image in the original publication (not reproduced here); it defines the processed feature cls in terms of f_xi, the mean u and the standard deviation v using the 3σ property.]
where cls is the processed feature, f_xi is the i-th normalized feature belonging to class x, u is the mean of the sample data, and v is the standard deviation;
and training the SVM classifier by using the processed features.
Other steps and parameters are the same as in the ninth embodiment.
Examples
Referring to Fig. 5, this embodiment is a method for identifying coupler yoke breakage based on image processing, comprising the following steps:
1. image acquisition
A 2D line-scan passing-vehicle image is acquired with the image acquisition equipment installed at the detection station. To ensure data diversity and high stability of the final algorithm, images are collected at different times and in different weather, and a passing-vehicle image set is constructed.
2. Rough cropping of sample sub-images
Using prior knowledge such as hardware data and wheelbase, the image containing the car-end connection area, i.e. the target image, is cropped from the passing-vehicle image. Margins are reserved on the top, bottom, left and right of the target image to prevent incomplete cropping caused by errors such as hardware wheelbase errors. Cropping the target image speeds up program recognition.
3. Algorithm design and improvement
1) Improved image enhancement
Conventional image enhancement tends to enhance the brightness of the whole image. When the coupler yoke breaks, the gray level of the fault region changes, producing a bright reflective break or a dark shadow region. A new enhancement scheme is therefore adopted that mainly enhances pixels near the two ends of the gray range, making the image features clearer and more distinct. The brightness of images taken outdoors is not uniform, so it is unreasonable to use the same parameters for different images. The invention sets the center of the enhancement curve according to the gray-level mode of the image to adapt to different brightness levels. The target image is normalized; in this embodiment, normalization is performed by dividing the pixel values directly by 255. The gray values of the normalized image, which lie between 0 and 1, are then divided into 10 intervals with a step of 0.1, the number of pixel values in each interval is counted, and the upper bound of the interval with the largest count is selected as the center F_off of the enhancement curve; the resulting curves are shown in Fig. 6. In Fig. 6, the x-axis is the gray level of the input image, the y-axis is the gray level of the enhanced image, and the labels 0.1 to 0.8 in the upper left corner mark the curves for 8 values of F_off. The specific enhancement formula is as follows:
[Enhancement formula, presented only as an image in the original publication (not reproduced here); it defines img_dst in terms of img_normal, img_val, F_off, exp and the enhancement power y.]
where img_val is the pixel-count statistic over the 0.1-step intervals of the normalized image, F_off is the upper bound of the interval with the largest pixel count, exp denotes the exponential with base e, y is the enhancement power, img_normal is the normalized pixel value of the target image before enhancement, and img_dst is the normalized pixel value of the enhanced target image;
and multiplying the enhanced image by 255 to obtain an enhanced target image.
The enhancement method of the invention enhances the gray levels at both ends while suppressing the middle region, and enhances faults effectively; the effect is shown in Figs. 2a and 2b, where Fig. 2a is the image before enhancement and Fig. 2b is the image after enhancement.
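To make the enhancement step concrete, the Python sketch below normalizes the image, builds the 10-interval histogram with a 0.1 step, and takes the upper bound of the fullest interval as F_off, as described above. Because the patent's exact enhancement formula is published only as an image, the curve applied here (a sigmoid centered at F_off with power y) is an assumption used purely for illustration, not the patented formula.

```python
import numpy as np

def enhance_target_image(img_gray, y=8.0):
    """Adaptive enhancement sketch: pick the enhancement-curve centre F_off from
    the 10-bin histogram of the normalized image, then apply a curve centred there.
    NOTE: the applied curve is an assumed sigmoid; the patent's exact formula is
    published only as an image and is not reproduced here."""
    img_normal = img_gray.astype(np.float32) / 255.0           # normalize to [0, 1]
    counts, edges = np.histogram(img_normal, bins=10, range=(0.0, 1.0))
    f_off = edges[np.argmax(counts) + 1]                       # upper bound of fullest bin
    img_dst = 1.0 / (1.0 + np.exp(-y * (img_normal - f_off)))  # assumed enhancement curve
    return (img_dst * 255.0).clip(0, 255).astype(np.uint8), f_off
```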
2) Fault detection
Accurate positioning: the specific vertical position of the coupler yoke is located using local adaptive threshold segmentation plus horizontal projection:
more texture feature false interference is used during accurate positioning, the boundary region of the coupler yoke and the background is extracted by adopting local self-adaptive threshold segmentation, and the local self-adaptive segmentation result is shown in figure 3. Counting the sum of gray values of pixel points according to the horizontal (line) in fig. 3, then sorting the sum of pixel values of the line into an image according to the line unit, namely as shown in fig. 4, searching towards the upper end and the lower end respectively by taking the vertical central point of the enhanced target image as a reference, when the position of the line with the number of white pixel points exceeding half of the width of the enhanced target image in a certain line of pixels is found, recording the position as A1, continuing to search, if 5 continuous lines meet the condition that the width of the enhanced target image exceeds half, selecting A1 as a boundary to stop searching, and if not, continuing to search. If two lines meeting the conditions are found at the same time, finding an upper boundary and a lower boundary; if only one boundary is found, completing the other boundary according to the found upper boundary or lower boundary; and if one boundary is not positioned, the problem of image quality is indicated to be treated by alarm.
The coupler yoke is precisely positioned and cropped from the enhanced target image according to the determined upper or lower boundary, giving the precisely positioned cropped image, and the pixel variance of this crop is computed column by column to obtain a variance curve. The variance reflects how much a column of pixels varies, and the curve fluctuates where a break occurs. The variance curve is traversed to find the start and end positions of its jump, and the start position is expanded outward by 20 pixels in the precisely positioned crop to give the starting coordinate of the suspected fault region, ensuring that the complete suspected fault region is captured. The suspected fault region sub-image is cropped from the obtained starting coordinate, and all sub-images are resized to a uniform 128 × 128.
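The column-variance step could be sketched as follows; the jump-detection criterion (a multiple of the median column variance) is an assumption, since the patent only states that the curve fluctuates at a break, while the 20-pixel expansion and the 128 × 128 resize follow the text.

```python
import cv2
import numpy as np

def crop_suspected_region(located_img, margin=20, jump_factor=2.0):
    """Compute the per-column pixel variance of the precisely positioned crop,
    find the start/end of the jump in the variance curve, expand by `margin`
    pixels and crop the suspected fault sub-image (jump_factor is assumed)."""
    col_var = located_img.astype(np.float32).var(axis=0)   # variance per column
    thresh = jump_factor * np.median(col_var)               # assumed jump criterion
    jump_cols = np.flatnonzero(col_var > thresh)
    if jump_cols.size == 0:
        return None                                          # no suspected region found
    start = max(jump_cols[0] - margin, 0)                    # expand outward by 20 pixels
    end = min(jump_cols[-1] + margin, located_img.shape[1])
    sub = located_img[:, start:end]
    return cv2.resize(sub, (128, 128))                       # unify the sub-image size
```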
The suspected fault region sub-image is then processed further and its grayscale, texture and LBP features are extracted. The grayscale feature is the statistical distribution of gray values, and the texture feature is the edge gradient statistic obtained by applying a Sobel operator to the image. Different features may have different ranges, so each extracted feature is normalized separately:
f_out = f_in / (max(f_in) - min(f_in))
where f_out is the normalized feature and f_in is the original input feature.
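One possible realization of the grayscale, Sobel-texture and LBP features and of the normalization f_out = f_in / (max(f_in) - min(f_in)), using OpenCV, NumPy and scikit-image; the histogram bin counts and the LBP radius/points are assumptions, as the patent does not fix them.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_features(sub_img):
    """Grayscale histogram, Sobel edge-gradient statistics and an LBP histogram
    for a 128x128 suspected-fault sub-image (bin counts and LBP radius/points
    are assumptions)."""
    gray_hist, _ = np.histogram(sub_img, bins=32, range=(0, 256))    # gray distribution
    gx = cv2.Sobel(sub_img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(sub_img, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    tex_hist, _ = np.histogram(grad, bins=32)                         # edge gradient statistic
    lbp = local_binary_pattern(sub_img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))           # LBP distribution
    return [gray_hist.astype(np.float32),
            tex_hist.astype(np.float32),
            lbp_hist.astype(np.float32)]

def normalize(f_in):
    """f_out = f_in / (max(f_in) - min(f_in)), as in the text."""
    span = f_in.max() - f_in.min()
    return f_in / span if span > 0 else f_in
```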
The normalized features are processed further: the normalized features of each suspected fault region sub-image are taken as sample data, the mean u and standard deviation v of the sample data are computed for each feature, and the extracted features are then processed according to the 3σ property of the corresponding sample data to obtain the processed feature cls, which is used as the training data set to increase classification accuracy;
[Formula for cls, presented only as an image in the original publication (not reproduced here); it defines the processed feature cls in terms of f_xi, the mean u and the standard deviation v using the 3σ property.]
where cls is the processed feature, f_xi is the i-th normalized feature belonging to class x (normal or fault), u is the mean of the sample data, and v is the standard deviation;
and training the SVM classifier by using the processed features to obtain a two-classification model. When a new image is obtained, the same processing method is adopted for fault identification.
The data set contains positive and negative samples: positive samples are normal images and negative samples are fault images. Each sample contains the three features extracted from the image, namely grayscale, texture and LBP. The means and variances of these three features are computed separately for the positive and negative samples, the three feature vectors are processed accordingly to obtain the processed feature cls, and the three feature vectors of the positive and negative samples are used as the training data set to train the SVM classifier.
When processing the data to obtain the processed feature cls, the means and variances of the positive and negative samples differ; according to the 3σ property, the formula increases the weight of feature values close to the mean and sets the weight of values far from the mean to 0, which effectively reduces interference from abnormal features and increases accuracy.
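A sketch of the 3σ processing and SVM training follows. The exact formula for cls is published only as an image; the weighting used here (1 - |f - u|/(3v) within 3σ, 0 outside) follows the verbal description of increasing weight near the mean and zero weight far from it, and is therefore an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def three_sigma_weight(features, u, v):
    """Assumed realisation of the 3-sigma processing: features near the mean get
    higher weight, features beyond 3 standard deviations get weight 0."""
    d = np.abs(features - u)
    return np.where(d <= 3 * v, features * (1.0 - d / (3 * v + 1e-9)), 0.0)

def train_classifier(train_feats, labels):
    """Fit the per-feature mean/std on the training set, apply the 3-sigma
    processing and train a binary SVM (kernel choice is an assumption)."""
    X = np.asarray(train_feats, dtype=np.float32)   # rows: samples, cols: normalized features
    u, v = X.mean(axis=0), X.std(axis=0)
    X_cls = three_sigma_weight(X, u, v)
    clf = SVC(kernel="rbf").fit(X_cls, labels)
    return clf, u, v                                 # u, v are reused at prediction time
```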
4. Overall fault identification process
When a freight car passes the detection station, a camera acquires the passing-vehicle image (a 2D line-scan image) and the wheelbase information is obtained from the hardware. The sub-image containing the target, i.e. the target image, is cropped using the wheelbase information, and the cropped image is then precisely positioned to locate the upper and lower sides of the coupler yoke. The pixel variance of the precisely positioned crop is computed column by column to obtain a variance curve, the curve is traversed to find the start and end positions of its jump, the start position is expanded outward by 20 pixels in the precisely positioned crop to give the starting coordinate of the suspected fault region, the suspected fault region sub-image is cropped from the obtained coordinate, and its features are extracted. The extracted features of the suspected fault region sub-image are normalized and taken as sample data; they are then processed according to the 3σ property of the corresponding training statistics to obtain the processed features (the mean and variance computed on the training set are stored in the model, and at prediction time the stored training-set mean and variance are used directly to apply the same processing to the predicted data), and the processed features are input into the SVM classifier to identify the fault. If the image is identified as faulty, alarm information is output; if the result is no fault, the image is skipped and identification proceeds to the next image until all images have been processed.
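Putting the pieces together, prediction on a new image could look like the sketch below, reusing the mean and standard deviation stored at training time as the text describes; all helper functions are the illustrative sketches defined above, not APIs taken from the patent.

```python
import numpy as np

def predict_fault(target_img, clf, u, v):
    """End-to-end sketch: enhance, locate the coupler yoke, crop the suspected
    region, extract and normalize features, apply the stored 3-sigma statistics
    from training, and classify with the trained SVM."""
    enhanced, _ = enhance_target_image(target_img)
    upper, lower = find_vertical_boundaries(enhanced)
    if upper is None and lower is None:
        return "alarm: image quality problem"        # neither boundary located
    if upper is not None and lower is not None:
        located = enhanced[upper:lower]              # crop between the two boundaries
    else:
        located = enhanced
    sub = crop_suspected_region(located)
    if sub is None:
        return "no fault"                            # no jump in the variance curve
    feats = np.concatenate([normalize(f) for f in extract_features(sub)])
    feats = three_sigma_weight(feats, u, v)          # u, v stored with the model
    label = clf.predict(feats.reshape(1, -1))[0]
    return "fault" if label == 1 else "no fault"
```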
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (10)

1. A method for identifying coupler yoke breakage based on image processing, characterized by comprising the following steps:
S1, acquiring a target image containing the component to be detected, and enhancing it to obtain an enhanced target image;
S2, extracting the boundary region between the supporting plate and the background using locally adaptive threshold segmentation; counting the segmentation result row by row in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image;
positioning and cropping the coupler yoke from the enhanced target image according to the position of the upper or lower boundary to obtain a precisely positioned cropped image; computing the pixel variance of the precisely positioned cropped image column by column to obtain a variance curve; traversing the variance curve to find the start and end positions of its jump; expanding the start position outward by L pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region, and cropping the suspected fault region sub-image from the obtained starting coordinate;
S3, extracting features from the suspected fault region sub-image;
S4, identifying the coupler yoke fault with an SVM classifier based on the extracted features of the suspected fault region sub-image.
2. The image-processing-based coupler yoke breakage identification method according to claim 1, wherein the process of enhancing the target image comprises the following steps:
normalizing the target image; dividing the gray values of the normalized image, which lie between 0 and 1, into 10 intervals with a step of 0.1 and counting the number of pixel values in each interval; selecting the upper bound of the interval with the largest pixel count as the center of the enhancement curve; and enhancing the target image according to the following formula:
[Enhancement formula, presented only as an image in the original publication (not reproduced here); it defines img_dst in terms of img_normal, img_val, F_off, exp and the enhancement power y.]
where img_val is the pixel-count statistic over the 0.1-step intervals of the normalized image, F_off is the upper bound of the interval with the largest pixel count, exp denotes the exponential with base e, y is the enhancement power, img_normal is the normalized pixel value of the target image before enhancement, and img_dst is the normalized pixel value of the enhanced target image;
and the enhanced target image is then recovered from the enhanced normalized image.
3. The method for identifying coupler yoke breakage based on image processing according to claim 2, wherein the step in S2 of counting the segmentation result in the horizontal direction and searching for the upper and lower boundaries upward and downward from the vertical center of the enhanced target image comprises the following steps:
searching upward and downward from the vertical center of the enhanced target image; when a row is found in which the number of white pixels exceeds half the width of the enhanced target image, recording its position as A1 and continuing the search; if 5 consecutive rows satisfy this condition, selecting A1 as a boundary and stopping the search; otherwise, continuing the search;
if qualifying rows are found at both the upper and lower ends, both the upper and lower boundaries are found; if only one boundary is found, the other boundary is completed from the found upper or lower boundary.
4. The image-processing-based coupler yoke breakage identification method according to claim 3, wherein in step S2 the start position is expanded outward by 20 pixels in the precisely positioned cropped image to give the starting coordinate of the suspected fault region.
5. The image-processing-based coupler yoke breakage identification method according to claim 4, wherein in step S3 the suspected fault region sub-image is adjusted to a fixed size before its features are extracted, and the features of the sub-image are then extracted.
6. The image-processing-based coupler yoke breakage identification method according to claim 5, wherein the suspected fault region sub-image is adjusted to 128 × 128 before its features are extracted.
7. The image-processing-based coupler yoke breakage identification method according to claim 4, wherein the features extracted from the suspected fault region sub-image in step S3 include its grayscale, texture and LBP features.
8. The image-processing-based coupler yoke breakage identification method according to claim 7, wherein the grayscale feature is the statistical distribution of gray values, and the texture feature is the edge gradient statistic obtained by applying a Sobel operator to the image.
9. The method for identifying coupler yoke breakage based on image processing according to any one of claims 1 to 8, wherein the step in S4 of identifying the fault with an SVM classifier based on the extracted features of the suspected fault region sub-image comprises the following steps:
normalizing the extracted features of the suspected fault region sub-image, and then inputting the normalized features into the SVM classifier to recognize the fault.
10. The method for identifying coupler yoke breakage based on image processing according to claim 9, wherein after the extracted features of the suspected fault region sub-image are normalized, and before the normalized features are input into the SVM classifier, the normalized features are processed with the 3σ property to obtain the processed feature cls:
[Formula for cls, presented only as an image in the original publication (not reproduced here); it defines the processed feature cls in terms of f_xi, the mean u and the standard deviation v using the 3σ property.]
where cls is the processed feature and f_xi is the i-th normalized feature belonging to class x;
inputting the feature cls subjected to the 3-sigma property processing into an SVM classifier to identify the fault;
the 3σ property is determined during training of the SVM classifier: u is the mean of the sample data in training and v is the standard deviation of the samples in training; when training the SVM classifier, features of the suspected fault region sub-images are extracted from the training-set images and normalized, the normalized features of each suspected fault region sub-image are taken as sample data, and the mean u and standard deviation v of the sample data are computed for each feature.
CN202011231729.1A 2020-11-06 2020-11-06 Image processing-based coupler yoke break identification method Active CN112329783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011231729.1A CN112329783B (en) 2020-11-06 2020-11-06 Image processing-based coupler yoke break identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011231729.1A CN112329783B (en) 2020-11-06 2020-11-06 Image processing-based coupler yoke break identification method

Publications (2)

Publication Number Publication Date
CN112329783A (en) 2021-02-05
CN112329783B (en) 2021-08-06

Family

ID=74315715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011231729.1A Active CN112329783B (en) 2020-11-06 2020-11-06 Image processing-based coupler yoke break identification method

Country Status (1)

Country Link
CN (1) CN112329783B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1679777A1 (en) * 2005-01-05 2006-07-12 Behr-Hella Thermocontrol GmbH Device for indicating an obstruction of a DC commutated motor based on the ripple of the field current
JP2008157660A (en) * 2006-12-21 2008-07-10 Nissan Diesel Motor Co Ltd Failure diagnosis apparatus and failure diagnosis method
CN111079819A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for judging state of coupler knuckle pin of railway wagon based on image recognition and deep learning
CN111079818A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Railway wagon coupler joist breakage detection method
CN111091546A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Railway wagon coupler tail frame breaking fault identification method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LONG ZHANG ET AL: "Bearing fault diagnosis using multi-scale entropy and adaptive neuro-fuzzy inference", Expert Systems with Applications *
ZIWANG LIU ET AL: "Infrared image combined with CNN based fault diagnosis for rotating machinery", 2017 International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC) *
SONG Xinxin: "Research and application of automatic detection methods for train component anomalies based on image processing technology", China Master's Theses Full-text Database (Electronic Journal), Engineering Science and Technology II *
DAI Peng: "Design and implementation of an automatic fault recognition system based on computer vision", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology *

Also Published As

Publication number Publication date
CN112329783B (en) 2021-08-06

Similar Documents

Publication Publication Date Title
WO2017190574A1 (en) Fast pedestrian detection method based on aggregation channel features
CN106373426B (en) Parking stall based on computer vision and violation road occupation for parking monitoring method
TWI497422B (en) A system and method for recognizing license plate image
CN111080620A (en) Road disease detection method based on deep learning
CN111814686A (en) Vision-based power transmission line identification and foreign matter invasion online detection method
CN110781839A (en) Sliding window-based small and medium target identification method in large-size image
CN109344864B (en) Image processing method and device for dense object
CN110751619A (en) Insulator defect detection method
CN112052782A (en) Around-looking-based parking space identification method, device, equipment and storage medium
CN108009574A (en) A kind of rail clip detection method
CN113537211A (en) Deep learning license plate frame positioning method based on asymmetric IOU
CN107247967B (en) Vehicle window annual inspection mark detection method based on R-CNN
CN114494161A (en) Pantograph foreign matter detection method and device based on image contrast and storage medium
CN116524205A (en) Sewage aeration automatic detection and identification method
CN107392115B (en) Traffic sign identification method based on hierarchical feature extraction
CN112967224A (en) Electronic circuit board detection system, method and medium based on artificial intelligence
CN112329783B (en) Image processing-based coupler yoke break identification method
CN115797970B (en) Dense pedestrian target detection method and system based on YOLOv5 model
CN110992299B (en) Method and device for detecting browser compatibility
CN111402185A (en) Image detection method and device
CN112330633B (en) Jumper wire adhesive tape damage fault image segmentation method based on self-adaptive band-pass filtering
CN111046876B (en) License plate character rapid recognition method and system based on texture detection technology
CN113538418A (en) Tire X-ray image defect extraction model construction method based on morphological analysis
Tian et al. A new algorithm for license plate localization in open environment using color pair and stroke width features of character
CN111882507A (en) Metal element identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant