CN115352832A - Belt tearing detection method - Google Patents

Belt tearing detection method

Info

Publication number
CN115352832A
Authority
CN
China
Prior art keywords
image
detected
template
gray
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211127853.2A
Other languages
Chinese (zh)
Inventor
王纪强
宋震
赵林
赵福军
侯墨语
李振
Current Assignee
Qilu University of Technology
Laser Institute of Shandong Academy of Science
Original Assignee
Qilu University of Technology
Laser Institute of Shandong Academy of Science
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology, Laser Institute of Shandong Academy of Science filed Critical Qilu University of Technology
Priority to CN202211127853.2A
Publication of CN115352832A


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G43/00Control devices, e.g. for safety, warning or fault-correcting
    • B65G43/02Control devices, e.g. for safety, warning or fault-correcting detecting dangerous physical condition of load carriers, e.g. for interrupting the drive in the event of overheating

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a belt tearing detection method, including: acquiring a gray distribution histogram corresponding to a reference image and determining a preset gray threshold from the histogram; performing threshold segmentation, expansion and enhancement operations on the reference image according to the preset gray threshold to form a template picture; inputting a preprocessed training image into a training model for template training to obtain a template weight file; acquiring an image to be detected and preprocessing it to obtain a preprocessed image to be detected; matching the preprocessed image to be detected with the template weight file, and, if it is within the allowable range of the template weight file, recording its position information and applying an affine transformation so that the preprocessed image to be detected and the template picture reach the maximum degree of fit, yielding a transformed image; and performing a differential calculation on the transformed image and the template picture to obtain a differential image, where a pixel gray value in the differential image greater than a differential threshold indicates that the belt in the image to be detected is torn.

Description

Belt tearing detection method
Technical Field
The application relates to the technical field of machine vision, in particular to a belt tearing detection method.
Background
The belt is an indispensable part of material transportation and plays an important role in grain transport, ore delivery, smart mines and similar fields, being a simple and practical conveying method. However, as service life increases, belt wear is inevitable, and penetration and tearing are the most serious forms of belt damage; if the machine is not stopped in time after a tear occurs, a long, large-area tear can result, causing huge economic losses and even endangering the life safety of operators. Therefore, detecting the belt in real time and judging within a short time whether tearing has occurred is extremely important for protecting operators and reducing economic losses.
Existing belt tearing detection methods mostly adopt machine vision algorithms, and traditional machine vision algorithms judge the acquired picture directly, which consumes a large amount of computing resources, yields low detection efficiency, and takes a long detection time.
Disclosure of Invention
The application provides a belt tearing detection method to improve belt tearing detection efficiency.
In order to solve the technical problem, the embodiment of the application discloses the following technical scheme:
the embodiment of the application discloses a belt tearing detection method, which comprises the following steps: acquiring a gray distribution histogram corresponding to the reference image, and determining a preset gray threshold according to the gray distribution histogram;
performing threshold segmentation, expansion and enhancement operations on the reference image according to the preset gray threshold to form a template picture;
inputting the preprocessed training diagram into a training model for template training to obtain a template weight file;
acquiring an image to be detected and preprocessing the image to be detected to obtain a preprocessed image to be detected;
matching the preprocessed image to be detected with the template weight file, recording the position information of the preprocessed image to be detected if the preprocessed image to be detected is within the allowable range of the template weight file, and then enabling the preprocessed image to be detected and the template picture to achieve the maximum degree of fit by using affine transformation according to the position information of the preprocessed image to be detected to obtain a transformed image;
and performing differential calculation on the transformed image and the template picture to obtain a differential image, wherein when the pixel gray value in the differential image is greater than a differential threshold value, the belt in the image to be detected is torn.
In some embodiments, the process of obtaining the pre-processed training images comprises: selecting a plurality of belt images which are not torn and have clear pictures as training images, and performing threshold segmentation, enhancement and expansion processing on the training images according to a preset gray threshold to obtain a preprocessed training image.
In some embodiments, the template weight file includes characteristic information of the laser line region in a training image, such as pixel gray values, line width, and line area.
In some embodiments, the obtaining a gray distribution histogram corresponding to the reference image and determining the preset gray threshold according to the gray distribution histogram includes: graying the reference image to obtain a belt surface grayscale image;
obtaining a corresponding gray distribution histogram according to the belt surface gray map;
and determining a preset gray threshold value according to the gray values of different areas on the gray distribution histogram.
In some embodiments, the inputting the preprocessed training images into the training model for template training further comprises: and calculating the correlation degree of the preprocessed training image and the template picture, and inputting the preprocessed training image into a training model for template training when the correlation degree is greater than or equal to 0.9.
In some embodiments, performing threshold segmentation, expansion and enhancement operations on the reference image according to the preset grayscale threshold to form a template picture includes:
acquiring a set of pixel points in the reference image, wherein the pixel points are in the range of the preset gray threshold value, and forming an initial template area image;
performing an expansion operation on the initial template area image to form an expansion template area;
and cutting the expansion template area in the original image, and performing enhancement processing on the expansion template area to form a template picture.
In some embodiments, the formula of the enhancement process includes:
G' = G^Exponent (1)
wherein G is the gray value of the image before enhancement processing, G' is the gray value of the image after exponential transformation, and Exponent is the transformation index.
In some embodiments, the obtaining an image to be detected and preprocessing the image to be detected to obtain a preprocessed image to be detected includes: acquiring an image to be detected, and converting the image to be detected into a gray scale image to be detected;
acquiring a set of pixel points which accord with a preset gray threshold range in the gray image to be detected to form an initial region to be detected;
and expanding and enhancing the initial region to be detected to obtain a preprocessed image to be detected.
In some embodiments, the difference calculation formula comprises:
D(i,j)=|S(i,j)-T(i,j)| (2)
wherein S(i, j) is the gray value of the pixel at position (i, j) in the template picture, T(i, j) is the gray value of the pixel at position (i, j) in the transformed image, and D(i, j) is the gray value of the pixel at position (i, j) in the differential image.
The beneficial effect of this application:
the application discloses belt tearing detection method, includes: and acquiring a gray distribution histogram corresponding to the reference image, and determining a preset gray threshold according to the gray distribution histogram. And performing threshold segmentation, expansion and enhancement operations on the reference image according to the preset gray threshold to form a template picture. And inputting the preprocessed training diagram into a training model for template training to obtain a template weight file. And acquiring an image to be detected and preprocessing the image to be detected to obtain a preprocessed image to be detected. And matching the preprocessed image to be detected with the template weight file, recording the position information of the preprocessed image to be detected if the preprocessed image to be detected is within the allowable range of the template weight file, and then enabling the preprocessed image to be detected and the template picture to achieve the maximum degree of fit by affine transformation according to the position information of the preprocessed image to be detected to obtain a transformed image. And carrying out differential calculation on the transformed image and the template picture to obtain a differential image, wherein when the pixel gray value in the differential image is greater than a differential threshold value, the belt in the image to be detected is torn, so that the image range for judgment is reduced, and the improvement of the judgment efficiency is facilitated.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure, the drawings required to be used in some embodiments of the present disclosure will be briefly described below, and it is apparent that the drawings in the following description are only drawings of some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art according to these drawings. Furthermore, the drawings in the following description may be regarded as schematic diagrams, and do not limit the actual size of products, the actual flow of methods, the actual timing of signals, and the like, involved in the embodiments of the present disclosure.
FIG. 1 is a schematic diagram of a belt tear monitoring system provided in accordance with some embodiments;
FIG. 2 is a schematic flow diagram of a belt tear detection method according to some embodiments;
fig. 3 is a histogram of gray scale distribution provided in the present application;
FIG. 4 is a comparison graph before and after an image enhancement process according to an example of the present application;
FIG. 5 is a schematic diagram of a differential calculation of an example of the present application.
Detailed Description
Technical solutions in some embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided by the present disclosure belong to the protection scope of the present disclosure.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The application discloses a belt tearing detection method. First, a template picture is created. Then several non-torn images are selected as training images for model training, yielding a weight file of the training model that includes: the pixel gray values, line width, and line area of the laser line region of the training images. Threshold segmentation is performed on the image to be detected, and the image of the laser line region is kept as the preprocessed image to be detected and matched against the template weight file. When the preprocessed image to be detected is within the allowable range of the template weight file, it is affine-transformed so that it overlaps the template picture as much as possible, giving a transformed image; the transformed image and the template picture then undergo a differential calculation to obtain a differential image, and when a pixel gray value in the differential image is greater than a differential threshold, the belt in the image to be detected is torn.
Fig. 1 is a schematic structural diagram of a belt tear monitoring system according to some embodiments, and fig. 2 is a schematic flow diagram of a belt tear detection method according to some embodiments, which is described below with reference to fig. 1 and fig. 2.
In some embodiments of the present application, the belt tearing picture collecting device is as shown in fig. 1. To realize cyclic conveying, the belt conveyor carries two reciprocating belt runs. A line laser emitter 3 is arranged between the two reciprocating belt runs and emits a laser beam towards the first belt. An image acquisition device 2 is likewise arranged between the two reciprocating belt runs and is used to acquire the image of the laser beam on the first belt 1.
In some embodiments of the present application, the image acquisition device and the line laser emitter are at the same perpendicular distance from the first belt. The image acquisition device is positioned to one side of the line laser emitter and acquires the image of the laser beam on the first belt.
The belt surface image that image acquisition device gathered in this application divide into two types, include: training images and images to be tested.
The training image is a belt surface image when the belt is not torn and is used for obtaining a template matching weight file. The image to be tested is the surface image of the belt obtained in the detection process.
Because the belt's position changes due to vibration during operation, multiple training images are needed to train the detection model.
The machine vision algorithm adopted in the belt tearing detection method specifically comprises the following steps:
s100: and acquiring a gray distribution histogram corresponding to the reference image, and determining a preset gray threshold according to the gray distribution histogram.
In this application, a gray distribution histogram corresponding to a reference image is obtained, and a preset gray threshold is determined according to the gray distribution histogram, which specifically includes: graying the reference image to obtain a belt surface gray image; obtaining a corresponding gray distribution histogram according to the belt surface gray map; and determining a preset gray threshold according to gray values of different areas on the gray distribution histogram.
Fig. 3 is a gray distribution histogram provided in the present application. As shown in fig. 3, the gray distribution exhibits a near-binary character: the closer a gray value is to 0, the darker the pixel; the closer it is to 255, the brighter. Combining this with the brightness of the laser line and of the background, it can be clearly judged that the low-brightness region with gray values near 0 corresponds to the background or darker regions of the image, and the high-brightness region with gray values near 255 corresponds to the laser line region. Therefore, the preset gray threshold can be determined from the gray distribution histogram as the minimum gray value corresponding to the high-brightness region. To further prevent omission of laser lines, a gray error tolerance value is subtracted from that minimum, giving: preset gray threshold = (minimum gray value corresponding to the high-brightness region) - (gray error tolerance value).
In this method, the histogram is used to determine the threshold; by combining the image with its histogram, image information is converted into numerical information, making the threshold determination process more intuitive, rigorous, and persuasive.
The selection of the preset gray threshold proceeds as follows: convert the reference image into a gray distribution histogram, then perform threshold segmentation on the gray image, taking the gray value range corresponding to the laser coverage area as the preset gray threshold. As shown in fig. 3, where the abscissa is the gray value and the ordinate is the number of pixels, pixels with gray values between 245 and 255 cover the entire laser area, so the preset gray threshold range is 245-255 in this example. The preset gray threshold can also be set according to the gray image corresponding to the reference image.
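The threshold selection described above can be sketched in code. This is an illustrative sketch, not the patent's implementation: the function names, the `bright_floor` used to locate the high-brightness region, and the tolerance value of 10 are assumptions; images are represented as row-major lists of gray values.

```python
def gray_histogram(image):
    """Count pixels per gray value (0-255) in a row-major grayscale image."""
    hist = [0] * 256
    for row in image:
        for g in row:
            hist[g] += 1
    return hist

def preset_gray_threshold(hist, bright_floor=200, tolerance=10):
    """Minimum gray value of the high-brightness (laser line) region,
    minus a gray error tolerance value, as described in the text.
    bright_floor and tolerance are illustrative assumptions."""
    bright_values = [g for g in range(bright_floor, 256) if hist[g] > 0]
    return min(bright_values) - tolerance

image = [
    [12, 15, 250, 251, 14],
    [10, 13, 248, 252, 11],
]
hist = gray_histogram(image)
print(preset_gray_threshold(hist))  # 248 - 10 = 238
```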
S200: and according to a preset gray threshold, performing threshold segmentation, expansion and enhancement operations on the reference image to form a template picture.
And acquiring a set of pixel points which are in accordance with a preset gray threshold range in the reference image to form an initial template area image. Performing expansion operation on the initial template area image to form an expansion template area, and cutting the expansion template area in the original image; and performing enhancement processing on the expansion template area to form a template picture.
Selecting a belt surface image with an unbroken belt and clear edges as a reference image, and forming an initial template area image by the collection of pixel points in the reference image, wherein the pixel points are in accordance with the preset gray threshold range. And performing expansion operation on the initial template area image to form an expansion template area, and cutting the expansion template area in the original image. And performing enhancement processing on the expansion template area to form a template picture.
And converting the reference image into a gray image, and acquiring the position information of the pixels in accordance with the preset gray threshold range, wherein the area of the pixels in accordance with the preset gray threshold range is the initial template area image.
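The segmentation step above (keeping only pixels within the preset gray threshold range) can be sketched as follows; the function name and the zeroing of out-of-range pixels are illustrative assumptions, while the 245-255 range follows the example given earlier.

```python
def threshold_segment(image, low, high):
    """Keep pixels whose gray value lies in [low, high]; zero out the rest.
    The surviving pixels form the initial template area image."""
    return [
        [g if low <= g <= high else 0 for g in row]
        for row in image
    ]

frame = [[12, 250, 248, 9], [14, 252, 251, 8]]
print(threshold_segment(frame, 245, 255))  # [[0, 250, 248, 0], [0, 252, 251, 0]]
```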
Performing a dilation operation on the initial template area image includes: according to the pixel position information of the initial template area image, acquiring all pixel points whose distance from the pixels of the initial template area image is less than or equal to a preset distance; these form the expansion pixel set. In this example a preset distance of 3.5 pixels is selected, i.e., the set of pixel points within 3.5 pixels of the initial template area image is the expansion pixel set, and the position of the expansion pixel set in the original image is the expansion template area.
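A minimal sketch of this dilation step. The Euclidean distance metric and the pure-Python set-of-coordinates representation are assumptions (the text says only "preset distance"); the 3.5-pixel radius follows the example above.

```python
import math

def dilate_region(region_pixels, width, height, radius=3.5):
    """Expand a set of (x, y) pixel coordinates by every pixel whose
    Euclidean distance to the region is <= radius."""
    r = int(math.ceil(radius))
    dilated = set(region_pixels)
    for (x, y) in region_pixels:
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height \
                        and math.hypot(dx, dy) <= radius:
                    dilated.add((nx, ny))
    return dilated

# A single seed pixel grows into a disc of radius 3.5.
expanded = dilate_region({(5, 5)}, width=11, height=11)
print(len(expanded))  # 37 integer points lie within 3.5 px of (5, 5)
```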
Carrying out enhancement processing on the expansion template area to form a template picture, comprising the following steps: and acquiring position information and an original gray value of a pixel point in the expansion template area, and calculating to obtain an enhanced gray value according to an enhancement transformation formula. The position information of the template picture is the position information of the pixel points in the expansion template region, and the gray value of the template picture is the gray value of the pixel points in the expansion template region after the pixel points are enhanced.
The enhancement formula is shown in the following formula (1),
G' = G^Exponent (1)
wherein G is the gray value of the original image, G' is the gray value after the exponential transformation, and Exponent is the transformation index; its value varies with the usage scenario, and experiments determined that Exponent = 2 achieves the best effect.
In practical application, different algorithms can implement the image enhancement: for example, the contrast between the laser line region and the background can be improved by enhancing contrast, or the laser line can be made relatively brighter by reducing the background brightness. Those skilled in the art can select a suitable algorithm according to actual needs, and all such choices belong to the protection scope of the present application. In this application, the image enhancement operation increases image brightness, on the one hand highlighting the laser line and on the other suppressing part of the noise, thereby improving the accuracy of the subsequent belt tear judgment.
Fig. 4 is a comparison before and after the image enhancement processing of an example of the present application. As shown in fig. 4, the image to the left of the arrow is before enhancement and the image to the right is after; it can be seen that after enhancement the surrounding dark areas are suppressed more obviously and the white area is highlighted.
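The enhancement formula (1) with Exponent = 2 can be sketched as below. One assumption to note: gray values are normalized to [0, 1] before exponentiation and rescaled to [0, 255] afterwards so results stay in range; the patent states only G' = G^Exponent.

```python
def enhance(image, exponent=2):
    """Exponential enhancement G' = G^Exponent. Normalizing to [0, 1]
    first is an assumption; with exponent > 1 it darkens dark pixels
    relative to bright ones, matching the suppression shown in fig. 4."""
    return [
        [round(255 * (g / 255) ** exponent) for g in row]
        for row in image
    ]

row = [[0, 64, 128, 255]]
print(enhance(row))  # [[0, 16, 64, 255]] -- dark values shrink, bright survive
```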
S300: selecting a plurality of belt images which are not torn and have clear pictures as training images, and performing threshold segmentation, enhancement and expansion processing on the training images according to a preset gray threshold to obtain a preprocessed training image.
The training image is cut according to the position information of the expansion template area to obtain the training laser line region. The training laser line region is enhanced to obtain an enhanced training image, so that its dark areas are suppressed more obviously and the white area is brighter. The enhanced training image is then dilated so that the expanded area covers the entire laser line region, giving the preprocessed training image.
S400: and inputting the preprocessed training image into a training model to perform template training to obtain a template weight file, wherein the template weight file comprises characteristic information of laser line region pixel gray value, line width, line area and the like of the training image.
Before inputting the pre-processing training image into the training model for template training, the correlation degree of the pre-processing training image and the template picture can be calculated. And calculating the pixel matching number of the preprocessing training picture and the template picture, and recording the pixel matching number. The ratio of the pixel matching number to the total pixel number of the pre-processing training image is recorded as the correlation degree of the pre-processing training image to the template image.
In the application, the preprocessing training picture with the relevance degree of the template picture being more than or equal to 0.9 is subjected to template training, and the preprocessing training picture with the relevance degree of the template picture being less than 0.9 is discarded.
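The correlation-degree filter above can be sketched as follows, assuming that "pixel matching" means exact gray-value equality (the patent does not define the matching criterion):

```python
def correlation_degree(train_img, template_img):
    """Ratio of matching pixels to the total pixel count of the training
    image, as the text defines the correlation degree."""
    matches = 0
    total = 0
    for t_row, m_row in zip(train_img, template_img):
        for t, m in zip(t_row, m_row):
            total += 1
            if t == m:
                matches += 1
    return matches / total

a = [[255, 255, 0, 0, 255], [255, 0, 0, 255, 255]]
b = [[255, 255, 0, 0, 255], [255, 0, 255, 255, 255]]
degree = correlation_degree(a, b)
print(degree)          # 9 matches out of 10 pixels -> 0.9
print(degree >= 0.9)   # True: kept for template training
```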
And inputting the preprocessed training diagram into a training model for template training to obtain a template weight file. The template weight file comprises characteristic information of laser line area pixel gray value, line width, line area and the like of the training image. The training model may be a deformation model.
S500: and performing threshold segmentation, expansion and enhancement on the image to be detected to obtain a preprocessed image to be detected.
The method comprises the steps of obtaining an image to be detected, converting the image to be detected into a gray-scale image to be detected, obtaining position information of pixels in the gray-scale image to be detected, wherein the pixels meet a preset gray-scale threshold range, and the area where the pixels meet the preset gray-scale threshold range is an initial region to be detected. And expanding and enhancing the initial region to be detected to obtain a preprocessed image to be detected.
In this example, the expansion and enhancement processing performed on the initial region to be detected is consistent with the processing applied to the reference image described above and will not be repeated here.
In this example, the preset gray threshold range of the image to be measured is consistent with the preset gray threshold of the reference image.
S600: and matching the pre-processed image to be detected with the template weight file, recording the position information of the pre-processed image to be detected if the pre-processed image to be detected is within the allowable range of the template weight file, and then performing rotation or translation operation on the pre-processed image to be detected by affine transformation according to the position information, so that the pre-processed image to be detected and the template picture can reach the maximum degree of engagement, and recording the pre-processed image to be detected after affine transformation as a transformed image.
The affine transformation matrix is as follows:
[x'; y'; 1] = [R, T; 0, 1] · [x; y; 1], where R is a 2×2 rotation submatrix and T is a 2×1 translation submatrix.
The matrix on the right is the image before transformation and the matrix on the left is the transformed image; translation or rotation is performed on the original image by controlling the values of the submatrices R and T in the middle matrix.
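The affine transformation with rotation submatrix R and translation submatrix T can be sketched as below. This is a rotation-plus-translation model on pixel coordinates; the patent's actual matrix values would come from the recorded position information.

```python
import math

def affine_transform(points, angle_rad=0.0, tx=0.0, ty=0.0):
    """Apply the homogeneous-coordinate transform with R a 2x2 rotation
    (by angle_rad) and T = (tx, ty) a translation, to a list of (x, y)."""
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    return [
        (cos_a * x - sin_a * y + tx, sin_a * x + cos_a * y + ty)
        for (x, y) in points
    ]

# Pure translation: shift laser-line pixels 3 right and 1 down.
print(affine_transform([(0, 0), (2, 5)], tx=3.0, ty=1.0))  # [(3.0, 1.0), (5.0, 6.0)]
```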
S700: and carrying out differential calculation on the transformed image and the template picture to obtain a differential image, and judging that the belt in the image to be detected is torn when the pixel gray value in the differential image is greater than a differential threshold value.
Carrying out differential calculation on the transformed image and the template picture to obtain a differential image, wherein the differential image comprises the following steps: acquiring pixel position information and a corresponding gray value of a transformed image; and carrying out difference calculation on the gray values of the corresponding pixel points of the transformed image and the template picture to obtain a difference image.
The difference calculation formula is as follows:
D(i,j)=|S(i,j)-T(i,j)| (2)
wherein S(i, j) is the gray value of pixel (i, j) in the template picture, T(i, j) is the gray value of the corresponding pixel in the transformed image, and D(i, j) is the gray value of the corresponding pixel in the differential image.
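Formula (2) and the tear decision can be sketched as below; the differential threshold value of 50 is illustrative, not taken from the patent.

```python
def difference_image(template, transformed):
    """D(i, j) = |S(i, j) - T(i, j)| per pixel, as in formula (2)."""
    return [
        [abs(s - t) for s, t in zip(s_row, t_row)]
        for s_row, t_row in zip(template, transformed)
    ]

def belt_torn(diff, diff_threshold=50):
    """Report a tear when any pixel of the differential image exceeds
    the differential threshold (threshold value here is illustrative)."""
    return any(d > diff_threshold for row in diff for d in row)

template = [[255, 255, 255], [255, 255, 255]]
detected = [[255, 10, 255], [255, 12, 255]]   # laser line interrupted
diff = difference_image(template, detected)
print(diff)              # [[0, 245, 0], [0, 243, 0]]
print(belt_torn(diff))   # True
```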
In some embodiments of the present application, to further reduce operation time and improve efficiency, before the differential calculation is performed, the sets of pixel points whose gray values are greater than the differential gray threshold are extracted from the transformed image and the template picture, and the differential calculation is performed only on those pixel points.
For convenience of calculation, the position coordinates of the image may also be the coordinates of blocks of pixels pre-divided from the original picture.
Fig. 5 is a schematic diagram of the differential calculation of an example of the present application. As shown in fig. 5, a is the template picture, b is the transformed image, and c is the differential image after the differential operation: the gray values at corresponding positions are differenced, and the pixel points showing a difference are output.
In conclusion, the application provides a belt tearing monitoring method that processes images with laser line imaging and a machine vision algorithm: surface images of the belt under laser projection are collected, and the machine vision algorithm judges whether the laser line is interrupted, thereby judging the belt tearing condition. On one hand, compared with traditional camera imaging, laser line imaging records more information, so the acquired belt surface information, including the tearing condition, is more accurate, ensuring the accuracy of subsequent detection results.
On the other hand, in judging belt surface tearing with the machine vision algorithm, the laser line gray threshold is used to threshold-segment the image and the segmented region is dilated; the region-of-interest image containing the laser line is cut from the gray image of the belt surface, so that only the region-of-interest image needs to be judged in the subsequent tear judgment. This greatly reduces the image range to be judged, improving both judgment efficiency and judgment precision. In addition, compared with directly judging the acquired image as in the prior art, this application preprocesses and re-enhances the image before the tear judgment, so the laser line in the image is highlighted and, while the laser line brightness is increased, part of the noise is effectively suppressed; the belt tear judgment is therefore more accurate and misjudgments are reduced. Finally, the differential calculation highlights the difference between the image to be detected and the template picture, which is simple and convenient to compute and highly accurate.
In summary, the machine vision algorithm adopted in the present application judges belt surface tearing with simple judgment logic, high accuracy, strong discrimination, and wide applicability. Belt tearing can be identified quickly, so that a tear can be responded to in time when it occurs or at its initial stage, avoiding further belt faults.
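The judgment pipeline summarized above can be sketched in numpy as follows. This is an illustrative sketch, not the patent's implementation: the function name, the gray range, the differential threshold, and the minimum-pixel fallback are all assumptions.

```python
import numpy as np

def laser_line_torn(gray, template, lo=200, hi=255, diff_thresh=60, min_pixels=10):
    """Sketch: threshold-segment the laser line, dilate the segmented
    region into a region of interest, then compare against the template
    by absolute gray-level difference. All thresholds are illustrative."""
    mask = (gray >= lo) & (gray <= hi)              # threshold segmentation
    # dilate by one pixel with a cross-shaped element (pure-numpy
    # stand-in for a morphological dilation)
    dil = mask.copy()
    dil[1:, :] |= mask[:-1, :]
    dil[:-1, :] |= mask[1:, :]
    dil[:, 1:] |= mask[:, :-1]
    dil[:, :-1] |= mask[:, 1:]
    if dil.sum() < min_pixels:                      # laser line (almost) entirely gone
        return True
    diff = np.abs(gray.astype(np.int16) - template.astype(np.int16))
    return bool((diff[dil] > diff_thresh).any())    # any large deviation: tear
```

A frame whose laser line matches the template yields no difference inside the region of interest, while a frame with a broken line deviates strongly from the template exactly where the line is missing.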
Since the above embodiments are described with reference to, and in combination with, one another, the embodiments share common portions; identical and similar portions among the embodiments in this specification may be referred to one another and are not described again here.
It is noted that, in this specification, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Likewise, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a circuit structure, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such circuit structure, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the circuit structure, article, or device that comprises the element.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (9)

1. A method of detecting a belt tear, comprising:
acquiring a gray level distribution histogram corresponding to a reference image, and determining a preset gray threshold according to the gray level distribution histogram;
performing threshold segmentation, expansion and enhancement operations on the reference image according to the preset gray threshold to form a template picture;
inputting a preprocessed training image into a training model for template training to obtain a template weight file;
acquiring an image to be detected and preprocessing it to obtain a preprocessed image to be detected;
matching the preprocessed image to be detected with the template weight file; if the preprocessed image to be detected is within the allowable range of the template weight file, recording its position information, and then applying an affine transformation so that the preprocessed image to be detected fits the template picture to the maximum degree, obtaining a transformed image;
and performing differential calculation on the transformed image and the template picture to obtain a differential image, wherein the belt in the image to be detected is judged to be torn when a pixel gray value in the differential image is greater than a differential threshold.
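As a rough illustration of the matching-and-alignment step of claim 1, the sketch below searches over integer translations (a degenerate special case of the affine transformation the claim describes) for the offset that best fits the image to the template. The function name, search radius, and error measure are assumptions for illustration only.

```python
import numpy as np

def align_by_shift(img, template, max_shift=3):
    """Translation-only stand-in for the affine fitting step: try every
    integer shift within max_shift and keep the one minimizing the total
    absolute gray difference against the template."""
    best, best_err = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.abs(shifted.astype(int) - template.astype(int)).sum()
            if best_err is None or err < best_err:
                best, best_err = (dy, dx), err
    dy, dx = best
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1), best
```

A full affine fit would additionally estimate rotation and scale; for a fixed camera rig observing a moving belt, translation is often the dominant misalignment.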
2. The belt tear detection method of claim 1, wherein the acquisition process of the preprocessed training images comprises: selecting a plurality of clear images of an untorn belt as training images, and cropping, enhancing and augmenting the training images according to the preset gray threshold to obtain the preprocessed training images.
3. The belt tear detection method of claim 1, wherein the template weight file includes laser line area, line width, and line area characteristic information of a training image.
4. The method for detecting belt tearing according to claim 1, wherein the obtaining a grayscale distribution histogram corresponding to the reference image and determining the preset grayscale threshold according to the grayscale distribution histogram includes:
graying the reference image to obtain a belt surface grayscale image;
obtaining a corresponding gray distribution histogram according to the belt surface gray map;
and determining a preset gray threshold value according to the gray values of different areas on the gray distribution histogram.
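The histogram step of claim 4 can be sketched as follows. Because the belt surface gray map is strongly bimodal (dark belt background versus bright laser line), one workable rule is to place the preset gray threshold midway between the two dominant histogram peaks; the midpoint rule and the peak-masking width are assumptions, since the claim only says the threshold is determined from the gray values of different histogram regions.

```python
import numpy as np

def preset_threshold_from_histogram(gray):
    """Sketch: find the dominant dark (background) peak and the remaining
    bright (laser line) peak of the gray histogram, and return their
    midpoint as the preset gray threshold."""
    hist = np.bincount(gray.ravel(), minlength=256)
    background = int(hist.argmax())                      # dominant dark peak
    rest = hist.copy()
    rest[max(background - 10, 0):background + 11] = 0    # mask out the first peak
    laser = int(rest.argmax())                           # remaining bright peak
    return (background + laser) // 2
```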
5. The belt tear detection method of claim 1, wherein before inputting the preprocessed training image into the training model for template training, the method further comprises:
calculating the degree of association between the preprocessed training image and the template picture, and inputting the preprocessed training image into the training model for template training when the degree of association is greater than or equal to 0.9.
6. The belt tearing detection method according to claim 1, wherein performing threshold segmentation, expansion and enhancement operations on the reference image according to the preset grayscale threshold to form a template picture comprises:
acquiring the set of pixel points in the reference image whose gray values fall within the preset gray threshold range to form an initial template area image;
performing an expansion operation on the initial template area image to form an expanded template area;
and cropping the expanded template area from the original image, and performing enhancement processing on the expanded template area to form the template picture.
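The segment-expand-crop sequence of claim 6 can be sketched as below. As a simplification, the expansion is approximated by padding the bounding box of the segmented pixels before cropping; the padding radius and function name are assumptions (enhancement is applied separately, per claim 7).

```python
import numpy as np

def make_template_region(gray, lo, hi=255, pad=2):
    """Sketch: keep pixels inside the preset gray range and crop the
    padded bounding box of that region as the template area image.
    pad stands in for the expansion (dilation) radius."""
    mask = (gray >= lo) & (gray <= hi)       # threshold segmentation
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                          # no laser line found
    y0 = max(ys.min() - pad, 0)
    y1 = min(ys.max() + pad + 1, gray.shape[0])
    x0 = max(xs.min() - pad, 0)
    x1 = min(xs.max() + pad + 1, gray.shape[1])
    return gray[y0:y1, x0:x1]
```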
7. The method of claim 6, wherein the enhancement process formula comprises:
G' = G^Exponent (1)
wherein G is the gray value of the image before enhancement processing, G' is the gray value of the image after exponential transformation, and Exponent is the transformation index.
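Formula (1) can be sketched as below. The normalisation of gray values to [0, 1] before applying the power is an assumption (so that an exponent greater than 1 suppresses the dim background more than the bright laser line); the claim states only the power law itself.

```python
import numpy as np

def enhance_exponent(gray, exponent=1.5):
    """Sketch of formula (1), G' = G^Exponent, applied to a uint8 image
    after normalising to [0, 1]; the normalisation is an assumption."""
    g = gray.astype(np.float64) / 255.0
    return np.clip((g ** exponent) * 255.0, 0, 255).astype(np.uint8)
```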
8. The method for detecting belt tearing according to claim 1, wherein the obtaining an image to be detected and preprocessing the image to be detected to obtain a preprocessed image to be detected includes:
acquiring an image to be detected, and converting the image to be detected into a gray scale image to be detected;
acquiring the set of pixel points in the grayscale image to be detected whose gray values fall within the preset gray threshold range to form an initial region to be detected;
and expanding and enhancing the initial region to be detected to obtain a preprocessed image to be detected.
9. The belt tear detection method of claim 1, wherein the differential calculation formula comprises:
D(i,j)=|S(i,j)-T(i,j)| (2)
wherein S(i, j) is the gray value of the pixel at coordinate (i, j) in the template picture, T(i, j) is the gray value of the pixel at coordinate (i, j) in the transformed image, and D(i, j) is the gray value of the pixel at coordinate (i, j) in the differential image.
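Formula (2) is a pixel-wise absolute difference and can be sketched directly in numpy; the differential threshold value used below is illustrative, not a figure from the patent.

```python
import numpy as np

def diff_image(template, transformed, diff_thresh=50):
    """Sketch of formula (2): D(i,j) = |S(i,j) - T(i,j)|. The belt is
    judged torn when any D(i,j) exceeds the differential threshold
    (the value 50 is an assumption)."""
    d = np.abs(template.astype(np.int16) - transformed.astype(np.int16))
    return d.astype(np.uint8), bool((d > diff_thresh).any())
```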
CN202211127853.2A 2022-09-16 2022-09-16 Belt tearing detection method Pending CN115352832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211127853.2A CN115352832A (en) 2022-09-16 2022-09-16 Belt tearing detection method


Publications (1)

Publication Number Publication Date
CN115352832A true CN115352832A (en) 2022-11-18

Family

ID=84006757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211127853.2A Pending CN115352832A (en) 2022-09-16 2022-09-16 Belt tearing detection method

Country Status (1)

Country Link
CN (1) CN115352832A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115993365A (en) * 2023-03-23 2023-04-21 山东省科学院激光研究所 Belt defect detection method and system based on deep learning


Similar Documents

Publication Publication Date Title
CN114937055B (en) Image self-adaptive segmentation method and system based on artificial intelligence
CN110148130B (en) Method and device for detecting part defects
CN101030256B (en) Method and apparatus for cutting vehicle image
US20070253040A1 (en) Color scanning to enhance bitonal image
CN111982916A (en) Welding seam surface defect detection method and system based on machine vision
CN107798293A (en) A kind of crack on road detection means
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN113724258A (en) Conveyor belt tearing detection method and system based on image processing
CN111539927B (en) Detection method of automobile plastic assembly fastening buckle missing detection device
CN105447489B (en) A kind of character of picture OCR identifying system and background adhesion noise cancellation method
CN113177924A (en) Industrial production line product flaw detection method
CN110263662B (en) Human body contour key point and key part identification method based on grading
CN112581452B (en) Industrial accessory surface defect detection method, system, intelligent equipment and storage medium
CN116524196B (en) Intelligent power transmission line detection system based on image recognition technology
CN112132821B (en) Cotter pin loss detection method based on image processing
CN114820612B (en) Roller surface defect detection method and system based on machine vision
CN116665011A (en) Coal flow foreign matter identification method for coal mine belt conveyor based on machine vision
CN117095004A (en) Excavator walking frame main body welding deformation detection method based on computer vision
CN115352832A (en) Belt tearing detection method
CN111968082A (en) Product packaging defect detection and identification method based on machine vision
CN102610104A (en) Onboard front vehicle detection method
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN113971681A (en) Edge detection method for belt conveyor in complex environment
CN115159027B (en) Belt tearing monitoring method
CN106530292A (en) Strip steel surface defect image rapid identification method based on line scanning camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination